Generative Models: A collection of generative models (e.g. GAN, VAE) in PyTorch and TensorFlow; a minimal VAE sketch follows below.
Stars: ✭ 6,701 (+10209.23%)
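Since most entries in this list implement some variant of the VAE objective, here is a minimal, self-contained PyTorch sketch of that objective: a reconstruction term plus a KL term, with the reparameterization trick. All names, layer sizes, and the 784-dim input are illustrative assumptions, not code from any repo above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyVAE(nn.Module):
    """Illustrative encoder/decoder pair for flat 28x28 inputs."""
    def __init__(self, x_dim=784, z_dim=20, h_dim=400):
        super().__init__()
        self.enc = nn.Linear(x_dim, h_dim)
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: sample z while keeping gradients.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(z), mu, logvar

def vae_loss(x, x_hat_logits, mu, logvar):
    # Negative ELBO = reconstruction term + KL(q(z|x) || N(0, I)).
    recon = F.binary_cross_entropy_with_logits(x_hat_logits, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

model = TinyVAE()
x = torch.rand(8, 784)  # dummy batch in [0, 1]
print(vae_loss(x, *model(x)))
```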
benchmark VAE: Unifying Variational Autoencoder (VAE) implementations in PyTorch (NeurIPS 2022)
Stars: ✭ 1,211 (+1763.08%)
generative deep learning: Generative Deep Learning sessions led by Anugraha Sinha (Machine Learning Tokyo)
Stars: ✭ 24 (-63.08%)
Generative-Model: Implementations of generative models in TensorFlow 1.x
Stars: ✭ 66 (+1.54%)
char-VAE: A high-level language model for adapting linguistic style, inspired by the neural style algorithm from computer vision.
Stars: ✭ 18 (-72.31%)
srVAE: VAE with a RealNVP prior and Super-Resolution VAE in PyTorch. Code release for https://arxiv.org/abs/2006.05218.
Stars: ✭ 56 (-13.85%)
interactive-spectrogram-inpainting: Implementation of the framework described in the paper "Spectrogram Inpainting for Interactive Generation of Instrument Sounds", published at the 2020 Joint Conference on AI Music Creativity.
Stars: ✭ 26 (-60%)
Awesome VAEs: A curated list of awesome work on VAEs, disentanglement, representation learning, and generative models.
Stars: ✭ 418 (+543.08%)
DiffuseVAE: A combination of VAEs and diffusion models for efficient, controllable, and high-fidelity generation from low-dimensional latents
Stars: ✭ 81 (+24.62%)
Jukebox: Code for the paper "Jukebox: A Generative Model for Music"
Stars: ✭ 4,863 (+7381.54%)
Vae protein function: Protein function prediction using a variational autoencoder
Stars: ✭ 57 (-12.31%)
Tf Vqvae: TensorFlow implementation of the paper [Neural Discrete Representation Learning](https://arxiv.org/abs/1711.00937) (VQ-VAE); a sketch of the quantization step follows below.
Stars: ✭ 226 (+247.69%)
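For readers unfamiliar with VQ-VAE, here is a hedged sketch of its quantization bottleneck: nearest-codebook lookup with a straight-through gradient, plus the paper's codebook and commitment losses. Shapes, names, and the beta value are illustrative assumptions, not code from the repo.

```python
import torch

def quantize(z_e, codebook, beta=0.25):
    """z_e: (B, D) encoder outputs; codebook: (K, D) embeddings."""
    # Squared Euclidean distance to every codebook vector, then nearest index.
    d = (z_e.pow(2).sum(1, keepdim=True)
         - 2 * z_e @ codebook.t()
         + codebook.pow(2).sum(1))
    idx = d.argmin(dim=1)
    z_q = codebook[idx]
    # Straight-through estimator: decoder gradients flow back to the encoder.
    z_q_st = z_e + (z_q - z_e).detach()
    # Codebook loss + beta-weighted commitment loss, as in the paper.
    vq_loss = ((z_q - z_e.detach()).pow(2).mean()
               + beta * (z_e - z_q.detach()).pow(2).mean())
    return z_q_st, idx, vq_loss

z_q, idx, loss = quantize(torch.randn(32, 64), torch.randn(512, 64))
```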
Sentence Vae: PyTorch re-implementation of "Generating Sentences from a Continuous Space" by Bowman et al., 2015 (https://arxiv.org/abs/1511.06349); see the KL-annealing sketch below.
Stars: ✭ 462 (+610.77%)
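Sentence VAEs are prone to posterior collapse, which Bowman et al. counter with KL-cost annealing. A minimal sketch of a logistic annealing schedule; the midpoint and slope constants are illustrative assumptions, not the paper's exact values.

```python
import math

def kl_weight(step, midpoint=5000, k=0.0025):
    """Logistic KL-annealing schedule: near 0 early, near 1 past `midpoint`."""
    return 1.0 / (1.0 + math.exp(-k * (step - midpoint)))

# per training step: total_loss = recon_loss + kl_weight(step) * kl_loss
print([round(kl_weight(s), 3) for s in (0, 2500, 5000, 7500, 10000)])
```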
InpaintNet: Code accompanying the ISMIR'19 paper "Learning to Traverse Latent Spaces for Musical Score Inpainting"
Stars: ✭ 48 (-26.15%)
style-vae: Implementation of a VAE with a Style-GAN architecture, achieving state-of-the-art reconstruction
Stars: ✭ 25 (-61.54%)
Dfc Vae: Variational autoencoder trained with a feature perceptual loss
Stars: ✭ 74 (+13.85%)
Vae For Image Generation: Variational autoencoder implemented in Keras for image generation and latent-space visualization on the MNIST and CIFAR-10 datasets
Stars: ✭ 87 (+33.85%)
eccv16 attr2img: Torch implementation of the ECCV'16 paper "Attribute2Image"
Stars: ✭ 93 (+43.08%)
mix-stage: Official repository for the ECCV 2020 paper "Style Transfer for Co-Speech Gesture Animation: A Multi-Speaker Conditional-Mixture Approach" (https://arxiv.org/abs/2007.12553)
Stars: ✭ 22 (-66.15%)
concept-based-xai: Library implementing state-of-the-art concept-based and disentanglement learning methods for explainable AI
Stars: ✭ 41 (-36.92%)
PREREQ-IAAI-19: Inferring Concept Prerequisite Relations from Online Educational Resources (IAAI-19)
Stars: ✭ 22 (-66.15%)
Gumbel-CRF: Implementation of the NeurIPS 2020 paper "Latent Template Induction with Gumbel-CRFs"
Stars: ✭ 51 (-21.54%)
GatedPixelCNNPyTorch: PyTorch implementation of "Conditional Image Generation with PixelCNN Decoders" by van den Oord et al., 2016; a sketch of the masked convolution it relies on follows below.
Stars: ✭ 68 (+4.62%)
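The core of PixelCNN-style models is a causally masked convolution, so each output pixel depends only on pixels above it and to its left. A minimal sketch, assuming the standard type-A/type-B masking from the PixelCNN literature; it is not code from the linked repo.

```python
import torch
import torch.nn as nn

class MaskedConv2d(nn.Conv2d):
    def __init__(self, mask_type, *args, **kwargs):
        super().__init__(*args, **kwargs)
        assert mask_type in ("A", "B")  # type "A" also hides the center pixel
        kH, kW = self.weight.shape[-2:]
        mask = torch.ones_like(self.weight)
        # Zero out the center pixel (type A) or just pixels after it (type B),
        # plus every row below the center.
        mask[..., kH // 2, kW // 2 + (mask_type == "B"):] = 0
        mask[..., kH // 2 + 1:, :] = 0
        self.register_buffer("mask", mask)

    def forward(self, x):
        self.weight.data *= self.mask  # re-apply the causal mask on each call
        return super().forward(x)

layer = MaskedConv2d("A", 1, 16, kernel_size=7, padding=3)
out = layer(torch.randn(1, 1, 28, 28))
```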
deepgtt: DeepGTT: Learning Travel Time Distributions with Deep Generative Model
Stars: ✭ 30 (-53.85%)
feed forward vqgan clip: Feed-forward VQGAN-CLIP model that eliminates the need to optimize VQGAN's latent space separately for each input prompt
Stars: ✭ 135 (+107.69%)
RAVE: Official implementation of the RAVE model, a Realtime Audio Variational autoEncoder
Stars: ✭ 564 (+767.69%)
AC-VRNN: PyTorch code for the CVIU paper "AC-VRNN: Attentive Conditional-VRNN for Multi-Future Trajectory Prediction"
Stars: ✭ 21 (-67.69%)
nvae: An unofficial toy implementation of NVAE, "A Deep Hierarchical Variational Autoencoder"
Stars: ✭ 83 (+27.69%)
caffe-simnets: Implementation of the SimNets architecture in Caffe
Stars: ✭ 13 (-80%)
EVE: Official repository for the paper "Large-scale clinical interpretation of genetic variants using evolutionary data and deep learning", a joint collaboration between the Marks lab and the OATML group.
Stars: ✭ 37 (-43.08%)
vae-concrete: Keras implementation of a variational autoencoder with a Concrete latent distribution; a sampling sketch follows below.
Stars: ✭ 51 (-21.54%)
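The Concrete (Gumbel-Softmax) distribution replaces a hard categorical sample with a temperature-controlled softmax over noisy logits, so the latent stays differentiable. A minimal PyTorch sketch (the repo itself is in Keras); the temperature value is an illustrative assumption.

```python
import torch
import torch.nn.functional as F

def concrete_sample(logits, temperature=0.5):
    """Relaxed one-hot sample: add Gumbel noise, then temperature softmax."""
    gumbel = -torch.log(-torch.log(torch.rand_like(logits) + 1e-20) + 1e-20)
    return F.softmax((logits + gumbel) / temperature, dim=-1)

probs = concrete_sample(torch.zeros(2, 10))  # near-uniform relaxed samples
print(probs.sum(dim=-1))  # each row sums to 1
```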
continuous-time-flow-process: PyTorch code for "Modeling Continuous Stochastic Processes with Dynamic Normalizing Flows" (NeurIPS 2020)
Stars: ✭ 34 (-47.69%)
pyroVED: Invariant representation learning from imaging and spectral data
Stars: ✭ 23 (-64.62%)
trVAE: Conditional out-of-distribution prediction
Stars: ✭ 47 (-27.69%)
worlds: Building Virtual Reality Worlds using Three.js
Stars: ✭ 23 (-64.62%)
vae captioning: Implementation of "Diverse and Accurate Image Description Using a Variational Auto-Encoder with an Additive Gaussian Encoding Space"
Stars: ✭ 58 (-10.77%)
naru: Neural Relation Understanding: neural cardinality estimators for tabular data
Stars: ✭ 76 (+16.92%)
Advanced Models: Provides various well-known neural network models (DCGAN, VAE, ResNet, etc.)
Stars: ✭ 48 (-26.15%)
simplegan: TensorFlow-based framework to ease training of generative models
Stars: ✭ 19 (-70.77%)
latent-pose-reenactment: The authors' implementation of the paper "Neural Head Reenactment with Latent Pose Descriptors" (CVPR 2020)
Stars: ✭ 132 (+103.08%)
soft-intro-vae-pytorch: [CVPR 2021 Oral] Official PyTorch implementation of Soft-IntroVAE from the paper "Soft-IntroVAE: Analyzing and Improving Introspective Variational Autoencoders"
Stars: ✭ 170 (+161.54%)
Fun-with-MNIST: Playing with MNIST using machine learning and generative models
Stars: ✭ 23 (-64.62%)
glico-learning-small-sample: Generative Latent Implicit Conditional Optimization when Learning from Small Sample (ICPR 2020)
Stars: ✭ 20 (-69.23%)
probabilistic nlg: TensorFlow implementation of "Stochastic Wasserstein Autoencoder for Probabilistic Sentence Generation" (NAACL 2019)
Stars: ✭ 28 (-56.92%)
cygen: Code for CyGen, the generative modeling framework proposed in "On the Generative Utility of Cyclic Conditionals" (NeurIPS 2021)
Stars: ✭ 44 (-32.31%)
language-models: Keras implementations of three language models: a character-level RNN, a word-level RNN, and a Sentence VAE (Bowman, Vilnis et al., 2016)
Stars: ✭ 39 (-40%)
MIDI-VAE: No description or website provided.
Stars: ✭ 56 (-13.85%)
MMD-GAN: Improving MMD-GAN training with a repulsive loss function; an MMD sketch follows below.
Stars: ✭ 82 (+26.15%)
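MMD-GANs build their critic around the maximum mean discrepancy between real and generated batches. A hedged sketch of the squared MMD with a single RBF kernel; the bandwidth choice and the biased batch estimate (diagonal terms included) are simplifying assumptions, not the repulsive loss from the paper.

```python
import torch

def mmd2_rbf(x, y, sigma=1.0):
    """Squared MMD between batches x: (n, d) and y: (m, d), RBF kernel."""
    def k(a, b):
        return torch.exp(-torch.cdist(a, b).pow(2) / (2 * sigma ** 2))
    # Biased batch estimate: diagonal terms are included for brevity.
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

print(mmd2_rbf(torch.randn(64, 8), torch.randn(64, 8) + 1.0))
```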
DeepSSM SysID: Official PyTorch implementation of "Deep State Space Models for Nonlinear System Identification" (2020)
Stars: ✭ 62 (-4.62%)
Sgan: Stacked Generative Adversarial Networks
Stars: ✭ 240 (+269.23%)