normalizing-flows: PyTorch implementation of normalizing flow models
Stars: ✭ 271 (+118.55%)
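Several entries in this list implement normalizing flows, which evaluate exact log-likelihood via the change-of-variables formula: the base density of the transformed point minus the log-determinant of the transform's Jacobian. A minimal single-variable sketch of that idea (the function name is illustrative, not code from any listed repo):

```python
import math

def affine_flow_logprob(x, scale, shift):
    """Log-density of x under an affine flow z = (x - shift) / scale
    with a standard-normal base: log p(x) = log N(z; 0, 1) - log|scale|."""
    z = (x - shift) / scale
    base_logprob = -0.5 * (z * z + math.log(2.0 * math.pi))
    # Change of variables: subtract log|dx/dz| = log|scale|
    return base_logprob - math.log(abs(scale))
```

Stretching the distribution (larger scale) lowers the density at the mode, which the log-determinant term accounts for.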
Vae Cvae Mnist: Variational Autoencoder and Conditional Variational Autoencoder on MNIST in PyTorch
Stars: ✭ 229 (+84.68%)
continuous Bernoulli: C programs for the simulator, transformation, and test statistic of the continuous Bernoulli distribution; also covers the continuous Binomial and continuous Trinomial distributions.
Stars: ✭ 22 (-82.26%)
AC-VRNN: PyTorch code for the CVIU paper "AC-VRNN: Attentive Conditional-VRNN for Multi-Future Trajectory Prediction"
Stars: ✭ 21 (-83.06%)
benchmark VAE: Unifying Variational Autoencoder (VAE) implementations in PyTorch (NeurIPS 2022)
Stars: ✭ 1,211 (+876.61%)
Pytorch Vae: A CNN Variational Autoencoder (CNN-VAE) implemented in PyTorch
Stars: ✭ 181 (+45.97%)
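Most VAE implementations in this list optimize the same ELBO: a reconstruction term plus a KL term that, for a diagonal-Gaussian posterior and standard-normal prior, has a closed form. A dependency-free sketch of that KL term (names are illustrative, not taken from any listed repo):

```python
import math

def kl_diag_gaussian(mu, log_var):
    """KL( N(mu, diag(exp(log_var))) || N(0, I) ), summed over latent dims:
    0.5 * sum(exp(log_var) + mu^2 - 1 - log_var)."""
    return sum(0.5 * (math.exp(lv) + m * m - 1.0 - lv)
               for m, lv in zip(mu, log_var))
```

The KL vanishes exactly when the posterior matches the prior (mu = 0, log_var = 0), which is why a collapsed posterior contributes nothing to this term.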
lego-face-VAE: Variational autoencoder for Lego minifig faces
Stars: ✭ 15 (-87.9%)
Neuraldialog Larl: PyTorch implementation of latent space reinforcement learning for E2E dialog, published at NAACL 2019. Released by Tiancheng Zhao (Tony) from the Dialog Research Center, LTI, CMU
Stars: ✭ 127 (+2.42%)
multimodal-vae-public: A PyTorch implementation of "Multimodal Generative Models for Scalable Weakly-Supervised Learning" (https://arxiv.org/abs/1802.05335)
Stars: ✭ 98 (-20.97%)
cocoon-demo: Cocoon, a flow-based workflow automation, data mining, and visual analytics tool.
Stars: ✭ 19 (-84.68%)
svae cf: [WSDM '19] Sequential Variational Autoencoders for Collaborative Filtering
Stars: ✭ 38 (-69.35%)
STEP: Spatial Temporal Graph Convolutional Networks for Emotion Perception from Gaits
Stars: ✭ 39 (-68.55%)
MIDI-VAE: No description or website provided.
Stars: ✭ 56 (-54.84%)
CHyVAE: Code for the paper "Hyperprior Induced Unsupervised Disentanglement of Latent Representations" (AAAI 2019)
Stars: ✭ 18 (-85.48%)
Cada Vae Pytorch: Official implementation of the paper "Generalized Zero- and Few-Shot Learning via Aligned Variational Autoencoders" (CVPR 2019)
Stars: ✭ 198 (+59.68%)
playing with vae: Comparing FC VAE / FCN VAE / PCA / UMAP on MNIST / FMNIST
Stars: ✭ 53 (-57.26%)
Vde: Variational Autoencoder for Dimensionality Reduction of Time-Series
Stars: ✭ 148 (+19.35%)
haskell-vae: Learning about Haskell with Variational Autoencoders
Stars: ✭ 18 (-85.48%)
Bnaf: PyTorch implementation of Block Neural Autoregressive Flow
Stars: ✭ 138 (+11.29%)
eccv16 attr2img: Torch implementation of the ECCV '16 paper "Attribute2Image"
Stars: ✭ 93 (-25%)
Tybalt: Training and evaluating a variational autoencoder for pan-cancer gene expression data
Stars: ✭ 126 (+1.61%)
vaegan: An implementation of VAEGAN (variational autoencoder + generative adversarial network).
Stars: ✭ 88 (-29.03%)
Variational-NMT: Variational Neural Machine Translation System
Stars: ✭ 37 (-70.16%)
deep-blueberry: If you've always wanted to learn about deep learning but don't know where to start, you might have stumbled upon the right place!
Stars: ✭ 17 (-86.29%)
vae-torch: Variational autoencoder for anomaly detection (in PyTorch).
Stars: ✭ 38 (-69.35%)
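VAE-based anomaly detectors like the ones in this list typically score a sample by its reconstruction error: inputs the model reconstructs poorly are flagged. A minimal, model-agnostic sketch of that scoring step (function names are hypothetical):

```python
def anomaly_scores(inputs, reconstructions):
    # Per-sample squared reconstruction error; a poorly reconstructed
    # input gets a high score and is treated as a likely anomaly.
    return [sum((x - r) ** 2 for x, r in zip(xi, ri))
            for xi, ri in zip(inputs, reconstructions)]

def flag_anomalies(scores, threshold):
    # The threshold is usually chosen from the score distribution on normal data.
    return [s > threshold for s in scores]
```

In practice the reconstruction comes from a trained VAE's decoder, and the squared error may be replaced by the model's negative log-likelihood.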
vae-concrete: Keras implementation of a Variational Auto Encoder with a Concrete Latent Distribution
Stars: ✭ 51 (-58.87%)
tt-vae-gan: Timbre transfer with variational autoencoding and cycle-consistent adversarial networks. Able to transfer the timbre of an audio source to that of another.
Stars: ✭ 37 (-70.16%)
SIVI: Uses neural networks to build expressive hierarchical distributions; a variational method for accurately estimating posterior uncertainty; a fast and general method for Bayesian inference. (ICML 2018)
Stars: ✭ 49 (-60.48%)
adVAE: Implementation of "Self-Adversarial Variational Autoencoder with Gaussian Anomaly Prior Distribution for Anomaly Detection"
Stars: ✭ 17 (-86.29%)
VAE-Latent-Space-Explorer: Interactive exploration of an MNIST variational autoencoder latent space with React and TensorFlow.js.
Stars: ✭ 30 (-75.81%)
CIKM18-LCVA: Code for the CIKM '18 paper "Linked Causal Variational Autoencoder for Inferring Paired Spillover Effects"
Stars: ✭ 13 (-89.52%)
soft-intro-vae-pytorch: [CVPR 2021 Oral] Official PyTorch implementation of Soft-IntroVAE from the paper "Soft-IntroVAE: Analyzing and Improving Introspective Variational Autoencoders"
Stars: ✭ 170 (+37.1%)
linguistic-style-transfer-pytorch: Implementation of "Disentangled Representation Learning for Non-Parallel Text Style Transfer" (ACL 2019) in PyTorch
Stars: ✭ 55 (-55.65%)
calc2.0: Combining Appearance, Semantic and Geometric Information for Robust and Efficient Visual Loop Closure
Stars: ✭ 70 (-43.55%)
CVAE Dial: CVAE_XGate model from the paper "Better Conversations by Modeling, Filtering, and Optimizing for Coherence and Diversity" (Xu, Dusek, Konstas, Rieser)
Stars: ✭ 16 (-87.1%)
S Vae Tf: TensorFlow implementation of Hyperspherical Variational Auto-Encoders
Stars: ✭ 198 (+59.68%)
Vae Celeba: Variational auto-encoder trained on CelebA. All rights reserved.
Stars: ✭ 160 (+29.03%)
Synthesize3dviadepthorsil: [CVPR 2017] Generation and reconstruction of 3D shapes via modeling multi-view depth maps or silhouettes
Stars: ✭ 141 (+13.71%)
AutoEncoders: Variational, denoising, and other autoencoder variants implemented in Keras
Stars: ✭ 14 (-88.71%)
Tensorflow Mnist Cvae: TensorFlow implementation of a conditional variational auto-encoder for MNIST
Stars: ✭ 139 (+12.1%)
OCDVAEContinualLearning: Open-source code for our paper "Unified Probabilistic Deep Continual Learning through Generative Replay and Open Set Recognition"
Stars: ✭ 56 (-54.84%)
Deep Learning With Python: Example projects I completed to understand deep learning techniques with TensorFlow. Please note that I no longer maintain this repository.
Stars: ✭ 134 (+8.06%)
vae-pytorch: AE and VAE Playground in PyTorch
Stars: ✭ 53 (-57.26%)
pyroVED: Invariant representation learning from imaging and spectral data
Stars: ✭ 23 (-81.45%)
classifying-vae-lstm: Music generation with a classifying variational autoencoder (VAE) and LSTM
Stars: ✭ 27 (-78.23%)
srVAE: VAE with RealNVP prior and Super-Resolution VAE in PyTorch. Code release for https://arxiv.org/abs/2006.05218.
Stars: ✭ 56 (-54.84%)
lagvae: Lagrangian VAE
Stars: ✭ 27 (-78.23%)
VAE-Gumbel-Softmax: A TensorFlow implementation of a variational autoencoder using the Gumbel-Softmax reparametrization trick (ICLR 2017; tested on r1.5, CPU and GPU).
Stars: ✭ 66 (-46.77%)
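The Gumbel-Softmax trick referenced above makes sampling from a categorical latent differentiable by perturbing logits with Gumbel noise and relaxing the argmax into a temperature-controlled softmax. A dependency-free sketch of drawing one relaxed sample (the function name is illustrative):

```python
import math
import random

def gumbel_softmax_sample(logits, tau=1.0):
    """Draw one relaxed categorical sample: add Gumbel(0, 1) noise to each
    logit, then apply a softmax at temperature tau (low tau -> near one-hot)."""
    eps = 1e-20  # guards against log(0)
    gumbel = [-math.log(-math.log(random.random() + eps) + eps) for _ in logits]
    scaled = [(l + g) / tau for l, g in zip(logits, gumbel)]
    peak = max(scaled)
    exps = [math.exp(s - peak) for s in scaled]  # numerically stable softmax
    total = sum(exps)
    return [e / total for e in exps]
```

The output is a probability vector that approaches a one-hot sample as tau goes to zero, which is what lets the gradient flow through the sampling step.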
Bagel: IPCCC 2018: Robust and Unsupervised KPI Anomaly Detection Based on Conditional Variational Autoencoder
Stars: ✭ 45 (-63.71%)