srVAE: VAE with RealNVP prior and Super-Resolution VAE in PyTorch. Code release for https://arxiv.org/abs/2006.05218.
Stars: ✭ 56
Variational Autoencoder: Variational autoencoder implemented in TensorFlow and PyTorch (including inverse autoregressive flow).
Stars: ✭ 807
Disentangling Vae: Experiments for understanding disentanglement in VAE latent representations.
Stars: ✭ 398
Tensorflow Mnist Cvae: TensorFlow implementation of a conditional variational autoencoder for MNIST.
Stars: ✭ 139
Generative models tutorial with demo: Generative models tutorial with demos: Bayesian classifier sampling, variational autoencoders (VAE), generative adversarial networks (GANs), popular GAN architectures, autoregressive models, important generative-model papers, courses, etc.
Stars: ✭ 276
Vae protein function: Protein function prediction using a variational autoencoder.
Stars: ✭ 57
Vae For Image Generation: Variational autoencoder generative model implemented in Keras for image generation and latent space visualization on the MNIST and CIFAR10 datasets.
Stars: ✭ 87
InpaintNet: Code accompanying the ISMIR'19 paper "Learning to Traverse Latent Spaces for Musical Score Inpainting".
Stars: ✭ 48
normalizing-flows: PyTorch implementation of normalizing flow models.
Stars: ✭ 271
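Normalizing flows such as the RealNVP prior mentioned in the srVAE entry above are built from invertible transforms. As a minimal stdlib-only sketch (illustrative, not code from any repository listed here), an affine coupling step leaves one part of the input unchanged and affinely transforms the rest using parameters computed from the unchanged part, which makes the inverse exact:

```python
import math

def coupling_forward(x, scale_shift):
    """One RealNVP-style affine coupling step on a 2-element input.

    x1 passes through unchanged; x2 is transformed with a (log-scale,
    shift) pair that `scale_shift` computes from x1 alone.
    """
    x1, x2 = x
    log_s, t = scale_shift(x1)
    y2 = x2 * math.exp(log_s) + t
    return (x1, y2)

def coupling_inverse(y, scale_shift):
    """Exact inverse of coupling_forward: recompute (log-scale, shift)
    from the unchanged half and undo the affine map."""
    y1, y2 = y
    log_s, t = scale_shift(y1)
    x2 = (y2 - t) * math.exp(-log_s)
    return (y1, x2)
```

Because `scale_shift` only ever sees the unchanged half, it can be an arbitrarily complex network without breaking invertibility; here a toy closure stands in for it.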
VQ-APC: Vector Quantized Autoregressive Predictive Coding (VQ-APC).
Stars: ✭ 34
benchmark VAE: Unifying variational autoencoder (VAE) implementations in PyTorch (NeurIPS 2022).
Stars: ✭ 1,211
FUSION: PyTorch code for the NeurIPSW 2020 (4th Workshop on Meta-Learning) paper "Few-Shot Unsupervised Continual Learning through Meta-Examples".
Stars: ✭ 18
vae-concrete: Keras implementation of a variational autoencoder with a concrete latent distribution.
Stars: ✭ 51
Neuraldialog Cvae: TensorFlow implementation of the knowledge-guided CVAE for dialog generation (ACL 2017), released by Tiancheng Zhao (Tony) from the Dialog Research Center, LTI, CMU.
Stars: ✭ 279
M-NMF: An implementation of "Community Preserving Network Embedding" (AAAI 2017).
Stars: ✭ 119
AI Learning Hub: AI learning hub for machine learning, deep learning, computer vision and statistics.
Stars: ✭ 53
amr: Official adversarial mixup resynthesis repository.
Stars: ✭ 31
vqvae-2: PyTorch implementation of VQ-VAE-2 from "Generating Diverse High-Fidelity Images with VQ-VAE-2".
Stars: ✭ 65
adaptive-f-divergence: A TensorFlow implementation of the NIPS 2018 paper "Variational Inference with Tail-adaptive f-Divergence".
Stars: ✭ 20
SimCLR: PyTorch implementation of "A Simple Framework for Contrastive Learning of Visual Representations".
Stars: ✭ 65
SIVI: Uses a neural network to build an expressive hierarchical distribution; a variational method for accurately estimating posterior uncertainty; a fast and general method for Bayesian inference (ICML 2018).
Stars: ✭ 49
Revisiting-Contrastive-SSL: Revisiting contrastive methods for unsupervised learning of visual representations (NeurIPS 2021).
Stars: ✭ 81
Simclr: PyTorch implementation of SimCLR: "A Simple Framework for Contrastive Learning of Visual Representations" by T. Chen et al.
Stars: ✭ 293
naru: Neural Relation Understanding: neural cardinality estimators for tabular data.
Stars: ✭ 76
AC-VRNN: PyTorch code for the CVIU paper "AC-VRNN: Attentive Conditional-VRNN for Multi-Future Trajectory Prediction".
Stars: ✭ 21
char-VAE: Inspired by the neural style algorithm in computer vision, proposes a high-level language model aimed at adapting linguistic style.
Stars: ✭ 18
Beta Vae: PyTorch implementation of β-VAE.
Stars: ✭ 326
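The β-VAE listed above reweights the KL term of the standard VAE objective by a factor β > 1 to encourage disentangled latents. A minimal stdlib-only sketch of that objective, assuming a diagonal Gaussian posterior and a standard normal prior (the function names are illustrative, not taken from the repository):

```python
import math

def diagonal_gaussian_kl(mu, logvar):
    """KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over dimensions.

    Closed form per dimension: 0.5 * (sigma^2 + mu^2 - 1 - log sigma^2).
    """
    return sum(
        0.5 * (math.exp(lv) + m * m - 1.0 - lv)
        for m, lv in zip(mu, logvar)
    )

def beta_vae_loss(recon_loss, mu, logvar, beta=4.0):
    """β-VAE objective: reconstruction term plus β-weighted KL term."""
    return recon_loss + beta * diagonal_gaussian_kl(mu, logvar)
```

With β = 1 this reduces to the ordinary VAE (negative ELBO); larger β trades reconstruction quality for a more factorized latent code.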
pyroVED: Invariant representation learning from imaging and spectral data.
Stars: ✭ 23
Bagel: Robust and unsupervised KPI anomaly detection based on a conditional variational autoencoder (IPCCC 2018).
Stars: ✭ 45
eccv16 attr2img: Torch implementation of the ECCV'16 paper "Attribute2Image".
Stars: ✭ 93
VAENAR-TTS: PyTorch implementation of VAENAR-TTS: Variational Auto-Encoder based Non-AutoRegressive Text-to-Speech Synthesis.
Stars: ✭ 66
haskell-vae: Learning about Haskell with variational autoencoders.
Stars: ✭ 18
ShapeFormer: Official repository for the ShapeFormer project.
Stars: ✭ 97
soft-intro-vae-pytorch: Official PyTorch implementation of Soft-IntroVAE from the paper "Soft-IntroVAE: Analyzing and Improving Introspective Variational Autoencoders" (CVPR 2021 oral).
Stars: ✭ 170
proto: Proto-RL: reinforcement learning with prototypical representations.
Stars: ✭ 67
vae-torch: Variational autoencoder for anomaly detection in PyTorch.
Stars: ✭ 38
rl singing voice: Unsupervised representation learning for singing voice separation.
Stars: ✭ 18
S Vae Pytorch: PyTorch implementation of hyperspherical variational autoencoders.
Stars: ✭ 255
VAE-Gumbel-Softmax: A TensorFlow implementation (tested on r1.5, CPU and GPU) of a variational autoencoder using the Gumbel-Softmax reparametrization trick (ICLR 2017).
Stars: ✭ 66
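The Gumbel-Softmax trick used by the entry above draws a differentiable, relaxed sample from a categorical distribution by perturbing the logits with Gumbel noise and applying a temperature-controlled softmax. A stdlib-only sketch of the sampling step (illustrative, not the repository's TensorFlow code):

```python
import math
import random

def gumbel_softmax_sample(logits, temperature=1.0):
    """Draw one relaxed categorical sample via the Gumbel-Softmax trick.

    Each logit is perturbed with Gumbel(0, 1) noise, divided by the
    temperature, and pushed through a softmax; as the temperature
    approaches 0 the output approaches a one-hot vector.
    """
    # Gumbel(0, 1) noise: -log(-log(U)) with U ~ Uniform(0, 1);
    # the small epsilons guard against log(0).
    perturbed = [
        (l - math.log(-math.log(random.random() + 1e-20) + 1e-20)) / temperature
        for l in logits
    ]
    # Numerically stable softmax over the perturbed logits
    m = max(perturbed)
    exps = [math.exp(p - m) for p in perturbed]
    total = sum(exps)
    return [e / total for e in exps]
```

In a VAE this replaces non-differentiable categorical sampling in the latent layer, so gradients can flow through the relaxed sample during training.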
continuous Bernoulli: C programs for the simulator, transformation, and test statistic of the continuous Bernoulli distribution; also covers the continuous binomial and continuous trinomial distributions.
Stars: ✭ 22
sqair: Implementation of Sequential Attend, Infer, Repeat (SQAIR).
Stars: ✭ 96
generative deep learning: Generative Deep Learning sessions led by Anugraha Sinha (Machine Learning Tokyo).
Stars: ✭ 24
CIKM18-LCVA: Code for the CIKM'18 paper "Linked Causal Variational Autoencoder for Inferring Paired Spillover Effects".
Stars: ✭ 13
lagvae: Lagrangian VAE.
Stars: ✭ 27
CHyVAE: Code for the paper "Hyperprior Induced Unsupervised Disentanglement of Latent Representations" (AAAI 2019).
Stars: ✭ 18
DiffuseVAE: A combination of VAEs and diffusion models for efficient, controllable and high-fidelity generation from low-dimensional latents.
Stars: ✭ 81
RG-Flow: Project page for the paper "RG-Flow: a hierarchical and explainable flow model based on renormalization group and sparse prior" (https://arxiv.org/abs/2010.00029).
Stars: ✭ 58
disent: 🧶 Modular VAE disentanglement framework for Python built with PyTorch Lightning, including metrics and datasets, with strongly supervised, weakly supervised and unsupervised methods; easily configured and run with Hydra config; inspired by disentanglement_lib.
Stars: ✭ 41