Vae Seq: Variational Auto-Encoders in a Sequential Setting.
Stars: ✭ 145 (+150%)
Numpy Ml: Machine learning, in numpy
Stars: ✭ 11,100 (+19037.93%)
Tf Vqvae: Tensorflow Implementation of the paper [Neural Discrete Representation Learning](https://arxiv.org/abs/1711.00937) (VQ-VAE).
Stars: ✭ 226 (+289.66%)
Vae Lagging Encoder: PyTorch implementation of "Lagging Inference Networks and Posterior Collapse in Variational Autoencoders" (ICLR 2019)
Stars: ✭ 153 (+163.79%)
Dfc Vae: Variational Autoencoder trained with a feature perceptual loss
Stars: ✭ 74 (+27.59%)
Srl Zoo: State Representation Learning (SRL) zoo with PyTorch - Part of S-RL Toolbox
Stars: ✭ 125 (+115.52%)
InpaintNet: Code accompanying the ISMIR'19 paper "Learning to Traverse Latent Spaces for Musical Score Inpainting"
Stars: ✭ 48 (-17.24%)
S Vae Tf: Tensorflow implementation of Hyperspherical Variational Auto-Encoders
Stars: ✭ 198 (+241.38%)
Factorvae: Pytorch implementation of FactorVAE, proposed in [Disentangling by Factorising](http://arxiv.org/abs/1802.05983)
Stars: ✭ 176 (+203.45%)
MIDI-VAE: No description or website provided.
Stars: ✭ 56 (-3.45%)
Pytorch Vae: A Collection of Variational Autoencoders (VAE) in PyTorch.
Stars: ✭ 2,704 (+4562.07%)
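The collections above all optimize the same VAE objective. As a quick illustration (a minimal numpy sketch, not taken from any of the listed codebases), the regularization term of the ELBO has a closed form when both the approximate posterior and the prior are Gaussian:

```python
import numpy as np

def gaussian_kl(mu, log_var):
    """Closed-form KL(N(mu, diag(sigma^2)) || N(0, I)) per sample,
    the regularizer in a standard VAE's evidence lower bound."""
    return -0.5 * np.sum(1 + log_var - mu**2 - np.exp(log_var), axis=-1)

# A latent code whose posterior matches the prior incurs zero KL cost.
assert gaussian_kl(np.zeros(8), np.zeros(8)) == 0.0
```

The full training loss adds a reconstruction term (e.g. binary cross-entropy on pixels) to this KL penalty.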
deepgtt: DeepGTT: Learning Travel Time Distributions with Deep Generative Model
Stars: ✭ 30 (-48.28%)
Tensorflow Mnist Cvae: Tensorflow implementation of a conditional variational auto-encoder for MNIST
Stars: ✭ 139 (+139.66%)
Vae Cvae Mnist: Variational Autoencoder and Conditional Variational Autoencoder on MNIST in PyTorch
Stars: ✭ 229 (+294.83%)
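The conditional variants in the MNIST repos above differ from a plain VAE mainly in how the label enters the model. A common scheme (sketched here in numpy with a hypothetical `condition` helper, not code from either repo) appends a one-hot class vector to the encoder and decoder inputs:

```python
import numpy as np

def condition(x, label, num_classes=10):
    """Hypothetical CVAE conditioning step: append a one-hot class
    vector to an input so encoder/decoder see the label."""
    one_hot = np.eye(num_classes)[label]
    return np.concatenate([x, one_hot], axis=-1)

x = np.zeros(784)  # a flattened 28x28 MNIST image
assert condition(x, 3).shape == (794,)
```

At sampling time the same trick lets you pick which digit the decoder generates, by fixing the one-hot vector and varying only the latent code.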
O Gan: O-GAN: Extremely Concise Approach for Auto-Encoding Generative Adversarial Networks
Stars: ✭ 117 (+101.72%)
tensorflow-mnist-AAE: Tensorflow implementation of an adversarial auto-encoder for MNIST
Stars: ✭ 86 (+48.28%)
Cross Lingual Voice Cloning: Tacotron 2 PyTorch implementation with faster-than-realtime inference, modified to enable cross-lingual voice cloning.
Stars: ✭ 106 (+82.76%)
Pytorch Vq Vae: PyTorch implementation of VQ-VAE by Aäron van den Oord et al.
Stars: ✭ 204 (+251.72%)
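Several entries here (Tf Vqvae, Pytorch Vq Vae, Vq Vae) implement the same core operation: snapping each continuous encoder output to its nearest codebook embedding. A minimal numpy sketch of that lookup (illustrative only, not code from any of these repos):

```python
import numpy as np

def quantize(z, codebook):
    """VQ-VAE quantization step: replace each continuous vector z[i]
    with its nearest codebook embedding (squared Euclidean distance)."""
    d = np.sum((z[:, None, :] - codebook[None, :, :]) ** 2, axis=-1)
    idx = np.argmin(d, axis=1)   # index of nearest code per vector
    return codebook[idx], idx

codebook = np.array([[0.0, 0.0], [1.0, 1.0]])
zq, idx = quantize(np.array([[0.1, -0.2], [0.9, 1.3]]), codebook)
```

Since argmin is non-differentiable, the full implementations train through this step with a straight-through gradient estimator plus codebook and commitment losses, as described in the VQ-VAE paper linked above.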
Vae For Image Generation: Variational Autoencoder generative model in Keras for image generation, with latent-space visualization on the MNIST and CIFAR10 datasets
Stars: ✭ 87 (+50%)
benchmark VAE: Unifying Variational Autoencoder (VAE) implementations in Pytorch (NeurIPS 2022)
Stars: ✭ 1,211 (+1987.93%)
Elliot: Comprehensive and Rigorous Framework for Reproducible Recommender Systems Evaluation
Stars: ✭ 49 (-15.52%)
Disentangled vae: Replicating "Understanding disentangling in β-VAE"
Stars: ✭ 188 (+224.14%)
Optimus: The first large-scale pre-trained VAE language model
Stars: ✭ 180 (+210.34%)
language-models: Keras implementations of three language models: character-level RNN, word-level RNN, and Sentence VAE (Bowman, Vilnis et al., 2016).
Stars: ✭ 39 (-32.76%)
DeepSSM SysID: Official PyTorch implementation of "Deep State Space Models for Nonlinear System Identification" (2020).
Stars: ✭ 62 (+6.9%)
Beat Blender: Blend beats using machine learning to create music in a fun new way.
Stars: ✭ 147 (+153.45%)
Bagel: Robust and Unsupervised KPI Anomaly Detection Based on Conditional Variational Autoencoder (IPCCC 2018)
Stars: ✭ 45 (-22.41%)
Vmf vae nlp: Code for the EMNLP'18 paper "Spherical Latent Spaces for Stable Variational Autoencoders"
Stars: ✭ 140 (+141.38%)
Deep Learning With Python: Example projects I completed to understand deep learning techniques with Tensorflow. Please note that I no longer maintain this repository.
Stars: ✭ 134 (+131.03%)
vae-concrete: Keras implementation of a Variational Auto-Encoder with a Concrete latent distribution
Stars: ✭ 51 (-12.07%)
Vae Tensorflow: A Tensorflow implementation of a Variational Autoencoder for the deep learning course at the University of Southern California (USC).
Stars: ✭ 117 (+101.72%)
Pytorch cpp: Deep Learning sample programs using PyTorch in C++
Stars: ✭ 114 (+96.55%)
Mojitalk: Code for "MojiTalk: Generating Emotional Responses at Scale" (https://arxiv.org/abs/1711.04090)
Stars: ✭ 107 (+84.48%)
Vq Vae: Minimalist implementation of VQ-VAE in Pytorch
Stars: ✭ 224 (+286.21%)
Smrt: Handle class imbalance intelligently by using variational auto-encoders to generate synthetic observations of your minority class.
Stars: ✭ 102 (+75.86%)
Cada Vae Pytorch: Official implementation of the paper "Generalized Zero- and Few-Shot Learning via Aligned Variational Autoencoders" (CVPR 2019)
Stars: ✭ 198 (+241.38%)
Vae protein function: Protein function prediction using a variational autoencoder
Stars: ✭ 57 (-1.72%)
soft-intro-vae-pytorch: [CVPR 2021 Oral] Official PyTorch implementation of Soft-IntroVAE from the paper "Soft-IntroVAE: Analyzing and Improving Introspective Variational Autoencoders"
Stars: ✭ 170 (+193.1%)
Adversarial video summary: Unofficial PyTorch Implementation of SUM-GAN from "Unsupervised Video Summarization with Adversarial LSTM Networks" (CVPR 2017)
Stars: ✭ 187 (+222.41%)
molecular-VAE: Implementation of the paper "Automatic chemical design using a data-driven continuous representation of molecules"
Stars: ✭ 36 (-37.93%)
concept-based-xai: Library implementing state-of-the-art Concept-based and Disentanglement Learning methods for Explainable AI
Stars: ✭ 41 (-29.31%)
Fun-with-MNIST: Experiments on MNIST with machine learning and generative models.
Stars: ✭ 23 (-60.34%)
Pytorch Vae: A CNN Variational Autoencoder (CNN-VAE) implemented in PyTorch
Stars: ✭ 181 (+212.07%)