reproducible-continual-learning: Continual learning baselines and strategies from popular papers, using Avalanche. We include EWC, SI, GEM, A-GEM, LwF, iCaRL, GDumb, and other strategies.
Stars: ✭ 118 (+293.33%)
OCDVAEContinualLearning: Open-source code for our paper "Unified Probabilistic Deep Continual Learning through Generative Replay and Open Set Recognition"
Stars: ✭ 56 (+86.67%)
playing with vae: Comparing FC VAE / FCN VAE / PCA / UMAP on MNIST / FMNIST
Stars: ✭ 53 (+76.67%)
MetaLifelongLanguage: Code for the paper "Meta-Learning with Sparse Experience Replay for Lifelong Language Learning"
Stars: ✭ 21 (-30%)
lagvae: Lagrangian VAE
Stars: ✭ 27 (-10%)
ADER: (RecSys 2020) Adaptively Distilled Exemplar Replay towards Continual Learning for Session-based Recommendation [Best Short Paper]
Stars: ✭ 28 (-6.67%)
django-music-publisher: Software for managing music metadata, registration/licensing of musical works, and royalty processing.
Stars: ✭ 46 (+53.33%)
CVAE Dial: CVAE_XGate model from the paper "Xu, Dusek, Konstas, Rieser. Better Conversations by Modeling, Filtering, and Optimizing for Coherence and Diversity"
Stars: ✭ 16 (-46.67%)
vae-pytorch: AE and VAE playground in PyTorch
Stars: ✭ 53 (+76.67%)
lego-face-VAE: Variational autoencoder for Lego minifig faces
Stars: ✭ 15 (-50%)
AutoEncoders: Variational autoencoder, denoising autoencoder, and other autoencoder variants implemented in Keras
Stars: ✭ 14 (-53.33%)
multimodal-vae-public: A PyTorch implementation of "Multimodal Generative Models for Scalable Weakly-Supervised Learning" (https://arxiv.org/abs/1802.05335)
Stars: ✭ 98 (+226.67%)
Adam-NSCL: PyTorch implementation of our Adam-NSCL algorithm from our CVPR 2021 (oral) paper "Training Networks in Null Space for Continual Learning"
Stars: ✭ 34 (+13.33%)
calc2.0: CALC2.0, Combining Appearance, Semantic, and Geometric Information for Robust and Efficient Visual Loop Closure
Stars: ✭ 70 (+133.33%)
FUSION: PyTorch code for the NeurIPS 2020 workshop paper (4th Workshop on Meta-Learning) "Few-Shot Unsupervised Continual Learning through Meta-Examples"
Stars: ✭ 18 (-40%)
CIKM18-LCVA: Code for the CIKM'18 paper "Linked Causal Variational Autoencoder for Inferring Paired Spillover Effects"
Stars: ✭ 13 (-56.67%)
STEP: Spatial Temporal Graph Convolutional Networks for Emotion Perception from Gaits
Stars: ✭ 39 (+30%)
normalizing-flows: PyTorch implementation of normalizing flow models
Stars: ✭ 271 (+803.33%)
svae cf: [WSDM '19] Sequential Variational Autoencoders for Collaborative Filtering
Stars: ✭ 38 (+26.67%)
CVPR2021 PLOP: Official code for the CVPR 2021 paper "PLOP: Learning without Forgetting for Continual Semantic Segmentation"
Stars: ✭ 102 (+240%)
haskell-vae: Learning about Haskell with variational autoencoders
Stars: ✭ 18 (-40%)
BLIP: Official implementation of the CVPR 2021 paper "Continual Learning via Bit-Level Information Preserving"
Stars: ✭ 33 (+10%)
CHyVAE: Code for our paper "Hyperprior Induced Unsupervised Disentanglement of Latent Representations" (AAAI 2019)
Stars: ✭ 18 (-40%)
VAE-Gumbel-Softmax: An implementation of a variational autoencoder using the Gumbel-Softmax reparametrization trick (ICLR 2017) in TensorFlow (tested on r1.5, CPU and GPU).
Stars: ✭ 66 (+120%)
linguistic-style-transfer-pytorch: PyTorch implementation of "Disentangled Representation Learning for Non-Parallel Text Style Transfer" (ACL 2019)
Stars: ✭ 55 (+83.33%)
benchmark VAE: Unifying variational autoencoder (VAE) implementations in PyTorch (NeurIPS 2022)
Stars: ✭ 1,211 (+3936.67%)
vaegan: An implementation of VAEGAN (variational autoencoder + generative adversarial network).
Stars: ✭ 88 (+193.33%)
pyroVED: Invariant representation learning from imaging and spectral data
Stars: ✭ 23 (-23.33%)
CPG: Steven C. Y. Hung, Cheng-Hao Tu, Cheng-En Wu, Chien-Hung Chen, Yi-Ming Chan, and Chu-Song Chen, "Compacting, Picking and Growing for Unforgetting Continual Learning," NeurIPS 2019
Stars: ✭ 91 (+203.33%)
Bagel: Robust and Unsupervised KPI Anomaly Detection Based on Conditional Variational Autoencoder (IPCCC 2018)
Stars: ✭ 45 (+50%)
vae-torch: Variational autoencoder for anomaly detection (in PyTorch).
Stars: ✭ 38 (+26.67%)
eccv16 attr2img: Torch implementation of the ECCV'16 paper "Attribute2Image"
Stars: ✭ 93 (+210%)
SIGIR2021 Conure: One Person, One Model, One World: Learning Continual User Representation without Forgetting
Stars: ✭ 23 (-23.33%)
soft-intro-vae-pytorch: [CVPR 2021 Oral] Official PyTorch implementation of Soft-IntroVAE from the paper "Soft-IntroVAE: Analyzing and Improving Introspective Variational Autoencoders"
Stars: ✭ 170 (+466.67%)
AC-VRNN: PyTorch code for the CVIU paper "AC-VRNN: Attentive Conditional-VRNN for Multi-Future Trajectory Prediction"
Stars: ✭ 21 (-30%)
Variational-NMT: Variational neural machine translation system
Stars: ✭ 37 (+23.33%)
deep-blueberry: If you've always wanted to learn about deep learning but don't know where to start, you might have stumbled upon the right place!
Stars: ✭ 17 (-43.33%)
adVAE: Implementation of "Self-Adversarial Variational Autoencoder with Gaussian Anomaly Prior Distribution for Anomaly Detection"
Stars: ✭ 17 (-43.33%)
vae-concrete: Keras implementation of a variational autoencoder with a Concrete latent distribution
Stars: ✭ 51 (+70%)
srVAE: VAE with RealNVP prior and Super-Resolution VAE in PyTorch. Code release for https://arxiv.org/abs/2006.05218.
Stars: ✭ 56 (+86.67%)
SIVI: A fast and general variational method for Bayesian inference that uses a neural network to build an expressive hierarchical distribution and accurately estimate posterior uncertainty (ICML 2018)
Stars: ✭ 49 (+63.33%)
CVPR21 PASS: PyTorch implementation of our CVPR 2021 (oral) paper "Prototype Augmentation and Self-Supervision for Incremental Learning"
Stars: ✭ 55 (+83.33%)
VAE-Latent-Space-Explorer: Interactive exploration of an MNIST variational autoencoder's latent space with React and TensorFlow.js.
Stars: ✭ 30 (+0%)
continuous Bernoulli: C programs implementing the simulator, transformation, and test statistic of the continuous Bernoulli distribution; the accompanying book also covers the continuous binomial and continuous trinomial distributions.
Stars: ✭ 22 (-26.67%)
intro dgm: An Introduction to Deep Generative Modeling: Examples
Stars: ✭ 124 (+313.33%)
classifying-vae-lstm: Music generation with a classifying variational autoencoder (VAE) and LSTM
Stars: ✭ 27 (-10%)
tt-vae-gan: Timbre transfer with variational autoencoding and cycle-consistent adversarial networks; transfers the timbre of one audio source to that of another.
Stars: ✭ 37 (+23.33%)
GPM: Official code repository for "Gradient Projection Memory for Continual Learning"
Stars: ✭ 50 (+66.67%)