
BorealisAI / continuous-time-flow-process

Licence: other
PyTorch code of "Modeling Continuous Stochastic Processes with Dynamic Normalizing Flows" (NeurIPS 2020)

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives to or similar to continuous-time-flow-process

eccv16_attr2img
Torch Implementation of ECCV'16 paper: Attribute2Image
Stars: ✭ 93 (+173.53%)
Mutual labels:  generative-model
GatedPixelCNNPyTorch
PyTorch implementation of "Conditional Image Generation with PixelCNN Decoders" by van den Oord et al. 2016
Stars: ✭ 68 (+100%)
Mutual labels:  generative-model
MongeAmpereFlow
Continuous-time gradient flow for generative modeling and variational inference
Stars: ✭ 29 (-14.71%)
Mutual labels:  normalizing-flows
3D-PV-Locator
Repo for "3D-PV-Locator: Large-scale detection of rooftop-mounted photovoltaic systems in 3D" based on Applied Energy publication.
Stars: ✭ 35 (+2.94%)
Mutual labels:  neurips-2020
cygen
Codes for CyGen, the novel generative modeling framework proposed in "On the Generative Utility of Cyclic Conditionals" (NeurIPS-21)
Stars: ✭ 44 (+29.41%)
Mutual labels:  generative-model
mix-stage
Official Repository for the paper Style Transfer for Co-Speech Gesture Animation: A Multi-Speaker Conditional-Mixture Approach published in ECCV 2020 (https://arxiv.org/abs/2007.12553)
Stars: ✭ 22 (-35.29%)
Mutual labels:  generative-model
PREREQ-IAAI-19
Inferring Concept Prerequisite Relations from Online Educational Resources (IAAI-19)
Stars: ✭ 22 (-35.29%)
Mutual labels:  generative-model
Cross-Speaker-Emotion-Transfer
PyTorch Implementation of ByteDance's Cross-speaker Emotion Transfer Based on Speaker Condition Layer Normalization and Semi-Supervised Training in Text-To-Speech
Stars: ✭ 107 (+214.71%)
Mutual labels:  generative-model
EVE
Official repository for the paper "Large-scale clinical interpretation of genetic variants using evolutionary data and deep learning". Joint collaboration between the Marks lab and the OATML group.
Stars: ✭ 37 (+8.82%)
Mutual labels:  generative-model
CondGen
Conditional Structure Generation through Graph Variational Generative Adversarial Nets, NeurIPS 2019.
Stars: ✭ 46 (+35.29%)
Mutual labels:  generative-model
texturize
🤖🖌️ Generate photo-realistic textures based on source images. Remix, remake, mashup! Useful if you want to create variations on a theme or elaborate on an existing texture.
Stars: ✭ 495 (+1355.88%)
Mutual labels:  generative-model
MMD-GAN
Improving MMD-GAN training with repulsive loss function
Stars: ✭ 82 (+141.18%)
Mutual labels:  generative-model
deeprob-kit
A Python Library for Deep Probabilistic Modeling
Stars: ✭ 32 (-5.88%)
Mutual labels:  normalizing-flows
feed_forward_vqgan_clip
Feed forward VQGAN-CLIP model, where the goal is to eliminate the need for optimizing the latent space of VQGAN for each input prompt
Stars: ✭ 135 (+297.06%)
Mutual labels:  generative-model
coursera-gan-specialization
Programming assignments and quizzes from all courses within the GANs specialization offered by deeplearning.ai
Stars: ✭ 277 (+714.71%)
Mutual labels:  generative-model
AC-VRNN
PyTorch code for CVIU paper "AC-VRNN: Attentive Conditional-VRNN for Multi-Future Trajectory Prediction"
Stars: ✭ 21 (-38.24%)
Mutual labels:  generative-model
SDETools
Matlab Toolbox for the Numerical Solution of Stochastic Differential Equations
Stars: ✭ 80 (+135.29%)
Mutual labels:  stochastic-processes
latent-pose-reenactment
The authors' implementation of the "Neural Head Reenactment with Latent Pose Descriptors" (CVPR 2020) paper.
Stars: ✭ 132 (+288.24%)
Mutual labels:  generative-model
DISCOTRESS
🦜 DISCOTRESS 🦜 is a software package to simulate and analyse the dynamics on arbitrary Markov chains
Stars: ✭ 20 (-41.18%)
Mutual labels:  stochastic-processes
RAVE
Official implementation of the RAVE model: a Realtime Audio Variational autoEncoder
Stars: ✭ 564 (+1558.82%)
Mutual labels:  generative-model

Modeling Continuous Stochastic Processes with Dynamic Normalizing Flows

Code for the paper

Ruizhi Deng, Bo Chang, Marcus Brubaker, Greg Mori, Andreas Lehrmann. "Modeling Continuous Stochastic Processes with Dynamic Normalizing Flows" (2020) [arxiv]

Dependency installation

Install the dependencies in requirements.txt with

pip install -r requirements.txt -f https://download.pytorch.org/whl/torch_stable.html

The released code uses a newer version of PyTorch (1.4.0) than the one used for the experiments in the paper, which could lead to slightly different results. Please refer to the PyTorch documentation for more information.
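
A quick way to confirm which version was actually installed (a minimal check; the released code targets PyTorch 1.4.0):

import torch

# Print the installed PyTorch version; expect something like '1.4.0'
# or '1.4.0+cu100' if the pinned requirements were used.
print(torch.__version__)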

Acknowledgements

The code makes use of code from the following two projects: https://github.com/YuliaRubanova/latent_ode for the paper

Yulia Rubanova, Ricky T. Q. Chen, David Duvenaud. "Latent ODEs for Irregularly-Sampled Time Series" (2019) [arxiv]

https://github.com/rtqichen/ffjord for the paper

Will Grathwohl*, Ricky T. Q. Chen*, Jesse Bettencourt, Ilya Sutskever, David Duvenaud. "FFJORD: Free-form Continuous Dynamics for Scalable Reversible Generative Models." International Conference on Learning Representations (2019). [arxiv] [bibtex]

We make use of the following files from the FFJORD code: train_misc.py, lib/layers, lib/utils.py, and lib/spectral_norm.py. We make changes to the files train_misc.py, lib/layers/ode_func.py, and lib/utils.py.

We use lib/encoder_decoder.py, lib/ode_func.py, and lib/diffeq_solver.py from the code of Latent ODEs for Irregularly-Sampled Time Series. We make changes to the file lib/encoder_decoder.py.

Commands for training the models

We train the models using $\lambda=2$.
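Here $\lambda$ can be read as the intensity of the observation process: assuming homogeneous Poisson sampling of observation times, inter-observation gaps are $\mathrm{Exp}(\lambda)$-distributed with mean $1/\lambda = 0.5$.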

Training CTFP model on GBM Process

python train_ctfp.py --batch_size 100 --test_batch_size 100 --num_blocks 1 --save ctfp_gbm --log_freq 1 --num_workers 2 --layer_type concat --dims 32,64,64,32 --nonlinearity tanh --lr 5e-4 --num_epochs 100 --data_path data/gbm_2.pkl

Training latent CTFP model on GBM Process

python train_latent_ctfp.py --batch_size 50 --test_batch_size 5 --num_blocks 1 --save latent_ctfp_gbm --log_freq 1 --num_workers 2 --layer_type concat --dims 32,64,64,32 --nonlinearity tanh --lr 5e-4 --num_epochs 100 --data_path data/gbm_2.pkl

Training CTFP model on OU Process

python train_ctfp.py --batch_size 100 --test_batch_size 100 --num_blocks 1 --save ctfp_ou --log_freq 1 --num_workers 2 --layer_type concat --dims 32,64,64,32 --nonlinearity tanh --lr 5e-4 --num_epochs 100 --data_path data/ou_2.pkl --activation identity

Training latent CTFP model on OU Process

python train_latent_ctfp.py --batch_size 50 --test_batch_size 5 --num_blocks 1 --save latent_ctfp_ou --log_freq 1 --num_workers 2 --layer_type concat --dims 32,64,64,32 --nonlinearity tanh --lr 5e-4 --num_epochs 300 --data_path data/ou_2.pkl --activation identity --aggressive
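
The released data files (data/gbm_2.pkl, data/ou_2.pkl) are provided rather than generated by the commands above. As a rough illustration of what data sampled with $\lambda=2$ looks like, the sketch below simulates GBM and OU paths at observation times drawn from a homogeneous Poisson process. The process parameters (mu, sigma, theta), the sequence count, and the output layout are illustrative assumptions and do not necessarily match the released data.

import pickle

import numpy as np

rng = np.random.default_rng(0)
LAM, HORIZON, N_SEQ = 2.0, 30.0, 1000  # intensity, time horizon, #sequences (all assumed)

def poisson_times(lam, horizon):
    # Homogeneous Poisson observation process: Exp(lam) inter-arrival gaps.
    times, t = [], rng.exponential(1.0 / lam)
    while t < horizon:
        times.append(t)
        t += rng.exponential(1.0 / lam)
    return np.array(times)

def gbm_path(times, x0=1.0, mu=0.2, sigma=0.5):
    # Exact GBM samples: X_t = x0 * exp((mu - sigma^2 / 2) * t + sigma * W_t).
    dt = np.diff(times, prepend=0.0)
    w = np.cumsum(rng.normal(0.0, np.sqrt(dt)))
    return x0 * np.exp((mu - 0.5 * sigma ** 2) * times + sigma * w)

def ou_path(times, x0=0.0, theta=2.0, mu=0.0, sigma=0.5):
    # Exact Gaussian OU transition between consecutive observation times.
    xs, x, prev = [], x0, 0.0
    for t in times:
        gap = t - prev
        mean = mu + (x - mu) * np.exp(-theta * gap)
        var = sigma ** 2 * (1.0 - np.exp(-2.0 * theta * gap)) / (2.0 * theta)
        x = mean + np.sqrt(var) * rng.normal()
        xs.append(x)
        prev = t
    return np.array(xs)

sequences = []
for _ in range(N_SEQ):
    ts = poisson_times(LAM, HORIZON)
    sequences.append({"times": ts, "values": gbm_path(ts)})  # or ou_path(ts)

with open("gbm_synthetic.pkl", "wb") as f:  # deliberately not data/gbm_2.pkl
    pickle.dump(sequences, f)

Both simulations are exact: the GBM path uses the closed-form solution of $dX_t = \mu X_t dt + \sigma X_t dW_t$ and the OU path uses the exact transition of $dX_t = \theta(\mu - X_t) dt + \sigma dW_t$, so no discretization error is introduced between observation times.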

Commands for evaluating the models

We evaluate the models on data sampled by an observation process with $\lambda=2$.

Evaluating CTFP model on GBM Process

python eval_ctfp.py --test_batch_size 100 --num_blocks 1 --save ctfp_gbm --num_workers 2 --layer_type concat --dims 32,64,64,32 --nonlinearity tanh --lr 5e-4 --num_epochs 100 --resume experiments/ctfp_gbm/pretrained.pth --data_path data/gbm_2.pkl

Evaluating latent CTFP model on GBM Process

python eval_latent_ctfp.py --test_batch_size 5 --num_blocks 1 --save latent_ctfp_gbm --num_workers 2 --layer_type concat --dims 32,64,64,32 --nonlinearity tanh --lr 5e-4 --num_epochs 100 --data_path data/gbm_2.pkl --resume experiments/latent_ctfp_gbm/pretrained.pth

Evaluating CTFP model on OU Process

python eval_ctfp.py --test_batch_size 100 --num_blocks 1 --save ctfp_ou --num_workers 2 --layer_type concat --dims 32,64,64,32 --nonlinearity tanh --data_path data/ou_2.pkl --activation identity --resume experiments/ctfp_ou/pretrained.pth

Evaluating latent CTFP model on OU Process

python eval_latent_ctfp.py --test_batch_size 5 --num_blocks 1 --save latent_ctfp_ou --num_workers 2 --layer_type concat --dims 32,64,64,32 --nonlinearity tanh --data_path data/ou_2.pkl --activation identity --resume experiments/latent_ctfp_ou/pretrained.pth

Performance Summary

Model         GBM     OU
CTFP          3.107   2.902
Latent CTFP   3.106   2.902

Download the data from this link for evaluating the models on GBM and OU data with $\lambda=20$ and for training the models on a mixture of OU data.
