DiffEqML / Torchdyn

License: Apache-2.0
A PyTorch based library for all things neural differential equations.


torchdyn

A PyTorch based library for all things neural differential equations. Maintained by DiffEqML.

[Badges: Slack · codecov · Docs · supported Python versions]

Continuous Integration

System / Python version    3.6    3.7    3.8+
Ubuntu 16.04               ✅     ✅     ✅
Ubuntu 18.04               ✅     ✅     ✅
Windows                           ✅     ✅

Quick Start

Neural differential equations made easy:

import torch.nn as nn
from torchdyn import NeuralODE

# your preferred torch.nn.Module here; padding=1 keeps the spatial
# dimensions fixed, so the vector field preserves the state shape
f = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1),
    nn.Softplus(),
    nn.Conv2d(32, 1, 3, padding=1),
)

nde = NeuralODE(f)

And you're good to go. The nde object can be seamlessly combined with other deep learning models.
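
For instance, here is a hedged sketch of dropping nde into a larger model; the input shape and the assumption that the forward pass returns the final state are ours (the exact return convention varies across torchdyn versions):

import torch

# sketch: the NeuralODE block composes like any other nn.Module
model = nn.Sequential(
    nde,                          # flow the hidden state through the learned ODE
    nn.Flatten(),                 # 1 x 28 x 28 -> 784
    nn.Linear(28 * 28, 10),       # e.g. 10-way classification
)

x = torch.randn(8, 1, 28, 28)     # dummy batch of MNIST-sized images
logits = model(x)                 # assumes forward returns the final state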

Installation

Stable release:

pip install torchdyn

  • NOTE: temporarily requires additional manual installation of torchsde:

pip install git+https://github.com/google-research/torchsde.git

Bleeding-edge version:

git clone https://github.com/DiffEqML/torchdyn.git

cd torchdyn

python setup.py install

Documentation

https://torchdyn.readthedocs.io/

Introduction

Interest in the blend of differential equations, deep learning and dynamical systems has been reignited by recent works [1,2]. Modern deep learning frameworks such as PyTorch, coupled with progressive improvements in computational resources, have allowed the continuous version of neural networks, with proposals dating back to the 80s [3], to finally come to life and provide a novel perspective on classical machine learning problems (e.g. density estimation [4]).

Since the introduction of the torchdiffeq library with the seminal work [1] in 2018, little effort has been expended by the PyTorch research community on a unified framework for neural differential equations. While significant progress is being made by the Julia community and SciML [5], we believe a native PyTorch version of torchdyn with a focus on deep learning to be a valuable asset for the research ecosystem.

Central to the torchdyn approach are continuous neural networks, where width, depth (or both) are taken to their infinite limit. On the optimization front, we consider continuous "data-stream" regimes and gradient flow methods, where the dataset represents a time-evolving signal processed by the neural network to adapt its parameters.
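
In the depth case, the standard reference point is the residual network: letting the layer index become a continuous variable s turns the residual update into an ODE, as in [1] (a sketch, with z the hidden state and f_\theta the layer map):

z_{k+1} = z_k + f_\theta(z_k)
\quad\longrightarrow\quad
\frac{dz(s)}{ds} = f_\theta\big(z(s), s\big), \qquad z(0) = x.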

By providing a centralized, easy-to-access collection of model templates, tutorial and application notebooks, we hope to speed up research in this area and ultimately contribute to turning neural differential equations into an effective tool for control, system identification and common machine learning tasks.

The development of torchdyn, sparked by the joint work of Michael Poli & Stefano Massaroli, has been supported throughout by their almae matres. In particular, by Prof. Jinkyoo Park (KAIST), Prof. Atsushi Yamashita (The University of Tokyo) and Prof. Hajime Asama (The University of Tokyo).

torchdyn is maintained by the core DiffEqML team, with the generous support of the deep learning community.

Feature roadmap

The current offering of torchdyn is limited compared to the rich ecosystem of continuous deep learning. If you are a researcher working in this space, and particularly if one of your previous works happens to be a WIP feature, feel free to reach out and help us in its implementation.

  • Basics: quickstart ✅, cookbook ✅
  • Expressivity and augmentation: crossing trajectories ✅, augmentation ✅, higher order ✅
  • Adjoint and beyond: generalized adjoint ✅
  • Regularization tutorials: regularization (coming soon) ⬜️, adaptive depth ⬜️, STEER ⬜️
  • Controlled Neural DEs: data control ✅, neural cde (coming soon) ⬜️
  • Energy models: hamiltonian nets ✅, lagrangian nets ✅, stable models ✅
  • Image classification: MNIST ✅, CIFAR10 (coming soon) ⬜️
  • Density estimation tutorials: continuous normalizing flows ✅, ffjord ✅, manifold cnf ⬜️
  • Density estimation applications: 2d density ✅, images (coming soon) ⬜️
  • Hybrid Neural DEs: hybrid models ✅
  • Variational Neural DE tutorials: variational neural ode ✅, variational neural sde (coming soon) ⬜️
  • Graph Neural DEs (GDEs) tutorials: gde node classification ✅, autoregressive gde (coming soon) ⬜️

We are also looking for contributions on the variants below:

  • Specific variants: ode2vae ⬜️, anodev2 ⬜️, gruode-bayes ⬜️, neural jump stochastic ⬜️, ode2ode ⬜️

Dependencies

torchdyn leverages modern PyTorch best practices and handles training with pytorch-lightning [6]. We build Graph Neural ODEs utilizing the Graph Neural Networks (GNNs) API of dgl [6].
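
For orientation, the snippet below is a hedged sketch of that training pattern; the Learner name and hyperparameters are illustrative choices mirroring the style of the tutorials, not a fixed torchdyn API:

import torch
import pytorch_lightning as pl

class Learner(pl.LightningModule):
    # minimal pytorch-lightning wrapper around any torchdyn model
    def __init__(self, model: torch.nn.Module):
        super().__init__()
        self.model = model

    def training_step(self, batch, batch_idx):
        x, y = batch
        y_hat = self.model(x)
        return torch.nn.functional.cross_entropy(y_hat, y)

    def configure_optimizers(self):
        return torch.optim.Adam(self.model.parameters(), lr=1e-3)

# usage sketch: pl.Trainer(max_epochs=10).fit(Learner(model), train_dataloader)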

Goals of torchdyn

Our aim with torchdyn is to provide a unified, flexible API to the most recent advances in continuous deep learning. Examples include neural differential equation variants, e.g.

  • Neural Ordinary Differential Equations (Neural ODE) [1]
  • Neural Stochastic Differential Equations (Neural SDE) [7,8]
  • Graph Neural ODEs [9]
  • Hamiltonian Neural Networks [10]

Depth-variant versions,

  • ANODEv2 [11]
  • Galerkin Neural ODE [12]
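
For orientation, depth-variance lets the parameters themselves evolve with depth; in the Galerkin approach of [12], θ is expanded on a finite basis (a sketch in our notation, with ψ_k assumed basis functions and α_k learned coefficients):

\frac{dz(s)}{ds} = f\big(z(s), \theta(s)\big), \qquad \theta(s) = \sum_{k=1}^{m} \alpha_k\, \psi_k(s).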

Recurrent or "hybrid" versions

  • ODE-RNN [13]
  • GRU-ODE-Bayes [14]

Augmentation strategies to relieve neural differential equations of their expressivity limitations and reduce the computational burden of the numerical solver (a minimal 0-augmentation sketch follows the list below):

  • ANODE (0-augmentation) [15]
  • Input-layer augmentation [16]
  • Higher-order augmentation [17]
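
To make the simplest case concrete, here is a hedged sketch of ANODE-style 0-augmentation [15]; ZeroAugmenter is a hypothetical helper written only for illustration (torchdyn ships its own augmentation API, covered in the tutorials):

import torch
import torch.nn as nn

class ZeroAugmenter(nn.Module):
    # hypothetical helper: append `dims` zero features so the ODE
    # evolves in a higher-dimensional space, easing expressivity limits
    def __init__(self, dims: int = 5):
        super().__init__()
        self.dims = dims

    def forward(self, x):
        zeros = torch.zeros(x.shape[0], self.dims, device=x.device, dtype=x.dtype)
        return torch.cat([x, zeros], dim=-1)

# usage sketch: model = nn.Sequential(ZeroAugmenter(5), nde_on_augmented_state)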

Alternative or modified adjoint training techniques

  • Integral loss adjoint [18]
  • Checkpointed adjoint [19]
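
For the integral-loss case of [18], the textbook pointwise-loss adjoint picks up an extra forcing term; sketched below in the notation used above (a(s) is the adjoint state, l the loss integrand, assuming a pure integral loss with no terminal term):

\mathcal{L} = \int_{0}^{S} l\big(s, z(s)\big)\, ds, \qquad
\frac{da(s)}{ds} = -\, a(s)^{\top} \frac{\partial f}{\partial z} - \frac{\partial l}{\partial z}, \qquad a(S) = 0.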

Applications and tutorials

The current version of torchdyn contains the following self-contained quickstart examples / tutorials (with a lot more to come):

  • 00_quickstart: offers a quickstart guide for torchdyn and Neural DEs
  • 01_cookbook: here, we explore the API and how to define Neural DE variants within torchdyn
  • 02_image_classification: convolutional Neural DEs on MNIST
  • 03_crossing_trajectories: a standard benchmark problem, highlighting expressivity limitations of Neural DEs, and how they can be addressed
  • 04_augmentation_strategies: augmentation API for Neural DEs

and the advanced tutorials

  • 05_generalized_adjoint: minimize integral losses with torchdyn's special integral loss adjoint [18] to track a sinusoidal signal
  • 06_higher_order: higher-order Neural ODE variants for classification
  • 07a_continuous_normalizing_flows: recover densities with continuous normalizing flows [1]
  • 07b_ffjord: recover densities with FFJORD variants of continuous normalizing flows [19]
  • 08_hamiltonian_nets: learn dynamics of energy preserving systems with a simple implementation of Hamiltonian Neural Networks in torchdyn [10]
  • 09_lagrangian_nets: learn dynamics of energy preserving systems with a simple implementation of Lagrangian Neural Networks in torchdyn [12]
  • 10_stable_neural_odes: learn dynamics of dynamical systems with a simple implementation of Stable Neural Flows in torchdyn [18]
  • 11_gde_node_classification: first steps into the vast world of Neural GDEs [9], or ODEs on graphs parametrized by graph neural networks (GNNs). Node classification on Cora

Features

Check our wiki for a full description of available features.

Contribute

torchdyn is meant to be a community effort: we welcome all contributions of tutorials, model variants, numerical methods and applications related to continuous deep learning. We do not have specific style requirements, though we subscribe to many of Jeremy Howard's ideas.

Choosing what to work on: There is always ongoing work on new features, tests and tutorials. Contributing to any of the above is extremely valuable to us. If you wish to work on additional features not currently WIP, feel free to reach out on Slack or via email. We'll be glad to discuss details.

Cite us

If you find torchdyn valuable for your research or applied projects:

@article{poli2020torchdyn,
  title={TorchDyn: A Neural Differential Equations Library},
  author={Poli, Michael and Massaroli, Stefano and Yamashita, Atsushi and Asama, Hajime and Park, Jinkyoo},
  journal={arXiv preprint arXiv:2009.09346},
  year={2020}
}