dgedon / DeepSSM_SysID

Licence: other
Official PyTorch implementation of "Deep State Space Models for Nonlinear System Identification", 2020.

Programming Languages

python
139335 projects - #7 most used programming language
matlab
3953 projects

Projects that are alternatives to or similar to DeepSSM_SysID

Pytorch Vae
A Collection of Variational Autoencoders (VAE) in PyTorch.
Stars: ✭ 2,704 (+4261.29%)
Mutual labels:  vae
Disentangled vae
Replicating "Understanding disentangling in β-VAE"
Stars: ✭ 188 (+203.23%)
Mutual labels:  vae
Tf Vqvae
Tensorflow Implementation of the paper [Neural Discrete Representation Learning](https://arxiv.org/abs/1711.00937) (VQ-VAE).
Stars: ✭ 226 (+264.52%)
Mutual labels:  vae
Vae Lagging Encoder
PyTorch implementation of "Lagging Inference Networks and Posterior Collapse in Variational Autoencoders" (ICLR 2019)
Stars: ✭ 153 (+146.77%)
Mutual labels:  vae
Pytorch Vae
A CNN Variational Autoencoder (CNN-VAE) implemented in PyTorch
Stars: ✭ 181 (+191.94%)
Mutual labels:  vae
S Vae Tf
Tensorflow implementation of Hyperspherical Variational Auto-Encoders
Stars: ✭ 198 (+219.35%)
Mutual labels:  vae
Vae Seq
Variational Auto-Encoders in a Sequential Setting.
Stars: ✭ 145 (+133.87%)
Mutual labels:  vae
Human body prior
VPoser: Variational Human Pose Prior
Stars: ✭ 244 (+293.55%)
Mutual labels:  vae
Adversarial video summary
Unofficial PyTorch Implementation of SUM-GAN from "Unsupervised Video Summarization with Adversarial LSTM Networks" (CVPR 2017)
Stars: ✭ 187 (+201.61%)
Mutual labels:  vae
Vq Vae
Minimalist implementation of VQ-VAE in Pytorch
Stars: ✭ 224 (+261.29%)
Mutual labels:  vae
A Hierarchical Latent Structure For Variational Conversation Modeling
PyTorch Implementation of "A Hierarchical Latent Structure for Variational Conversation Modeling" (NAACL 2018 Oral)
Stars: ✭ 153 (+146.77%)
Mutual labels:  vae
Optimus
Optimus: the first large-scale pre-trained VAE language model
Stars: ✭ 180 (+190.32%)
Mutual labels:  vae
Cada Vae Pytorch
Official implementation of the paper "Generalized Zero- and Few-Shot Learning via Aligned Variational Autoencoders" (CVPR 2019)
Stars: ✭ 198 (+219.35%)
Mutual labels:  vae
Sylvester Flows
Stars: ✭ 152 (+145.16%)
Mutual labels:  vae
Variational Recurrent Autoencoder Tensorflow
A tensorflow implementation of "Generating Sentences from a Continuous Space"
Stars: ✭ 228 (+267.74%)
Mutual labels:  vae
Beat Blender
Blend beats using machine learning to create music in a fun new way.
Stars: ✭ 147 (+137.1%)
Mutual labels:  vae
Twostagevae
Stars: ✭ 192 (+209.68%)
Mutual labels:  vae
Video prediction
Stochastic Adversarial Video Prediction
Stars: ✭ 247 (+298.39%)
Mutual labels:  vae
Vae Cvae Mnist
Variational Autoencoder and Conditional Variational Autoencoder on MNIST in PyTorch
Stars: ✭ 229 (+269.35%)
Mutual labels:  vae
Pytorch Vq Vae
PyTorch implementation of VQ-VAE by Aäron van den Oord et al.
Stars: ✭ 204 (+229.03%)
Mutual labels:  vae

DeepSSM_SysID

Official repository for the PyTorch implementation of the paper:
Deep State Space Models for Nonlinear System Identification, at the 19th IFAC Symposium on System Identification (SYSID).
Links: [doi] [arXiv] [Code] [Slides].
Authors: Daniel Gedon, Niklas Wahlström, Thomas B. Schön, Lennart Ljung.

In this work we take six deep state-space models (SSMs), developed by various authors in previous work, and apply them to nonlinear system identification. The code provides a PyTorch reimplementation of the six models within a unified framework. The models are evaluated on a toy problem and two established nonlinear benchmarks. Besides identifying the system dynamics, the chosen methods also provide uncertainty quantification.
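As an illustrative sketch only (not the repository's code), predictive uncertainty in a stochastic deep SSM can be estimated by Monte Carlo sampling: draw several output trajectories for the same input and report their empirical mean and standard deviation. The `model` interface below is a hypothetical stand-in:

```python
import numpy as np

def predict_with_uncertainty(model, u, n_samples=50):
    # model(u) is assumed to return one sampled output trajectory of
    # shape (sequence length, signal dimension) -- hypothetical interface.
    ys = np.stack([model(u) for _ in range(n_samples)])
    # empirical mean and standard deviation over the sampled trajectories
    return ys.mean(axis=0), ys.std(axis=0)
```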

If you find this work useful, please consider citing:

@inproceedings{gedon2021deepssm,
  author={Gedon, Daniel and Wahlstr{\"o}m, Niklas and Sch{\"o}n, Thomas B. and Ljung, Lennart},
  title={Deep State Space Models for Nonlinear System Identification},
  booktitle={Proceedings of the 19th IFAC Symposium on System Identification (SYSID)},
  month={July},
  year={2021},
  note={online},
}

Repository overview

The different models are available in /models:

  • VAE-RNN
  • VRNN-Gauss-I
  • VRNN-Gauss
  • VRNN-GMM-I
  • VRNN-GMM
  • STORN

The experiment files used to generate the figures in the paper are available in /final_toy_lgssm, /final_narendra_li and /final_wiener_hammerstein.

To run a single model, use the file main_single.py in the folder /experiment. Within its option list you can choose a specific model and a specific dataset.

The data files used are stored in /data. For the Wiener-Hammerstein system we refer to the original website (see the readme in that folder), since the data files are rather large. To add a new dataset, it must be provided in a specific format and registered in /data/loader.py: the training, validation and test sets must each be given as numpy arrays of shape (sequence length, signal dimension). The sequence length is defined in /options/dataset_options.py.
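For instance, a new toy dataset in the expected array format could be generated roughly as follows (the system here is made up purely for illustration; the actual registration step happens in /data/loader.py):

```python
import numpy as np

seq_len, nu, ny = 1000, 1, 1  # sequence length and signal dimensions

def make_split(rng):
    # hypothetical linear toy system, purely for illustration
    u = rng.standard_normal((seq_len, nu))                  # input signal
    y = 0.5 * u + 0.1 * rng.standard_normal((seq_len, ny))  # output signal
    return u, y

rng = np.random.default_rng(0)
train, val, test = (make_split(rng) for _ in range(3))
# each split is a pair of numpy arrays of shape
# (sequence length, signal dimension), as the loader expects
```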
