
BethanyL / Deepkoopman

License: MIT
neural networks to learn Koopman eigenfunctions

Projects that are alternatives of or similar to Deepkoopman

Dive Into Machine Learning
Dive into Machine Learning with Python Jupyter notebook and scikit-learn! First posted in 2016, maintained as of 2021. Pull requests welcome.
Stars: ✭ 10,810 (+8479.37%)
Mutual labels:  jupyter-notebook
The Data Science Workshop
A New, Interactive Approach to Learning Data Science
Stars: ✭ 126 (+0%)
Mutual labels:  jupyter-notebook
Examples
Stars: ✭ 126 (+0%)
Mutual labels:  jupyter-notebook
Geostatsmodels
This is a collection of geostatistical scripts written in Python
Stars: ✭ 125 (-0.79%)
Mutual labels:  jupyter-notebook
Distance Encoding
Distance Encoding for GNN Design
Stars: ✭ 126 (+0%)
Mutual labels:  jupyter-notebook
Meteorological Books
A collection of meteorology-related books (continuously updated)
Stars: ✭ 125 (-0.79%)
Mutual labels:  jupyter-notebook
Deep Auto Punctuation
a pytorch implementation of auto-punctuation learned character by character
Stars: ✭ 125 (-0.79%)
Mutual labels:  jupyter-notebook
Nlpmetrics
Python code for various NLP metrics
Stars: ✭ 126 (+0%)
Mutual labels:  jupyter-notebook
Understandingbdl
Stars: ✭ 126 (+0%)
Mutual labels:  jupyter-notebook
Alfnet
Code for 'Learning Efficient Single-stage Pedestrian Detectors by Asymptotic Localization Fitting' in ECCV2018
Stars: ✭ 126 (+0%)
Mutual labels:  jupyter-notebook
Python Audio
Some Jupyter notebooks about audio signal processing with Python
Stars: ✭ 125 (-0.79%)
Mutual labels:  jupyter-notebook
Modular Rl
[ICML 2020] PyTorch Code for "One Policy to Control Them All: Shared Modular Policies for Agent-Agnostic Control"
Stars: ✭ 126 (+0%)
Mutual labels:  jupyter-notebook
Cmucomputationalphotography
Jupyter Notebooks for CMU Computational Photography Course 15.463
Stars: ✭ 126 (+0%)
Mutual labels:  jupyter-notebook
First Order Model
This repository contains the source code for the paper First Order Motion Model for Image Animation
Stars: ✭ 11,964 (+9395.24%)
Mutual labels:  jupyter-notebook
Simplestockanalysispython
Stock Analysis Tutorial in Python
Stars: ✭ 126 (+0%)
Mutual labels:  jupyter-notebook
Skills Ml
Data Processing and Machine learning methods for the Open Skills Project
Stars: ✭ 125 (-0.79%)
Mutual labels:  jupyter-notebook
Teaching Monolith
Data science teaching materials
Stars: ✭ 126 (+0%)
Mutual labels:  jupyter-notebook
L4 Optimizer
Code for paper "L4: Practical loss-based stepsize adaptation for deep learning"
Stars: ✭ 126 (+0%)
Mutual labels:  jupyter-notebook
Python
Using Python to analyze financial statement data
Stars: ✭ 125 (-0.79%)
Mutual labels:  jupyter-notebook
Normalizing Flows
Understanding normalizing flows
Stars: ✭ 126 (+0%)
Mutual labels:  jupyter-notebook

DeepKoopman

neural networks to learn Koopman eigenfunctions

Code for the paper "Deep learning for universal linear embeddings of nonlinear dynamics" by Bethany Lusch, J. Nathan Kutz, and Steven L. Brunton
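
For context, the paper's central idea can be stated in one line: learn a change of coordinates y = φ(x) (the encoder of an autoencoder network) in which the nonlinear dynamics advance linearly, so that the learned coordinates act as Koopman eigenfunctions. Schematically (a one-line summary of the paper, not code from this repository):

    % Nonlinear dynamics become linear in the learned coordinates:
    x_{k+1} = F(x_k) \quad\longrightarrow\quad y_{k+1} = K\,y_k, \qquad y_k = \varphi(x_k)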

To run the code:

  1. Clone the repository.
  2. In the data directory, recreate the desired dataset(s) by running DiscreteSpectrumExample, Pendulum, FluidFlowOnAttractor, and/or FluidFlowBox in MATLAB (or email to ask for the datasets). A quick way to check the output is sketched after this list.
  3. Back in the main directory, run the desired experiment(s) with Python.
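
As a quick check after step 2, a snippet like the following should confirm that a regenerated dataset loads. This is a minimal sketch: the exact CSV file name is an assumption based on the dataset names, and the repository's loading code defines the real naming scheme.

    # Sanity check: load one regenerated dataset and inspect its shape.
    # The file name below is an assumption; adjust it to whatever the
    # MATLAB scripts actually wrote into ./data/.
    import numpy as np

    X = np.loadtxt('data/DiscreteSpectrumExample_train1_x.csv', delimiter=',')
    print(X.shape)  # rows are state snapshots; columns are state dimensions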

Notes on running the Python experiments:

  • A GPU is recommended but not required; the code runs on either GPU or CPU without any changes.
  • The paper contains results on the four datasets. These were the best results from running scripts that do a random parameter search (DiscreteSpectrumExampleExperiment.py, PendulumExperiment.py, FluidFlowOnAttractorExperiment.py, and FluidFlowBoxExperiment.py).
  • To train networks using the specific parameters that produced the results in the paper instead of doing a parameter search, run DiscreteSpectrumExampleExperimentBestParams.py, PendulumExperimentBestParams.py, FluidFlowOnAttractorExperimentBestParams.py, and FluidFlowBoxExperimentBestParams.py.
  • The experiment scripts loop over 200 random experiments (random hyperparameters and random weight initializations). You'll probably want to stop the script well before it finishes all 200! (A schematic of this loop follows this list.)
  • Each random experiment can run up to params['max_time'] (4 or 6 hours in these experiments) but may be automatically terminated earlier if the error is not decreasing quickly enough. If one experiment is not doing well, the script moves on to another random experiment.
  • If the code decides to end an experiment, it saves the current results. It also saves results every hour.
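
Here is a schematic of the random-search loop described above. It is illustrative only: params['max_time'], the loop of 200 experiments, hourly saving, and early termination come from this README, while train_for_a_while, error_still_decreasing, and save_results are hypothetical stand-ins for the repository's actual code.

    import random
    import time

    # Hypothetical stand-ins for the repository's training and bookkeeping code.
    def train_for_a_while(params):
        return random.random()  # pretend validation error

    def error_still_decreasing(history):
        return len(history) < 2 or history[-1] < history[-2]

    def save_results(params, history):
        pass  # the real code writes errors and weights to disk

    for trial in range(200):  # the scripts loop over 200 random experiments
        params = {'num_layers': random.choice([2, 3, 4]),      # illustrative knobs
                  'layer_width': random.choice([64, 128, 256]),
                  'max_time': 6 * 60 * 60}                     # seconds; 4 or 6 hours here
        start, history = time.time(), []
        while time.time() - start < params['max_time']:
            history.append(train_for_a_while(params))
            save_results(params, history)  # the real code also saves every hour
            if not error_still_decreasing(history):
                break  # error not decreasing enough: move on to the next experiment
        save_results(params, history)  # current results are saved when a run ends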

Postprocessing:

  • You might want to use something like ./postprocessing/InvestigateResultsExample.ipynb to check your results. Which of your models has the best validation error so far? How does validation error vary with your hyperparameter choices? (A minimal scan along these lines is sketched after this list.)
  • To see how I dove into a particular trained deep learning model on a dataset, see the notebooks ./postprocessing/BestModel-DiscreteSpectrumExample.ipynb, ./postprocessing/BestModel-Pendulum.ipynb, etc. These notebooks also show how I calculated the numbers and created the figures for the paper.
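
Outside the notebooks, a scan like the following can surface the best run so far. It is a hypothetical sketch: the ./results directory and the *_error.csv pattern are assumptions about how the runs were saved, and the notebooks above show the real format.

    # Hypothetical scan for the lowest validation error across saved runs.
    import glob
    import numpy as np

    best_err, best_file = np.inf, None
    for path in glob.glob('./results/*_error.csv'):  # file pattern is an assumption
        errors = np.loadtxt(path, delimiter=',', ndmin=2)
        val_err = errors[:, -1].min()  # assumed: the last column tracks validation error
        if val_err < best_err:
            best_err, best_file = val_err, path
    print(best_file, best_err)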

New to deep learning? Here is some context:

  • It is currently normal in deep learning to need to try a range of hyperparameters ("hyperparameter search"). For example: how many layers should your network have? How wide should each layer be? You try some options and pick the best result. (See next bullet point.) Further, the random initialization of your weights matters, so (unless you fix the seed of your random number generator) even with fixed hyperparameters, you can re-run your training multiple times and get different models with different errors. I didn't fix my seeds, so if you re-run my code multiple times, you can get different models and errors.
  • It is standard to split your data into three sets: training, validation, and testing. You fit your neural network model to the training data. You use the validation data only to compare different models and choose the best one; the error on the validation data estimates how well your model will generalize to new data. The test data is held out even more strictly: you only calculate the error on the test data at the very end, after you've committed to a particular model. This should give a better estimate of how well your model will generalize, since you may have already relied heavily on the validation data when choosing a model. (A minimal split is sketched after this list.)
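
A minimal sketch of such a three-way split, with illustrative proportions and synthetic data standing in for a real dataset:

    # Minimal train/validation/test split; the 70/15/15 proportions are illustrative.
    import numpy as np

    X = np.random.rand(1000, 2)    # synthetic stand-in for a dataset
    rng = np.random.default_rng()  # unseeded, so each run differs (seeds were not fixed here either)
    idx = rng.permutation(len(X))
    n_train, n_val = int(0.7 * len(X)), int(0.15 * len(X))
    X_train = X[idx[:n_train]]                 # fit the model here
    X_val = X[idx[n_train:n_train + n_val]]    # compare models / pick hyperparameters
    X_test = X[idx[n_train + n_val:]]          # touch only once, at the very end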

Citation

@article{lusch2018deep,
  title={Deep learning for universal linear embeddings of nonlinear dynamics},
  author={Lusch, Bethany and Kutz, J Nathan and Brunton, Steven L},
  journal={Nature Communications},
  volume={9},
  number={1},
  pages={4950},
  year={2018},
  publisher={Nature Publishing Group},
  doi={10.1038/s41467-018-07210-0}
}