mbchang / Dynamics

A Compositional Object-Based Approach to Learning Physical Dynamics


Neural Physics Engine

This repository contains the code described in https://arxiv.org/abs/1612.00341, accepted to ICLR 2017.

Project website: http://mbchang.github.io/npe

Abstract

We present the Neural Physics Engine (NPE), a framework for learning simulators of intuitive physics that naturally generalize across variable object count and different scene configurations. We propose a factorization of a physical scene into composable object-based representations and a neural network architecture whose compositional structure factorizes object dynamics into pairwise interactions. Like a symbolic physics engine, the NPE is endowed with generic notions of objects and their interactions; realized as a neural network, it can be trained via stochastic gradient descent to adapt to specific object properties and dynamics of different worlds. We evaluate the efficacy of our approach on simple rigid body dynamics in two-dimensional worlds. By comparing to less structured architectures, we show that the NPE's compositional representation of the structure in physical interactions improves its ability to predict movement, generalize across variable object count and different scene configurations, and infer latent properties of objects such as mass.
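The compositional structure described above can be sketched in a few lines. This is a toy illustration in Python, not the repo's Torch implementation: the real encoder and decoder are neural networks trained with SGD, while here they are hypothetical stand-in linear maps. The key structural idea is that the effect of a scene on a focus object is a sum of pairwise terms, one per context object, which is what lets the same model handle variable object counts.

```python
def encode(focus, context):
    # Pairwise encoder: maps a (focus, context) pair of object states to an
    # "effect" vector. Stand-in: position difference scaled by context mass.
    return [context["mass"] * (cf - cx)
            for cf, cx in zip(focus["pos"], context["pos"])]

def decode(focus, summed_effect):
    # Decoder: predicts the focus object's next velocity from its own state
    # and the aggregated effect of all context objects.
    return [v + e for v, e in zip(focus["vel"], summed_effect)]

def predict_velocity(focus, context_objects):
    # Sum pairwise effects -- this summation over context objects is the
    # factorization that generalizes across variable object count.
    total = [0.0, 0.0]
    for ctx in context_objects:
        effect = encode(focus, ctx)
        total = [t + e for t, e in zip(total, effect)]
    return decode(focus, total)

scene = [
    {"pos": [0.0, 0.0], "vel": [1.0, 0.0], "mass": 1.0},
    {"pos": [1.0, 0.0], "vel": [0.0, 0.0], "mass": 2.0},
    {"pos": [0.0, 1.0], "vel": [0.0, 0.0], "mass": 1.0},
]
focus, context = scene[0], scene[1:]
print(predict_velocity(focus, context))  # -> [-1.0, -1.0]
```

With no context objects the focus object's velocity passes through unchanged; adding or removing objects changes only the number of summed terms, not the model itself.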

Citation

If you find this paper helpful or use our code, please cite us!

@article{chang2016compositional,
    title={A Compositional Object-Based Approach to Learning Physical Dynamics},
    author={Chang, Michael B and Ullman, Tomer and Torralba, Antonio and Tenenbaum, Joshua B},
    journal={arXiv preprint arXiv:1612.00341},
    year={2016}
}

Example predictions from the model can be viewed on the project website.

Requirements

Dependencies

To install lua dependencies, run:

luarocks install pl
luarocks install torchx
luarocks install nn
luarocks install nngraph
luarocks install rnn
luarocks install gnuplot
luarocks install paths
luarocks install json

To install js dependencies, run:

cd src/js
npm install

Instructions

NOTE: The code in this repository is still in the process of being cleaned up.

Generating Data

The code to generate data is adapted from the demo code in matter-js.

This is an example of generating 50000 trajectories of 4 balls of variable mass over 60 timesteps. It will create a folder balls_n4_t60_s50000_m in the data/ folder.

> cd src/js
> node demo/js/generate.js -e balls -n 4 -t 60 -s 50000 -m

This is an example of generating 50000 trajectories of 2 balls over 60 timesteps for wall geometry "U." It will create a folder walls_n2_t60_s50000_wU in the data/ folder.

> cd src/js
> node demo/js/generate.js -e walls -n 2 -t 60 -s 50000 -w U

Generating 50000 trajectories takes quite a bit of time; for debugging purposes, 200 trajectories is enough (pass -s 200). In that case, adjust the flags and dataset-folder names accordingly in the examples below.
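As the examples above suggest, the generated folder name encodes the generation flags. A small helper (hypothetical, not part of the repo, purely to make the pattern explicit) could reconstruct it:

```python
def dataset_folder(env, n, t, s, masses=False, walls=None):
    """Reconstruct the data/ folder name from generate.js flags.

    env     -> -e (e.g. 'balls' or 'walls')
    n, t, s -> -n, -t, -s
    masses  -> -m (variable mass), appends '_m'
    walls   -> -w geometry (e.g. 'U'), appends e.g. '_wU'
    """
    name = f"{env}_n{n}_t{t}_s{s}"
    if masses:
        name += "_m"
    if walls:
        name += f"_w{walls}"
    return name

print(dataset_folder("balls", 4, 60, 50000, masses=True))  # balls_n4_t60_s50000_m
print(dataset_folder("walls", 2, 60, 50000, walls="U"))    # walls_n2_t60_s50000_wU
```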

Visualization

Trajectory data is stored in a .json file. You can visualize a trajectory by opening src/js/demo/render.html in your browser and loading the .json file.

Training the Model

This is an example of training the model for the balls_n4_t60_s50000_m dataset. The model checkpoints are saved in src/lua/logs/balls_n4_t60_ex50000_m__balls_n4_t60_ex50000_m_layers5_nbrhd_rs_fast_lr0.0003_modelnpe_seed0. If you are comfortable looking at code that has not been cleaned up yet, please check out the flags in src/lua/main.lua.

> cd src/lua
> th main.lua -layers 5 -dataset_folders "{'balls_n4_t60_ex50000_m'}" -nbrhd -rs -test_dataset_folders "{'balls_n4_t60_ex50000_m'}" -fast -lr 0.0003 -model npe -seed 0 -name balls_n4_t60_ex50000_m__balls_n4_t60_ex50000_m_layers5_nbrhd_rs_fast_lr0.0003_modelnpe_seed0 -mode exp

Here is an example of training on 3, 4, 5 balls of variable mass and testing on 6, 7, 8 balls of variable mass, provided that those datasets have been generated. The model checkpoints are saved in src/lua/logs/balls_n3_t60_ex50000_m,balls_n4_t60_ex50000_m,balls_n5_t60_ex50000_m__balls_n6_t60_ex50000_m,balls_n7_t60_ex50000_m,balls_n8_t60_ex50000_m_layers5_nbrhd_rs_fast_lr0.0003_modelnpe_seed0.

> cd src/lua
> th main.lua -layers 5 -dataset_folders "{'balls_n3_t60_ex50000_m','balls_n4_t60_ex50000_m','balls_n5_t60_ex50000_m'}" -nbrhd -rs -test_dataset_folders "{'balls_n6_t60_ex50000_m','balls_n7_t60_ex50000_m','balls_n8_t60_ex50000_m'}" -fast -lr 0.0003 -model npe -seed 0 -name balls_n3_t60_ex50000_m,balls_n4_t60_ex50000_m,balls_n5_t60_ex50000_m__balls_n6_t60_ex50000_m,balls_n7_t60_ex50000_m,balls_n8_t60_ex50000_m_layers5_nbrhd_rs_fast_lr0.0003_modelnpe_seed0 -mode exp

Here is an example of training on "O" and "L" wall geometries and testing on "U" and "I" wall geometries, provided that those datasets have been generated. The model checkpoints are saved in src/lua/logs/walls_n2_t60_ex50000_wO,walls_n2_t60_ex50000_wL__walls_n2_t60_ex50000_wU,walls_n2_t60_ex50000_wI_layers5_nbrhd_rs_fast_lr0.0003_modelnpe_seed0.

> cd src/lua
> th main.lua -layers 5 -dataset_folders "{'walls_n2_t60_ex50000_wO','walls_n2_t60_ex50000_wL'}" -nbrhd -rs -test_dataset_folders "{'walls_n2_t60_ex50000_wU','walls_n2_t60_ex50000_wI'}" -fast -lr 0.0003 -model npe -seed 0 -name walls_n2_t60_ex50000_wO,walls_n2_t60_ex50000_wL__walls_n2_t60_ex50000_wU,walls_n2_t60_ex50000_wI_layers5_nbrhd_rs_fast_lr0.0003_modelnpe_seed0 -mode exp 

Be sure to look at the command line flags in main.lua for more details. You may want to reduce the number of training iterations if you are just debugging. The code defaults to CPU, but you can switch to GPU with the -cuda flag.
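The long -name values in the examples above follow a fixed pattern: the comma-joined training folders, a double underscore, the comma-joined test folders, and the hyperparameter suffix. A small helper (hypothetical, not part of the repo) can build it:

```python
def experiment_name(train_folders, test_folders,
                    suffix="layers5_nbrhd_rs_fast_lr0.0003_modelnpe_seed0"):
    # Pattern observed in the README's examples: <train>__<test>_<suffix>,
    # where <train> and <test> are comma-joined dataset folder names.
    return ",".join(train_folders) + "__" + ",".join(test_folders) + "_" + suffix

print(experiment_name(["balls_n4_t60_ex50000_m"], ["balls_n4_t60_ex50000_m"]))
# balls_n4_t60_ex50000_m__balls_n4_t60_ex50000_m_layers5_nbrhd_rs_fast_lr0.0003_modelnpe_seed0
```

This name doubles as the checkpoint directory under src/lua/logs/, so keeping it consistent between training and evaluation commands matters.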

Prediction

This is an example of running simulations using a trained model that was saved in src/lua/logs/balls_n4_t60_ex50000_m__balls_n4_t60_ex50000_m_layers5_nbrhd_rs_fast_lr0.0003_modelnpe_seed0.

> cd src/lua
> th eval.lua -test_dataset_folders "{'balls_n4_t60_ex50000_m'}" -name balls_n4_t60_ex50000_m__balls_n4_t60_ex50000_m_layers5_nbrhd_rs_fast_lr0.0003_modelnpe_seed0 -mode sim

This is an example of running simulations using a trained model that was saved in src/lua/logs/balls_n3_t60_ex50000_m,balls_n4_t60_ex50000_m,balls_n5_t60_ex50000_m__balls_n6_t60_ex50000_m,balls_n7_t60_ex50000_m,balls_n8_t60_ex50000_m_layers5_nbrhd_rs_fast_lr0.0003_modelnpe_seed0.

> cd src/lua
> th eval.lua -test_dataset_folders "{'balls_n3_t60_ex50000_m','balls_n4_t60_ex50000_m','balls_n5_t60_ex50000_m','balls_n6_t60_ex50000_m','balls_n7_t60_ex50000_m','balls_n8_t60_ex50000_m'}" -name balls_n3_t60_ex50000_m,balls_n4_t60_ex50000_m,balls_n5_t60_ex50000_m__balls_n6_t60_ex50000_m,balls_n7_t60_ex50000_m,balls_n8_t60_ex50000_m_layers5_nbrhd_rs_fast_lr0.0003_modelnpe_seed0 -mode sim

This is an example of running simulations using a trained model that was saved in src/lua/logs/walls_n2_t60_ex50000_wO,walls_n2_t60_ex50000_wL__walls_n2_t60_ex50000_wU,walls_n2_t60_ex50000_wI_layers5_nbrhd_rs_fast_lr0.0003_modelnpe_seed0.

> cd src/lua
> th eval.lua -test_dataset_folders "{'walls_n2_t60_ex50000_wO','walls_n2_t60_ex50000_wL','walls_n2_t60_ex50000_wU','walls_n2_t60_ex50000_wI'}" -name walls_n2_t60_ex50000_wO,walls_n2_t60_ex50000_wL__walls_n2_t60_ex50000_wU,walls_n2_t60_ex50000_wI_layers5_nbrhd_rs_fast_lr0.0003_modelnpe_seed0 -mode sim

You can visualize the predictions by opening src/js/demo/render.html and loading the .json files in src/lua/logs/<experiment_name>/<dataset_name>predictions/<jsonfile>.

Inference

This is an example of running mass inference using a trained model that was saved in src/lua/logs/balls_n4_t60_ex50000_m__balls_n4_t60_ex50000_m_layers5_nbrhd_rs_fast_lr0.0003_modelnpe_seed0.

> cd src/lua
> th eval.lua -test_dataset_folders "{'balls_n4_t60_ex50000_m'}" -name balls_n4_t60_ex50000_m__balls_n4_t60_ex50000_m_layers5_nbrhd_rs_fast_lr0.0003_modelnpe_seed0 -mode minf

This is an example of running mass inference using a trained model that was saved in src/lua/logs/balls_n3_t60_ex50000_m,balls_n4_t60_ex50000_m,balls_n5_t60_ex50000_m__balls_n6_t60_ex50000_m,balls_n7_t60_ex50000_m,balls_n8_t60_ex50000_m_layers5_nbrhd_rs_fast_lr0.0003_modelnpe_seed0.

> cd src/lua
> th eval.lua -test_dataset_folders "{'balls_n6_t60_ex50000_m','balls_n7_t60_ex50000_m','balls_n8_t60_ex50000_m','balls_n3_t60_ex50000_m','balls_n4_t60_ex50000_m','balls_n5_t60_ex50000_m'}" -name balls_n3_t60_ex50000_m,balls_n4_t60_ex50000_m,balls_n5_t60_ex50000_m__balls_n6_t60_ex50000_m,balls_n7_t60_ex50000_m,balls_n8_t60_ex50000_m_layers5_nbrhd_rs_fast_lr0.0003_modelnpe_seed0 -mode minf

This is an example of running mass inference using a trained model that was saved in src/lua/logs/walls_n2_t60_ex50000_wO,walls_n2_t60_ex50000_wL__walls_n2_t60_ex50000_wU,walls_n2_t60_ex50000_wI_layers5_nbrhd_rs_fast_lr0.0003_modelnpe_seed0.

> cd src/lua
> th eval.lua -test_dataset_folders "{'walls_n2_t60_ex50000_wO','walls_n2_t60_ex50000_wL','walls_n2_t60_ex50000_wU','walls_n2_t60_ex50000_wI'}" -name walls_n2_t60_ex50000_wO,walls_n2_t60_ex50000_wL__walls_n2_t60_ex50000_wU,walls_n2_t60_ex50000_wI_layers5_nbrhd_rs_fast_lr0.0003_modelnpe_seed0 -mode minf

Acknowledgements

This project was built with Torch7, rnn, and matter-js. A big thank you to these folks.

We thank Tejas Kulkarni for insightful discussions and guidance. We thank Ilker Yildirim, Erin Reynolds, Feras Saad, Andreas Stuhlmuller, Adam Lerer, Chelsea Finn, Jiajun Wu, and the anonymous reviewers for valuable feedback. We thank Liam Brummit, Kevin Kwok, and Guillermo Webster for help with matter-js. M. Chang was graciously supported by MIT’s SuperUROP and UROP programs.
