
locuslab / icnn

License: Apache-2.0
Input Convex Neural Networks

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives to or similar to icnn

Reinforcement Learning Stanford
🕹️ CS234: Reinforcement Learning, Winter 2019 | YouTube videos 👉
Stars: ✭ 201 (-6.07%)
Mutual labels:  reinforcement-learning
Rl Tutorial Jnrr19
Stable-Baselines tutorial for Journées Nationales de la Recherche en Robotique 2019
Stars: ✭ 204 (-4.67%)
Mutual labels:  reinforcement-learning
Pytorch Reinforce
PyTorch implementation of REINFORCE for both discrete & continuous control
Stars: ✭ 212 (-0.93%)
Mutual labels:  reinforcement-learning
Papers we read
Summaries of the papers that are discussed by VLG.
Stars: ✭ 203 (-5.14%)
Mutual labels:  reinforcement-learning
Minerva
Meandering In Networks of Entities to Reach Verisimilar Answers
Stars: ✭ 205 (-4.21%)
Mutual labels:  reinforcement-learning
Pytorch A2c Ppo Acktr Gail
PyTorch implementation of Advantage Actor-Critic (A2C), Proximal Policy Optimization (PPO), the scalable trust-region method for deep reinforcement learning using Kronecker-factored approximation (ACKTR), and Generative Adversarial Imitation Learning (GAIL).
Stars: ✭ 2,632 (+1129.91%)
Mutual labels:  reinforcement-learning
Knowledge graph reasoning papers
Must-read papers on knowledge graph reasoning
Stars: ✭ 201 (-6.07%)
Mutual labels:  reinforcement-learning
Pokerrl
Framework for Multi-Agent Deep Reinforcement Learning in Poker
Stars: ✭ 214 (+0%)
Mutual labels:  reinforcement-learning
Rl trading
An environment for training high-frequency trading agents with reinforcement learning
Stars: ✭ 205 (-4.21%)
Mutual labels:  reinforcement-learning
Reinforcement Learning An Introduction Chinese
Chinese translation of Reinforcement Learning: An Introduction (2nd edition)
Stars: ✭ 210 (-1.87%)
Mutual labels:  reinforcement-learning
Gym Unrealcv
Unreal environments for reinforcement learning
Stars: ✭ 202 (-5.61%)
Mutual labels:  reinforcement-learning
Epg
Code for the paper "Evolved Policy Gradients"
Stars: ✭ 204 (-4.67%)
Mutual labels:  reinforcement-learning
Alphazero gomoku
An implementation of the AlphaZero algorithm for Gomoku (also called Gobang or Five in a Row)
Stars: ✭ 2,570 (+1100.93%)
Mutual labels:  reinforcement-learning
Alpha Zero General
A clean implementation based on AlphaZero for any game in any framework + tutorial + Othello/Gobang/TicTacToe/Connect4 and more
Stars: ✭ 2,617 (+1122.9%)
Mutual labels:  reinforcement-learning
Awesome Deeplearning Resources
Deep learning and deep reinforcement learning research papers and some code
Stars: ✭ 2,483 (+1060.28%)
Mutual labels:  reinforcement-learning
Release
Deep Reinforcement Learning for de-novo Drug Design
Stars: ✭ 201 (-6.07%)
Mutual labels:  reinforcement-learning
Icychesszero
AlphaZero program for Chinese chess (Xiangqi)
Stars: ✭ 206 (-3.74%)
Mutual labels:  reinforcement-learning
Autodrome
Framework and OpenAI Gym Environment for Autonomous Vehicle Development
Stars: ✭ 214 (+0%)
Mutual labels:  reinforcement-learning
Reco Papers
Classic papers and resources on recommendation
Stars: ✭ 2,804 (+1210.28%)
Mutual labels:  reinforcement-learning
Gymfc
A universal flight control tuning framework
Stars: ✭ 210 (-1.87%)
Mutual labels:  reinforcement-learning

Input Convex Neural Networks (ICNNs)

This repository is by Brandon Amos, Lei Xu, and J. Zico Kolter and contains the TensorFlow source code to reproduce the experiments in our ICML 2017 paper Input Convex Neural Networks.
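
The architectural idea is a network f(y) whose output is convex in the input y; this holds whenever the weight matrices on the hidden "z" path are elementwise non-negative and every activation is convex and non-decreasing. Below is a minimal numpy sketch of a fully input convex network (FICNN) forward pass; the argument names are hypothetical and the repository's TensorFlow models differ in detail.

import numpy as np

def ficnn(y, Wys, Wzs, bs):
    # Convexity in y: every z-path matrix Wz is clipped to be elementwise
    # non-negative, and ReLU is convex and non-decreasing.
    z = np.maximum(Wys[0] @ y + bs[0], 0.0)  # the first layer has no z-path
    for Wy, Wz, b in zip(Wys[1:], Wzs, bs[1:]):
        z = np.maximum(np.maximum(Wz, 0.0) @ z + Wy @ y + b, 0.0)
    return z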

If you find this repository helpful in your publications, please consider citing our paper.

@InProceedings{amos2017icnn,
  title = {Input Convex Neural Networks},
  author = {Brandon Amos and Lei Xu and J. Zico Kolter},
  booktitle = {Proceedings of the 34th International Conference on Machine Learning},
  pages = {146--155},
  year = {2017},
  volume = {70},
  series = {Proceedings of Machine Learning Research},
  publisher = {PMLR},
}

Setup and Dependencies

  • Python/numpy
  • TensorFlow (we used r10)
  • OpenAI Gym + MuJoCo (for the RL experiments)

Libraries

lib
└── bundle_entropy.py - Optimize a function over the [0,1] box with the bundle entropy method.
                        (Development is still in progress and we are still
                        fixing some numerical issues here.)
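
For intuition on why the entropy term suits the [0,1] box: minimizing a linear function plus the binary entropy barrier has the closed-form solution x = sigmoid(-g). The snippet below is only this one-step simplification, not the bundle method implemented in bundle_entropy.py.

import numpy as np

def entropy_prox(g):
    # argmin over x in (0,1)^n of <g, x> + sum_i [x_i log x_i + (1 - x_i) log(1 - x_i)];
    # setting the gradient to zero gives x = sigmoid(-g).
    return 1.0 / (1.0 + np.exp(g))

# Brute-force check in one dimension.
g = np.array([0.7])
xs = np.linspace(1e-4, 1 - 1e-4, 100001)
obj = g * xs + xs * np.log(xs) + (1 - xs) * np.log(1 - xs)
assert abs(xs[obj.argmin()] - entropy_prox(g)[0]) < 1e-3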

Synthetic Classification

This image shows FICNN (top) and PICNN (bottom) classification of synthetic non-convex decision boundaries.

synthetic-cls
β”œβ”€β”€ icnn.py - Main script.
β”œβ”€β”€ legend.py - Create a figure of just the legend.
β”œβ”€β”€ make-tile.sh - Make the tile of images.
└── run.sh - Run all experiments on 4 GPUs.

Multi-Label Classification

(These are currently slightly inconsistent with our paper and we plan on synchronizing our paper and code.)

multi-label-cls
β”œβ”€β”€ bibsonomy.py - Loads the Bibsonomy datasets.
β”œβ”€β”€ ebundle-vs-gd.py - Compare ebundle and gradient descent.
β”œβ”€β”€ ff.py - Train a feed-forward net baseline.
β”œβ”€β”€ icnn_ebundle.py - Train an ICNN with the bundle entropy method.
β”œβ”€β”€ icnn.back.py - Train an ICNN with gradient descent and back differentiation.
└── icnn.plot.py - Plot the results from any multi-label cls experiment.

Image Completion

This image shows test set completions on the Olivetti faces dataset over the first few training iterations of a PICNN, using 5 iterations of the bundle entropy method for each completion.
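
Because a trained ICNN energy is convex in the unobserved pixels, each completion is itself a convex optimization. A projected-gradient sketch is below; the f_grad interface is hypothetical, and the repository instead uses the bundle entropy method or back differentiation.

import numpy as np

def complete(f_grad, y0, steps=100, lr=0.01):
    # Fill in the missing region by projected gradient descent on a convex
    # energy; f_grad(y) returns the energy's gradient at the missing pixels.
    y = y0.copy()
    for _ in range(steps):
        y = np.clip(y - lr * f_grad(y), 0.0, 1.0)  # project onto the [0,1] box
    return y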

completion
β”œβ”€β”€ icnn.back.py - Train an ICNN with gradient descent and back differentiation.
β”œβ”€β”€ icnn_ebundle.py - Train an ICNN with the bundle entropy method.
β”œβ”€β”€ icnn.plot.py - Plot the results from any image completion experiment.
└── olivetti.py - Loads the Olivetti faces dataset.

Reinforcement Learning

Training

From the RL directory, run a single experiment with:

python src/main.py --model ICNN --env InvertedPendulum-v1 --outdir output \
  --total 100000 --train 100 --test 1 --tfseed 0 --npseed 0 --gymseed 0
  • Use --model to select a model from [DDPG, NAF, ICNN].
  • Use --env to select a task from the OpenAI Gym task list.
  • View all of the parameters with python src/main.py -h.
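
To compare models or average over seeds, the same command can be driven from a short script. A hypothetical sweep is sketched below; the scripts in the RL directory remain the canonical way to reproduce the paper's experiments.

import itertools
import subprocess

# Flags mirror the single-experiment command above; output directories are hypothetical.
for model, seed in itertools.product(["DDPG", "NAF", "ICNN"], range(3)):
    subprocess.run(
        ["python", "src/main.py", "--model", model,
         "--env", "InvertedPendulum-v1",
         "--outdir", "output/{}.{}".format(model, seed),
         "--total", "100000", "--train", "100", "--test", "1",
         "--tfseed", str(seed), "--npseed", str(seed), "--gymseed", str(seed)],
        check=True)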

Output

The TensorBoard summary is on by default. Use --summary False to turn it off. The TensorBoard summary includes (1) average Q value, (2) loss function, and (3) average reward for each training minibatch.

The testing total rewards are logged to log.txt. Each line is [training_timesteps] [testing_episode_total_reward].
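
That two-column format can be plotted directly; the output/log.txt path below assumes the --outdir used in the training example.

import numpy as np
import matplotlib.pyplot as plt

steps, rewards = np.loadtxt("output/log.txt", unpack=True)
plt.plot(steps, rewards)
plt.xlabel("training timesteps")
plt.ylabel("test episode total reward")
plt.savefig("test-rewards.png")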

Settings

To reproduce our experiments, run the scripts in the RL directory.

Acknowledgments

The DDPG portions of our RL code are from Simon Ramstedt's SimonRamstedt/ddpg repository.

Licensing

Unless otherwise stated, the source code is copyright Carnegie Mellon University and licensed under the Apache 2.0 License. Portions from the following third party sources have been modified and are included in this repository. These portions are noted in the source files and are copyright their respective authors with the licenses listed.

Project             License
SimonRamstedt/ddpg  MIT