LinZichuan / AdMRL

Licence: other
Code for paper "Model-based Adversarial Meta-Reinforcement Learning" (https://arxiv.org/abs/2006.08875)

Projects that are alternatives of or similar to AdMRL

neural-mpc
No description or website provided.
Stars: ✭ 54 (+80%)
Mutual labels:  model-based-rl
Model-Based-Reinforcement-Learning-for-Online-Recommendation
A pytorch implementation of A Model-Based Reinforcement Learning with Adversarial Training for Online Recommendation.
Stars: ✭ 33 (+10%)
Mutual labels:  model-based-rl
AWP
Codes for NeurIPS 2020 paper "Adversarial Weight Perturbation Helps Robust Generalization"
Stars: ✭ 114 (+280%)
Mutual labels:  adversarial-training
Adversarial-Patch-Training
Code for the paper: Adversarial Training Against Location-Optimized Adversarial Patches. ECCV-W 2020.
Stars: ✭ 30 (+0%)
Mutual labels:  adversarial-training
KitanaQA
KitanaQA: Adversarial training and data augmentation for neural question-answering models
Stars: ✭ 58 (+93.33%)
Mutual labels:  adversarial-training
omd
JAX code for the paper "Control-Oriented Model-Based Reinforcement Learning with Implicit Differentiation"
Stars: ✭ 43 (+43.33%)
Mutual labels:  model-based-rl
dads
Code for 'Dynamics-Aware Unsupervised Discovery of Skills' (DADS). Enables skill discovery without supervision, which can be combined with model-based control.
Stars: ✭ 138 (+360%)
Mutual labels:  model-based-rl
Deep-Reinforcement-Learning-CS285-Pytorch
Solutions of assignments of Deep Reinforcement Learning course presented by the University of California, Berkeley (CS285) in Pytorch framework
Stars: ✭ 104 (+246.67%)
Mutual labels:  model-based-rl
domain-shift-robustness
Code for the paper "Addressing Model Vulnerability to Distributional Shifts over Image Transformation Sets", ICCV 2019
Stars: ✭ 22 (-26.67%)
Mutual labels:  adversarial-training
Adversarial-Distributional-Training
Adversarial Distributional Training (NeurIPS 2020)
Stars: ✭ 52 (+73.33%)
Mutual labels:  adversarial-training
consistency-adversarial
Consistency Regularization for Adversarial Robustness (AAAI 2022)
Stars: ✭ 37 (+23.33%)
Mutual labels:  adversarial-training
Robust-Semantic-Segmentation
Dynamic Divide-and-Conquer Adversarial Training for Robust Semantic Segmentation (ICCV2021)
Stars: ✭ 25 (-16.67%)
Mutual labels:  adversarial-training
adan
Language-Adversarial Training for Cross-Lingual Text Classification (TACL)
Stars: ✭ 60 (+100%)
Mutual labels:  adversarial-training
FeatureScatter
Feature Scattering Adversarial Training
Stars: ✭ 64 (+113.33%)
Mutual labels:  adversarial-training

AdMRL

This is the implementation for the paper Model-based Adversarial Meta-Reinforcement Learning.

If you use this code in your research, please cite the following paper:

@article{lin2020model, 
    title={Model-based Adversarial Meta-Reinforcement Learning}, 
    author={Lin, Zichuan and Thomas, Garrett and Yang, Guangwen and Ma, Tengyu},
    journal={arXiv preprint arXiv:2006.08875}, 
    year={2020} 
}

Requirements

  1. OpenAI Baselines (0.1.6)
  2. MuJoCo (>= 1.5)
  3. TensorFlow (>= 1.9)
  4. NumPy (>= 1.14.5)
  5. Python 3.6

🔧 Installation

To install, first install MuJoCo. Set `LD_LIBRARY_PATH` to point to the MuJoCo binaries (`$HOME/.mujoco/mujoco200/bin`) and `MUJOCO_LICENSE_PATH` to point to the MuJoCo license key (`$HOME/.mujoco/mjkey.txt`). You can then set up MuJoCo by running `rllab/scripts/setup_mujoco.sh`. To install the remaining dependencies, create the conda environment with `conda env create -f environment.yml`. To use rllab, also run `cd rllab; pip install -e .`.
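For reference, the environment variables above can be exported in your shell profile. This is a minimal sketch assuming the default MuJoCo 2.0 install locations; adjust the paths to match your setup:

```shell
# Assumed default MuJoCo 2.0 paths; change these if MuJoCo lives elsewhere.
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$HOME/.mujoco/mujoco200/bin"
export MUJOCO_LICENSE_PATH="$HOME/.mujoco/mjkey.txt"

# Then, from the repository root:
#   rllab/scripts/setup_mujoco.sh
#   conda env create -f environment.yml
#   cd rllab && pip install -e .
```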

🚀 Run

You can run a single experiment on a task family, e.g.:

python main.py --taskname=Ant2D

You can also specify the hyper-parameters in launch.py and launch many experiments at once:

python launch.py
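A launcher of this kind typically expands a grid of hyper-parameters into one `main.py` invocation each. The sketch below is hypothetical (it is not the repo's actual launch.py, and the `--seed` flag is assumed for illustration); it only builds the command lines, which you could then pass to `subprocess.run`:

```python
import itertools

def build_commands(tasks, seeds, script="main.py"):
    """Build one `python main.py --taskname=... --seed=...` command
    per (task, seed) combination in the grid."""
    cmds = []
    for task, seed in itertools.product(tasks, seeds):
        cmds.append(["python", script, f"--taskname={task}", f"--seed={seed}"])
    return cmds

if __name__ == "__main__":
    # Print the expanded grid instead of executing it.
    for cmd in build_commands(["Ant2D", "Walker2D"], [0, 1]):
        print(" ".join(cmd))
```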

License

MIT License.