ucuapps / modelicagym

License: GPL-3.0
Modelica models integration with Open AI Gym

Programming Languages

Python
139335 projects - #7 most used programming language
Modelica
51 projects

Projects that are alternatives of or similar to modelicagym

robo-gym-robot-servers
Repository containing Robot Servers ROS packages
Stars: ✭ 25 (-52.83%)
Mutual labels:  openai-gym, reinforcement-learning-environments
gym-hybrid
Collection of OpenAI parametrized action-space environments.
Stars: ✭ 26 (-50.94%)
Mutual labels:  openai-gym, reinforcement-learning-environments
gym-cartpole-swingup
A simple, continuous-control environment for OpenAI Gym
Stars: ✭ 20 (-62.26%)
Mutual labels:  openai-gym, reinforcement-learning-environments
bark-ml
Gym environments and agents for autonomous driving.
Stars: ✭ 68 (+28.3%)
Mutual labels:  openai-gym, reinforcement-learning-environments
gym-R
An R package providing access to the OpenAI Gym API
Stars: ✭ 21 (-60.38%)
Mutual labels:  openai-gym
gym-rs
OpenAI's Gym written in pure Rust for blazingly fast performance
Stars: ✭ 34 (-35.85%)
Mutual labels:  openai-gym
deep-rl-docker
Docker image with OpenAI Gym, Baselines, MuJoCo and Roboschool, utilizing TensorFlow and JupyterLab.
Stars: ✭ 44 (-16.98%)
Mutual labels:  openai-gym
Pontryagin-Differentiable-Programming
A unified end-to-end learning and control framework that is able to learn a (neural) control objective function, dynamics equation, control policy, or/and optimal trajectory in a control system.
Stars: ✭ 111 (+109.43%)
Mutual labels:  reinforcement-learning-environments
Deep-Reinforcement-Learning-for-Automated-Stock-Trading-Ensemble-Strategy-ICAIF-2020
Live Trading. Please star.
Stars: ✭ 1,251 (+2260.38%)
Mutual labels:  openai-gym
SNAC
Simultaneous Navigation and Construction
Stars: ✭ 13 (-75.47%)
Mutual labels:  reinforcement-learning-environments
deep rl acrobot
TensorFlow A2C to solve Acrobot, with synchronized parallel environments
Stars: ✭ 32 (-39.62%)
Mutual labels:  openai-gym
a3c-super-mario-pytorch
Reinforcement Learning for Super Mario Bros using A3C on GPU
Stars: ✭ 35 (-33.96%)
Mutual labels:  openai-gym
iroko
A platform to test reinforcement learning policies in the datacenter setting.
Stars: ✭ 55 (+3.77%)
Mutual labels:  openai-gym
ddp-gym
Differential Dynamic Programming controller operating in OpenAI Gym environment.
Stars: ✭ 70 (+32.08%)
Mutual labels:  openai-gym
pong-from-pixels
Training a Neural Network to play Pong from pixels
Stars: ✭ 25 (-52.83%)
Mutual labels:  openai-gym
Deep-Reinforcement-Learning-With-Python
Master classic RL, deep RL, distributional RL, inverse RL, and more using OpenAI Gym and TensorFlow with extensive Math
Stars: ✭ 222 (+318.87%)
Mutual labels:  openai-gym
gym-mtsim
A general-purpose, flexible, and easy-to-use simulator alongside an OpenAI Gym trading environment for MetaTrader 5 trading platform (Approved by OpenAI Gym)
Stars: ✭ 196 (+269.81%)
Mutual labels:  openai-gym
FlashRL
No description or website provided.
Stars: ✭ 25 (-52.83%)
Mutual labels:  reinforcement-learning-environments
ZeroSimROSUnity
Robotic simulation in Unity with ROS integration.
Stars: ✭ 112 (+111.32%)
Mutual labels:  reinforcement-learning-environments
gym-tetris
An OpenAI Gym interface to Tetris on the NES.
Stars: ✭ 33 (-37.74%)
Mutual labels:  openai-gym

[ModelicaGym logo]

ModelicaGym: Applying Reinforcement Learning to Modelica Models

The ModelicaGym toolbox was developed to employ Reinforcement Learning (RL) for solving optimization and control tasks in Modelica models. It connects models, exported via the Functional Mock-up Interface (FMI), to the OpenAI Gym toolkit, combining Modelica's equation-based modelling and co-simulation capabilities with Gym's RL tooling. ModelicaGym thus enables fast and convenient development and comparison of RL algorithms for solving optimal control problems on Modelica dynamic models.

The inheritance structure of the ModelicaGym toolbox classes and the implemented methods are discussed in detail in the examples. The toolbox functionality was validated on the cart-pole balancing problem. This includes a description of the physical system model and its integration into the toolbox, as well as experiments on the selection and influence of model parameters (i.e. force magnitude, cart-pole mass ratio, reward ratio, and simulation time step) on the learning process of the Q-learning algorithm, supported by a discussion of the simulation results.
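
Because every wrapped model is exposed through the standard Gym interface, the usual interaction loop applies unchanged. Below is a minimal sketch; Gym's built-in CartPole-v0 stands in for an FMU-backed environment so the snippet runs without an FMU, but a ModelicaGym environment would be driven through exactly the same calls.

import gym

# Gym's built-in cart-pole stands in for an FMU-backed environment here;
# a modelicagym environment exposes the same reset/step interface.
env = gym.make("CartPole-v0")

observation = env.reset()
for _ in range(200):
    action = env.action_space.sample()                  # random policy, for illustration
    observation, reward, done, info = env.step(action)  # one simulation step
    if done:
        observation = env.reset()
env.close()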

Paper

The arXiv preprint version can be found here.

The repository contains:

  • modelicagym.environments package for integrating an FMU as an environment in OpenAI Gym. An FMU is a Functional Mock-up Unit exported from one of the main Modelica tools, e.g. Dymola (proprietary) or JModelica (open source). Currently, only FMUs exported in co-simulation mode are supported. (A sketch of such an integration follows this list.)
  • gymalgs.rl package with Reinforcement Learning algorithms compatible with OpenAI Gym environments.
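
In practice, integrating an FMU means subclassing a toolbox environment class and declaring the model's interface. The sketch below is modelled on the cart-pole example; the base-class name FMI2CSEnv, the config keys, and the hook methods are assumptions to be checked against cart_pole_env.py.

from gym import spaces
from modelicagym.environments import FMI2CSEnv  # exact name/location is an assumption


class CartPoleFMUEnv(FMI2CSEnv):
    """Sketch of a cart-pole FMU wrapped as a Gym environment."""

    def __init__(self, fmu_path="path/to/CartPole.fmu"):
        config = {
            'model_input_names': ['f'],                    # force applied to the cart
            'model_output_names': ['x', 'x_dot',
                                   'theta', 'theta_dot'],  # observed model variables
            'model_parameters': {'m_cart': 10, 'm_pole': 1},
            'time_step': 0.05,                             # co-simulation step, seconds
            'positive_reward': 1,
            'negative_reward': -100,
        }
        super().__init__(fmu_path, config, log_level=0)

    def _get_action_space(self):
        return spaces.Discrete(2)          # push the cart left or right

    def _get_observation_space(self):
        high = float('inf')
        return spaces.Box(-high, high, shape=(4,))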

Installation

The full installation guide is available here.

You can test your working environment by running the ./test_setup.py script.

You can install the package itself by running pip install git+https://github.com/ucuapps/modelicagym.git (or pip3 install git+https://github.com/ucuapps/modelicagym.git if you have both Python versions installed).

Examples

Examples of the usage of both packages can be found in the examples folder.

  • Tutorial explains how to integrate an FMU using this toolbox in a step-by-step manner. The cart-pole problem serves as the illustrative example, and the code from cart_pole_env.py is referenced and described in detail.

  • cart_pole_env.py is an example of how a specific FMU can be integrated into OpenAI Gym as an environment. The classic cart-pole environment is considered. The corresponding FMUs can be found in the resources folder.

  • cart_pole_q_learner.py is an example of applying the Q-learning algorithm. An agent is trained on the cart-pole environment simulated with an FMU; its integration is described in the previous example. (A minimal sketch of such a learner is shown after the commands below.)

  • The examples are expected to run without installing the modelicagym package. To run cart_pole_q_learner.py, one just has to clone the repo. The advised way to run the examples is from the PyCharm IDE, which automatically adds the project root to the PYTHONPATH.

If one wants to run the examples from the command line, they should add the project root to the PYTHONPATH:

<work_dir>$ git clone https://github.com/ucuapps/modelicagym.git
<work_dir>$ export PYTHONPATH=$PYTHONPATH:<work_dir>/modelicagym
<work_dir>$ cd modelicagym/examples
<work_dir>/modelicagym/examples$ python3 cart_pole_q_learner.py
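
To give a sense of what cart_pole_q_learner.py does, here is a minimal, self-contained tabular Q-learning loop for a Gym-style environment with a discrete action space. It is an illustrative sketch only, not the gymalgs.rl implementation: the state binning, hyperparameter defaults, and function names are assumptions.

import numpy as np

def discretize(observation, bins):
    """Map a continuous observation vector to one integer state index."""
    idxs = [int(np.digitize(o, b)) for o, b in zip(observation, bins)]
    state = 0
    for i, b in zip(idxs, bins):
        state = state * (len(b) + 1) + i   # mixed-radix encoding over all dimensions
    return state

def q_learn(env, bins, episodes=100, alpha=0.1, gamma=0.99, eps=0.1):
    """Tabular Q-learning with an epsilon-greedy policy."""
    n_states = int(np.prod([len(b) + 1 for b in bins]))
    q = np.zeros((n_states, env.action_space.n))
    for _ in range(episodes):
        state = discretize(env.reset(), bins)
        done = False
        while not done:
            if np.random.rand() < eps:                 # explore
                action = env.action_space.sample()
            else:                                      # exploit
                action = int(np.argmax(q[state]))
            observation, reward, done, _ = env.step(action)
            next_state = discretize(observation, bins)
            q[state, action] += alpha * (reward + gamma * np.max(q[next_state])
                                         - q[state, action])
            state = next_state
    return q

For the cart-pole environment, bins could be four np.linspace arrays covering the expected ranges of position, velocity, angle, and angular velocity; the learner shipped in gymalgs.rl is the reference implementation, and this sketch only conveys the idea.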