
RLGC-Project / RLGC

Licence: other
An open-source platform for applying Reinforcement Learning for Grid Control (RLGC)

Programming Languages

Jupyter Notebook
Java
Python

Projects that are alternatives of or similar to RLGC

ocs2
Optimal Control for Switched Systems
Stars: ✭ 263 (+209.41%)
Mutual labels:  control, optimal-control
RobustAndOptimalControl.jl
Robust and optimal design and analysis of linear control systems
Stars: ✭ 25 (-70.59%)
Mutual labels:  control, optimal-control
Spot mini mini
Dynamics and Domain Randomized Gait Modulation with Bezier Curves for Sim-to-Real Legged Locomotion.
Stars: ✭ 426 (+401.18%)
Mutual labels:  control, openai-gym
Dock
A docking layout system.
Stars: ✭ 204 (+140%)
Mutual labels:  control
Grbac
👮 grbac is a fast, elegant and concise RBAC(role-based access control) framework
Stars: ✭ 231 (+171.76%)
Mutual labels:  control
ColorPickerLib
A WPF/MVVM implementation of a themeable color picker control.
Stars: ✭ 44 (-48.24%)
Mutual labels:  control
Xamarin.Forms.MultiSelectListView
☑️ Select multiple rows in a listview with xamarin.forms
Stars: ✭ 61 (-28.24%)
Mutual labels:  control
Clearml
ClearML - Auto-Magical CI/CD to streamline your ML workflow. Experiment Manager, MLOps and Data-Management
Stars: ✭ 2,868 (+3274.12%)
Mutual labels:  control
rts2
Contains core part of the RTS2 - RTS2 libraries, and drivers for basic devices.
Stars: ✭ 35 (-58.82%)
Mutual labels:  control
awesome-isaac-gym
A curated list of awesome NVIDIA Isaac Gym frameworks, papers, software, and resources
Stars: ✭ 373 (+338.82%)
Mutual labels:  openai-gym
STM32 TimerInterrupt
This library enables you to use Interrupt from Hardware Timers on an STM32F/L/H/G/WB/MP1-based board. These STM32F/L/H/G/WB/MP1 Hardware Timers, using Interrupt, still work even if other functions are blocking. Moreover, they are much more precise (certainly depending on clock frequency accuracy) than other software timers using millis() or micr…
Stars: ✭ 27 (-68.24%)
Mutual labels:  control
Handycontrol
Contains some simple and commonly used WPF controls
Stars: ✭ 3,349 (+3840%)
Mutual labels:  control
PuTTY-ng
An improved multi-tabbed PuTTY with better user experience. This project is based on noddle1983's putty-nd.
Stars: ✭ 37 (-56.47%)
Mutual labels:  control
Hoverboard Firmware Hack Foc
With Field Oriented Control (FOC)
Stars: ✭ 215 (+152.94%)
Mutual labels:  control
AXIOM-Remote
A device to control AXIOM cameras.
Stars: ✭ 24 (-71.76%)
Mutual labels:  control
Acados
Fast and embedded solvers for nonlinear optimal control
Stars: ✭ 194 (+128.24%)
Mutual labels:  control
Reactor-and-Turbine-control-program
This is my Reactor- and Turbine control program for ComputerCraft and BigReactors
Stars: ✭ 18 (-78.82%)
Mutual labels:  control
NumericUpDownLib
Implements numeric up down WPF controls to edit/display values (byte, integer, short, ushort etc.) with a textbox and optional up/down arrow (repeat) buttons. Value editing is possible by dragging the mouse vertically/horizontally, clicking up/down buttons, using up/down or left right cursor keys, spinning mousewheel on mouseover, or editing th…
Stars: ✭ 68 (-20%)
Mutual labels:  control
Dropdownmenukit
UIKit drop down menu, simple yet flexible and written in Swift
Stars: ✭ 246 (+189.41%)
Mutual labels:  control
yarll
Combining deep learning and reinforcement learning.
Stars: ✭ 84 (-1.18%)
Mutual labels:  openai-gym

RLGC

Repo of the Reinforcement Learning for Grid Control (RLGC) Project.

In this project, we explore the use of deep reinforcement learning methods for control and decision-making problems in power systems. We leverage the InterPSS simulation platform (http://www.interpss.org/) as the power system simulator, and we develop an OpenAI Gym (https://gym.openai.com/) compatible power grid dynamic simulation environment for developing, testing, and benchmarking reinforcement learning algorithms for grid control.
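
A minimal usage sketch of this Gym-style interface is shown below. It assumes PowerDynSimEnv is imported from the environment definition module shipped with the repo (PowerDynSimEnvDef_v5.py, mentioned in the Training section); the repository path, server JAR name, and Py4J port are placeholders, while the case and configuration files are the IEEE 39-bus examples described later in this README.

from PowerDynSimEnvDef_v5 import PowerDynSimEnv

repo_path = '/path/to/RLGC'                                  # placeholder: your local clone
case_files_array = [repo_path + '/testData/IEEE39/IEEE39bus_multiloads_xfmr4_smallX_v30.raw',
                    repo_path + '/testData/IEEE39/IEEE39bus_3AC.dyr']
dyn_config_file = repo_path + '/testData/IEEE39/json/IEEE39_dyn_config.json'
rl_config_file = repo_path + '/testData/IEEE39/json/IEEE39_RL_loadShedding_3motor_2levels.json'
jar_path = repo_path + '/lib/RLGCJavaServer.jar'             # placeholder: the server JAR shipped with the repo
java_port = 25333                                            # placeholder: any free port for the Py4J gateway

env = PowerDynSimEnv(case_files_array, dyn_config_file, rl_config_file, jar_path, java_port)

obs = env.reset()                          # starts a new dynamic-simulation episode
done = False
while not done:
    action = env.action_space.sample()     # random action, e.g. one load-shedding decision
    obs, reward, done, info = env.step(action)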

NOTE: RLGC is under active development and may change at any time. Feel free to provide feedback and comments.


Environment setup

To run the training, you need Python 3.5 or above and Java 8. A Unix-based OS is recommended. We suggest using Anaconda to create a virtual environment from the YAML file we provide.

  • To clone our project

    git clone https://github.com/RLGC-Project/RLGC.git
    
  • To create the virtual environment
    If you would like to use our development environment, we provide an environment.yml file:

    cd RLGC
    conda env create -f environment.yml
    

    Alternatively, you can create your own environment. The main dependencies include gym, tensorflow, py4j, numpy, matplotlib, stable-baselines, and jupyter notebook:

    cd RLGC
    conda create --name <your-env-name>
    

    If you get errors about OpenAI Gym, you probably need to install cmake and zlib1g-dev. For example, on an Ubuntu machine, run the following commands.

    sudo apt-get upgrade
    sudo apt-get install cmake
    sudo apt-get install zlib1g-dev
    

    After creating the environment, you can activate it and do your development in it.

  • To activate the virtual environment

    source activate <your-env-name>  
    
  • To deactivate the virtual environment

    source deactivate
    

Training

  • With RLGCJavaServer version 0.80 or newer and grid environment definition version 5 (PowerDynSimEnvDef_v5.py) or newer, users do not need to start the Java server explicitly. The server is started automatically when the grid environment PowerDynSimEnv is created.
  • To launch the training, first activate the virtual environment, then run the training script under the example folder:
source activate <your-env-name> 
cd RLGC/examples/IEEE39_load_shedding/  
python trainIEEE39LoadSheddingAgent_discrete_action.py 

During training, the training log is printed to the screen. After training, you can deactivate the virtual environment:

source deactivate

Check training results and test trained model

Two Jupyter notebooks (with Linux and Windows versions, since directory paths are specified differently) are provided as examples for checking training results and testing a trained RL model.
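
Conceptually, testing a trained model reduces to the sketch below (not the notebooks' exact code): load a saved Stable Baselines model and roll it out in the grid environment. The model file name is a hypothetical placeholder, and env is assumed to be created exactly as in the training script described in the next section.

from stable_baselines import DQN

model = DQN.load('ieee39_load_shedding_dqn')     # hypothetical path; use the model saved by your own training run

obs = env.reset()                                # env created as in the training script
done, episode_reward = False, 0.0
while not done:
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    episode_reward += reward
print('episode reward:', episode_reward)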

Customize the grid environment for training and testing

If you want to develop a new grid environment for RL training, or customize an existing one (e.g., the IEEE 39-bus system for load shedding), the simplest way is to provide your own case and configuration files.

When you open trainIEEE39LoadSheddingAgent_discrete_action.py, you will notice the following code:

case_files_array = []
case_files_array.append(repo_path + '/testData/IEEE39/IEEE39bus_multiloads_xfmr4_smallX_v30.raw')
case_files_array.append(repo_path + '/testData/IEEE39/IEEE39bus_3AC.dyr')

....
# configuration files for dynamic simulation and RL
dyn_config_file = repo_path + '/testData/IEEE39/json/IEEE39_dyn_config.json'
rl_config_file = repo_path + '/testData/IEEE39/json/IEEE39_RL_loadShedding_3motor_2levels.json'

env = PowerDynSimEnv(case_files_array, dyn_config_file, rl_config_file, jar_path, java_port)

These lines specify the case and configuration files for dynamic simulation and RL training. You can develop your own environment by following these examples. Since PowerDynSimEnv is defined on top of the OpenAI Gym environment interface, once the environment is created you can use it like any other Gym environment and seamlessly interface it with the RL algorithms provided in OpenAI Baselines or Stable Baselines.
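
As an illustration, a minimal Stable Baselines training setup on top of the environment created above might look like the following sketch. DQN is used here only because this example has a discrete action space; the training budget and save path are illustrative placeholders, not the settings used in the provided training script.

from stable_baselines import DQN

model = DQN('MlpPolicy', env, verbose=1)    # env is the PowerDynSimEnv created above
model.learn(total_timesteps=100000)         # illustrative training budget
model.save('ieee39_load_shedding_dqn')      # hypothetical save path, reused in the testing sketch above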


Citation

If you use this code, please cite it as:

@article{huang2019adaptive,
  title={Adaptive Power System Emergency Control using Deep Reinforcement Learning},
  author={Huang, Qiuhua and Huang, Renke and Hao, Weituo and Tan, Jie and Fan, Rui and Huang, Zhenyu},
  journal={IEEE Transactions on Smart Grid},
  year={2019},
  publisher={IEEE}
}

Communication

If you spot a bug or have a problem running the code, please open an issue.

Please direct other correspondence to Qiuhua Huang: qiuhua DOT huang AT pnnl DOT gov
