
energy-py


energypy is a framework for running reinforcement learning experiments on energy environments.

energypy is built and maintained by Adam Green - [email protected].

Installation

$ git clone https://github.com/ADGEfficiency/energy-py

$ cd energy-py

$ pip install --ignore-installed -r requirements.txt

$ python setup.py install

Running experiments

energy-py has a high-level API to run an experiment from a yaml config file:

$ energypy-experiment energypy/examples/example_config.yaml battery

An example config file (energypy/examples/example_config.yaml):

expt:
    name: example

battery: &defaults
    total_steps: 10000

    env:
        env_id: battery
        dataset: example

    agent:
        agent_id: random
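The `&defaults` yaml anchor lets later runs inherit and override these settings via merge keys. A minimal, self-contained sketch using PyYAML — the second run (`battery_dqn`) is a hypothetical example, not taken from the energypy docs:

```python
import yaml

# the same config structure as above, plus a hypothetical second run
# that inherits from the &defaults anchor and swaps the agent
config = """
expt:
    name: example

battery: &defaults
    total_steps: 10000
    env:
        env_id: battery
        dataset: example
    agent:
        agent_id: random

battery_dqn:
    <<: *defaults
    agent:
        agent_id: dqn
"""

parsed = yaml.safe_load(config)
print(parsed['battery_dqn']['total_steps'])  # inherited: 10000
print(parsed['battery_dqn']['agent'])        # overridden: {'agent_id': 'dqn'}
```

Explicit keys in `battery_dqn` win over the merged defaults, so only the agent changes while the environment and step budget are shared.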

Results (log files for each episode & experiment summaries) are placed into a folder in the user's $HOME. The progress of an experiment can be watched with TensorBoard by pointing a server at this results folder:

$ tensorboard --logdir='~/energy-py-results'

Low level API

energypy provides the familiar gym-style low-level API for agent and environment initialization and interaction:

import energypy

env = energypy.make_env(env_id='battery')

agent = energypy.make_agent(
    agent_id='dqn',
    env=env,
    total_steps=10000
)

observation = env.reset()
done = False

while not done:
    action = agent.act(observation)
    next_observation, reward, done, info = env.step(action)
    training_info = agent.learn()
    observation = next_observation
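The interaction contract above can be sketched with stub classes — hypothetical stand-ins, independent of energypy — to show the control flow of a single episode:

```python
class StubEnv:
    """Minimal stand-in for an energypy environment (hypothetical)."""
    def __init__(self, episode_length=3):
        self.episode_length = episode_length

    def reset(self):
        self.step_count = 0
        return 0.0  # initial observation

    def step(self, action):
        self.step_count += 1
        done = self.step_count >= self.episode_length
        # gym convention: observation, reward, done, info
        return float(self.step_count), 1.0, done, {}


class StubAgent:
    """Minimal stand-in for an energypy agent (hypothetical)."""
    def act(self, observation):
        return 0  # always take the same action

    def learn(self):
        return {}  # training statistics


env, agent = StubEnv(), StubAgent()
observation = env.reset()
done, total_reward = False, 0.0

while not done:
    action = agent.act(observation)
    next_observation, reward, done, info = env.step(action)
    agent.learn()
    total_reward += reward
    observation = next_observation

print(total_reward)  # 3.0
```

Any agent and environment exposing `act`/`learn` and `reset`/`step` with these signatures can be dropped into the same loop — this is the interchangeability the gym design buys.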

Library

energy-py environments follow the design of OpenAI gym. energy-py also wraps some classic gym environments such as CartPole, Pendulum and MountainCar.

energy-py currently implements:

  • naive agents
  • DQN agent
  • Battery storage environment
  • Demand side flexibility environment
  • Wrappers around the OpenAI gym CartPole, Pendulum and MountainCar environments

Further reading
