Sentenai / reinforce

License: BSD-3-Clause
Reinforcement learning in Haskell

Programming languages: Haskell, Makefile

Projects that are alternatives to or similar to reinforce

rSoccer
🎳 Environments for Reinforcement Learning
Stars: ✭ 26 (-36.59%)
Mutual labels:  gym, gym-environments
gym-management
Gym Management System provides an easy-to-use interface for users and a database for the admin to maintain the records of gym members.
Stars: ✭ 27 (-34.15%)
Mutual labels:  gym
safe-control-gym
PyBullet CartPole and Quadrotor environments—with CasADi symbolic a priori dynamics—for learning-based control and RL
Stars: ✭ 272 (+563.41%)
Mutual labels:  gym
es pytorch
High-performance implementation of deep neuroevolution in PyTorch using mpi4py. Intended for use on HPC clusters.
Stars: ✭ 20 (-51.22%)
Mutual labels:  gym
Pytorch-RL-CPP
A Repository with C++ implementations of Reinforcement Learning Algorithms (Pytorch)
Stars: ✭ 73 (+78.05%)
Mutual labels:  gym
Explorer
Explorer is a PyTorch reinforcement learning framework for exploring new ideas.
Stars: ✭ 54 (+31.71%)
Mutual labels:  gym
GoBigger
Come & try the decision-intelligence version of "Agar"! GoBigger can also help you with multi-agent decision-intelligence study.
Stars: ✭ 410 (+900%)
Mutual labels:  gym
wolpertinger ddpg
Wolpertinger Training with DDPG (PyTorch), Deep Reinforcement Learning in Large Discrete Action Spaces. Multi-GPU/Single-GPU/CPU compatible.
Stars: ✭ 44 (+7.32%)
Mutual labels:  gym
safe-grid-agents
Training (hopefully) safe agents in gridworlds
Stars: ✭ 25 (-39.02%)
Mutual labels:  gym
gym-cellular-automata
Cellular Automata Environments for Reinforcement Learning
Stars: ✭ 12 (-70.73%)
Mutual labels:  gym-environments
flutter
Flutter fitness/workout app for wger
Stars: ✭ 106 (+158.54%)
Mutual labels:  gym
CartPole
Run OpenAI Gym on a Server
Stars: ✭ 16 (-60.98%)
Mutual labels:  gym
proto
Proto-RL: Reinforcement Learning with Prototypical Representations
Stars: ✭ 67 (+63.41%)
Mutual labels:  gym
ecole
Extensible Combinatorial Optimization Learning Environments
Stars: ✭ 249 (+507.32%)
Mutual labels:  gym
gym-cryptotrading
OpenAI Gym Environment API based Bitcoin trading environment
Stars: ✭ 111 (+170.73%)
Mutual labels:  gym
ios-build-script
Shell scripts to build ipa
Stars: ✭ 52 (+26.83%)
Mutual labels:  gym
gym-battlesnake
Multi-agent reinforcement learning environment
Stars: ✭ 29 (-29.27%)
Mutual labels:  gym
mgym
A collection of multi-agent reinforcement learning OpenAI gym environments
Stars: ✭ 41 (+0%)
Mutual labels:  gym
squadgym
Environment that can be used to evaluate reasoning capabilities of artificial agents
Stars: ✭ 27 (-34.15%)
Mutual labels:  gym
gym-anm
Design Reinforcement Learning environments that model Active Network Management (ANM) tasks in electricity distribution networks.
Stars: ✭ 87 (+112.2%)
Mutual labels:  gym-environments

reinforce

reinforce is a library that exports an OpenAI Gym-like typeclass, MonadEnv, offering both an interface to the gym-http-api and Haskell-native environments, which provide a substantial speed-up over the HTTP-server interface.
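
For orientation, here is a minimal sketch of what that typeclass and its companion types might look like. It is reconstructed from the example agent below, not from the library's source, so the exact names, the functional dependency, and the constructor shapes are assumptions that may differ from the shipped Control.MonadEnv:

{-# LANGUAGE FunctionalDependencies #-}

-- Sketch only: an environment monad e over states s, actions a, and rewards r,
-- inferred from how the example agent below uses reset and step.
class Monad e => MonadEnv e s a r | e -> s a r where
  reset :: e (Initial s)     -- begin a new episode
  step  :: a -> e (Obs r s)  -- act, then observe a reward and the next state

-- Resetting either yields an initial state or no episode at all.
data Initial s = Initial s | EmptyEpisode

-- Stepping yields the next state, a terminal result, or a shut-down signal.
data Obs r s
  = Next r s          -- reward and successor state
  | Done r (Maybe s)  -- final reward and, possibly, a final state
  | Terminated        -- the environment is no longer running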

This is an environment-first library, with basic reinforcement learning algorithms being developed on branches in subpackages (see Development and Milestones below). reinforce is currently an "alpha" release, since it still needs some work defining formal structures around what state spaces and action spaces should look like; however, Haskell's type system is expressive enough that this seems to be more of a "nice-to-have".

This repo is in active development and has some beginner-friendly contribution opportunities, from porting new gym environments to implementing new algorithms. Because this library is not on Hackage, if you would like to see the haddocks, you can find them here.

An example agent

In reinforce-zoo/bandits/examples/, you can find an agent that showcases some of the functionality of this library.

module Main where

import Reinforce.Prelude
    -- ^ NoImplicitPrelude is on

import Environments.CartPole (Environment, runEnvironment_)
import Control.MonadEnv      (Initial(..), Obs(..))

import qualified Control.MonadEnv        as Env (step, reset)
import qualified Environments.CartPole   as Env (StateCP)
    -- Comments:
    --   StateCP - an "observation," or "the state of the agent." Note that
    --             "State" is overloaded in Haskell, hence the name StateCP.
    --   Action  - a performable action in the environment.
import qualified Reinforce.Spaces.Action as Actions (randomChoice)

main :: IO ()
main = runEnvironment_ gogoRandomAgent

  where
    gogoRandomAgent :: Environment ()
    gogoRandomAgent = forM_ [0..maxEpisodes] $ \_ ->
      Env.reset >>= \case           -- this comes from LambdaCase. Sugar for: \a -> case a of ...
        EmptyEpisode -> pure ()
        Initial obs  -> do
          liftIO . print $ "Initialized episode and am in state " ++ show obs
          rolloutEpisode obs 0

    maxEpisodes :: Int
    maxEpisodes = 100

    -- this is usually the structure of a rollout:
    rolloutEpisode :: Env.StateCP -> Double -> Environment ()
    rolloutEpisode obs totalRwd = do
      a <- liftIO Actions.randomChoice
      Env.step a >>= \case
        Terminated   -> pure ()
        Done r mobs  ->
          liftIO . print
            $ "Done! final reward: " ++ show (totalRwd+r) ++ ", final state: " ++ show mobs
        Next r  obs' -> do
          liftIO . print
            $ "Stepped with " ++ show a ++ " - reward: " ++ show r ++ ", next state: " ++ show obs'
          rolloutEpisode obs' (totalRwd+r)

You can build and run this with the following commands:

git clone https://github.com/Sentenai/reinforce
cd reinforce
stack build
stack exec random-agent-example

Note that if you want to run a gym environment, you'll have to run the openai/gym-http-api server with the following steps:

git clone https://github.com/openai/gym-http-api
cd gym-http-api
pip install -r requirements.txt
python ./gym_http_server.py
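
By default the server listens on http://127.0.0.1:5000. As a quick sanity check that it is up, you can list the environment instances it is running, using the GET /v1/envs/ endpoint from gym-http-api's REST interface:

curl http://127.0.0.1:5000/v1/envs/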

Currently, development has been primarily focused on classic control, so if you want to add any of the Atari environments, this would be an easy contribution!

Installing

reinforce isn't on Hackage or Stackage (yet), so your best bet is to add this git repo to your stack.yaml file:

packages:
- '.'
- location:
    git: git@github.com:Sentenai/reinforce.git
    commit: 'v0.0.1'
  extra-dep: true

# This is a requirement due to some tight coupling of the gym-http-api
- location:
    git: https://github.com/stites/gym-http-api.git
    commit: '5b72789'
  subdirs:
    - binding-hs
  extra-dep: true
- ...

and add reinforce to the dependencies in your cabal file or package.yaml (recommended).
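
For instance, a minimal dependencies section in package.yaml might look like the following sketch (version bounds omitted):

dependencies:
  - base
  - reinforce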

Development and Milestones

If you want to contribute, you're in luck! There's a range of things to do, from tasks for the beginner Haskeller all the way to work for advanced Pythonistas!

Please file an issue mentioning where you'd like to help, or track down @stites in the DataHaskell Gitter or directly through keybase.io.

While you can check the GitHub issues, here are some items off the top of my head which could use some immediate attention (and may also need to be filed).

A few quick environment contributions might be the following:

  • #1 (easy) - Add an Atari environment to the API (like Pong! Others might require committing directly to gym-http-api)
  • #8 (med) - Port Richard Sutton's Acrobot code to Haskell
  • #6 (hard) - Break the dependency on the openai/gym-http-api server -- this would speed up performance considerably
  • #9 (harder) - Render the Haskell CartPole environment with SDL

Some longer-running algorithmic contributions which would take place on the algorithms or deep-rl branches might be:

  • #10 (easy) - Convert algorithms into agents
  • #11 (med) - Add a testable "convergence" criterion
  • #12 (med) - Implement some eligibility-trace variants on the algorithms branch
  • #13 (med) - Add some policy-gradient methods to the algorithms branch
  • #14 (hard) - Head over to the deep-rl branch and convert some of the deep reinforcement learning models to Haskell with tensorflow-haskell and/or backprop

For a longer-term view, feel free to check out Milestones.

Contributors

Thanks goes to these wonderful people (emoji key):


Sam Stites

💻 🤔 📖

Mitchell Rosen

🤔

Anastasia Aizman

📖

This project follows the all-contributors specification. Contributions of any kind welcome!
