FlashRL - Flash Platform for Reinforcement Learning

For the updated version of FlashRL, go to this link.

TODO List

  • Fix the pyVNC issue: pyVNC currently fails to start a VNC server for the game to run on, which blocks running games in headless mode.
  • Begin developing custom environments.
  • Begin developing Docker containers for the code to run in. Preferably, create a Dockerfile that can run custom environments without requiring the local machine to have all dependencies installed.
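One of the TODO items above asks for a Dockerfile. A minimal, untested sketch of what it could look like, mirroring the apt and pip commands from the Installation section below (the base image choice, package availability on ubuntu:18.04, and the main.py entry point are assumptions):

```dockerfile
# Untested sketch of a FlashRL container, based on the Installation steps.
FROM ubuntu:18.04

# System packages named in the README, plus git for the pip VCS installs.
RUN apt-get update && apt-get install -y --no-install-recommends \
        python3 python3-pip xvfb gnash vnc4server git \
    && rm -rf /var/lib/apt/lists/*

RUN pip3 install git+https://github.com/cair/pyVNC \
    && pip3 install git+https://github.com/JDaniel41/FlashRL

WORKDIR /app
COPY . /app

# Assumes the project follows the main.py template shown later in this README.
CMD ["python3", "main.py"]
```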

Prerequisites

  • Ubuntu 18.04 (Ubuntu 20.04 did not work in our most recent testing.)
  • Python 3.x (tested with Python 3.6.8)
  • gnash
  • xvfb

Installation

For our testing, we have been working inside a Python virtual environment.

sudo apt-get install xvfb
sudo apt-get install gnash
sudo apt-get install vnc4server
# We recommend doing the next steps inside a virtual environment.
pip install git+https://github.com/cair/pyVNC
pip install git+https://github.com/JDaniel41/FlashRL
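For completeness, a sketch of setting up the virtual environment mentioned above before running the pip installs (the environment name flashrl-env is illustrative):

```shell
# Create and activate an isolated virtualenv for the pip installs above.
python3 -m venv flashrl-env
. flashrl-env/bin/activate
# Confirm the active interpreter lives inside the virtualenv.
python -c "import sys; print(sys.prefix)"
```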

Deploy new environment

Developers can add custom environments under project/contrib/environments/.

A typical custom implementation looks like this:

- project
    - __init__.py
    - main.py
    - contrib
        - environments
            - env_name
                - __init__.py
                - dataset.p
                - model.h5
                - env.swf
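Each environment package exposes a define dict from its __init__.py (shown for Mujaffa below). A sketch of how such a package could be loaded by name, assuming (this is our assumption, not FlashRL's documented internals) that the loader simply imports the package and reads define; the demo-env directory here is a throwaway example:

```python
import importlib.util
import tempfile
from pathlib import Path

def load_env_config(env_dir):
    """Import an environment package's __init__.py and return its define dict."""
    spec = importlib.util.spec_from_file_location(env_dir.name, env_dir / "__init__.py")
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module.define

# Demo with a throwaway environment directory mirroring the layout above.
root = Path(tempfile.mkdtemp())
env = root / "contrib" / "environments" / "demo-env"
env.mkdir(parents=True)
(env / "__init__.py").write_text(
    'define = {"swf": "env.swf", "state_space": (84, 84, 3)}\n'
)
print(load_env_config(env)["swf"])  # env.swf
```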

In the following section, we demonstrate how to implement the Flash game Mujaffa as an environment for FlashRL.

Mujaffa-1.6

Prerequisites

  • SWF Game File
  • Python 3.x
  • Keras

  • Create the directory structure: mkdir -p contrib/environments/mujaffa-v1.6
  • Create the configuration file (the original echo command breaks on the nested double quotes, so a heredoc is used instead):
cat > contrib/environments/mujaffa-v1.6/__init__.py <<'EOF'
define = {
    "swf": "mujaffa.swf",
    "model": "model.h5",
    "dataset": "dataset.p",
    "scenes": [],
    "state_space": (84, 84, 3)
}
EOF
  • Add the SWF file "mujaffa.swf" to contrib/environments/mujaffa-v1.6/
  • Create a file main.py in the project root with the following template:
from FlashRL import Game

def on_frame(state, type, vnc):
    # vnc.send_key("a") # Sends the key "a"
    # vnc.send_mouse("Left", (200, 200)) # Left Clicks at x=200, y=200
    # vnc.send_mouse("Right", (200, 200)) # Right Clicks at x=200, y=200
    pass

g = Game("mujaffa-v1.6", fps=10, frame_callback=on_frame, grayscale=True, normalized=True)
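The on_frame callback above receives each captured frame. A self-contained sketch of what a callback body could do with the state, assuming state arrives as an array matching the declared state_space of (84, 84, 3); the "click the brightest pixel" policy is purely illustrative, and the vnc=None default is our tweak so the function can be exercised without a running game:

```python
import numpy as np

def on_frame(state, type, vnc=None):
    # Assumed: state is an (84, 84, 3) image array per the define dict's
    # state_space; with grayscale=True it may instead arrive as (84, 84).
    frame = np.asarray(state, dtype=np.float32)
    # Toy "policy" for illustration: click the brightest pixel's location.
    flat_idx = frame.sum(axis=-1).argmax() if frame.ndim == 3 else frame.argmax()
    y, x = np.unravel_index(flat_idx, frame.shape[:2])
    if vnc is not None:
        vnc.send_mouse("Left", (int(x), int(y)))
    return int(x), int(y)

# With an all-zero frame, the argmax lands at the origin:
print(on_frame(np.zeros((84, 84, 3)), "frame"))  # (0, 0)
```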

Licence

Copyright 2017/2018 Per-Arne Andersen

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
