
mymusise / Trading Gym

A trading environment based on Gym

Programming Languages

python, python3

Projects that are alternatives of or similar to Trading Gym

Gym Gazebo2
gym-gazebo2 is a toolkit for developing and comparing reinforcement learning algorithms using ROS 2 and Gazebo
Stars: ✭ 257 (+261.97%)
Mutual labels:  gym, reinforcement-learning, rl
Muzero General
MuZero
Stars: ✭ 1,187 (+1571.83%)
Mutual labels:  gym, reinforcement-learning, rl
Rl trading
An environment for high-frequency trading agents under reinforcement learning
Stars: ✭ 205 (+188.73%)
Mutual labels:  trading, reinforcement-learning, rl
Stable Baselines
Mirror of Stable-Baselines: a fork of OpenAI Baselines, implementations of reinforcement learning algorithms
Stars: ✭ 115 (+61.97%)
Mutual labels:  gym, reinforcement-learning, rl
Rl Baselines Zoo
A collection of 100+ pre-trained RL agents using Stable Baselines, training and hyperparameter optimization included.
Stars: ✭ 839 (+1081.69%)
Mutual labels:  gym, reinforcement-learning, rl
Rl Baselines3 Zoo
A collection of pre-trained RL agents using Stable Baselines3, training and hyperparameter optimization included.
Stars: ✭ 161 (+126.76%)
Mutual labels:  gym, reinforcement-learning, rl
Rlenv.directory
Explore and find reinforcement learning environments in a list of 150+ open source environments.
Stars: ✭ 79 (+11.27%)
Mutual labels:  gym, reinforcement-learning, rl
Atari
AI research environment for the Atari 2600 games 🤖.
Stars: ✭ 174 (+145.07%)
Mutual labels:  gym, reinforcement-learning, rl
Drq
DrQ: Data regularized Q
Stars: ✭ 268 (+277.46%)
Mutual labels:  gym, reinforcement-learning, rl
Rl Book
Source codes for the book "Reinforcement Learning: Theory and Python Implementation"
Stars: ✭ 464 (+553.52%)
Mutual labels:  gym, reinforcement-learning
Rosettastone
Hearthstone simulator using C++ with some reinforcement learning
Stars: ✭ 510 (+618.31%)
Mutual labels:  reinforcement-learning, rl
Deepdrive
Deepdrive is a simulator that allows anyone with a PC to push the state-of-the-art in self-driving
Stars: ✭ 628 (+784.51%)
Mutual labels:  gym, reinforcement-learning
Robotics Rl Srl
S-RL Toolbox: Reinforcement Learning (RL) and State Representation Learning (SRL) for Robotics
Stars: ✭ 453 (+538.03%)
Mutual labels:  gym, reinforcement-learning
Mushroom Rl
Python library for Reinforcement Learning.
Stars: ✭ 442 (+522.54%)
Mutual labels:  reinforcement-learning, rl
Amazon Sagemaker Examples
Example 📓 Jupyter notebooks that demonstrate how to build, train, and deploy machine learning models using 🧠 Amazon SageMaker.
Stars: ✭ 6,346 (+8838.03%)
Mutual labels:  reinforcement-learning, rl
Deep Rl Keras
Keras Implementation of popular Deep RL Algorithms (A3C, DDQN, DDPG, Dueling DDQN)
Stars: ✭ 395 (+456.34%)
Mutual labels:  gym, reinforcement-learning
Super Mario Bros Ppo Pytorch
Proximal Policy Optimization (PPO) algorithm for Super Mario Bros
Stars: ✭ 649 (+814.08%)
Mutual labels:  gym, reinforcement-learning
Super Mario Bros A3c Pytorch
Asynchronous Advantage Actor-Critic (A3C) algorithm for Super Mario Bros
Stars: ✭ 775 (+991.55%)
Mutual labels:  gym, reinforcement-learning
Pytorch Rl
This repository contains model-free deep reinforcement learning algorithms implemented in Pytorch
Stars: ✭ 394 (+454.93%)
Mutual labels:  gym, reinforcement-learning
Gym Anytrading
The most simple, flexible, and comprehensive OpenAI Gym trading environment (Approved by OpenAI Gym)
Stars: ✭ 627 (+783.1%)
Mutual labels:  trading, reinforcement-learning

Trading-Gym


Trading-Gym is a trading environment based on Gym, for those who want to customize everything.

install

$ pip install trading-gym

Creating features with ta-lib is suggested; it will improve the agent's performance and make it easier to learn. You should install ta-lib first. Take Ubuntu x64 as an example.

$ wget http://prdownloads.sourceforge.net/ta-lib/ta-lib-0.4.0-src.tar.gz 
$ tar -zxvf ta-lib-0.4.0-src.tar.gz
$ cd ta-lib/
$ ./configure --prefix=$PREFIX
$ make install

$ export TA_LIBRARY_PATH=$PREFIX/lib
$ export TA_INCLUDE_PATH=$PREFIX/include

$ pip install TA-Lib

See more.
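Once both the native library and the Python wrapper are installed, a quick sanity check looks like this (a minimal sketch; the random prices are only there to confirm the import works):

import numpy as np
import talib

prices = np.random.random(100)               # arbitrary data, just for the check
print(talib.SMA(prices, timeperiod=10)[-5:]) # prints the last few SMA values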

Examples

quick start

from trading_gym.env import TradeEnv
import random


# Create the environment from the sample exchange data
env = TradeEnv(data_path='./data/test_exchange.json')
done = False
obs = env.reset()
for i in range(500):
    # Take a random action: 0 = PUT, 1 = HOLD, 2 = PUSH (see "actions" below)
    action = random.choice([0, 1, 2])
    obs, reward, done, info = env.step(action)
    env.render()
    if done:
        break

A sample training run with stable-baselines

from trading_gym.env import TradeEnv
from stable_baselines.common.vec_env import DummyVecEnv
from stable_baselines import DQN
from stable_baselines.deepq.policies import MlpPolicy


data_path = './data/fake_sin_data.json'
# use_ta=True builds the observation from ta-lib features
env = TradeEnv(data_path=data_path, unit=50000, data_kwargs={'use_ta': True})
env = DummyVecEnv([lambda: env])

model = DQN(MlpPolicy, env, verbose=2, learning_rate=1e-5)
model.learn(200000)


# Run the trained agent
obs = env.reset()
for i in range(8000):
    action, _states = model.predict(obs)
    obs, rewards, done, info = env.step(action)
    env.render()
    if done:
        break
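Continuing the example above, the trained agent can be saved and reloaded with the usual stable-baselines calls (the file name here is just an example):

# Save the trained model and reload it later
model.save('dqn_trading_gym')
model = DQN.load('dqn_trading_gym', env=env)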

input format

[
    {
        "open": 10.0,
        "close": 10.0,
        "high": 10.0,
        "low": 10.0,
        "volume": 10.0,
        "date": "2019-01-01 09:59"
    },
    {
        "open": 10.1,
        "close": 10.1,
        "high": 10.1,
        "low": 10.1,
        "volume": 10.1,
        "date": "2019-01-01 10:00"
    }
]
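If you don't have market data handy, a compatible file can be generated with a few lines of Python. This is only a sketch: the sine-wave prices and the file name are made up for illustration.

import json
import math
from datetime import datetime, timedelta

start = datetime(2019, 1, 1, 10, 0)
bars = []
for i in range(1000):
    price = 10.0 + math.sin(i / 20.0)           # toy sine-wave price
    bars.append({
        "open": price,
        "close": price,
        "high": price,
        "low": price,
        "volume": 100.0,
        "date": (start + timedelta(minutes=i)).strftime("%Y-%m-%d %H:%M"),
    })

with open("my_sin_data.json", "w") as f:        # pass this path as data_path
    json.dump(bars, f)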

actions

Action  Value
PUT     0
HOLD    1
PUSH    2
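To avoid magic numbers in agent code, the values above can be wrapped in a small enum. The IntEnum below is just a convenience on top of the documented values, not part of trading-gym:

from enum import IntEnum

class Action(IntEnum):
    PUT = 0
    HOLD = 1
    PUSH = 2

# e.g. obs, reward, done, info = env.step(Action.HOLD)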

observation

  • native obs: shape=(*, 51, 6), returns the last 51 bars of OHLC history
env = TradeEnv(data_path=data_path)
  • obs with ta: shape=(*, 10), returns observations built with talib (see the shape check below)
    • default features: ['ema', 'wma', 'sma', 'sar', 'apo', 'macd', 'macdsignal', 'macdhist', 'adosc', 'obv']
env = TradeEnv(data_path=data_path, data_kwargs={'use_ta': True})
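To see what the agent actually receives, you can reset each variant and print the observation shape (a quick check, assuming the observations convert to numpy arrays):

import numpy as np
from trading_gym.env import TradeEnv

data_path = './data/fake_sin_data.json'

raw_env = TradeEnv(data_path=data_path)
ta_env = TradeEnv(data_path=data_path, data_kwargs={'use_ta': True})

print(np.asarray(raw_env.reset()).shape)  # roughly (51, 6), per the shapes above
print(np.asarray(ta_env.reset()).shape)   # roughly (10,), the ta-lib features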

Custom

custom obs

def custom_obs_features_func(history, info):
    # history holds the past bars; return whatever features the agent should see
    close = [obs.close for obs in history]
    return close


env = TradeEnv(data_path=data_path,
               get_obs_features_func=custom_obs_features_func,
               ops_shape=(1))

custom reward

def custom_reward_func(exchange):
    # exchange tracks the current position and its profit
    return exchange.profit


env = TradeEnv(data_path=data_path,
               get_reward_func=custom_reward_func)

The exchange parameter is an instance of the Exchange class.
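For instance, a reward based only on the unrealized part of the profit might look like the sketch below; it assumes the Exchange object also exposes the floating_profit quantity described in the Reward section that follows, which may not match the actual attribute name:

def floating_reward_func(exchange):
    # assumes an Exchange attribute named floating_profit (see the Reward section)
    return exchange.floating_profit


env = TradeEnv(data_path=data_path,
               get_reward_func=floating_reward_func)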

Reward

  • reward = fixed_profit
  • profit = fixed_profit + floating_profit
  • floating_profit = (latest_price - avg_price) * unit
  • unit = int(nav / buy_in_price)
  • avg_price = ((buy_in_price * unit) + charge) / unit
  • fixed_profit = sum of the floating_profit realized each time a position is closed (see the worked example below)
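A quick worked example of these formulas, with made-up numbers purely for illustration:

nav = 50000.0
buy_in_price = 10.0
charge = 5.0                                           # assumed fee for this example
latest_price = 10.2

unit = int(nav / buy_in_price)                         # 5000
avg_price = ((buy_in_price * unit) + charge) / unit    # 10.001
floating_profit = (latest_price - avg_price) * unit    # 995.0
# Once the position is closed, this floating_profit is added to fixed_profit,
# and the reward at each step is fixed_profit.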