
StateOfTheArt-quant / trading_gym

License: Apache-2.0
A unified environment for supervised learning and reinforcement learning in the context of quantitative trading.


Projects that are alternatives of or similar to trading_gym

Sgx Full Orderbook Tick Data Trading Strategy
Providing the solutions for high-frequency trading (HFT) strategies using data science approaches (Machine Learning) on Full Orderbook Tick Data.
Stars: ✭ 733 (+1936.11%)
Mutual labels:  quant, quantitative-trading
Turingtrader
The Open-Source Backtesting Engine/ Market Simulator by Bertram Solutions.
Stars: ✭ 132 (+266.67%)
Mutual labels:  quant, quantitative-trading
Quantstats
Portfolio analytics for quants, written in Python
Stars: ✭ 823 (+2186.11%)
Mutual labels:  quant, quantitative-trading
Tianshou
An elegant PyTorch deep reinforcement learning library.
Stars: ✭ 4,109 (+11313.89%)
Mutual labels:  ddpg, ppo
mujoco-benchmark
Provides a full reinforcement learning benchmark on MuJoCo environments, including DDPG, SAC, TD3, PG, A2C, and PPO.
Stars: ✭ 101 (+180.56%)
Mutual labels:  ddpg, ppo
Qlib
Qlib is an AI-oriented quantitative investment platform, which aims to realize the potential, empower the research, and create the value of AI technologies in quantitative investment. With Qlib, you can easily try your ideas to create better Quant investment strategies. An increasing number of SOTA Quant research works/papers are released in Qlib.
Stars: ✭ 7,582 (+20961.11%)
Mutual labels:  quant, quantitative-trading
Zvt
modular quant framework.
Stars: ✭ 1,801 (+4902.78%)
Mutual labels:  quant, quantitative-trading
Deep Reinforcement Learning Algorithms
31 projects in the framework of Deep Reinforcement Learning algorithms: Q-learning, DQN, PPO, DDPG, TD3, SAC, A2C and others. Each project is provided with a detailed training log.
Stars: ✭ 167 (+363.89%)
Mutual labels:  ddpg, ppo
Deep-Reinforcement-Learning-for-Automated-Stock-Trading-Ensemble-Strategy-ICAIF-2020
Live Trading. Please star.
Stars: ✭ 1,251 (+3375%)
Mutual labels:  ddpg, ppo
Quant Notes
Quantitative Interview Preparation Guide, updated version here ==>
Stars: ✭ 180 (+400%)
Mutual labels:  quant, quantitative-trading
Pytorch Drl
PyTorch implementations of various Deep Reinforcement Learning (DRL) algorithms for both single agent and multi-agent.
Stars: ✭ 233 (+547.22%)
Mutual labels:  ddpg, ppo
Rainy
☔ Deep RL agents with PyTorch☔
Stars: ✭ 39 (+8.33%)
Mutual labels:  ddpg, ppo
Deeprl
Modularized Implementation of Deep RL Algorithms in PyTorch
Stars: ✭ 2,640 (+7233.33%)
Mutual labels:  ddpg, ppo
Awesome Quant
A curated list of insanely awesome libraries, packages and resources for Quants (Quantitative Finance)
Stars: ✭ 8,205 (+22691.67%)
Mutual labels:  quant, quantitative-trading
Machine Learning Is All You Need
🔥🌟《Machine Learning 格物志》: ML + DL + RL basic codes and notes by sklearn, PyTorch, TensorFlow, Keras & the most important, from scratch!💪 This repository is ALL You Need!
Stars: ✭ 173 (+380.56%)
Mutual labels:  ddpg, ppo
Abu
Abu quantitative trading system (stocks, options, futures, Bitcoin, machine learning). An open-source, Python-based quantitative trading and quantitative investment framework.
Stars: ✭ 8,589 (+23758.33%)
Mutual labels:  quant, quantitative-trading
Machin
Reinforcement learning library(framework) designed for PyTorch, implements DQN, DDPG, A2C, PPO, SAC, MADDPG, A3C, APEX, IMPALA ...
Stars: ✭ 145 (+302.78%)
Mutual labels:  ddpg, ppo
Minimalrl
Implementations of basic RL algorithms with minimal lines of codes! (pytorch based)
Stars: ✭ 2,051 (+5597.22%)
Mutual labels:  ddpg, ppo
Quant Trading
Python quantitative trading strategies including VIX Calculator, Pattern Recognition, Commodity Trading Advisor, Monte Carlo, Options Straddle, Shooting Star, London Breakout, Heikin-Ashi, Pair Trading, RSI, Bollinger Bands, Parabolic SAR, Dual Thrust, Awesome, MACD
Stars: ✭ 2,407 (+6586.11%)
Mutual labels:  quant, quantitative-trading
Deep-Reinforcement-Learning-With-Python
Master classic RL, deep RL, distributional RL, inverse RL, and more using OpenAI Gym and TensorFlow with extensive Math
Stars: ✭ 222 (+516.67%)
Mutual labels:  ddpg, ppo

trading_gym

trading_gym is a unified environment for supervised learning and reinforcement learning in the context of quantitative trading.

Philosophy

trading_gym is designed with the idea that, in the context of quantitative trading, different research tasks need different data formats. For example, cross-sectional data is used to explain the cross-sectional variation in stock returns, time-series data is used for timing-strategy development, and sequential data is used for sequence models such as RNNs and their variants. In addition, supervised learning algorithms and reinforcement learning require different data architectures.

The goal of trading_gym is to provide a unified environment for supervised learning and reinforcement learning, built on top of the conceptual framework of reinforcement learning.

The main concepts of RL are the agent and the environment. The environment is the world that the agent lives in and interacts with. At every step of interaction, the agent sees a (possibly partial) observation of the state of the world and then decides on an action to take. The agent then perceives a signal from the environment: some information that tells it how good or bad the current world state is.

As such, trading_gym has been written with a few philosophical goals in mind:

  • Easy use for both supervised learning and reinforcement learning
  • Easy use for both sequential and non-sequential data architectures
  • Low-level data architecture to facilitate customized preprocessing

trading_gym attempts to remove the boundary between supervised learning and reinforcement learning, and aims to let researchers who don't do financial research for a living use our APIs.

Example

from trading_gym.utils.data.toy import create_toy_data
from trading_gym.envs import PortfolioTradingGym
import numpy as np

mock_data = create_toy_data(order_book_ids_number=2, feature_number=3, start="2019-01-01", end="2019-06-11")
env = PortfolioTradingGym(data_df=mock_data, sequence_window=3, add_cash=True)

state = env.reset()
print(state)
action = np.array([0.6, 0.4, 0])    # target weights: two instruments + cash
while True:
    next_state, reward, done, info = env.step(action)
    if done:
        break
env.render()
                          feature1  feature2  feature3
order_book_id datetime                                
000001.XSHE   2019-01-01  2.689461  1.648796  0.464287
              2019-01-02 -0.647372  0.306223 -1.137051
              2019-01-03  0.394545 -1.374702 -0.121645
600000.XSHG   2019-01-01  0.403601  1.313482 -2.010203
              2019-01-02 -0.261228 -1.039094  0.809173
              2019-01-03 -0.659177  0.904236  1.019546
CASH          2019-01-01  1.000000  1.000000  1.000000
              2019-01-02  1.000000  1.000000  1.000000
              2019-01-03  1.000000  1.000000  1.000000
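
The state returned by the environment is a MultiIndex (order_book_id, datetime) DataFrame, as shown above. For sequence models it is often more convenient as a 3-D array. The reshape below is a minimal sketch based on the printed layout; it is not a built-in trading_gym API:

import numpy as np

# state: MultiIndex (order_book_id, datetime) rows, one column per feature
assets = state.index.get_level_values(0).unique()
sequence_window = 3                 # must match the window passed to the env
num_features = state.shape[1]

# rows are grouped by asset, then by date, so a plain reshape works:
# (num_assets, sequence_window, num_features), e.g. (3, 3, 3) here
rnn_input = state.values.reshape(len(assets), sequence_window, num_features)
print(rnn_input.shape)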

Install

git clone https://github.com/StateOfTheArt-quant/trading_gym.git
cd trading_gym
python setup.py install

Examples

Using trading_gym, we provide several examples to demonstrate how supervised learning and reinforcement learning can be unified under one framework.

linear regression

linear regression is an essential algorithm in the context of quantitative trading; it can be used for calculating factor returns, estimating portfolio covariance, and estimating expected returns.
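
As a minimal sketch of the cross-sectional use case, the snippet below estimates factor returns with ordinary least squares. The exposures and returns are simulated here, since this illustrates the technique rather than a trading_gym API:

import numpy as np

# One cross-section: N assets, K factor exposures, simulated forward returns
np.random.seed(0)
n_assets, n_factors = 100, 3
exposures = np.random.randn(n_assets, n_factors)        # factor exposure matrix X
true_factor_returns = np.array([0.02, -0.01, 0.005])    # unknown in practice
returns = exposures @ true_factor_returns + 0.01 * np.random.randn(n_assets)

# OLS: the estimated factor returns are the b minimizing ||X b - r||^2
factor_returns, *_ = np.linalg.lstsq(exposures, returns, rcond=None)
print(factor_returns)               # recovers roughly [0.02, -0.01, 0.005]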

RNN and its variants

deep reinforcement learning

minimal portfolio backtest engine

trading_gym can also serve as a minimal backtest engine for portfolio investment. Benchmark strategies can be easily implemented and backtested with it.
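
For example, an equal-weight benchmark over the toy universe takes only a few lines. This sketch reuses exactly the API from the Example section; holding the weights fixed each step is what makes it a benchmark strategy:

import numpy as np
from trading_gym.utils.data.toy import create_toy_data
from trading_gym.envs import PortfolioTradingGym

mock_data = create_toy_data(order_book_ids_number=2, feature_number=3, start="2019-01-01", end="2019-06-11")
env = PortfolioTradingGym(data_df=mock_data, sequence_window=3, add_cash=True)

state = env.reset()
action = np.ones(3) / 3             # equal weight across 2 instruments + cash
while True:
    next_state, reward, done, info = env.step(action)
    if done:
        break
env.render()                        # inspect the resulting portfolio performance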

Author

Allen Yu ([email protected])

License

This project follows the Apache 2.0 License, as written in the LICENSE file.

Copyright 2020 Allen Yu, StateOfTheArt.quant
