halessi / rl_trading

Licence: other

Projects that are alternatives of or similar to rl_trading

gym-mtsim
A general-purpose, flexible, and easy-to-use simulator alongside an OpenAI Gym trading environment for MetaTrader 5 trading platform (Approved by OpenAI Gym)
Stars: ✭ 196 (+1300%)
Mutual labels:  trading, openai-gym
Tf deep rl trader
Trading Environment(OpenAI Gym) + PPO(TensorForce)
Stars: ✭ 139 (+892.86%)
Mutual labels:  trading, ppo
Machine Learning And Ai In Trading
Machine Learning and AI algorithms applied to trading for better performance and lower std.
Stars: ✭ 258 (+1742.86%)
Mutual labels:  trading, artificial-neural-networks
GOAi
No description or website provided.
Stars: ✭ 57 (+307.14%)
Mutual labels:  trading, algo-trading
Gym Fx
Forex trading simulator environment for OpenAI Gym, observations contain the order status, performance and timeseries loaded from a CSV file containing rates and indicators. Work In Progress
Stars: ✭ 151 (+978.57%)
Mutual labels:  agent, openai-gym
Metatrader
Expert advisors, scripts, indicators and code libraries for Metatrader.
Stars: ✭ 99 (+607.14%)
Mutual labels:  trading, algo-trading
Gym trading
Stars: ✭ 87 (+521.43%)
Mutual labels:  trading, openai-gym
algotrading-example
algorithmic trading backtest and optimization examples using order book imbalances. (bitcoin, cryptocurrency, bitmex, binance futures, market making)
Stars: ✭ 169 (+1107.14%)
Mutual labels:  trading, algo-trading
Doom Net Pytorch
Reinforcement learning models in ViZDoom environment
Stars: ✭ 113 (+707.14%)
Mutual labels:  agent, ppo
Gym Minigrid
Minimalistic gridworld package for OpenAI Gym
Stars: ✭ 1,047 (+7378.57%)
Mutual labels:  agent, openai-gym
AutoTrader
A Python-based development platform for automated trading systems - from backtesting to optimisation to livetrading.
Stars: ✭ 227 (+1521.43%)
Mutual labels:  trading, algo-trading
cira
Cira algorithmic trading made easy. A Façade library for simpler interaction with alpaca-trade-API from Alpaca Markets.
Stars: ✭ 21 (+50%)
Mutual labels:  trading, algo-trading
tvdatafeed
A simple TradingView historical Data Downloader
Stars: ✭ 189 (+1250%)
Mutual labels:  trading, algo-trading
hummingbot
Hummingbot is open source software that helps you build trading bots that run on any exchange or blockchain
Stars: ✭ 3,602 (+25628.57%)
Mutual labels:  trading, algo-trading
Sequence-to-Sequence-Learning-of-Financial-Time-Series-in-Algorithmic-Trading
My bachelor's thesis—analyzing the application of LSTM-based RNNs on financial markets. 🤓
Stars: ✭ 64 (+357.14%)
Mutual labels:  trading, artificial-neural-networks
Gym Anytrading
The most simple, flexible, and comprehensive OpenAI Gym trading environment (Approved by OpenAI Gym)
Stars: ✭ 627 (+4378.57%)
Mutual labels:  trading, openai-gym
Hands On Reinforcement Learning With Python
Master Reinforcement and Deep Reinforcement Learning using OpenAI Gym and TensorFlow
Stars: ✭ 640 (+4471.43%)
Mutual labels:  openai-gym, ppo
Super Mario Bros Ppo Pytorch
Proximal Policy Optimization (PPO) algorithm for Super Mario Bros
Stars: ✭ 649 (+4535.71%)
Mutual labels:  openai-gym, ppo
Deep Trading Agent
Deep Reinforcement Learning based Trading Agent for Bitcoin
Stars: ✭ 573 (+3992.86%)
Mutual labels:  agent, trading
Deep-Reinforcement-Learning-With-Python
Master classic RL, deep RL, distributional RL, inverse RL, and more using OpenAI Gym and TensorFlow with extensive Math
Stars: ✭ 222 (+1485.71%)
Mutual labels:  openai-gym, ppo

This project is NOT in progress and will probably never be updated.

Also: there are many problems with this code, so please DO NOT attempt to use it in any live trading setting.


Installation

If you'd like to make use of this code, the easiest way is to clone the repository and create a new conda environment from the yml file.

git clone https://github.com/halessi/rl_trading
cd rl_trading
conda env create -f rl_trading.yml

Then activate the new environment with:

source activate rl_trading

Training a model

TensorForce and OpenAI Gym, two incredible resources for RL learning, provide a neat and clean way of formalizing the interaction between agent and environment. This lets the code configure the agent as a separate entity from the underlying environment, which handles all of the stock-data manipulation.

For some background, remember that the basic formulation for training a reinforcement learning agent is as follows:

# fetch the first state (after initializing env and agent)
state = env.reset()
done = False

while not done:
    # act on the current state
    action = agent.act(state=state)

    # process the action, fetching the next state, the reward, and done
    # (done tells us whether the environment is terminal, which might
    # occur if the agent ran out of $$)
    state, reward, done, info = env.step(action=action)

    # let the agent observe the outcome so it can learn
    agent.observe(reward=reward, terminal=done)

Using OpenAI's formal structure for an environment, I created stock_gym. It is a work in progress, but it will eventually support all manner of underlying preprocessing and data manipulation, to help our agent extract the greatest signal.
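As a rough illustration of the reset/step structure an OpenAI Gym-style environment follows, here is a minimal sketch of a windowed price environment. The class name, the toy reward, and the action encoding are hypothetical, not the actual stock_gym implementation:

```python
import numpy as np

class StockEnv:
    """Hypothetical sketch of a windowed trading environment following the
    OpenAI Gym reset/step interface; the real stock_gym differs."""

    def __init__(self, prices, window=10):
        self.prices = np.asarray(prices, dtype=float)
        self.window = window
        self.t = window

    def reset(self):
        # return the first `window` prices as the initial state
        self.t = self.window
        return self.prices[self.t - self.window:self.t]

    def step(self, action):
        # actions: 0 = hold, 1 = long, 2 = short
        # toy reward: the next price change, signed by the chosen position
        change = self.prices[self.t] - self.prices[self.t - 1]
        reward = {0: 0.0, 1: change, 2: -change}[action]
        self.t += 1
        done = self.t >= len(self.prices)
        next_state = self.prices[self.t - self.window:self.t]
        return next_state, reward, done, {}
```

The real environment would additionally track cash and position state, and build observations from preprocessed OHLCV windows rather than raw prices.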

To train an agent, see below.

python main.py \
    --window=      # the number of timesteps each state should include
    --preprocess=  # how the data should be preprocessed; options (TO EVENTUALLY) include: [MinMax, Renko, log-return, autoencoded, hopefully more]
    --episodes=    # the number of episodes to train for (an episode is a full run through the dataset)
    --agent=       # agent type to use
    --network=     # network architecture for data analysis
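The flags above could be wired up with argparse along these lines. The defaults and choices here are illustrative assumptions, not the repository's actual values:

```python
import argparse

def build_parser():
    """Sketch of a CLI parser for the training flags described above."""
    parser = argparse.ArgumentParser(description="Train an RL trading agent.")
    parser.add_argument("--window", type=int, default=10,
                        help="number of timesteps per state")
    parser.add_argument("--preprocess", default="MinMax",
                        choices=["MinMax", "Renko", "log-return", "autoencoded"],
                        help="data preprocessing method")
    parser.add_argument("--episodes", type=int, default=100,
                        help="episodes to train for (one episode = one full "
                             "run through the dataset)")
    parser.add_argument("--agent", default="ppo", help="agent type to use")
    parser.add_argument("--network", default="mlp",
                        help="network architecture for data analysis")
    return parser
```

A call like `build_parser().parse_args(["--window", "20"])` then yields a namespace with `window=20` and the remaining flags at their defaults.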

Some example trades

As can be seen in the images below, ....

Problems with reinforcement learning on time series stem from a variety of issues, including delayed credit assignment and low signal-to-noise ratios. It is unclear whether the OHLCV data used here, even when preprocessed, contains sufficient signal for advantageously predicting future prices. (The data is MinMax-processed and hasn't been unscaled here; the immediate next step is to implement log-return scaling.)
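Both preprocessing options mentioned here are simple to sketch with numpy. This is a minimal illustration of the two transforms, not the repository's preprocessing code:

```python
import numpy as np

def min_max_scale(prices):
    """Scale a price series into [0, 1]; discards absolute price levels."""
    p = np.asarray(prices, dtype=float)
    return (p - p.min()) / (p.max() - p.min())

def log_returns(prices):
    """Log returns r_t = ln(p_t / p_{t-1}); roughly stationary and
    scale-free, which tends to suit learning better than raw prices."""
    p = np.asarray(prices, dtype=float)
    return np.log(p[1:] / p[:-1])
```

Note that log returns shorten the series by one element, so state windows built from them start one timestep later than windows built from raw prices.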

$AAPL


Future steps

  • run with aggregated bitcoin order book data
  • implement a convolutional network for deriving more specific state information
  • explore applicability of renko-blocks for denoising time-series data
  • incorporate sentiment analysis into state information
  • attempt to use an autoencoder for feature extraction on OHLCV data before feeding to RL
  • implement log-return scaling
  • set up command-line-args input to enable more rapid model and environment prototyping
  • implement basic MinMax scaling with scikit-learn
  • build OpenAI gym environment