
pskrunner14 / Trading Bot

License: MIT
Stock Trading Bot using Deep Q-Learning

Projects that are alternatives to or similar to Trading Bot

Qtrader
Reinforcement Learning for Portfolio Management
Stars: ✭ 363 (+32.97%)
Mutual labels:  jupyter-notebook, reinforcement-learning, q-learning
Notebooks
Some notebooks
Stars: ✭ 53 (-80.59%)
Mutual labels:  jupyter-notebook, reinforcement-learning, q-learning
Reinforcement learning tutorial with demo
Reinforcement Learning Tutorial with Demo: DP (Policy and Value Iteration), Monte Carlo, TD Learning (SARSA, Q-Learning), Function Approximation, Policy Gradient, DQN, Imitation, Meta Learning, Papers, Courses, etc.
Stars: ✭ 442 (+61.9%)
Mutual labels:  jupyter-notebook, reinforcement-learning, q-learning
Hands On Reinforcement Learning With Python
Master Reinforcement and Deep Reinforcement Learning using OpenAI Gym and TensorFlow
Stars: ✭ 640 (+134.43%)
Mutual labels:  jupyter-notebook, reinforcement-learning, q-learning
Gekko Strategies
Strategies for the Gekko trading bot, with backtest results and some useful tools.
Stars: ✭ 1,022 (+274.36%)
Mutual labels:  trading-bot, trading-algorithms, stock-price-prediction
Dinoruntutorial
Accompanying code for Paperspace tutorial "Build an AI to play Dino Run"
Stars: ✭ 285 (+4.4%)
Mutual labels:  jupyter-notebook, reinforcement-learning, q-learning
Basic reinforcement learning
An introductory series to Reinforcement Learning (RL) with comprehensive step-by-step tutorials.
Stars: ✭ 826 (+202.56%)
Mutual labels:  jupyter-notebook, reinforcement-learning, q-learning
Gym Anytrading
The simplest, most flexible, and most comprehensive OpenAI Gym trading environment (approved by OpenAI Gym)
Stars: ✭ 627 (+129.67%)
Mutual labels:  trading-algorithms, reinforcement-learning, q-learning
Stock Prediction Models
Gathers machine learning and deep learning models for stock forecasting, including trading bots and simulations
Stars: ✭ 4,660 (+1606.96%)
Mutual labels:  trading-bot, jupyter-notebook, stock-price-prediction
2048 Deep Reinforcement Learning
Trained a Convolutional Neural Network to play 2048 using Deep Reinforcement Learning
Stars: ✭ 169 (-38.1%)
Mutual labels:  jupyter-notebook, reinforcement-learning, q-learning
Ctc Executioner
Master Thesis: Limit order placement with Reinforcement Learning
Stars: ✭ 112 (-58.97%)
Mutual labels:  jupyter-notebook, reinforcement-learning, q-learning
Tradzqai
Trading environment for RL agents, backtesting and training.
Stars: ✭ 150 (-45.05%)
Mutual labels:  trading-bot, trading-algorithms, reinforcement-learning
Zerodha live automate trading using ai ml on indian stock market Using Basic Python
Online trading using artificial intelligence and machine learning with basic Python on the Indian stock market: live trading bots, an indicator screener, and a backtester using REST API and WebSocket 😊
Stars: ✭ 131 (-52.01%)
Mutual labels:  trading-bot, jupyter-notebook, stock-price-prediction
FAIG
Fully Automated IG Trading
Stars: ✭ 134 (-50.92%)
Mutual labels:  trading-bot, stock-price-prediction, trading-algorithms
algotrading-example
algorithmic trading backtest and optimization examples using order book imbalances. (bitcoin, cryptocurrency, bitmex, binance futures, market making)
Stars: ✭ 169 (-38.1%)
Mutual labels:  trading-bot, trading-algorithms
btrccts
BackTest and Run CryptoCurrency Trading Strategies
Stars: ✭ 100 (-63.37%)
Mutual labels:  trading-bot, trading-algorithms
binance-technical-algorithm
Technical trading algorithm for Binance
Stars: ✭ 44 (-83.88%)
Mutual labels:  trading-bot, trading-algorithms
roq-samples
How to use the Roq C++20 API for Live Cryptocurrency Algorithmic and High-Frequency Trading as well as for Back-Testing and Historical Simulation
Stars: ✭ 119 (-56.41%)
Mutual labels:  trading-bot, trading-algorithms
tradingview-alert-binance-trader
This trading bot listens to the TradingView alert emails on your inbox and executes trades on Binance based on the parameters set on the TD alerts.
Stars: ✭ 153 (-43.96%)
Mutual labels:  trading-bot, trading-algorithms
LickHunterPRO
Cryptocurrency trading bot that looks for large pools of liquidity getting liquidated on margin trading; when it finds these, it counter-trades them!
Stars: ✭ 114 (-58.24%)
Mutual labels:  trading-bot, trading-algorithms

Overview

This project implements a Stock Trading Bot, trained using Deep Reinforcement Learning, specifically Deep Q-learning. The implementation is kept simple and as close as possible to the algorithm discussed in the paper, for learning purposes.

Introduction

Generally, reinforcement learning is a family of machine learning techniques that lets us create intelligent agents that learn an optimal policy by trial and error, interacting with their environment. This is especially useful for real-world tasks where supervised learning is not the best approach, for instance because of the nature of the task itself or a lack of appropriate labelled data.

The important idea here is that this technique can be applied to any real-world task that can be described, at least loosely, as a Markov decision process.

Approach

This work uses a model-free reinforcement learning technique called Deep Q-Learning (a neural-network variant of Q-Learning). At each time step, the agent observes its current state (an n-day window of stock price features), selects and performs an action (buy/sell/hold), observes the subsequent state, receives a reward signal (the difference in portfolio position), and finally adjusts its parameters based on the gradient of the computed loss.
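
To make this loop concrete, here is a minimal sketch of one trading episode under the assumptions above (one share per trade, reward equal to the realized change in position value). Names like agent.act, agent.remember, agent.replay, and get_state are illustrative stand-ins rather than the project's exact API; get_state itself is sketched under Some Caveats below.

def run_episode(agent, prices, window_size):
    """One pass over a price series; returns total realized profit."""
    profit, inventory = 0.0, []
    state = get_state(prices, 0, window_size)
    for t in range(len(prices) - 1):
        action = agent.act(state)              # 0 = hold, 1 = buy, 2 = sell
        reward = 0.0
        if action == 1:                        # buy one share
            inventory.append(prices[t])
        elif action == 2 and inventory:        # sell one share
            bought_price = inventory.pop(0)
            reward = prices[t] - bought_price  # change in portfolio position
            profit += reward
        next_state = get_state(prices, t + 1, window_size)
        done = (t == len(prices) - 2)
        agent.remember(state, action, reward, next_state, done)
        state = next_state
    agent.replay()                             # replay experience (see Some Caveats)
    return profit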

There have been several improvements to the Q-learning algorithm over the years, and a few of them are implemented in this project (their differing target computations are sketched after this list):

  • [x] Vanilla DQN
  • [x] DQN with fixed target distribution
  • [x] Double DQN
  • [ ] Prioritized Experience Replay
  • [ ] Dueling Network Architectures
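
In practice these variants differ mainly in how the temporal-difference target is computed for each remembered transition. The sketch below assumes online_q and target_q are callables returning a vector of action values (the target network being a periodically synced copy of the online one), that gamma is the discount factor (0.95 here is an assumed value), and that the strategy names line up with the --strategy flag used in Getting Started; treat all of this as illustrative rather than the project's exact interface.

import numpy as np

def td_target(strategy, reward, next_state, done, online_q, target_q, gamma=0.95):
    if done:                       # terminal transitions bootstrap nothing
        return reward
    if strategy == "dqn":          # vanilla: max over the online network itself
        return reward + gamma * np.max(online_q(next_state))
    if strategy == "t-dqn":        # fixed target: max over the frozen copy
        return reward + gamma * np.max(target_q(next_state))
    if strategy == "double-dqn":   # online net picks the action, target net scores it
        best_action = int(np.argmax(online_q(next_state)))
        return reward + gamma * target_q(next_state)[best_action]
    raise ValueError("unknown strategy: " + strategy)

During replay, this scalar overwrites the Q-value of the action actually taken, and the network is fit toward the amended vector.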

Results

Trained on GOOG stock data from 2010-2017 and tested on 2019 data, the agent made a profit of $1141.45 (validated on 2018 data with a profit of $863.41):

[Figure: Google stock trading episode]

You can obtain similar visualizations of your model evaluations using the notebook provided.
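
If you'd rather not use the notebook, a plot in the same spirit takes only a few lines of matplotlib; the inputs here (the price series and the time indices at which the agent bought and sold) are assumed to have been collected during evaluation, not something the scripts hand you directly.

import matplotlib.pyplot as plt

def plot_episode(prices, buy_times, sell_times):
    plt.plot(prices, label="GOOG Adj Close")
    plt.plot(buy_times, [prices[t] for t in buy_times], "g^", label="buy")
    plt.plot(sell_times, [prices[t] for t in sell_times], "rv", label="sell")
    plt.xlabel("day")
    plt.ylabel("price ($)")
    plt.legend()
    plt.show()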

Some Caveats

  • At any given state, the agent can only decide to buy or sell one share at a time. This keeps things as simple as possible; deciding how much stock to buy or sell is a portfolio-redistribution problem in its own right.
  • The n-day window feature representation is a vector of differences between consecutive Adjusted Closing prices of the stock we're trading, passed through a sigmoid to normalize the values into the range (0, 1) (a sketch follows this list).
  • Training is preferably done on CPU due to its sequential nature: after each episode of trading, we replay the experience (one epoch over a small minibatch) and update the model parameters.
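
As a concrete illustration of the second caveat, here is a minimal sketch of the windowed state, assuming simple left-padding with the first price at the start of the series; get_state is an illustrative name rather than the project's exact helper.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def get_state(prices, t, window_size):
    """n-day window ending at time t, as sigmoid-squashed price deltas."""
    start = t - window_size + 1
    if start >= 0:
        window = np.asarray(prices[start:t + 1], dtype=float)
    else:  # not enough history yet: left-pad with the first price
        pad = np.full(-start, prices[0], dtype=float)
        window = np.concatenate([pad, np.asarray(prices[:t + 1], dtype=float)])
    # differences of consecutive Adjusted Close prices, squashed into (0, 1)
    return sigmoid(np.diff(window))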

Data

You can download historical financial data from Yahoo! Finance for training, or use one of the sample datasets already present under data/.
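
For example, one way to fetch fresh data is the third-party yfinance package (pip install yfinance); this is a tooling assumption on my part, and any CSV with Date and Adj Close columns shaped like the files under data/ should work just as well.

import yfinance as yf

# daily GOOG bars, keeping the adjusted close as its own "Adj Close" column
df = yf.download("GOOG", start="2010-01-01", end="2018-01-01", auto_adjust=False)
df.to_csv("data/GOOG.csv")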

Getting Started

In order to use this project, you'll need to install the required Python packages:

pip3 install -r requirements.txt

Now you can open up a terminal and start training the agent:

python3 train.py data/GOOG.csv data/GOOG_2018.csv --strategy t-dqn
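
The --strategy flag presumably selects among the variants listed under Approach (dqn, t-dqn, or double-dqn, matching the target sketch there); check train.py for the exact accepted values.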

Once you're done training, run the evaluation script and let the agent make trading decisions:

python3 eval.py data/GOOG_2019.csv --model-name model_GOOG_50 --debug

Now you are all set up!

Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].