
SinaPournia / Deeplearning Trader

backtrader with DRL ( Deep Reinforcement Learning)

Projects that are alternatives of or similar to Deeplearning Trader

Qlearning trading
Learning to trade under the reinforcement learning framework
Stars: ✭ 431 (+1695.83%)
Mutual labels:  trade, jupyter-notebook, reinforcement-learning
Tensorflow Book
Accompanying source code for Machine Learning with TensorFlow. Refer to the book for step-by-step explanations.
Stars: ✭ 4,448 (+18433.33%)
Mutual labels:  jupyter-notebook, reinforcement-learning
Practical rl
A course in reinforcement learning in the wild
Stars: ✭ 4,741 (+19654.17%)
Mutual labels:  jupyter-notebook, reinforcement-learning
David Silver Reinforcement Learning
Notes for the Reinforcement Learning course by David Silver along with implementation of various algorithms.
Stars: ✭ 623 (+2495.83%)
Mutual labels:  jupyter-notebook, reinforcement-learning
Tensor House
A collection of reference machine learning and optimization models for enterprise operations: marketing, pricing, supply chain
Stars: ✭ 449 (+1770.83%)
Mutual labels:  jupyter-notebook, reinforcement-learning
Courses
Quiz & Assignment of Coursera
Stars: ✭ 454 (+1791.67%)
Mutual labels:  jupyter-notebook, reinforcement-learning
Amazon Sagemaker Examples
Example 📓 Jupyter notebooks that demonstrate how to build, train, and deploy machine learning models using 🧠 Amazon SageMaker.
Stars: ✭ 6,346 (+26341.67%)
Mutual labels:  jupyter-notebook, reinforcement-learning
Deep Reinforcement Learning
Repo for the Deep Reinforcement Learning Nanodegree program
Stars: ✭ 4,012 (+16616.67%)
Mutual labels:  jupyter-notebook, reinforcement-learning
Funcat
Funcat ports the formula syntax of 同花顺, 通达信, 文华财经麦语言, and similar platforms to Python.
Stars: ✭ 642 (+2575%)
Mutual labels:  trade, jupyter-notebook
Reinforcement Learning 2nd Edition By Sutton Exercise Solutions
Solutions of Reinforcement Learning, An Introduction
Stars: ✭ 713 (+2870.83%)
Mutual labels:  jupyter-notebook, reinforcement-learning
Deeprl Tutorials
Contains high quality implementations of Deep Reinforcement Learning algorithms written in PyTorch
Stars: ✭ 748 (+3016.67%)
Mutual labels:  jupyter-notebook, reinforcement-learning
Reinforcement learning tutorial with demo
Reinforcement Learning Tutorial with Demo: DP (Policy and Value Iteration), Monte Carlo, TD Learning (SARSA, QLearning), Function Approximation, Policy Gradient, DQN, Imitation, Meta Learning, Papers, Courses, etc..
Stars: ✭ 442 (+1741.67%)
Mutual labels:  jupyter-notebook, reinforcement-learning
Coursera
Quiz & Assignment of Coursera
Stars: ✭ 774 (+3125%)
Mutual labels:  jupyter-notebook, reinforcement-learning
Rl Book
Source codes for the book "Reinforcement Learning: Theory and Python Implementation"
Stars: ✭ 464 (+1833.33%)
Mutual labels:  jupyter-notebook, reinforcement-learning
Learning Deep Learning
Paper reading notes on Deep Learning and Machine Learning
Stars: ✭ 388 (+1516.67%)
Mutual labels:  jupyter-notebook, reinforcement-learning
Ml Mipt
Open Machine Learning course at MIPT
Stars: ✭ 480 (+1900%)
Mutual labels:  jupyter-notebook, reinforcement-learning
Qtrader
Reinforcement Learning for Portfolio Management
Stars: ✭ 363 (+1412.5%)
Mutual labels:  jupyter-notebook, reinforcement-learning
Lagom
lagom: A PyTorch infrastructure for rapid prototyping of reinforcement learning algorithms.
Stars: ✭ 364 (+1416.67%)
Mutual labels:  jupyter-notebook, reinforcement-learning
Hands On Reinforcement Learning With Python
Master Reinforcement and Deep Reinforcement Learning using OpenAI Gym and TensorFlow
Stars: ✭ 640 (+2566.67%)
Mutual labels:  jupyter-notebook, reinforcement-learning
Hands On Meta Learning With Python
Learning to Learn using One-Shot Learning, MAML, Reptile, Meta-SGD and more with Tensorflow
Stars: ✭ 768 (+3100%)
Mutual labels:  jupyter-notebook, reinforcement-learning

DeepLearning Trader

The purpose of this project is to build a neural network model that buys and sells in the stock market, or in a similar market such as forex.

How it works:

“Reinforcement learning” is a technique for building a model (a neural network) that acts in an environment and tries to learn how to deal with that environment so as to collect the maximum reward.

The “agent” produces an action, earns a reward for that action from the environment, and updates itself with that reward so it produces better actions in the future.

In this setup, backtrader is the environment: it simulates the stock or forex market using real historical data.

For example, a single row of the data frame contains: [Open, High, Low, Close, Volume].
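As a small illustration (the column names follow the usual OHLCV convention; the values and timestamp below are made up, not taken from the project), one bar of price history could look like this in pandas:

```python
import pandas as pd

# One bar of price history. Column names follow the common OHLCV convention;
# the numbers are invented purely for illustration.
bar = pd.DataFrame(
    [[1.1012, 1.1047, 1.0998, 1.1035, 25430]],
    columns=["Open", "High", "Low", "Close", "Volume"],
    index=pd.to_datetime(["2020-01-02"]),
)
print(bar)
```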

The action produced by our model in a specific state is a decision to buy or sell. Backtrader calculates and returns a reward for every action the model makes.

Buying means the model expects the price to increase; selling means it expects the price to decrease.
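A minimal sketch of this interaction loop, assuming a toy `TradingEnv` class written for this example (it is not the project's actual environment, and the real backtrader wiring is omitted):

```python
import numpy as np

class TradingEnv:
    """Toy environment: the state is the current OHLCV bar and the action is
    0 (sell) or 1 (buy). The reward is the signed close-to-close price change,
    so buying is rewarded when the price rises and selling when it falls."""

    def __init__(self, bars):
        self.bars = bars  # array of shape (T, 5): Open, High, Low, Close, Volume
        self.t = 0

    def reset(self):
        self.t = 0
        return self.bars[self.t]

    def step(self, action):
        # Next close minus current close (column 3 is Close).
        price_change = self.bars[self.t + 1][3] - self.bars[self.t][3]
        reward = price_change if action == 1 else -price_change
        self.t += 1
        done = self.t >= len(self.bars) - 1
        return self.bars[self.t], reward, done

# Random-action rollout, just to show the loop; a real agent would supply the actions.
env = TradingEnv(np.random.rand(100, 5))
state, done, total = env.reset(), False, 0.0
while not done:
    state, reward, done = env.step(np.random.randint(2))
    total += reward
print("episode reward:", total)
```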

PPO (Proximal Policy Optimization) is a reinforcement learning technique in which two different models feed back into one another to achieve a better result.

To improve training and maximize the agent's performance, the two models act together and give feedback to each other.

It’s like different people working together on a single team.

The “critic” predicts the maximum reward that can be earned from the current situation, based on the latest experience.

The “actor” predicts the best action in that situation, i.e. the one that leads to the maximum reward.

These two work in parallel and feed back into each other to get a better grasp of the present state.
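A minimal sketch of the actor/critic pair in PyTorch (the layer sizes, the shared feature extractor, and the two-action buy/sell output are assumptions made for this example; the project's actual networks and the PPO update step are not shown):

```python
import torch
import torch.nn as nn

class ActorCritic(nn.Module):
    """Two heads on a shared feature extractor:
    - the actor outputs a probability distribution over actions (buy / sell),
    - the critic outputs a single value estimating the reward obtainable from the state."""

    def __init__(self, obs_dim=5, n_actions=2, hidden=64):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.actor = nn.Linear(hidden, n_actions)   # action logits
        self.critic = nn.Linear(hidden, 1)          # state-value estimate

    def forward(self, obs):
        h = self.features(obs)
        return torch.distributions.Categorical(logits=self.actor(h)), self.critic(h)

model = ActorCritic()
obs = torch.rand(1, 5)              # one OHLCV bar, scaled to [0, 1] for the example
dist, value = model(obs)
action = dist.sample()              # actor: choose buy or sell
print(action.item(), value.item())  # critic: predicted value of this state
```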

Please refer to these links for more information on this subject:

https://www.youtube.com/watch?v=e3Jy2vShroE&t=125s

https://www.youtube.com/watch?v=5P7I-xPq8u8&t=7s

Achievements in This Version:

This is not the latest version we have worked on, and this model cannot reach human-level performance. Because the model is based on an LSTM, it is not capable of achieving better performance.

Latest Achievements

Changing the model from an LSTM to an attention-based architecture fixes the data-normalization problem. We have also added more indicators, introduced data visualization, and added a backtrader buy/sell history.
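For reference, a hedged sketch of what swapping an LSTM encoder for a self-attention encoder can look like in PyTorch (the window length, feature count, and layer sizes are illustrative assumptions, not the project's actual architecture):

```python
import torch
import torch.nn as nn

seq = torch.rand(32, 60, 5)  # batch of 32 windows, 60 bars each, 5 OHLCV features

# LSTM encoder: summarizes each window with its last hidden state.
lstm = nn.LSTM(input_size=5, hidden_size=64, batch_first=True)
_, (h_lstm, _) = lstm(seq)                # h_lstm: (1, 32, 64)

# Attention-based encoder: every bar attends to every other bar in the window.
proj = nn.Linear(5, 64)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True),
    num_layers=2,
)
h_attn = encoder(proj(seq)).mean(dim=1)   # (32, 64) summary per window

print(h_lstm.squeeze(0).shape, h_attn.shape)
```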

Future Goals:

We are going to eliminate all the LSTMs from the model.

Also, running multiple models at once and training them in parallel (instead of training one model at a time) will make overall progress much faster.

Motivation for sharing this project:

We have been working on this project for about a year. We have found that it is like a full-time job, and we need more expertise in other fields to make this project truly successful. For instance, an experienced specialist in algorithmic trading would be very helpful in determining which indicators are useful and how to use them. For this reason, we are looking for investors so we can hire more people to fuel the project.

Contact me at LinkedIn: https://www.linkedin.com/in/sina-pournia-74246a185/
