wywongbd / Pairstrade Fyp 2019
License: MIT
We tested 3 approaches for pairs trading: the distance, cointegration, and reinforcement learning approaches.
Stars: ✭ 109
Programming Languages
python
139335 projects - #7 most used programming language
Projects that are alternatives of or similar to Pairstrade Fyp 2019
Papers Literature Ml Dl Rl Ai
Highly cited and useful papers related to machine learning, deep learning, AI, game theory, reinforcement learning
Stars: ✭ 1,341 (+1130.28%)
Mutual labels: reinforcement-learning
Torchrl
Highly Modular and Scalable Reinforcement Learning
Stars: ✭ 102 (-6.42%)
Mutual labels: reinforcement-learning
Easy Rl
A Chinese-language tutorial on reinforcement learning; read online at: https://datawhalechina.github.io/easy-rl/
Stars: ✭ 3,004 (+2655.96%)
Mutual labels: reinforcement-learning
Reimprovejs
A framework using TensorFlow.js for Deep Reinforcement Learning
Stars: ✭ 101 (-7.34%)
Mutual labels: reinforcement-learning
Reinforcement Learning Cheat Sheet
Reinforcement Learning Cheat Sheet
Stars: ✭ 104 (-4.59%)
Mutual labels: reinforcement-learning
Researchpapernotes
Initiative to read research papers
Stars: ✭ 97 (-11.01%)
Mutual labels: reinforcement-learning
Mojitalk
Code for "MojiTalk: Generating Emotional Responses at Scale" https://arxiv.org/abs/1711.04090
Stars: ✭ 107 (-1.83%)
Mutual labels: reinforcement-learning
Reinforcement learning
Implementations of basic reinforcement learning algorithms
Stars: ✭ 100 (-8.26%)
Mutual labels: reinforcement-learning
Aws Robomaker Sample Application Deepracer
Use AWS RoboMaker and demonstrate running a simulation which trains a reinforcement learning (RL) model to drive a car around a track
Stars: ✭ 105 (-3.67%)
Mutual labels: reinforcement-learning
Chemgan Challenge
Code for the paper: Benhenda, M. 2017. ChemGAN challenge for drug discovery: can AI reproduce natural chemical diversity? arXiv preprint arXiv:1708.08227.
Stars: ✭ 98 (-10.09%)
Mutual labels: reinforcement-learning
Gym Ignition
Framework for developing OpenAI Gym robotics environments simulated with Ignition Gazebo
Stars: ✭ 97 (-11.01%)
Mutual labels: reinforcement-learning
Reinforcement Learning
🤖 Implementations of Reinforcement Learning algorithms.
Stars: ✭ 104 (-4.59%)
Mutual labels: reinforcement-learning
Rlai Exercises
Exercise Solutions for Reinforcement Learning: An Introduction [2nd Edition]
Stars: ✭ 97 (-11.01%)
Mutual labels: reinforcement-learning
Lang Emerge Parlai
Implementation of EMNLP 2017 Paper "Natural Language Does Not Emerge 'Naturally' in Multi-Agent Dialog" using PyTorch and ParlAI
Stars: ✭ 106 (-2.75%)
Mutual labels: reinforcement-learning
Torchcraft
Connecting Torch to StarCraft
Stars: ✭ 1,341 (+1130.28%)
Mutual labels: reinforcement-learning
Direct Future Prediction Keras
Direct Future Prediction (DFP) in Keras
Stars: ✭ 103 (-5.5%)
Mutual labels: reinforcement-learning
Numpy Ml
Machine learning, in numpy
Stars: ✭ 11,100 (+10083.49%)
Mutual labels: reinforcement-learning
Tensorflow2.0 Examples
🙄 Difficult algorithms, simple code.
Stars: ✭ 1,397 (+1181.65%)
Mutual labels: reinforcement-learning
pairstrade-fyp-2019
Final year project at HKUST. We tested 3 main approaches for performing Pairs Trading:
- distance method
- cointegration method (rolling OLS, Kalman Filter)
- reinforcement learning agent (proposed)
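The cointegration approach above can be sketched as a z-score trading rule on the spread between the two legs. The helper below is a hypothetical, simplified illustration (a static OLS hedge ratio and full-sample z-score rather than the rolling OLS or Kalman filter used in the project), not the repo's actual implementation:

```python
import numpy as np

def pairs_signal(prices_a, prices_b, entry_z=2.0, exit_z=0.5):
    """Toy cointegration-style pairs signal.

    Regresses A on B via OLS (no intercept, for brevity) to get a
    hedge ratio, forms the spread, and trades its z-score.
    NOTE: the full-sample mean/std introduces lookahead bias; a real
    backtest would use a rolling window or a Kalman filter instead.

    Returns (beta, z, pos) where pos[t] is +1 (long spread),
    -1 (short spread), or 0 (flat).
    """
    a = np.asarray(prices_a, dtype=float)
    b = np.asarray(prices_b, dtype=float)

    # Static OLS hedge ratio: beta = (b . a) / (b . b)
    beta = np.dot(b, a) / np.dot(b, b)
    spread = a - beta * b

    # Standardize the spread over the whole sample (simplification)
    z = (spread - spread.mean()) / spread.std()

    pos = np.zeros_like(z)
    for t in range(1, len(z)):
        if pos[t - 1] == 0:
            if z[t] > entry_z:
                pos[t] = -1  # spread too wide: short A, long B
            elif z[t] < -entry_z:
                pos[t] = 1   # spread too narrow: long A, short B
        elif abs(z[t]) < exit_z:
            pos[t] = 0       # spread reverted: close the position
        else:
            pos[t] = pos[t - 1]  # hold
    return beta, z, pos
```

On a synthetic cointegrated pair (e.g. `a = 1.5 * b + noise`), the recovered `beta` lands near 1.5 and the position series switches between long, short, and flat as the spread's z-score crosses the entry and exit thresholds.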
FYP members: myself, Gordon, Brendan
How to get started?
- Run `./setup.sh` to install all dependencies
Note
- In our experiments, we used financial data taken from the Interactive Brokers platform, which is not free. Due to their regulations, we cannot release the financial data used in our experiments to the public. Feel free to use your own price data to perform experiments.
Disclaimer
- The strategies we implemented have not been proven to be profitable in a live trading account
- The reported returns are purely from backtesting procedures, and they may be susceptible to lookahead bias that we are not aware of
Updates
- We're no longer developing this; check out Yuri's findings regarding the RL agent