AI4Finance-Foundation / FinRL

License: MIT License
FinRL: The first open-source project for financial reinforcement learning. Please star. 🔥


FinRL: Deep Reinforcement Learning for Quantitative Finance


Our Mission: to efficiently automate trading. We continuously develop and share code for finance.

Our Vision: The AI community has accumulated an ocean of open-source code over the past decade. Applying this intellectual and engineering property to finance will initiate a paradigm shift from the conventional trading routine to an automated machine-learning approach, even RLOps in finance.

FinRL (website) is the first open-source project to explore the great potential of deep reinforcement learning in finance. We help practitioners build a trading-strategy pipeline with deep reinforcement learning (DRL).

The FinRL ecosystem is a unified framework, including various markets, state-of-the-art algorithms, financial tasks (portfolio management, cryptocurrency trading, high-frequency trading), live trading, etc.

| Roadmap | Level | Users | Example | Description |
| --- | --- | --- | --- | --- |
| 0.0 (Preparation) | preparation | practitioners of financial big data | FinRL-Meta | a universe of market environments |
| 1.0 (Proof-of-Concept) | entry level | beginners | this repo | demonstration, education |
| 2.0 (Professional) | intermediate level | full-stack developers, professionals | ElegantRL | financially optimized DRL algorithms |
| 3.0 (Production) | advanced level | investment banks, hedge funds | Podracer | cloud-native solution |

Outline

Overview

FinRL has three layers: applications, DRL agents, and market environments.

For a trading task (on the top), an agent (in the middle) interacts with an environment (at the bottom), making sequential decisions.
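This sequential decision loop can be sketched with a toy gym-style interface. All names and numbers below are illustrative, not FinRL's actual API:

```python
import random

class ToyTradingEnv:
    """Illustrative gym-style environment: state is (price, cash, shares)."""
    def __init__(self, prices):
        self.prices = prices

    def reset(self):
        self.t, self.cash, self.shares = 0, 1000.0, 0
        return (self.prices[0], self.cash, self.shares)

    def step(self, action):  # action: -1 sell, 0 hold, +1 buy
        price = self.prices[self.t]
        if action == 1 and self.cash >= price:
            self.cash -= price
            self.shares += 1
        elif action == -1 and self.shares > 0:
            self.cash += price
            self.shares -= 1
        self.t += 1
        done = self.t == len(self.prices) - 1
        next_price = self.prices[self.t]
        reward = self.cash + self.shares * next_price  # portfolio value
        return (next_price, self.cash, self.shares), reward, done

# The agent (here: a random policy) interacts with the environment step by step.
env = ToyTradingEnv([10.0, 11.0, 9.0, 12.0, 13.0])
state, done = env.reset(), False
while not done:
    action = random.choice([-1, 0, 1])  # a trained DRL agent replaces this
    state, reward, done = env.step(action)
```

In FinRL, the random policy above is replaced by a DRL agent whose policy network maps states to actions.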

Run FinRL_StockTrading_NeurIPS_2018.ipynb step by step for a quick start.

A video about the FinRL library is available on the AI4Finance YouTube channel.

File Structure

Correspondingly, the main folder finrl has three subfolders: apps, drl_agents, and finrl_meta.

We implement a train-test-trade pipeline via three files: train.py, test.py, and trade.py.

FinRL
├── finrl (main folder)
│   ├── apps
│   	├── cryptocurrency_trading
│   	├── high_frequency_trading
│   	├── portfolio_allocation
│   	├── stock_trading
│   	└── 
│   ├── drl_agents
│   	├── elegantrl
│   	├── rllib
│   	└── stablebaseline3
│   ├── finrl_meta
│   	├── data_processors
│   	├── env_cryptocurrency_trading
│   	├── env_portfolio_allocation
│   	├── env_stock_trading
│   	├── preprocessor
│   	├── data_processor.py
│   	└── finrl_meta_config.py
│   ├── config.py
│   ├── config_tickers.py
│   ├── main.py
│   ├── plot.py
│   ├── train.py
│   ├── test.py
│   └── trade.py
│   
├── tutorial (tutorial notebooks and educational files)
├── unit_testing (unit tests verifying code works on envs & data)
│   ├── test_env
│   	└── test_env_cashpenalty.py
│   └── test_marketdata
│   	└── test_yahoodownload.py
├── setup.py
├── requirements.txt
└── README.md

Supported Data Sources

| Data Source | Type | Range and Frequency | Request Limits | Raw Data | Preprocessed Data |
| --- | --- | --- | --- | --- | --- |
| Alpaca | US stocks, ETFs | 2015-now, 1min | Account-specific | OHLCV | Prices & indicators |
| Baostock | CN securities | 1990-12-19-now, 5min | Account-specific | OHLCV | Prices & indicators |
| Binance | Cryptocurrency | API-specific, 1s, 1min | API-specific | Tick-level daily aggregated trades, OHLCV | Prices & indicators |
| CCXT | Cryptocurrency | API-specific, 1min | API-specific | OHLCV | Prices & indicators |
| IEXCloud | NMS US securities | 1970-now, 1 day | 100 per second per IP | OHLCV | Prices & indicators |
| JoinQuant | CN securities | 2005-now, 1min | 3 requests each time | OHLCV | Prices & indicators |
| QuantConnect | US securities | 1998-now, 1s | NA | OHLCV | Prices & indicators |
| RiceQuant | CN securities | 2005-now, 1ms | Account-specific | OHLCV | Prices & indicators |
| tusharepro | CN securities, A-share | -now, 1min | Account-specific | OHLCV | Prices & indicators |
| WRDS.TAQ | US securities | 2003-now, 1ms | 5 requests each time | Intraday trades | Prices & indicators |
| Yahoo! Finance | US securities | Frequency-specific, 1min | 2,000/hour | OHLCV | Prices & indicators |

OHLCV: open, high, low, and close prices; volume. adj_close: adjusted close price

Technical indicators: 'macd', 'boll_ub', 'boll_lb', 'rsi_30', 'dx_30', 'close_30_sma', 'close_60_sma'. Users can also add new features.
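As an illustration, close_30_sma is a 30-period simple moving average of close prices. A pure-Python sketch (FinRL itself computes indicators in its preprocessor; the function name here is made up):

```python
def close_sma(closes, window=30):
    """Simple moving average of close prices; None until the window fills."""
    out = []
    for i in range(len(closes)):
        if i + 1 < window:
            out.append(None)  # not enough history yet
        else:
            out.append(sum(closes[i + 1 - window:i + 1]) / window)
    return out

# Synthetic close prices cycling 100..104, long enough to fill the window.
closes = [float(100 + (i % 5)) for i in range(40)]
sma = close_sma(closes, window=30)
```

Indicators like this are appended to the raw OHLCV frame and become part of the state the DRL agent observes.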

Installation

Status Update

Version History [click to expand]
  • 2021-08-25 0.3.1: PyTorch version with a three-layer architecture: apps (financial tasks), drl_agents (DRL algorithms), neo_finrl (gym envs)
  • 2020-12-14: Upgraded to PyTorch with Stable-Baselines3; removed TensorFlow 1.0 for now, with TensorFlow 2.0 support under development
  • 2020-11-27 0.1: Beta version with TensorFlow 1.5

Contributions

  • FinRL is the first open-source framework to demonstrate the great potential of applying DRL algorithms in quantitative finance. We build an ecosystem around the FinRL framework, which seeds the rapidly growing AI4Finance community.
  • The application layer provides interfaces for users to customize FinRL to their own trading tasks. An automated backtesting tool and performance metrics are provided to help quantitative traders iterate on trading strategies quickly. Profitable trading strategies are reproducible, and hands-on tutorials are provided in a beginner-friendly fashion. Trained models can also be adjusted to rapidly changing markets.
  • The agent layer provides state-of-the-art DRL algorithms that are adapted to finance with fine-tuned hyperparameters. Users can add new DRL algorithms.
  • The environment layer includes not only a collection of historical data APIs, but also live trading APIs. They are reconfigured into standard OpenAI gym-style environments. Moreover, it incorporates market frictions and allows users to customize the trading time granularity.
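A gym-style market environment with one simple friction, a proportional transaction fee, might be sketched as follows. This is illustrative only; FinRL's real environments live under finrl_meta, and the class and parameter names here are made up:

```python
class FrictionEnv:
    """Gym-style env where each trade pays a proportional transaction fee."""
    def __init__(self, prices, fee=0.001):  # fee = 0.1% of traded value
        self.prices, self.fee = prices, fee

    def reset(self):
        self.t, self.cash, self.shares = 0, 10_000.0, 0.0
        return self._obs()

    def _obs(self):
        return (self.prices[self.t], self.cash, self.shares)

    def step(self, action):  # action in [-1, 1]: fraction of cash/shares to move
        price = self.prices[self.t]
        before = self.cash + self.shares * price
        if action > 0:                                   # buy with cash
            spend = self.cash * action
            self.cash -= spend
            self.shares += spend * (1 - self.fee) / price  # fee reduces fill
        elif action < 0:                                   # sell shares
            sold = self.shares * -action
            self.shares -= sold
            self.cash += sold * price * (1 - self.fee)     # fee reduces proceeds
        self.t += 1
        done = self.t == len(self.prices) - 1
        after = self.cash + self.shares * self.prices[self.t]
        return self._obs(), after - before, done, {}       # gym 4-tuple

env = FrictionEnv([100.0, 110.0], fee=0.01)
obs = env.reset()
obs, reward, done, info = env.step(1.0)  # go all-in at t=0
```

Because the fee is charged inside step(), the agent's reward signal already reflects the cost of trading, so the learned policy is penalized for excessive turnover.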

Tutorials

News

Citing FinRL

@article{finrl2020,
    author  = {Liu, Xiao-Yang and Yang, Hongyang and Chen, Qian and Zhang, Runjia and Yang, Liuqing and Xiao, Bowen and Wang, Christina Dan},
    title   = {{FinRL}: A deep reinforcement learning library for automated stock trading in quantitative finance},
    journal = {Deep RL Workshop, NeurIPS 2020},
    year    = {2020}
}
@article{liu2021finrl,
    author  = {Liu, Xiao-Yang and Yang, Hongyang and Gao, Jiechao and Wang, Christina Dan},
    title   = {{FinRL}: Deep reinforcement learning framework to automate trading in quantitative finance},
    journal = {ACM International Conference on AI in Finance (ICAIF)},
    year    = {2021}
}

We have published FinTech papers that led to this project; check Google Scholar. Closely related papers are given in the list.

Join and Contribute

Welcome to the AI4Finance Foundation community!

Join the discussion on FinRL via the AI4Finance mailing list and the AI4Finance Slack channel:

Follow us on WeChat:

Please check the Contributing Guidelines.

Contributors

Thanks!

Sponsorship

Donations to support AI4Finance, a non-profit academic community, are welcome. Use the links on the right, or scan the following Venmo QR code:

Detailed sponsorship records can be found at Issue #425

LICENSE

MIT License

Disclaimer: Nothing herein is financial advice, nor a recommendation to trade real money. Please use common sense and always consult a professional before trading or investing.
