
SvenBecker / TAA-PG

Licence: other
Usage of policy gradient reinforcement learning to solve portfolio optimization problems (Tactical Asset Allocation).


Projects that are alternatives of or similar to TAA-PG

Mlfinlab
MlFinLab helps portfolio managers and traders who want to leverage the power of machine learning by providing reproducible, interpretable, and easy to use tools.
Stars: ✭ 2,676 (+10192.31%)
Mutual labels:  portfolio-optimization, portfolio-management
portfoliolab
PortfolioLab is a python library that enables traders to take advantage of the latest portfolio optimisation algorithms used by professionals in the industry.
Stars: ✭ 104 (+300%)
Mutual labels:  portfolio-optimization, portfolio-management
Algorithmic-Trading
I have been deeply interested in algorithmic trading and systematic trading algorithms. This repository contains the code of what I have learnt along the way. It starts from some basic statistics and leads up to complex machine learning algorithms.
Stars: ✭ 47 (+80.77%)
Mutual labels:  monte-carlo-simulation, portfolio-optimization
Pyportfolioopt
Financial portfolio optimisation in python, including classical efficient frontier, Black-Litterman, Hierarchical Risk Parity
Stars: ✭ 2,502 (+9523.08%)
Mutual labels:  portfolio-optimization, portfolio-management
Trading-Algorithms
This repository contains the customized trading algorithms that I have created using the Quantopian IDE.
Stars: ✭ 86 (+230.77%)
Mutual labels:  portfolio-optimization, portfolio-management
xmimsim
Monte Carlo simulation of energy-dispersive X-ray fluorescence spectrometers
Stars: ✭ 29 (+11.54%)
Mutual labels:  monte-carlo-simulation
antaresViz
ANTARES Visualizations
Stars: ✭ 19 (-26.92%)
Mutual labels:  monte-carlo-simulation
RiskPortfolios
Functions for the construction of risk-based portfolios
Stars: ✭ 43 (+65.38%)
Mutual labels:  portfolio-optimization
ncrystal
NCrystal : a library for thermal neutron transport in crystals and other materials
Stars: ✭ 27 (+3.85%)
Mutual labels:  monte-carlo-simulation
QuantResearch
Quantitative analysis, strategies and backtests
Stars: ✭ 1,013 (+3796.15%)
Mutual labels:  portfolio-management
Portfolio-Management-list
📓 List of portfolio management resources, using Reinforcement Learning.
Stars: ✭ 29 (+11.54%)
Mutual labels:  portfolio-management
deep rl acrobot
TensorFlow A2C to solve Acrobot, with synchronized parallel environments
Stars: ✭ 32 (+23.08%)
Mutual labels:  policy-gradient
rpg
Ranking Policy Gradient
Stars: ✭ 22 (-15.38%)
Mutual labels:  policy-gradient
flr-vscode-extension
Flr(Flutter-R) Extension: A Flutter Resource Manager VSCode Extension
Stars: ✭ 28 (+7.69%)
Mutual labels:  assets-management
FlowViz
A Power BI template that provides easy to understand, actionable flow metrics and predictive analytics for your agile teams using Azure DevOps, Azure DevOps Server and/or TFS.
Stars: ✭ 150 (+476.92%)
Mutual labels:  monte-carlo-simulation
auto assets
assets management tool for Flutter
Stars: ✭ 17 (-34.62%)
Mutual labels:  assets-management
Deep-Reinforcement-Learning-With-Python
Master classic RL, deep RL, distributional RL, inverse RL, and more using OpenAI Gym and TensorFlow with extensive Math
Stars: ✭ 222 (+753.85%)
Mutual labels:  policy-gradient
coincube
A Python/Vue.js crypto portfolio management and trade automation program with support for 10 exchanges.
Stars: ✭ 85 (+226.92%)
Mutual labels:  portfolio-management
rl trading
No description or website provided.
Stars: ✭ 14 (-46.15%)
Mutual labels:  tensorforce
mathematics-statistics-for-data-science
Mathematical & Statistical topics to perform statistical analysis and tests; Linear Regression, Probability Theory, Monte Carlo Simulation, Statistical Sampling, Bootstrapping, Dimensionality reduction techniques (PCA, FA, CCA), Imputation techniques, Statistical Tests (Kolmogorov Smirnov), Robust Estimators (FastMCD) and more in Python and R.
Stars: ✭ 56 (+115.38%)
Mutual labels:  monte-carlo-simulation


Reinforcement Learning for Tactical Asset Allocation

This project covers the training and testing of multiple reinforcement learning agents in a portfolio management environment.

The interaction between environment and agent is given by:
Environment <> Runner <> Agent <> Model
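
A minimal sketch of that loop, loosely following the tensorforce act/observe convention (function and variable names here are illustrative, not taken from this repo):

    def run_episode(environment, agent, horizon):
        # Runner: mediates between environment and agent for one episode
        state = environment.reset()
        total_reward = 0.0
        for _ in range(horizon):
            action = agent.act(state)      # agent queries its model for an action
            state, reward, done = environment.execute(action)
            agent.observe(reward=reward, terminal=done)  # feed experience back to the model
            total_reward += reward
            if done:
                break
        return total_reward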

Important files and folders

Dependencies

The implementation is written mainly in Python, so Python version >=3.6 is required. Furthermore, the following additional Python packages are required:

  • h5py==2.7.1
  • Keras==2.1.3
  • matplotlib==2.1.0
  • numpy==1.14.1
  • pandas==0.20.3
  • pandas-datareader==0.5.0
  • scikit-learn==0.19.1
  • scipy==1.0.0
  • seaborn==0.8.1
  • tensorflow==1.4.0
  • tensorflow-tensorboard==0.4.0rc3
  • tensorforce==0.3.5.1
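
Assuming the pinned versions above are collected in a requirements file (the file name below is an assumption), they can be installed in one step with pip:

    pip install -r requirements.txt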

Running the train file

To train an agent, run the file train.py from the console.
For example:

python ~/path/to/project/run/train.py -at "clipping" -v 1

Changes to the environment and/or run parameters can be made through the train file, the config file, and the pre-specified flags shown below.

Modifications of the agents and models must be specified through the corresponding config file (JSON format).
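
For illustration only, such a config file typically pairs an agent type with its hyperparameters. The keys below are hypothetical placeholders; the real schema is defined by tensorforce and the config files shipped with this repo:

    {
      "type": "vpg_agent",
      "batch_size": 64,
      "discount": 0.99,
      "learning_rate": 0.0001
    }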

Flags:

Flag 1  Flag 2               Meaning
-d      --data               path to the environment .csv file
-sp     --split              train/test split
-th     --threaded           (bool) threaded runner or single runner
-ac     --agent-config       path to the agent config file
-nw     --num-worker         number of threads if threaded is selected
-ep     --epochs             number of epochs
-e      --episodes           number of episodes
-hz     --horizon            investment horizon
-at     --action-type        action type: 'signal', 'signal_softmax', 'direct', 'direct_softmax', 'clipping'
-as     --action-space       action space: 'unbounded', 'bounded', 'discrete'
-na     --num-actions        number of discrete actions given a discrete action space
-mp     --model-path         saving path of the agent model
--eph   --eval-path          saving path of the evaluation files
-v      --verbose            console verbosity level
-l      --load-agent         if given, the agent is loaded from a prior save point (path)
-ds     --discrete_states    (bool) discretization of the state space if true
-ss     --standardize-state  (bool) standardization or normalization of the state
-rs     --random-starts      (bool) random starts for each new episode
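
A training run combining several of these flags might look like this (paths and values are illustrative):

    python ~/path/to/project/run/train.py -d data/environment.csv -sp 0.75 -e 1000 -hz 20 -at "signal_softmax" -as "bounded" -v 1

With one of the softmax action types, raw agent outputs are mapped to portfolio weights that sum to one. A minimal sketch of such a mapping (not this repo's actual code):

    import numpy as np

    def softmax_weights(actions):
        # Subtract the max before exponentiating for numerical stability.
        z = np.exp(actions - np.max(actions))
        return z / z.sum()  # weights are positive and sum to 1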

Running the test file

Running the test file is very similar to running the train file. A checkpoint for the selected agent must exist in the saves folder.

python ~/path/to/project/run/test.py -l /project/model/saves/AgentName

The folder saved_results contains pretrained agents for multiple parameter configurations.

Flags

Flag 1  Flag 2               Meaning
-d      --data               path to the environment .csv file
-ba     --basic-agent        selection of a BasicAgent: 'BuyAndHoldAgent', 'RandomActionAgent'
-sp     --split              train/test split
-ac     --agent-config       path to the agent config file
-e      --episodes           number of episodes
-hz     --horizon            investment horizon
-at     --action-type        action type: 'signal', 'signal_softmax', 'direct', 'direct_softmax', 'clipping'
-as     --action-space       action space: 'unbounded', 'bounded', 'discrete'
-na     --num-actions        number of discrete actions given a discrete action space
--eph   --eval-path          saving path of the evaluation files
-v      --verbose            console verbosity level
-l      --load-agent         if given, the agent is loaded from a prior save point (path)
-ds     --discrete_states    (bool) discretization of the state space if true
-ss     --standardize-state  (bool) standardization or normalization of the state
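
For a quick baseline comparison, one of the BasicAgents can be selected instead of a trained model (the command below is illustrative):

    python ~/path/to/project/run/test.py -ba "BuyAndHoldAgent" -e 100 -v 1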

TensorBoard

Both predictor.py and train.py integrate TensorBoard. TensorBoard can be launched by typing

tensorboard --logdir path/to/project/env/board
tensorboard --logdir path/to/project/run/board

Go to localhost:6006 to view the results.
