
mjuchli / Ctc Executioner

Master Thesis: Limit order placement with Reinforcement Learning

Projects that are alternatives to or similar to Ctc Executioner

Deep Reinforcement Learning
Repo for the Deep Reinforcement Learning Nanodegree program
Stars: ✭ 4,012 (+3482.14%)
Mutual labels:  jupyter-notebook, reinforcement-learning, dqn, openai-gym
Gym Anytrading
The most simple, flexible, and comprehensive OpenAI Gym trading environment (Approved by OpenAI Gym)
Stars: ✭ 627 (+459.82%)
Mutual labels:  reinforcement-learning, dqn, openai-gym, q-learning
Hands On Reinforcement Learning With Python
Master Reinforcement and Deep Reinforcement Learning using OpenAI Gym and TensorFlow
Stars: ✭ 640 (+471.43%)
Mutual labels:  jupyter-notebook, reinforcement-learning, openai-gym, q-learning
Basic reinforcement learning
An introductory series to Reinforcement Learning (RL) with comprehensive step-by-step tutorials.
Stars: ✭ 826 (+637.5%)
Mutual labels:  jupyter-notebook, reinforcement-learning, openai-gym, q-learning
Mushroom Rl
Python library for Reinforcement Learning.
Stars: ✭ 442 (+294.64%)
Mutual labels:  reinforcement-learning, dqn, openai-gym
Reinforcement learning tutorial with demo
Reinforcement Learning Tutorial with Demo: DP (Policy and Value Iteration), Monte Carlo, TD Learning (SARSA, QLearning), Function Approximation, Policy Gradient, DQN, Imitation, Meta Learning, Papers, Courses, etc..
Stars: ✭ 442 (+294.64%)
Mutual labels:  jupyter-notebook, reinforcement-learning, q-learning
Gym trading
Stars: ✭ 87 (-22.32%)
Mutual labels:  jupyter-notebook, dqn, openai-gym
Reinforcement Learning
Learn Deep Reinforcement Learning in 60 days! Lectures & Code in Python. Reinforcement Learning + Deep Learning
Stars: ✭ 3,329 (+2872.32%)
Mutual labels:  jupyter-notebook, reinforcement-learning, dqn
Reinforcement Learning With Tensorflow
Simple Reinforcement Learning tutorials by 莫烦Python (Chinese-language AI lessons)
Stars: ✭ 6,948 (+6103.57%)
Mutual labels:  reinforcement-learning, dqn, q-learning
Openaigym
Solving OpenAI Gym problems.
Stars: ✭ 98 (-12.5%)
Mutual labels:  reinforcement-learning, dqn, openai-gym
Async Deeprl
Playing Atari games with TensorFlow implementation of Asynchronous Deep Q-Learning
Stars: ✭ 44 (-60.71%)
Mutual labels:  reinforcement-learning, openai-gym, q-learning
Pytorch Rl
This repository contains model-free deep reinforcement learning algorithms implemented in Pytorch
Stars: ✭ 394 (+251.79%)
Mutual labels:  reinforcement-learning, dqn, openai-gym
Rainbow Is All You Need
Rainbow is all you need! A step-by-step tutorial from DQN to Rainbow
Stars: ✭ 938 (+737.5%)
Mutual labels:  jupyter-notebook, reinforcement-learning, dqn
Deep traffic
MIT DeepTraffic top 2% solution (75.01 mph) 🚗.
Stars: ✭ 47 (-58.04%)
Mutual labels:  reinforcement-learning, dqn, q-learning
Rl Book
Source codes for the book "Reinforcement Learning: Theory and Python Implementation"
Stars: ✭ 464 (+314.29%)
Mutual labels:  jupyter-notebook, reinforcement-learning, openai-gym
Qtrader
Reinforcement Learning for Portfolio Management
Stars: ✭ 363 (+224.11%)
Mutual labels:  jupyter-notebook, reinforcement-learning, q-learning
Trading Bot
Stock Trading Bot using Deep Q-Learning
Stars: ✭ 273 (+143.75%)
Mutual labels:  jupyter-notebook, reinforcement-learning, q-learning
Dinoruntutorial
Accompanying code for Paperspace tutorial "Build an AI to play Dino Run"
Stars: ✭ 285 (+154.46%)
Mutual labels:  jupyter-notebook, reinforcement-learning, q-learning
Easy Rl
A Chinese-language Reinforcement Learning tutorial; read online at: https://datawhalechina.github.io/easy-rl/
Stars: ✭ 3,004 (+2582.14%)
Mutual labels:  reinforcement-learning, dqn, q-learning
Notebooks
Some notebooks
Stars: ✭ 53 (-52.68%)
Mutual labels:  jupyter-notebook, reinforcement-learning, q-learning

Order placement with Reinforcement Learning

CTC-Executioner is a tool that provides an on-demand execution/placement strategy for limit orders on cryptocurrency markets using Reinforcement Learning techniques. The underlying framework provides functionality for analysing order book data and deriving features from it. These features can then be used to dynamically update the decision-making process of the execution strategy.
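To illustrate the idea of deriving features from order book data, the sketch below computes three common ones (mid price, spread, and volume imbalance) from a toy book snapshot. The function name, the `(price, size)` level representation, and the feature choices are illustrative assumptions, not the project's actual Orderbook API.

```python
# Illustrative only: a minimal feature-derivation sketch, not the
# project's API. Each side of the book is a list of (price, size)
# levels, best level first.
def derive_features(bids, asks):
    best_bid, _ = bids[0]
    best_ask, _ = asks[0]
    mid = (best_bid + best_ask) / 2.0  # mid price
    spread = best_ask - best_bid       # bid-ask spread
    bid_vol = sum(size for _, size in bids)
    ask_vol = sum(size for _, size in asks)
    # Volume imbalance in [-1, 1]; positive means buy-side pressure.
    imbalance = (bid_vol - ask_vol) / (bid_vol + ask_vol)
    return {"mid": mid, "spread": spread, "imbalance": imbalance}

features = derive_features(
    bids=[(99.5, 2.0), (99.0, 5.0)],
    asks=[(100.5, 1.0), (101.0, 3.0)],
)
```

Features like these form the state on which an RL agent can condition its placement decisions.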

The methods used are based on a research project (master thesis) conducted at TU Delft.

Documentation

Comprehensive documentation and the underlying concepts are explained in the academic report.

For hands-on documentation and examples, see the Wiki.

Usage

Load orderbooks

# Orderbook is provided by the CTC-Executioner framework.
# Load training data from recorded order book events and inspect it.
orderbook = Orderbook()
orderbook.loadFromEvents('data/example-ob-train.tsv')
orderbook.summary()
orderbook.plot(show_bidask=True)

# Load a separate data set for testing.
orderbook_test = Orderbook()
orderbook_test.loadFromEvents('data/example-ob-test.tsv')
orderbook_test.summary()

Create and configure environments

import gym
import gym_ctc_executioner  # registers the ctc-executioner-v0 environment

env = gym.make("ctc-executioner-v0")
env.setOrderbook(orderbook)

env_test = gym.make("ctc-executioner-v0")
env_test.setOrderbook(orderbook_test)
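Once configured, the environment can be driven through the classic Gym interface (reset/step). The sketch below rolls out one episode with a random policy; a tiny stub class stands in for ctc-executioner-v0 so the example runs standalone, and its dynamics and reward are placeholder assumptions, not the project's environment.

```python
import random

# Stand-in for the real ctc-executioner-v0 environment (which is
# created via gym.make and fed an Orderbook). The stub only mimics
# the classic Gym reset/step interface.
class StubExecutionEnv:
    def __init__(self, horizon=10):
        self.horizon = horizon  # steps until the episode ends
        self.t = 0

    def reset(self):
        self.t = 0
        return 0.0  # initial observation

    def step(self, action):
        self.t += 1
        reward = -abs(action)          # placeholder cost signal
        done = self.t >= self.horizon  # episode ends at the horizon
        return float(self.t), reward, done, {}

def run_episode(env, n_actions=5):
    """Roll out one episode with a uniformly random policy."""
    obs = env.reset()
    total_reward = 0.0
    done = False
    while not done:
        action = random.randrange(n_actions)  # random placement choice
        obs, reward, done, info = env.step(action)
        total_reward += reward
    return total_reward

total = run_episode(StubExecutionEnv())
```

With the real environment, the same loop applies after `env.setOrderbook(orderbook)`, with the random policy replaced by the learned agent.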