happynoom / Deeptrade_keras

Projects that are alternatives of or similar to Deeptrade keras

Deeptrade
An LSTM model using a Risk Estimation loss function for stock trading in the market
Stars: ✭ 256 (+12.28%)
Mutual labels:  stock, lstm
Base-On-Relation-Method-Extract-News-DA-RNN-Model-For-Stock-Prediction--Pytorch
A two-stage attention-mechanism model based on a relational news-extraction method for stock prediction
Stars: ✭ 33 (-85.53%)
Mutual labels:  stock, lstm
Paper-Implementation-DSTP-RNN-For-Stock-Prediction-Based-On-DA-RNN
A trial implementation of the DSTP-RNN paper based on DA-RNN (Ver 1.0)
Stars: ✭ 62 (-72.81%)
Mutual labels:  stock, lstm
Copper price forecast
copper price(time series) prediction using bpnn and lstm
Stars: ✭ 81 (-64.47%)
Mutual labels:  stock, lstm
Stocksensation
A web visualization of stock-market public-opinion sentiment classification based on a sentiment dictionary and machine learning
Stars: ✭ 215 (-5.7%)
Mutual labels:  stock
Chameleon recsys
Source code of CHAMELEON - A Deep Learning Meta-Architecture for News Recommender Systems
Stars: ✭ 202 (-11.4%)
Mutual labels:  lstm
Stock
Master quantitative trading in 30 days (continuously updated)
Stars: ✭ 2,966 (+1200.88%)
Mutual labels:  stock
Stock Selection A Framework
This project demonstrates how to apply machine learning algorithms to distinguish "good" stocks from the "bad" stocks.
Stars: ✭ 198 (-13.16%)
Mutual labels:  stock
Rnn ctc
Recurrent Neural Network and Long Short Term Memory (LSTM) with Connectionist Temporal Classification implemented in Theano. Includes a Toy training example.
Stars: ✭ 220 (-3.51%)
Mutual labels:  lstm
Lstm Siamese Text Similarity
⚛️ A Keras-based implementation of a Siamese architecture using LSTM encoders to compute text similarity
Stars: ✭ 216 (-5.26%)
Mutual labels:  lstm
Dolibarr
Dolibarr ERP CRM is a modern software package to manage your company or foundation's activity (contacts, suppliers, invoices, orders, stocks, agenda, accounting, ...). It is open source software (written in PHP) and designed for small and medium businesses, foundations and freelancers. You can freely install, use and distribute it as a standalon…
Stars: ✭ 2,877 (+1161.84%)
Mutual labels:  stock
Icdar 2019 Sroie
ICDAR 2019 Robust Reading Challenge on Scanned Receipts OCR and Information Extraction
Stars: ✭ 202 (-11.4%)
Mutual labels:  lstm
Sign Language Gesture Recognition
Sign Language Gesture Recognition From Video Sequences Using RNN And CNN
Stars: ✭ 214 (-6.14%)
Mutual labels:  lstm
Deepsleepnet
DeepSleepNet: a Model for Automatic Sleep Stage Scoring based on Raw Single-Channel EEG
Stars: ✭ 200 (-12.28%)
Mutual labels:  lstm
Video Classification Cnn And Lstm
To classify video into various classes using keras library with tensorflow as back-end.
Stars: ✭ 218 (-4.39%)
Mutual labels:  lstm
Up Down Captioner
Automatic image captioning model based on Caffe, using features from bottom-up attention.
Stars: ✭ 195 (-14.47%)
Mutual labels:  lstm
Tts Cube
End-2-end speech synthesis with recurrent neural networks
Stars: ✭ 213 (-6.58%)
Mutual labels:  lstm
Siamese Lstm
Siamese LSTM for evaluating semantic similarity between sentences of the Quora Question Pairs Dataset.
Stars: ✭ 217 (-4.82%)
Mutual labels:  lstm
Graph convolutional lstm
Traffic Graph Convolutional Recurrent Neural Network
Stars: ✭ 210 (-7.89%)
Mutual labels:  lstm
Stonky
A command line dashboard for monitoring stocks
Stars: ✭ 208 (-8.77%)
Mutual labels:  stock

Keras version of DeepTrade

Licence (Copyright Notice)

The author is Xiaoyu Fang from China. Please cite the source whenever you use this project. The project has already received key updates. Contact [email protected] to buy a licence.

The open-source version is completely free for academic use; please cite the source when using it. Commercial use requires authorization. Since Keras has been merged into the TensorFlow project, the TensorFlow version of DeepTrade is recommended.

Acknowledgements

Thanks to chenli0830 (Chen Li) for his valuable source code contribution and generous donation!

Experiment

Train Loss: (figure)

Validation Loss: (figure)

An LSTM model using a Risk Estimation loss function for trading in the stock market.

Introduction

Could deep learning help us buy and sell stocks in the market? The answer could be 'Yes'. We design a solution, named DeepTrade, comprising history-data representation, neural-network construction, and trading-optimization methods, which could maximize our profit based on past experience.

In our solution, effective representations are first extracted from history data (including date/open/high/low/close/volume). Then a neural network based on LSTM is constructed to learn useful knowledge to direct our trading behavior. Meanwhile, a loss function is carefully designed to ensure the network optimizes our profit and minimizes our risk. Finally, according to the predictions of this neural network, buying and selling plans are carried out.

Feature Representation

History features are extracted in date order. For each day, using the open/high/low/close/volume data, invariant features are computed, including the rate of price change, MACD, RSI, the rate of volume change, BOLL, the distance between the MA and the price, the distance between the volume MA and the volume, and a cross feature between price and volume. Some of these features can be used directly, some should be normalized, and some should use differential values. A fixed-length window (e.g., 30 days) of features is extracted for network learning.
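A minimal sketch of this extraction, using TA-Lib and NumPy, is given below. The indicator periods, normalizations, and the function name are illustrative assumptions (open/high/low inputs are omitted for brevity); the project's actual feature code may differ.

```python
import numpy as np
import talib

def extract_features(close, volume, window=30):
    """Return a (window, n_features) matrix for the most recent `window` days."""
    close = np.asarray(close, dtype=np.float64)
    volume = np.asarray(volume, dtype=np.float64)

    prev_close = np.roll(close, 1)
    prev_close[0] = close[0]
    prev_volume = np.roll(volume, 1)
    prev_volume[0] = volume[0]
    rate_price = (close - prev_close) / prev_close                # rate of price change
    rate_volume = (volume - prev_volume) / (prev_volume + 1e-9)   # rate of volume change

    _, _, macd_hist = talib.MACD(close)                 # MACD histogram
    rsi = talib.RSI(close, timeperiod=14)               # RSI
    upper, _, lower = talib.BBANDS(close, timeperiod=20)  # BOLL bands
    ma = talib.MA(close, timeperiod=20)                 # price moving average
    vma = talib.MA(volume, timeperiod=20)               # volume moving average

    features = np.column_stack([
        rate_price,
        rate_volume,
        macd_hist,
        rsi / 100.0,                                    # normalize RSI to [0, 1]
        (close - lower) / (upper - lower + 1e-9),       # position inside the BOLL bands
        (close - ma) / ma,                              # distance between price and its MA
        (volume - vma) / (vma + 1e-9),                  # distance between volume and its MA
        rate_price * rate_volume,                       # price/volume cross feature
    ])
    # The first rows contain NaNs until the indicator windows fill up.
    return features[-window:]
```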

Network Construction

An LSTM network [1] is effective at learning knowledge from time series. A fixed length of history data (e.g., 30 days) is used to plan the next day's trade. We make the network output a real value p between 0 and 1, which is the fraction (in percent) of the stock position we should hold into tomorrow. Thus, if the rate of price change the next day is r, our profit will be p*r; for example, holding p = 0.5 on a day when the price rises by r = 0.02 yields a 1% return. If r is negative, we lose money. Therefore, we define a loss function (called Risk Estimation) for the LSTM network:

Loss = -100. * mean(P * R)

P is the set of our outputs and R is the set of corresponding rates of price change. Furthermore, we add a small cost rate (c = 0.0002) to the loss function for the money occupied by buying stock. The loss function with the cost rate is then defined as follows:

Loss = -100. * mean(P * (R - c))

Both of these two loss functions are evaluated in our experiments.
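The two loss functions above translate directly into Keras. The sketch below is an illustration of the formulas as stated, not necessarily the project's exact implementation; y_pred plays the role of P and y_true the role of R.

```python
from tensorflow.keras import backend as K

def risk_estimation(y_true, y_pred):
    # Loss = -100. * mean(P * R): y_pred is the position P,
    # y_true the next-day rate of price change R.
    return -100.0 * K.mean(y_pred * y_true)

def risk_estimation_with_cost(y_true, y_pred, cost=0.0002):
    # Loss = -100. * mean(P * (R - c)), with cost rate c for occupied money.
    return -100.0 * K.mean(y_pred * (y_true - cost))
```

Either function can be passed to Keras as a custom loss, e.g. model.compile(optimizer='adam', loss=risk_estimation).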

Our network consists of four layers: an LSTM layer, a densely connected layer, a batch normalization [3] layer, and an activation layer. The LSTM layer learns knowledge from the history, and the relu6 function is used as the activation that produces the output value.
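A hedged sketch of this four-layer stack in Keras follows; the layer width (128 units) and the 30×8 input shape are assumptions, not the project's actual configuration.

```python
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, BatchNormalization, Activation

def build_model(window=30, n_features=8):
    model = Sequential([
        LSTM(128, input_shape=(window, n_features)),  # learns from the 30-day history
        Dense(1),                                     # single position output
        BatchNormalization(),
        # The text names relu6 as the output activation; relu6 is bounded to
        # [0, 6], so the original code may clip or rescale this to [0, 1].
        Activation(tf.nn.relu6),
    ])
    return model
```

The model would then be compiled with the Risk Estimation loss defined above, e.g. model.compile(optimizer='adam', loss=risk_estimation).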

Trading Plans

Every day, shortly before the market closes (the nearer to the close, the better), we feed the history features into the network and obtain an output value p. This p is the advised position for the next day. If p = 0, we should sell everything we hold before the close. If p is positive, we should carry a position of p into the next day, selling the surplus or buying the shortfall.
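This rule can be written as a small hypothetical helper; it treats positions as fractions of capital and ignores lot sizes, fees, and slippage.

```python
def plan_trade(p, current_position):
    """p: advised next-day position in [0, 1];
    current_position: fraction of capital currently held in the stock."""
    if p == 0:
        return "sell the entire position before the close"
    delta = p - current_position
    if delta > 0:
        return f"buy to raise the position by {delta:.2%} of capital"
    if delta < 0:
        return f"sell to reduce the position by {-delta:.2%} of capital"
    return "hold the current position"
```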

Experimental Results

If the network goes crazy (overfits), just restart training. Alternatively, a dropout layer [2] is a good idea (a variant with dropout is sketched below), and a larger training dataset will also help.
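One way to act on the dropout suggestion is to insert a Dropout layer after the LSTM in the earlier sketch; the 0.5 rate and layer sizes are illustrative assumptions.

```python
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dropout, Dense, BatchNormalization, Activation

model = Sequential([
    LSTM(128, input_shape=(30, 8)),
    Dropout(0.5),            # randomly drops half of the LSTM outputs during training
    Dense(1),
    BatchNormalization(),
    Activation(tf.nn.relu6),
])
```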

For more demos of the experimental results, visit our website: http://www.deeplearning.xin.


Requirements

ta-lib (the C library and its Python wrapper), numpy, tensorflow

Bug Report

Contact [email protected] to report any bugs.

Reference

[1] Gers F A, Schmidhuber J, Cummins F. Learning to Forget: Continual Prediction with LSTM[J]. Neural Computation, 2000, 12(10): 2451-2471.

[2] Srivastava N, Hinton G E, Krizhevsky A, et al. Dropout: a simple way to prevent neural networks from overfitting[J]. Journal of Machine Learning Research, 2014, 15(1): 1929-1958.

[3] Ioffe S, Szegedy C. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift[C]. International Conference on Machine Learning, 2015: 448-456.
