arleigh418 / Base-On-Relation-Method-Extract-News-DA-RNN-Model-For-Stock-Prediction--Pytorch

Licence: other
A dual-stage attention mechanism model based on a relational news-extraction method, for stock prediction

Projects that are alternatives of or similar to Base-On-Relation-Method-Extract-News-DA-RNN-Model-For-Stock-Prediction--Pytorch

Chinese Chatbot
A Chinese chatbot trained on 100,000 dialogue pairs using an attention mechanism; it generates a meaningful reply to most general questions. The trained model is uploaded and can be run directly.
Stars: ✭ 124 (+275.76%)
Mutual labels:  lstm, rnn, seq2seq, attention
Paper-Implementation-DSTP-RNN-For-Stock-Prediction-Based-On-DA-RNN
A trial implementation (Ver1.0) of the DSTP-RNN paper, based on DA-RNN
Stars: ✭ 62 (+87.88%)
Mutual labels:  stock, lstm, rnn, darnn
Pytorch Seq2seq
Tutorials on implementing a few sequence-to-sequence (seq2seq) models with PyTorch and TorchText.
Stars: ✭ 3,418 (+10257.58%)
Mutual labels:  lstm, rnn, seq2seq, attention
Poetry Seq2seq
Chinese Poetry Generation
Stars: ✭ 159 (+381.82%)
Mutual labels:  lstm, rnn, seq2seq
Machine Learning
My Attempt(s) In The World Of ML/DL....
Stars: ✭ 78 (+136.36%)
Mutual labels:  lstm, rnn, attention
Cnn lstm for text classify
Chinese text classification with CNN, LSTM, NBOW, and fastText
Stars: ✭ 90 (+172.73%)
Mutual labels:  lstm, rnn, attention
Text Classification Models Pytorch
Implementation of State-of-the-art Text Classification Models in Pytorch
Stars: ✭ 379 (+1048.48%)
Mutual labels:  seq2seq, attention, fasttext
Natural Language Processing With Tensorflow
Natural Language Processing with TensorFlow, published by Packt
Stars: ✭ 222 (+572.73%)
Mutual labels:  lstm, rnn, seq2seq
Rnn For Joint Nlu
Pytorch implementation of "Attention-Based Recurrent Neural Network Models for Joint Intent Detection and Slot Filling" (https://arxiv.org/abs/1609.01454)
Stars: ✭ 176 (+433.33%)
Mutual labels:  lstm, rnn, attention
Pytorch Sentiment Analysis
Tutorials on getting started with PyTorch and TorchText for sentiment analysis.
Stars: ✭ 3,209 (+9624.24%)
Mutual labels:  lstm, rnn, fasttext
Attentive Neural Processes
implementing "recurrent attentive neural processes" to forecast power usage (w. LSTM baseline, MCDropout)
Stars: ✭ 33 (+0%)
Mutual labels:  prediction, rnn, attention
Pointer Networks Experiments
Sorting numbers with pointer networks
Stars: ✭ 53 (+60.61%)
Mutual labels:  lstm, seq2seq, attention
Time Attention
Implementation of RNN for Time Series prediction from the paper https://arxiv.org/abs/1704.02971
Stars: ✭ 52 (+57.58%)
Mutual labels:  lstm, rnn, attention
Copper price forecast
copper price(time series) prediction using bpnn and lstm
Stars: ✭ 81 (+145.45%)
Mutual labels:  stock, lstm, rnn
EBIM-NLI
Enhanced BiLSTM Inference Model for Natural Language Inference
Stars: ✭ 24 (-27.27%)
Mutual labels:  lstm, rnn, attention
Neural Networks
All about Neural Networks!
Stars: ✭ 34 (+3.03%)
Mutual labels:  lstm, rnn, fasttext
Deep Time Series Prediction
Seq2Seq, Bert, Transformer, WaveNet for time series prediction.
Stars: ✭ 183 (+454.55%)
Mutual labels:  lstm, seq2seq, attention
stock-forecast
Simple stock & cryptocurrency price forecasting console application, using PHP Machine Learning library (https://github.com/php-ai/php-ml)
Stars: ✭ 76 (+130.3%)
Mutual labels:  prediction, stock, stock-prediction
automatic-personality-prediction
[AAAI 2020] Modeling Personality with Attentive Networks and Contextual Embeddings
Stars: ✭ 43 (+30.3%)
Mutual labels:  lstm, rnn, attention
air writing
Online Hand Writing Recognition using BLSTM
Stars: ✭ 26 (-21.21%)
Mutual labels:  lstm, rnn

Base-On-Relation-Method-Extract-News-DA-RNN-Model-For-Stock-Prediction

Run model.py.

Update

2020/5/6

1. For how I build the article vectors, you can follow this related work: https://github.com/arleigh418/How-Much-News-Should-We-Extract-For-Stock-Price-Prediction/tree/master/Stage1_Replace%20Company%20Name%20Train%20Embedding

2. For how to use the ta package and sum each day's article vectors, you can refer to: https://github.com/arleigh418/How-Much-News-Should-We-Extract-For-Stock-Price-Prediction/tree/master/Stage2_2Count%20TA%20%26%20Merge%20Stock%20Price%20And%20Article

2019/11/30

Add data_prepare.py

1. As some people requested, data_prepare.py provides three functions that show how we compute the article vectors, the cosine similarity, and the sum of each day's vectors. If a fixed cosine threshold (e.g. cos > 0.7) does not meet your target, you can instead keep the top 30% of articles most similar to the target center article, as in the code below:

np.percentile({article use} , {per}, interpolation='midpoint')

You can even try

np.percentile({article use} , {per}, interpolation='linear')
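Putting the two steps together, here is a minimal sketch of this selection (the function name and data layout are my own; only the `np.percentile` call comes from this repo):

```python
import numpy as np

def select_similar_articles(center_vec, article_vecs, per=70):
    # Cosine similarity between the target center article and every
    # other article (one article vector per row of article_vecs).
    center = center_vec / np.linalg.norm(center_vec)
    mat = article_vecs / np.linalg.norm(article_vecs, axis=1, keepdims=True)
    cos = mat @ center
    # Keep the top (100 - per)% most similar articles. Note that the
    # 'interpolation=' keyword used above was renamed to 'method=' in
    # NumPy 1.22.
    threshold = np.percentile(cos, per, method='midpoint')
    return np.where(cos >= threshold)[0], cos
```

With `per=70` this keeps the top 30% of articles; swap in `method='linear'` to match the second variant above.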

2. Besides summing each day's article vectors to represent that day's news-information vector, we also tested averaging them to represent the day's vector (the Excel file we provide uses the averaging method). If you are interested, you can try it yourself.

(I intended to provide the averaging code as well, but I cannot find it at the moment.)
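As a sketch of what the two variants look like (the helper name is mine, not from the repo):

```python
import numpy as np

def daily_news_vector(article_vecs, method="avg"):
    # Combine all article vectors of one day into a single vector.
    # The provided Excel file uses the average; summing is the other
    # variant described above.
    article_vecs = np.asarray(article_vecs, dtype=float)
    if method == "avg":
        return article_vecs.mean(axis=0)
    return article_vecs.sum(axis=0)
```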

3. I highly suggest cleaning each article (news) with stopword removal or other preprocessing methods.
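For example, a minimal cleaning step might look like the sketch below (the toy stopword list is illustrative only; for Chinese news you would first segment the text, e.g. with jieba, and load a real stopword file):

```python
def clean_article(text, stopwords):
    # Drop stopwords after naive whitespace tokenization; replace the
    # tokenizer with a proper segmenter for Chinese text.
    return [w for w in text.split() if w not in stopwords]

stopwords = {"the", "a", "of", "is"}  # toy list for illustration
```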

Reference

1. This project is based on the following paper:

Yao Qin, Dongjin Song, Haifeng Chen, Wei Cheng, Guofei Jiang, Garrison W. Cottrell, "A Dual-Stage Attention-Based Recurrent Neural Network for Time Series Prediction" (DA-RNN), IJCAI, 2017.

2. The model is modified from: https://github.com/Zhenye-Na/DA-RNN

3. We use fastText to process the news data. For training fastText and computing the relation (cosine similarity) between news articles, you can refer to: https://github.com/arleigh418/Word-Embedding-With-Gensim (the cosine method is in doc2vec_count_cos.py).

--> Tip: we sum the vector of each word in an article to represent the article vector.
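The tip above can be sketched as follows (the toy embeddings and function name are illustrative; in the repo the word vectors come from a trained fastText model):

```python
import numpy as np

def article_vector(tokens, word_vecs, dim=3):
    # Sum each word's embedding to form one article vector;
    # out-of-vocabulary words are skipped.
    vec = np.zeros(dim)
    for w in tokens:
        if w in word_vecs:
            vec += word_vecs[w]
    return vec

word_vecs = {"stock": np.array([1.0, 0.0, 0.0]),
             "rise":  np.array([0.0, 1.0, 0.0])}
```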

4. We use the 'ta' package to compute technical-analysis indicators on our data; please refer to: https://github.com/bukosabino/ta
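As an illustration of what such an indicator looks like, here is a simple moving average computed with pandas alone; the ta package automates dozens of indicators like this from OHLCV columns (check its docs for the exact API):

```python
import pandas as pd

close = pd.Series([10.0, 11.0, 12.0, 11.5, 12.5, 13.0])
# 3-day simple moving average, one of the basic indicators a
# technical-analysis library derives from the closing price.
sma3 = close.rolling(window=3).mean()
```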

5. Stock prices come from: https://finance.yahoo.com/quote/2330.TW?p=2330.TW

How do we do it?

Please check Description.pdf.

Others

1. GOODONE.pkl is the model we trained.

2. We know there is still much room for improvement, and we are trying to make this model better. If you have any questions, please feel free to contact me.
