
hengluchang / Deep News Summarization

License: MIT
News summarization using sequence to sequence model with attention in TensorFlow.

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to Deep News Summarization

dts
A Keras library for multi-step time-series forecasting.
Stars: ✭ 130 (-22.16%)
Mutual labels:  recurrent-neural-networks, lstm, seq2seq
Pytorch Seq2seq
Tutorials on implementing a few sequence-to-sequence (seq2seq) models with PyTorch and TorchText.
Stars: ✭ 3,418 (+1946.71%)
Mutual labels:  lstm, seq2seq, encoder-decoder
Screenshot To Code
A neural network that transforms a design mock-up into a static website.
Stars: ✭ 13,561 (+8020.36%)
Mutual labels:  lstm, seq2seq, encoder-decoder
ai-visual-storytelling-seq2seq
Implementation of seq2seq model for Visual Storytelling Challenge (VIST) http://visionandlanguage.net/VIST/index.html
Stars: ✭ 50 (-70.06%)
Mutual labels:  recurrent-neural-networks, seq2seq, encoder-decoder
Pytorch Pos Tagging
A tutorial on how to implement models for part-of-speech tagging using PyTorch and TorchText.
Stars: ✭ 96 (-42.51%)
Mutual labels:  lstm, recurrent-neural-networks
Multitask sentiment analysis
Multitask Deep Learning for Sentiment Analysis using Character-Level Language Model, Bi-LSTMs for POS Tag, Chunking and Unsupervised Dependency Parsing. Inspired by this great article https://arxiv.org/abs/1611.01587
Stars: ✭ 93 (-44.31%)
Mutual labels:  lstm, recurrent-neural-networks
Pytorch Learners Tutorial
PyTorch tutorial for learners
Stars: ✭ 97 (-41.92%)
Mutual labels:  lstm, recurrent-neural-networks
Chinese Chatbot
A Chinese chatbot trained on 100,000 dialogue pairs with an attention mechanism; it generates a meaningful reply to most general questions. The trained model has been uploaded and can be run directly; if it fails to run, the author will livestream eating a keyboard.
Stars: ✭ 124 (-25.75%)
Mutual labels:  lstm, seq2seq
Lstm Ctc Ocr
Using an RNN (LSTM or GRU) and CTC to convert line images into text, based on Torch7 and warp-ctc
Stars: ✭ 70 (-58.08%)
Mutual labels:  lstm, recurrent-neural-networks
Rnn Text Classification Tf
Tensorflow Implementation of Recurrent Neural Network (Vanilla, LSTM, GRU) for Text Classification
Stars: ✭ 114 (-31.74%)
Mutual labels:  lstm, recurrent-neural-networks
Image Caption Generator
A neural network to generate captions for an image using CNN and RNN with BEAM Search.
Stars: ✭ 126 (-24.55%)
Mutual labels:  lstm, recurrent-neural-networks
Language Translation
Neural machine translator for English2German translation.
Stars: ✭ 82 (-50.9%)
Mutual labels:  lstm, recurrent-neural-networks
Tensorflow seq2seq chatbot
Stars: ✭ 81 (-51.5%)
Mutual labels:  lstm, seq2seq
Nspm
🤖 Neural SPARQL Machines for Knowledge Graph Question Answering.
Stars: ✭ 156 (-6.59%)
Mutual labels:  lstm, seq2seq
Ai Reading Materials
Some of the ML and DL related reading materials, research papers that I've read
Stars: ✭ 79 (-52.69%)
Mutual labels:  lstm, recurrent-neural-networks
Linear Attention Recurrent Neural Network
A recurrent attention module consisting of an LSTM cell which can query its own past cell states by the means of windowed multi-head attention. The formulas are derived from the BN-LSTM and the Transformer Network. The LARNN cell with attention can be easily used inside a loop on the cell state, just like any other RNN. (LARNN)
Stars: ✭ 119 (-28.74%)
Mutual labels:  lstm, recurrent-neural-networks
Document Classifier Lstm
A bidirectional LSTM with attention for multiclass/multilabel text classification.
Stars: ✭ 136 (-18.56%)
Mutual labels:  lstm, recurrent-neural-networks
Stockprediction
Plain Stock Close-Price Prediction via Graves LSTM RNNs
Stars: ✭ 134 (-19.76%)
Mutual labels:  lstm, recurrent-neural-networks
Keras Lmu
Keras implementation of Legendre Memory Units
Stars: ✭ 160 (-4.19%)
Mutual labels:  lstm, recurrent-neural-networks
Poetry Seq2seq
Chinese Poetry Generation
Stars: ✭ 159 (-4.79%)
Mutual labels:  lstm, seq2seq


News summarization

News summarization using a sequence-to-sequence model in TensorFlow.

Introduction

This repository demonstrates abstractive summarization of news articles using the TensorFlow sequence-to-sequence model. The model incorporates an attention mechanism and uses LSTM cells for both the encoder and the decoder.
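To make the architecture concrete, below is a minimal sketch of the encoder-decoder wiring, assuming the legacy TensorFlow 0.12 seq2seq API that this repository targets. The vocabulary size matches the vocab80000 files mentioned later; the embedding size, cell size, and sequence lengths are illustrative assumptions, not the exact settings in execute.py.

# Illustrative only: wiring of an attention-based seq2seq headline generator
# with the legacy TensorFlow 0.12 API (tf.nn.seq2seq). Sizes are examples,
# not the settings used by execute.py / seq2seq.ini.
import tensorflow as tf

vocab_size = 80000         # matches vocab80000_enc.txt / vocab80000_dec.txt
embedding_size = 128       # assumed value
num_units = 256            # assumed LSTM size
enc_len, dec_len = 40, 12  # assumed bucket lengths (article tokens, headline tokens)

# Placeholders: one int32 tensor of shape [batch_size] per time step.
encoder_inputs = [tf.placeholder(tf.int32, shape=[None], name="enc%d" % i)
                  for i in range(enc_len)]
decoder_inputs = [tf.placeholder(tf.int32, shape=[None], name="dec%d" % i)
                  for i in range(dec_len)]

# A single LSTM cell is shared by the encoder and the decoder, and the
# decoder attends over the encoder states at every step.
cell = tf.nn.rnn_cell.BasicLSTMCell(num_units)
outputs, state = tf.nn.seq2seq.embedding_attention_seq2seq(
    encoder_inputs, decoder_inputs, cell,
    num_encoder_symbols=vocab_size,
    num_decoder_symbols=vocab_size,
    embedding_size=embedding_size,
    feed_previous=False)  # set to True at decode / interactive time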

The model is trained on one million Associated Press Worldstream news stories from the English Gigaword second edition. The examples below come from a model trained on an AWS EC2 g2.2xlarge instance for 10 epochs, which took around 20 hours.

For more detailed information, please see our project research paper: Headline Generation Using Recurrent Neural Network.

Examples

News 1

News: A roadside bomb killed five people Thursday near a shelter used as a police recruiting center in northeast Baghdad, police said.

Actual headline: Iraqi police: Bomb kills 5 near police recruiting center in northeast Baghdad

Predicted headline: URGENT Explosion kills five people in Baghdad

News 2

News: The euro hit a record high against the dollar Monday in Asia as concerns over the U.S. subprime mortgage crisis remain a heavy weight on the greenback.

Actual headline: Euro hits record high versus dollar in Asian trading

Predicted headline: Euro hits record high against dollar

How to run

For demonstration, we use the sample file from LDC (a very small portion of English Gigaword) as the dataset to train the model. If you want to reproduce results like the examples above, a larger training set is necessary. You can download trained model parameters, which were trained on a larger portion of Gigaword, by following the instructions in the Download vocabs and trained model parameters section below. The full English Gigaword corpus can be obtained from university libraries.

Pre-req

  • Install Python 3
  • Download deep-news-summarization
$ git clone https://github.com/hengluchang/deep-news-summarization.git
  • Install TensorFlow 0.12, pandas, NumPy, NLTK, and requests
$ pip install -r requirements.txt
  • Create two folders named "working_dir" and "output" under the deep-news-summarization folder.
$ cd deep-news-summarization
$ mkdir -p working_dir output

Download vocabs and trained model parameters

  • Run the download_vocabs_and_trained_params.py file. This will download the encoder and decoder vocabularies and the trained model parameters to the working_dir folder (a sketch of the download pattern appears after this list).
$ python download_vocabs_and_trained_params.py ./working_dir
  • Go to the Interactive testing section below to reproduce results like the examples above.
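For reference, the download step boils down to fetching a few files over HTTP into ./working_dir. The sketch below shows that pattern with requests; the file names follow this README, while BASE_URL is a placeholder and not the real hosting location used by the script.

# Illustrative download helper: fetch vocabularies and checkpoint files into
# working_dir. BASE_URL is a placeholder; the real script knows the actual
# hosting location of the trained parameters.
import os
import sys
import requests

BASE_URL = "https://example.com/deep-news-summarization"  # placeholder, not the real URL
FILES = ["vocab80000_enc.txt", "vocab80000_dec.txt"]       # plus the checkpoint files

def download_all(target_dir):
    os.makedirs(target_dir, exist_ok=True)
    for name in FILES:
        resp = requests.get("%s/%s" % (BASE_URL, name), stream=True)
        resp.raise_for_status()
        with open(os.path.join(target_dir, name), "wb") as f:
            for chunk in resp.iter_content(chunk_size=1 << 20):
                f.write(chunk)

if __name__ == "__main__":
    download_all(sys.argv[1] if len(sys.argv) > 1 else "./working_dir")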

Train your own summarizer

  • Set "mode = train" in seq2seq.ini file.
  • Run the split_data.py file to split the dataset into training, evaluation, and testing sets. Six files, train_enc.txt, eval_enc.txt, test_enc.txt, train_dec.txt, eval_dec.txt, and test_dec.txt, will be created under ./dataset (see the sketch after this list for what the split looks like).
$ python split_data.py
  • Run the execute.py file. This will create vocab80000_enc.txt, vocab80000_dec.txt, and checkpoint data under ./working_dir. If you use your own dataset, optimizing the bucket sizes in execute.py to minimize padding can help you get better results. Also, keep training the model until the perplexity on the evaluation set is under 10 for better performance.
$ python execute.py
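If you adapt split_data.py to your own data, the split itself is simple: pair each article with its headline and write aligned encoder and decoder files for each split. The sketch below assumes tab-separated article/headline pairs and an 80/10/10 split; the actual script may read the Gigaword sample differently.

# Illustrative 80/10/10 split into the six files execute.py expects under
# ./dataset. Assumes one "article<TAB>headline" pair per input line; the
# actual split_data.py may parse its input differently.
import os
import random

def split(pairs_path, out_dir="./dataset", seed=0):
    with open(pairs_path, encoding="utf-8") as f:
        pairs = [line.rstrip("\n").split("\t", 1) for line in f if "\t" in line]
    random.Random(seed).shuffle(pairs)

    n = len(pairs)
    splits = {"train": pairs[: int(0.8 * n)],
              "eval": pairs[int(0.8 * n): int(0.9 * n)],
              "test": pairs[int(0.9 * n):]}

    os.makedirs(out_dir, exist_ok=True)
    for name, rows in splits.items():
        # *_enc.txt holds articles (encoder side), *_dec.txt holds headlines.
        with open(os.path.join(out_dir, "%s_enc.txt" % name), "w", encoding="utf-8") as enc, \
             open(os.path.join(out_dir, "%s_dec.txt" % name), "w", encoding="utf-8") as dec:
            for article, headline in rows:
                enc.write(article + "\n")
                dec.write(headline + "\n")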

Testing

  • Set "mode = test" in seq2seq.ini file.
  • Run the execute.py file. This will load the model parameters (seq2seq.ckpt-XXXXX) into your model and create predicted_test_headline.txt under ./output.
$ python execute.py
  • Run the evaluation.py file to get BLEU scores between the actual and predicted headlines. This will create a BLEU.txt file (a minimal version of the scoring is sketched after this list).
$ python evaluation.py
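evaluation.py compares each predicted headline with the corresponding actual headline. A minimal version of that scoring with NLTK's sentence-level BLEU is sketched below; the file names follow this README, and the smoothing function is an added assumption rather than necessarily what the script uses.

# Illustrative BLEU scoring of predicted vs. actual headlines with NLTK.
# File names follow the README; the smoothing function is an added assumption.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

smooth = SmoothingFunction().method1

with open("./dataset/test_dec.txt", encoding="utf-8") as ref_f, \
     open("./output/predicted_test_headline.txt", encoding="utf-8") as hyp_f, \
     open("BLEU.txt", "w", encoding="utf-8") as out_f:
    scores = []
    for reference, hypothesis in zip(ref_f, hyp_f):
        # sentence_bleu expects a list of tokenized references and a tokenized hypothesis.
        score = sentence_bleu([reference.split()], hypothesis.split(),
                              smoothing_function=smooth)
        scores.append(score)
        out_f.write("%.4f\n" % score)
    out_f.write("average BLEU: %.4f\n" % (sum(scores) / max(len(scores), 1)))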

Interactive testing

  • Set "mode = interactive" in seq2seq.ini file.
  • Run execute.py. This will load the model parameters (seq2seq.ckpt-XXXXX) into your model and prompt the user for an input sentence to summarize.
$ python execute.py

References

Research Paper References
