
cbaziotis / seq3

Licence: other
Source code for the NAACL 2019 paper "SEQ^3: Differentiable Sequence-to-Sequence-to-Sequence Autoencoder for Unsupervised Abstractive Sentence Compression"

Programming Languages

  • python
  • shell

Projects that are alternatives to or similar to seq3

DocSum
A tool to automatically summarize documents abstractively using the BART or PreSumm Machine Learning Model.
Stars: ✭ 58 (-52.07%)
Mutual labels:  summarization, abstractive-summarization
Copycat-abstractive-opinion-summarizer
ACL 2020 Unsupervised Opinion Summarization as Copycat-Review Generation
Stars: ✭ 76 (-37.19%)
Mutual labels:  summarization, abstractive-summarization
PlanSum
[AAAI2021] Unsupervised Opinion Summarization with Content Planning
Stars: ✭ 25 (-79.34%)
Mutual labels:  summarization, abstractive-summarization
awesome-text-summarization
Text summarization starting from scratch.
Stars: ✭ 86 (-28.93%)
Mutual labels:  abstractive-summarization, sentence-compression
SRB
Code for "Improving Semantic Relevance for Sequence-to-Sequence Learning of Chinese Social Media Text Summarization"
Stars: ✭ 41 (-66.12%)
Mutual labels:  summarization, seq2seq
Entity2Topic
[NAACL2018] Entity Commonsense Representation for Neural Abstractive Summarization
Stars: ✭ 20 (-83.47%)
Mutual labels:  summarization, abstractive-summarization
gazeta
Gazeta: Dataset for automatic summarization of Russian news
Stars: ✭ 25 (-79.34%)
Mutual labels:  summarization, abstractive-summarization
Seq2seq Summarizer
Pointer-generator reinforced seq2seq summarization in PyTorch
Stars: ✭ 306 (+152.89%)
Mutual labels:  summarization, seq2seq
Tensorflow Tutorials
Provides source code for practicing TensorFlow step by step, from the basics to applications.
Stars: ✭ 2,096 (+1632.23%)
Mutual labels:  autoencoder, seq2seq
Text summarization with tensorflow
Implementation of a seq2seq model for summarization of textual data. Demonstrated on amazon reviews, github issues and news articles.
Stars: ✭ 226 (+86.78%)
Mutual labels:  summarization, seq2seq
factsumm
FactSumm: Factual Consistency Scorer for Abstractive Summarization
Stars: ✭ 83 (-31.4%)
Mutual labels:  summarization, abstractive-summarization
seq2seq-autoencoder
Theano implementation of Sequence-to-Sequence Autoencoder
Stars: ✭ 12 (-90.08%)
Mutual labels:  autoencoder, seq2seq
probabilistic nlg
Tensorflow Implementation of Stochastic Wasserstein Autoencoder for Probabilistic Sentence Generation (NAACL 2019).
Stars: ✭ 28 (-76.86%)
Mutual labels:  autoencoder, seq2seq
Dual-CNN-Models-for-Unsupervised-Monocular-Depth-Estimation
Dual CNN Models for Unsupervised Monocular Depth Estimation
Stars: ✭ 36 (-70.25%)
Mutual labels:  autoencoder
frame
Notetaking Electron app that can answer your questions and makes summaries for you
Stars: ✭ 88 (-27.27%)
Mutual labels:  summarization
GATE
The implementation of "Gated Attentive-Autoencoder for Content-Aware Recommendation"
Stars: ✭ 65 (-46.28%)
Mutual labels:  autoencoder
tensorflow-ml-nlp-tf2
Hands-on materials for "Natural Language Processing with TensorFlow 2 and Machine Learning (from Logistic Regression to BERT and GPT-3)".
Stars: ✭ 245 (+102.48%)
Mutual labels:  seq2seq
2D-and-3D-Deep-Autoencoder
Convolutional AutoEncoder application on MRI images
Stars: ✭ 57 (-52.89%)
Mutual labels:  autoencoder
chatbot
A Chinese chatbot based on deep learning, with detailed tutorials and thoroughly commented code; a good choice for learning.
Stars: ✭ 94 (-22.31%)
Mutual labels:  seq2seq
Word-Level-Eng-Mar-NMT
Translating English sentences to Marathi using Neural Machine Translation
Stars: ✭ 37 (-69.42%)
Mutual labels:  seq2seq

This repository contains source code for the NAACL 2019 paper "SEQ^3: Differentiable Sequence-to-Sequence-to-Sequence Autoencoder for Unsupervised Abstractive Sentence Compression" (paper: https://arxiv.org/abs/1904.03651).

Introduction

The paper presents a sequence-to-sequence-to-sequence (SEQ3) autoencoder consisting of two chained encoder-decoder pairs, in which a sequence of words serves as the discrete latent representation. To generate this latent sequence of words, we employ continuous approximations to sampling from categorical distributions, which enables gradient-based optimization.
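
For intuition, here is a minimal, self-contained sketch (not the repo's code) of how such a continuous relaxation can be implemented with PyTorch's built-in Gumbel-Softmax: a soft one-hot vector stands in for a sampled word, so gradients can flow back through the "sampling" step.

# Illustrative sketch of differentiable word "sampling" via Gumbel-Softmax
# (generic PyTorch example; the paper's actual implementation may differ).
import torch
import torch.nn.functional as F

vocab_size, emb_dim = 10, 4
logits = torch.randn(1, vocab_size, requires_grad=True)   # decoder scores over the vocabulary
embeddings = torch.randn(vocab_size, emb_dim)              # word embedding matrix

# Soft one-hot "sample": behaves like a drawn word, but remains differentiable.
soft_one_hot = F.gumbel_softmax(logits, tau=0.5, hard=False)
latent_word_emb = soft_one_hot @ embeddings                # embedding fed to the next encoder

latent_word_emb.sum().backward()                           # gradients reach the logits
print(logits.grad is not None)                             # prints: True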

We apply the proposed model to unsupervised abstractive sentence compression, where the first and last sequences are the input and reconstructed sentences, respectively, while the middle sequence is the compressed sentence. Constraining the length of the latent word sequences forces the model to distill important information from the input.
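
The following toy sketch (purely illustrative; the stand-in functions do not reflect the repo's model classes) shows the resulting three-sequence data flow under such a length constraint:

# Toy illustration of the input -> compressed -> reconstruction flow with a length budget.
def seq3_step(input_tokens, compressor, reconstructor, ratio=0.5):
    budget = max(1, int(ratio * len(input_tokens)))    # constrain the latent (summary) length
    latent_words = compressor(input_tokens, budget)    # compressed (middle) sequence
    reconstruction = reconstructor(latent_words)       # reconstructed (last) sequence
    return latent_words, reconstruction

# Dummy stand-ins, just to make the flow runnable; the real model uses trained encoder-decoders.
toy_compress = lambda tokens, n: tokens[:n]
toy_reconstruct = lambda tokens: tokens + ["<unk>"] * len(tokens)
print(seq3_step("the cat sat on the mat".split(), toy_compress, toy_reconstruct))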

Architecture

Reference
@inproceedings{baziotis2019naacl,
    title = {\textsc{SEQ}\textsuperscript{3}: Differentiable Sequence-to-Sequence-to-Sequence Autoencoder for Unsupervised Abstractive Sentence Compression},
    author = {Christos Baziotis and Ion Androutsopoulos and Ioannis Konstas and Alexandros Potamianos},
    booktitle = {Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL:HLT)},
    address = {Minneapolis, USA},
    month = {June},
    url = {https://arxiv.org/abs/1904.03651},
    year = {2019}
}

Prerequisites

Dependencies

  • PyTorch version >= 1.0.0
  • Python version >= 3.6

Install Requirements

Create Environment (Optional): Ideally, you should create an environment for the project.

conda create -n seq3 python=3
conda activate seq3

Install PyTorch 1.0 (with the desired CUDA version, if you want to use the GPU) and then install the rest of the requirements:

pip install -r requirements.txt
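
Optionally, you can check from Python that the installed build is recent enough and, if applicable, that it can see the GPU (a generic sanity check, not a project script):

# Generic sanity check for the PyTorch installation.
import torch
print(torch.__version__)          # should be >= 1.0.0
print(torch.cuda.is_available())  # True if a CUDA build and a GPU are available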

Download Data

To train the model you need to download the training data and the pretrained word embeddings.

Dataset: In our experiments we used the Gigaword dataset, which can be downloaded from https://github.com/harvardnlp/sent-summary. Extract the data into datasets/gigaword/ and organize the files as follows:

datasets
└── gigaword
    ├── dev/
    ├── test1951/
    ├── train.article.txt
    ├── train.title.txt
    ├── valid.article.filter.txt
    └── valid.title.filter.txt

In datasets/gigaword/dev/ you will find a small subset of the source training data (the target summaries are never used), i.e., the articles, which was used for prototyping, as well as a dev set with 4K parallel sentences for evaluation.

You can also use your own data, as long as the source and target data are text files with one sentence per line.
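
As an illustrative check (not part of the repo), you can verify that a source/target file pair follows this format, i.e., plain text with one sentence per line and matching line counts:

# Hypothetical helper: confirm that two parallel text files have the same number of lines.
def count_lines(path):
    with open(path, encoding="utf-8") as f:
        return sum(1 for _ in f)

src = "datasets/gigaword/train.article.txt"   # example paths from the layout above
tgt = "datasets/gigaword/train.title.txt"
assert count_lines(src) == count_lines(tgt), "source/target line counts differ"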

Embeddings: In our experiments we used the "Wikipedia 2014 + Gigaword 5" (6B) GloVe embeddings (http://nlp.stanford.edu/data/wordvecs/glove.6B.zip). Put the embedding files in the embeddings/ directory.
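
The GloVe files are plain text, with one word per line followed by its vector components; if you want to inspect them, a minimal loader (for illustration only, not the project's embedding-loading code) looks like this:

# Minimal GloVe loader for inspection purposes.
import numpy as np

def load_glove(path):
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vectors

glove = load_glove("embeddings/glove.6B.100d.txt")  # ~400K words, 100-dimensional vectors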

Training

In order to train a model, either the LM or SEQ3, you need to run the corresponding Python script and pass a yaml model config as an argument. The yaml config specifies everything about the experiment to be executed; therefore, to change a model, edit an existing yaml config or create a new one. The model config files are under the model_configs/ directory; use the provided configs as a reference. Each parameter is documented in comments, although most of them are self-explanatory.
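
For example, a config can be loaded and inspected with PyYAML; this only illustrates that the file is a plain yaml mapping of experiment settings, since the training scripts parse the config themselves:

# Load a model config to inspect its settings (illustrative; key names depend on the config).
import yaml

with open("model_configs/camera/seq3.full.yaml") as f:
    config = yaml.safe_load(f)   # nested dict of experiment settings
print(sorted(config.keys()))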

Train the Language Model prior

In our experiments we trained the LM on the source (only) sentences of the Gigaword dataset.

python models/sent_lm.py --config model_configs/camera/lm_prior.yaml 

After the training ends, the checkpoint with the best validation loss will be saved under the directory checkpoints/.

Train SEQ3

Train the LM prior before training SEQ3. You can still train SEQ3 without it, but in that case the LM prior loss will be disabled.

python models/seq3.py --config model_configs/camera/seq3.full.yaml 

Prototyping: You can experiment with SEQ3 without downloading the full training data, by training with the configs model_configs/lm.yaml and model_configs/seq3.yaml, respectively, which use the small subset of the training data.

Troubleshooting

  • If you get the error ModuleNotFoundError: No module named 'X', add the project's root directory to your PYTHONPATH (for example in your ~/.bashrc), or simply run the following from the project root:

    export PYTHONPATH='.'
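
    Alternatively (a generic Python workaround, not something the repo's scripts do), you can prepend the project root to sys.path before importing project modules:

    # Generic alternative: make project modules importable when running from the project root.
    import sys, os
    sys.path.insert(0, os.path.abspath("."))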
    