clovaai / FocusSeq2Seq

License: MIT
[EMNLP 2019] Mixture Content Selection for Diverse Sequence Generation (Question Generation / Abstractive Summarization)


Mixture Content Selection for Diverse Sequence Generation

We explicitly separate diversification from generation using a mixture-of-experts content selection module (called Selector) that guides an encoder-decoder model.

methods_figure

  1. Diverse Content Selection (one-to-many): Selector samples different binary masks (called focus; m1, m2, and m3 in the figure) on a source sequence.

  2. Focused Generation (one-to-one): an encoder-decoder model generates different sequences from the source sequence guided by different masks.

Not only does this improve the diversity of the generated sequences, it also improves their accuracy (fidelity), since conventional models often learn a suboptimal mapping that lies in the middle of the targets but near none of them.
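The two-step procedure above can be sketched in a few lines of PyTorch. This is a hypothetical illustration, not the repository's actual Selector implementation: each of n_mixture experts is an embedding that, combined with the encoder states, predicts per-token focus probabilities from which a binary focus mask is sampled.

```python
import torch
import torch.nn as nn

class Selector(nn.Module):
    """Illustrative mixture-of-experts focus Selector (names and sizes
    are assumptions, not the repo's actual module)."""
    def __init__(self, hidden_size=8, n_mixture=3):
        super().__init__()
        self.mixture_embedding = nn.Embedding(n_mixture, hidden_size)
        self.out = nn.Linear(hidden_size * 2, 1)

    def forward(self, enc_states, mixture_id):
        # enc_states: [batch, src_len, hidden]; mixture_id: [batch]
        B, L, H = enc_states.size()
        expert = self.mixture_embedding(mixture_id)       # [B, H]
        expert = expert.unsqueeze(1).expand(B, L, H)      # broadcast over tokens
        logits = self.out(torch.cat([enc_states, expert], dim=-1)).squeeze(-1)
        p_focus = torch.sigmoid(logits)                   # per-token focus prob
        mask = torch.bernoulli(p_focus)                   # sampled binary focus
        return mask, p_focus

B, L, H, K = 2, 5, 8, 3
enc = torch.randn(B, L, H)
sel = Selector(hidden_size=H, n_mixture=K)
# K different binary masks over the same source; each would guide
# one pass of the encoder-decoder in step 2.
masks = [sel(enc, torch.full((B,), k, dtype=torch.long))[0] for k in range(K)]
```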

Prerequisites

1) Hardware

  • All experiments in the paper were conducted with a single P40 GPU (24GB).
  • You may need to adjust the batch size and model size to fit your GPU memory.

2) Software

  • Ubuntu 16.04 or 18.04 (not tested on other versions, but they might work)
  • Python 3.6+
    • pip install -r requirements.txt or manually install the packages below.
    torch==1.1
    nltk
    pandas
    tqdm
    pyyaml
    git+git://github.com/bheinzerling/pyrouge
    
  • ROUGE-1.5.5 (for CNN-DM evaluation)
    # From https://github.com/falcondai/pyrouge/tree/9cdbfbda8b8d96e7c2646ffd048743ddcf417ed9
    wget https://www.dropbox.com/s/dl/zqhvtgfg40h3g3l/rouge_1.5.5.zip
    unzip rouge_1.5.5.zip
    mv RELEASE-1.5.5 utils/rouge

3) Data

# Download preprocessed data at ./squad/, ./cnndm/ and ./glove/ respectively
wget https://www.dropbox.com/s/dl/0gtz5ckh3ie55oq/emnlp2019focus_redistribute.zip
unzip emnlp2019focus_redistribute.zip

# Generate train_df.pkl, val_df.pkl, test_df.pkl and vocab.pkl at ./squad_out/
python QG_data_loader.py

# Generate train_df.pkl, val_df.pkl, test_df.pkl and vocab.pkl at ./cnndm_out/
python CNNDM_data_loader.py

Details of the dataset sources are in Dataset_details.md
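The loaders above write pickled pandas DataFrames plus a vocab pickle. A minimal sketch of that artifact shape (the column names and vocab layout here are illustrative assumptions, not the repository's actual schema):

```python
import os
import pickle
import tempfile

import pandas as pd

out_dir = tempfile.mkdtemp()  # stands in for ./squad_out/ or ./cnndm_out/

# Hypothetical schema: one source/target pair per row.
df = pd.DataFrame({'source': ['the cat sat'], 'target': ['what sat ?']})
df.to_pickle(os.path.join(out_dir, 'train_df.pkl'))

vocab = {'<pad>': 0, '<unk>': 1, 'the': 2, 'cat': 3, 'sat': 4}
with open(os.path.join(out_dir, 'vocab.pkl'), 'wb') as f:
    pickle.dump(vocab, f)

# Downstream code can then reload the artifacts:
train_df = pd.read_pickle(os.path.join(out_dir, 'train_df.pkl'))
with open(os.path.join(out_dir, 'vocab.pkl'), 'rb') as f:
    vocab = pickle.load(f)
```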

Run

See configs.py for more configuration options.

Train

  1. Question Generation
python train.py --task=QG --model=NQG --load_glove=True --feature_rich --data=squad \
    --rnn=GRU --dec_hidden_size=512 --dropout=0.5 \
    --batch_size=64 --eval_batch_size=64 \
    --use_focus=True --n_mixture=3 --decoding=greedy
  2. Abstractive Summarization
python train.py --task=SM --model=PG --load_glove=False --data=cnndm \
    --rnn=LSTM --dec_hidden_size=512 \
    --batch_size=16 --eval_batch_size=64 \
    --use_focus=True --n_mixture=3 --decoding=greedy

Evaluation

Add the --load_ckpt option (an integer checkpoint epoch, e.g. 5) and the --eval_only flag to the corresponding training command.

  1. Question Generation
python evaluate.py --task=QG --model=NQG --load_glove=True --feature_rich --data=squad \
    --rnn=GRU --dec_hidden_size=512 --dropout=0.5 \
    --batch_size=64 --eval_batch_size=64 \
    --use_focus=True --n_mixture=3 --decoding=greedy \
    --load_ckpt=5 --eval_only
  2. Abstractive Summarization
python evaluate.py --task=SM --model=PG --load_glove=False --data=cnndm \
    --rnn=LSTM --dec_hidden_size=512 \
    --batch_size=16 --eval_batch_size=64 \
    --use_focus=True --n_mixture=3 --decoding=greedy \
    --load_ckpt=5 --eval_only
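With --n_mixture=3 the model produces three candidates per source. Results for diverse generation are commonly reported with oracle-style metrics that score each example by its best candidate; here is a toy sketch of that idea, where a simple token-overlap scorer stands in for BLEU/ROUGE (the function names are illustrative, not the repository's):

```python
def overlap(hyp, ref):
    """Toy scorer: fraction of reference token types covered by the hypothesis."""
    h, r = set(hyp.split()), set(ref.split())
    return len(h & r) / max(len(r), 1)

def oracle_score(candidates, reference):
    """Oracle metric: take the best-scoring of the K candidates."""
    return max(overlap(c, reference) for c in candidates)

# Three candidates, as produced by three mixture components.
cands = ["what did the cat do ?", "where did the cat sit ?", "who sat ?"]
ref = "where did the cat sit ?"
print(oracle_score(cands, ref))  # scored by the best candidate
```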

Reference

If you use this code or model as part of any published research, please cite the following paper.

@inproceedings{cho2019focus,
  title     = {Mixture Content Selection for Diverse Sequence Generation},
  author    = {Cho, Jaemin and Seo, Minjoon and Hajishirzi, Hannaneh},
  booktitle = {EMNLP},
  year      = {2019}
}

License

Copyright (c) 2019-present NAVER Corp.

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.