
salesforce / query-focused-sum

License: other
Official code repository for "Exploring Neural Models for Query-Focused Summarization".

Programming Languages

python
139,335 projects - #7 most used programming language
shell
77,523 projects

Projects that are alternatives to or similar to query-focused-sum

hf-experiments
Experiments with Hugging Face 🔬 🤗
Stars: ✭ 37 (+117.65%)
Mutual labels:  question-answering, summarization
Haystack
🔍 Haystack is an open source NLP framework that leverages Transformer models. It enables developers to implement production-ready neural search, question answering, semantic document search and summarization for a wide range of applications.
Stars: ✭ 3,409 (+19952.94%)
Mutual labels:  question-answering, summarization
text2text
Text2Text: Cross-lingual natural language processing and generation toolkit
Stars: ✭ 188 (+1005.88%)
Mutual labels:  question-answering, summarization
verseagility
Ramp up your custom natural language processing (NLP) task, allowing you to bring your own data, use your preferred frameworks and bring models into production.
Stars: ✭ 23 (+35.29%)
Mutual labels:  question-answering, summarization
CompareModels TRECQA
Compare six baseline deep learning models on TrecQA
Stars: ✭ 61 (+258.82%)
Mutual labels:  question-answering
2021-dialogue-summary-competition
[2021 Hunminjeongeum Korean Speech and Natural Language AI Competition] A repository for sharing team 알라꿍달라꿍's dialogue summarization training and inference code from the dialogue summarization track.
Stars: ✭ 86 (+405.88%)
Mutual labels:  summarization
PororoQA
PororoQA, https://arxiv.org/abs/1707.00836
Stars: ✭ 26 (+52.94%)
Mutual labels:  question-answering
finance-qa-spider
Text data collection/crawling for financial Q&A platforms; data sources include the Shanghai Stock Exchange, the Shenzhen Stock Exchange, 全景网 (p5w.net), and the Sina stock forums (新浪股吧).
Stars: ✭ 33 (+94.12%)
Mutual labels:  question-answering
TOEFL-QA
A question answering dataset for machine comprehension of spoken content
Stars: ✭ 61 (+258.82%)
Mutual labels:  question-answering
textdigester
TextDigester: document summarization java library
Stars: ✭ 23 (+35.29%)
Mutual labels:  summarization
WikiTableQuestions
A dataset of complex questions on semi-structured Wikipedia tables
Stars: ✭ 81 (+376.47%)
Mutual labels:  question-answering
mrqa
Code for EMNLP-IJCNLP 2019 MRQA Workshop Paper: "Domain-agnostic Question-Answering with Adversarial Training"
Stars: ✭ 35 (+105.88%)
Mutual labels:  question-answering
video-summarizer
Summarizes videos into much shorter videos. Ideal for long lecture videos.
Stars: ✭ 92 (+441.18%)
Mutual labels:  summarization
SQUAD2.Q-Augmented-Dataset
Augmented version of SQUAD 2.0 for Questions
Stars: ✭ 31 (+82.35%)
Mutual labels:  question-answering
MSMARCO
Machine Comprehension Train on MSMARCO with S-NET Extraction Modification
Stars: ✭ 31 (+82.35%)
Mutual labels:  question-answering
deformer
[ACL 2020] DeFormer: Decomposing Pre-trained Transformers for Faster Question Answering
Stars: ✭ 111 (+552.94%)
Mutual labels:  question-answering
Medi-CoQA
Conversational Question Answering on Clinical Text
Stars: ✭ 22 (+29.41%)
Mutual labels:  question-answering
cherche
📑 Neural Search
Stars: ✭ 196 (+1052.94%)
Mutual labels:  question-answering
head-qa
HEAD-QA: A Healthcare Dataset for Complex Reasoning
Stars: ✭ 20 (+17.65%)
Mutual labels:  question-answering
unanswerable qa
The official implementation for ACL 2021 "Challenges in Information Seeking QA: Unanswerable Questions and Paragraph Retrieval".
Stars: ✭ 21 (+23.53%)
Mutual labels:  question-answering

Exploring Neural Models for Query-Focused Summarization

This is the official code repository for Exploring Neural Models for Query-Focused Summarization by Jesse Vig*, Alexander R. Fabbri*, Wojciech Kryściński*, Chien-Sheng Wu, and Wenhao Liu (*equal contribution).

We present code and instructions for reproducing the experiments in the paper and for running the models on your own datasets.

Introduction

Query-focused summarization (QFS) aims to produce summaries that answer particular questions of interest, enabling greater user control and personalization. In our paper we conduct a systematic exploration of neural approaches to QFS, considering two general classes of methods: two-stage extractive-abstractive solutions and end-to-end models. Within those categories, we investigate existing methods and present two model extensions that achieve state-of-the-art performance on the QMSum dataset by a margin of up to 3.38 ROUGE-1, 3.72 ROUGE-2, and 3.28 ROUGE-L.
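
For readers unfamiliar with the metric: ROUGE-1 and ROUGE-2 measure unigram and bigram overlap between a candidate and a reference summary, and ROUGE-L measures their longest common subsequence, with the F-measure conventionally reported. As a quick illustration (this is not the paper's evaluation script, and the example strings are invented), the scores can be computed with the rouge-score package:

from rouge_score import rouge_scorer

# Score one invented candidate summary against one invented reference.
scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
reference = "The group agreed to drop the LCD screen to keep the remote within budget."
candidate = "The team decided to remove the LCD screen so the remote stays on budget."
for name, result in scorer.score(reference, candidate).items():
    # Each result carries precision, recall, and F-measure.
    print(f"{name}: F1 = {result.fmeasure:.4f}")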

Two-stage models

Two-stage approaches consist of an extractor model, which extracts the parts of the source document relevant to the input query, and an abstractor model, which synthesizes the extracted segments into a final summary.

See the extractors directory for instructions and code for training and evaluating two-stage models.
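
For intuition, here is a minimal, hypothetical sketch of the extract-then-abstract flow built on Hugging Face Transformers. The lexical-overlap extractor and the off-the-shelf facebook/bart-large-cnn abstractor are illustrative stand-ins for the paper's trained models, and the top-k cutoff is an arbitrary choice:

# A minimal, hypothetical sketch of extract-then-abstract. The overlap-based
# extractor is a toy stand-in for the neural extractors in the extractors
# directory, and facebook/bart-large-cnn is an arbitrary off-the-shelf
# abstractor rather than the paper's fine-tuned model.
from collections import Counter
from transformers import pipeline

def extract(query, utterances, k=8):
    # Keep the k utterances sharing the most tokens with the query,
    # preserving their original transcript order.
    query_counts = Counter(query.lower().split())
    def overlap(utterance):
        return sum((Counter(utterance.lower().split()) & query_counts).values())
    top_k = set(sorted(utterances, key=overlap, reverse=True)[:k])
    return [u for u in utterances if u in top_k]

def abstract(query, extracted):
    # Condition the abstractor on the query by prepending it to the input.
    summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
    return summarizer(query + " " + " ".join(extracted),
                      max_length=128, min_length=16, do_sample=False)[0]["summary_text"]

query = "What did the group decide about the remote's budget?"
utterances = [
    "Project manager: let's go over the budget for the remote.",
    "Industrial designer: the titanium case pushes us over twelve euros.",
    "Project manager: then we drop the LCD and keep the rubber case.",
]
summary = abstract(query, extract(query, utterances))

The appeal of the two-stage setup is that the abstractor only ever sees an input short enough for a standard encoder-decoder, however long the source transcript is.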

Segment Encoder

The Segment Encoder is an end-to-end model that uses sparse local attention to achieve state-of-the-art ROUGE scores on the QMSum dataset.

To replicate the QMSum experiments, or to train and evaluate the Segment Encoder on your own dataset, see the multiencoder directory.
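
As described in the paper, the Segment Encoder splits the source into fixed-length overlapping segments, prepends the query to each segment, encodes the segments independently with a shared encoder, and lets the decoder attend over the concatenated encoder states. The sketch below covers only the input-preparation step; the segment length, stride, "</s>" separator, and BART tokenizer are illustrative choices rather than the settings used in multiencoder:

# A sketch of the Segment Encoder's input preparation: overlapping windows
# over the source, each prefixed with the query. seg_len, stride, the "</s>"
# separator, and the BART tokenizer are illustrative choices; the real
# implementation lives in the multiencoder directory.
from transformers import AutoTokenizer

def make_segments(query, source_tokens, seg_len=512, stride=256):
    # Slide a seg_len-token window over the source in stride-sized steps so
    # that consecutive segments overlap by seg_len - stride tokens.
    segments = []
    for start in range(0, max(len(source_tokens) - stride, 1), stride):
        window = source_tokens[start:start + seg_len]
        segments.append(query + " </s> " + " ".join(window))
    return segments

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
query = "What was decided about the project budget?"
meeting = ("the project manager opened the meeting and reviewed the agenda " * 200).split()
segments = make_segments(query, meeting)
# Each segment is encoded independently by a shared encoder; the token-level
# encoder states are then concatenated so the decoder attends across all
# segments while generating the summary.
batch = tokenizer(segments, padding=True, truncation=True, max_length=512, return_tensors="pt")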

Citation

When referencing this repository, please cite this paper:

@misc{vig-etal-2021-exploring,
      title={Exploring Neural Models for Query-Focused Summarization}, 
      author={Jesse Vig and Alexander R. Fabbri and Wojciech Kry{\'s}ci{\'n}ski and Chien-Sheng Wu and Wenhao Liu},
      year={2021},
      eprint={2112.07637},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2112.07637}
}

License

This repository is released under the BSD-3 License.
