
kenlimmj / rouge

License: MIT
A JavaScript implementation of the Recall-Oriented Understudy for Gisting Evaluation (ROUGE) evaluation metric for summaries.

Programming Languages

javascript

Projects that are alternatives to or similar to rouge

DeepChannel
The pytorch implementation of paper "DeepChannel: Salience Estimation by Contrastive Learning for Extractive Document Summarization"
Stars: ✭ 24 (-33.33%)
Mutual labels:  summarization
SelSum
Abstractive opinion summarization system (SelSum) and the largest dataset of Amazon product summaries (AmaSum). EMNLP 2021 conference paper.
Stars: ✭ 36 (+0%)
Mutual labels:  summarization
verseagility
Ramp up your custom natural language processing (NLP) task, allowing you to bring your own data, use your preferred frameworks and bring models into production.
Stars: ✭ 23 (-36.11%)
Mutual labels:  summarization
TitleStylist
Source code for our "TitleStylist" paper at ACL 2020
Stars: ✭ 72 (+100%)
Mutual labels:  summarization
TR-TPBS
A Dataset for Thai Text Summarization with over 310K articles.
Stars: ✭ 25 (-30.56%)
Mutual labels:  summarization
seq3
Source code for the NAACL 2019 paper "SEQ^3: Differentiable Sequence-to-Sequence-to-Sequence Autoencoder for Unsupervised Abstractive Sentence Compression"
Stars: ✭ 121 (+236.11%)
Mutual labels:  summarization
Machine-Learning-Notes
Lecture Notes of Andrew Ng's Machine Learning Course
Stars: ✭ 60 (+66.67%)
Mutual labels:  summarization
data-summ-cnn dailymail
non-anonymized cnn/dailymail dataset for text summarization
Stars: ✭ 12 (-66.67%)
Mutual labels:  summarization
frame
Notetaking Electron app that can answer your questions and makes summaries for you
Stars: ✭ 88 (+144.44%)
Mutual labels:  summarization
nlp-akash
Natural Language Processing notes and implementations.
Stars: ✭ 66 (+83.33%)
Mutual labels:  summarization
teanaps
An open-source Python library for natural language processing and text analysis.
Stars: ✭ 91 (+152.78%)
Mutual labels:  summarization
hf-experiments
Experiments with Hugging Face 🔬 🤗
Stars: ✭ 37 (+2.78%)
Mutual labels:  summarization
FewSum
Few-shot learning framework for opinion summarization published at EMNLP 2020.
Stars: ✭ 29 (-19.44%)
Mutual labels:  summarization
BillSum
US Bill Summarization Corpus
Stars: ✭ 31 (-13.89%)
Mutual labels:  summarization
DocSum
A tool to automatically summarize documents abstractively using the BART or PreSumm Machine Learning Model.
Stars: ✭ 58 (+61.11%)
Mutual labels:  summarization
factsumm
FactSumm: Factual Consistency Scorer for Abstractive Summarization
Stars: ✭ 83 (+130.56%)
Mutual labels:  summarization
struct infused summ
(COLING'18) The source code for the paper "Structure-Infused Copy Mechanisms for Abstractive Summarization".
Stars: ✭ 29 (-19.44%)
Mutual labels:  summarization
PlanSum
[AAAI2021] Unsupervised Opinion Summarization with Content Planning
Stars: ✭ 25 (-30.56%)
Mutual labels:  summarization
HSSC
Code for "A Hierarchical End-to-End Model for Jointly Improving Text Summarization and Sentiment Classification" (IJCAI 2018)
Stars: ✭ 23 (-36.11%)
Mutual labels:  summarization
code summarization public
source code for 'Improving automatic source code summarization via deep reinforcement learning'
Stars: ✭ 71 (+97.22%)
Mutual labels:  summarization

ROUGE.js

A JavaScript implementation of the Recall-Oriented Understudy for Gisting Evaluation (ROUGE) evaluation metric for summaries. This package implements the following metrics:

  • n-gram (ROUGE-N)
  • Longest Common Subsequence (ROUGE-L)
  • Skip Bigram (ROUGE-S)
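
For intuition, ROUGE-N boils down to n-gram recall of the candidate against the reference. Here is a rough, standalone sketch of ROUGE-1 (unigram) recall, purely illustrative and not the code this library uses:

// Illustrative ROUGE-1 (unigram recall): what fraction of reference words
// also appear in the candidate?
const ref  = 'police killed the gunman'.split(' ');
const cand = 'police kill the gunman'.split(' ');

const overlap = ref.filter(w => cand.includes(w)).length;  // 3 (police, the, gunman)
const rouge1  = overlap / ref.length;                      // 3 / 4 = 0.75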

Rationale

ROUGE is more or less the standard metric for evaluating the performance of auto-summarization algorithms. However, with the exception of MEAD (which is written in Perl. Yes. Perl.), obtaining a copy of ROUGE to work with requires navigating a barely functional webpage, filling out forms, and signing a legal release somewhere along the way. These hurdles no doubt exist for good reason, but they get irritating when all one wants to do is benchmark an algorithm.

Nevertheless, the paper describing ROUGE is available for public consumption. The appropriate course of action, then, is to convert the equations in the paper into a more user-friendly format, which is what this repository does. So there. No more forms. See how much easier life could be for everyone if we all stopped writing legalese and making people click submit buttons?

Quick Start

This package is available on npm. Install it like so:

npm install --save rouge

To use it, simply require the package:

import * as rouge from 'rouge';   // ES2015

// OR

var rouge = require('rouge');   // ES5

A small but growing number of tests exist. To run them:

npm test

This should give you many lines of colorful text in your CLI. Naturally, you'll need to have Mocha installed, but you knew that already.

NOTE: Function test coverage is 100%, but branch coverage numbers look horrible because the current testing implementation has no way of accounting for the additional code injected by Babel when transpiling from ES2015 to ES5. A fix is in the pipeline, but if anyone has anything good, feel free to PR!
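
If you want to contribute a test case, a minimal Mocha spec might look like the sketch below. It assumes the default options kick in when the opts argument is omitted, and it uses Node's built-in assert rather than whatever assertion library the existing suite uses:

import assert from 'assert';
import { n as rougeN } from 'rouge';

describe('rouge.n', () => {
  it('scores an identical candidate and reference as 1', () => {
    const text = 'police killed the gunman';
    assert.strictEqual(rougeN(text, text), 1);
  });
});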

Usage

Rouge.js provides three functions:

  • ROUGE-N: rouge.n(cand, ref, opts)
  • ROUGE-L: rouge.l(cand, ref, opts)
  • ROUGE-S: rouge.s(cand, ref, opts)

All functions take in a candidate string, a reference string, and a configuration object specifying additional options. Documentation for the options is provided inline in lib/rouge.js. Type signatures are specified and checked using Flowtype.
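
For instance, a minimal sketch calling each metric on the same pair of strings (this assumes sensible defaults kick in when the options object is omitted; see lib/rouge.js for the actual option names):

import { n, l, s } from 'rouge';

const ref  = 'police killed the gunman';
const cand = 'police kill the gunman';

n(cand, ref);   // ROUGE-N with the default n-gram size
l(cand, ref);   // ROUGE-L (longest common subsequence)
s(cand, ref);   // ROUGE-S (skip bigram)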

Here's an example evaluating ROUGE-L using an averaged-F1 score instead of the DUC-F1:

import { l as rougeL } from 'rouge';

const ref = 'police killed the gunman';
const cand = 'police kill the gunman';

rougeL(cand, ref, { beta: 0.5 });
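
For context, the beta parameter controls the F-measure that combines LCS-based recall and precision. The formula below is taken from the original ROUGE paper; treat it as an assumption about what the library computes rather than a reading of its internals:

// F-measure from the ROUGE paper: F = ((1 + beta^2) * R * P) / (R + beta^2 * P)
// beta trades off recall against precision; beta = 1 weights them equally.
const fMeasure = (recall, precision, beta) => {
  const b2 = beta * beta;
  return ((1 + b2) * recall * precision) / (recall + b2 * precision);
};

fMeasure(0.75, 0.75, 0.5);  // 0.75 (whenever R === P, the F-score equals both)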

In addition, the main functions rely on a battery of utility functions specified in lib/utils.js. These handle things like quick evaluation of skip bigrams, string tokenization, sentence segmentation, and set intersections.
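
As an example of what those utilities compute, a skip bigram is just an ordered pair of words from a sentence, gaps allowed. A standalone sketch of the idea (the exported helper in lib/utils.js may well have a different name and signature):

// All ordered word pairs, allowing arbitrary gaps between the two words.
const skipBigrams = (words) => {
  const pairs = [];
  for (let i = 0; i < words.length - 1; i += 1) {
    for (let j = i + 1; j < words.length; j += 1) {
      pairs.push(`${words[i]} ${words[j]}`);
    }
  }
  return pairs;
};

skipBigrams(['police', 'killed', 'the', 'gunman']);
// ['police killed', 'police the', 'police gunman',
//  'killed the', 'killed gunman', 'the gunman']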

Here's an example applying jackknife resampling as described in the original paper:

import { n as rougeN } from 'rouge';
import { jackKnife } from 'utils';

const ref = 'police killed the gunman';
const cands = [
  'police kill the gunman',
  'the gunman kill police',
  'the gunman police killed',
];

// Standard evaluation taking the arithmetic mean
jackKnife(cands, ref, rougeN);

// A function that returns the max value in an array
const distMax = (arr) => Math.max(...arr);

// Modified evaluation taking the distribution maximum
jackKnife(cands, ref, rougeN, distMax);
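
If you are wondering what jackKnife does conceptually: following the original paper, jackknife resampling scores each leave-one-out subset (keeping the best score within the subset) and then aggregates those scores, by default with the arithmetic mean. The sketch below is a from-scratch illustration of that procedure under those assumptions, not the implementation in lib/utils.js:

// Leave-one-out jackknife: drop one candidate at a time, keep the best score
// among the rest, then aggregate the per-fold maxima (mean unless overridden).
const jackKnifeSketch = (cands, ref, evalFn, stat) => {
  const aggregate = stat || ((xs) => xs.reduce((a, b) => a + b, 0) / xs.length);
  const folds = cands.map((_, i) =>
    Math.max(...cands.filter((_, j) => j !== i).map((c) => evalFn(c, ref)))
  );
  return aggregate(folds);
};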

Versioning

Development will be maintained under the Semantic Versioning guidelines as much as possible in order to ensure transparency and backwards compatibility.

Releases will be numbered with the following format:

<major>.<minor>.<patch>

And constructed with the following guidelines:

  • Breaking backward compatibility bumps the major (and resets the minor and patch)
  • New additions without breaking backward compatibility bump the minor (and reset the patch)
  • Bug fixes and miscellaneous changes bump the patch

For more information on SemVer, visit http://semver.org/.

Bug Tracking and Feature Requests

Have a bug or a feature request? Please open a new issue.

Before opening any issue, please search for existing issues and read the Issue Guidelines.

Contributing

Please submit all pull requests against *-wip branches. All code should pass JSHint/ESLint validation. Note that files in /lib are written in ES2015 syntax and transpiled to corresponding files in /dist using Babel. Gulp build pipelines exist and should be used.

The amount of data available for writing tests is unfortunately woefully inadequate. I've tried to be as thorough as possible, but that rules out neither the possibility nor the presence of errors. The gold standard is the DUC dataset, but that too is form-walled and legal-release-walled, which is infuriating. If you have a candidate summary, reference(s), and a verified ROUGE score you do not mind sharing, I would love to add them to the test harness.

License

MIT
