SPRING

This is the repo for SPRING (Symmetric ParsIng aNd Generation), a novel approach to semantic parsing and generation, presented at AAAI 2021.

With SPRING you can perform both state-of-the-art Text-to-AMR parsing and AMR-to-Text generation without many cumbersome external components. If you use the code, please cite this work in your paper:

@inproceedings{bevilacqua-etal-2021-one,
    title = {One {SPRING} to Rule Them Both: {S}ymmetric {AMR} Semantic Parsing and Generation without a Complex Pipeline},
    author = {Bevilacqua, Michele and Blloshmi, Rexhina and Navigli, Roberto},
    booktitle = {Proceedings of AAAI},
    year = {2021}
}

Pretrained Checkpoints

Here we release our best SPRING models, which are based on the DFS linearization.

Text-to-AMR Parsing

AMR-to-Text Generation

If you need the checkpoints of other experiments in the paper, please send us an email.
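
These checkpoints are regular PyTorch .pt files, so you can inspect one before training or evaluating with it. A minimal sketch (the file name is the same placeholder used in the commands below, and the assumption that the checkpoint is a torch-serialized dict is ours, not documented by the repo):

import torch

# Load the checkpoint on CPU and peek at its top-level structure.
checkpoint = torch.load("runs/<checkpoint>.pt", map_location="cpu")
if isinstance(checkpoint, dict):
    print(list(checkpoint.keys()))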

Installation

cd spring
pip install -r requirements.txt
pip install -e .

The code only works with transformers < 3.0 because of a breaking change in positional embeddings. The code works fine with torch 1.5. We recommend using a fresh conda environment.
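
As a sketch, a fresh environment consistent with these constraints might be set up as follows; the pinned versions are assumptions (any transformers release below 3.0 should do), not requirements stated by the repo:

conda create -n spring python=3.8
conda activate spring
pip install "torch==1.5.1" "transformers<3.0"
cd spring
pip install -r requirements.txt
pip install -e .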

Train

Modify config.yaml in configs/. Instructions are provided as comments within the file; also see the appendix of the paper.
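
For orientation only, a config for this kind of seq2seq setup typically exposes fields along the following lines. The key names and values below are hypothetical illustrations and do not necessarily match those in configs/config.yaml:

# Hypothetical sketch of configs/config.yaml -- key names are illustrative only
model: facebook/bart-large                         # pretrained seq2seq backbone (assumed)
train: <AMR-ROOT>/data/amrs/split/training/*.txt   # training graphs (assumed key)
dev: <AMR-ROOT>/data/amrs/split/dev/*.txt          # validation graphs (assumed key)
batch_size: 500                                    # cf. the evaluation commands below
beam_size: 5                                       # beam used at prediction time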

Text-to-AMR

python bin/train.py --config configs/config.yaml --direction amr

Results are saved in runs/

AMR-to-Text

python bin/train.py --config configs/config.yaml --direction text

Results are saved in runs/

Evaluate

Text-to-AMR

python bin/predict_amrs.py \
    --datasets <AMR-ROOT>/data/amrs/split/test/*.txt \
    --gold-path data/tmp/amr2.0/gold.amr.txt \
    --pred-path data/tmp/amr2.0/pred.amr.txt \
    --checkpoint runs/<checkpoint>.pt \
    --beam-size 5 \
    --batch-size 500 \
    --device cuda \
    --penman-linearization --use-pointer-tokens

gold.amr.txt and pred.amr.txt will contain the concatenated gold graphs and the predicted graphs, respectively.
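
Both files hold concatenated PENMAN-encoded graphs, so you can sanity-check them with the penman library (a minimal sketch using the general-purpose AMR library, not a SPRING API; the metadata key is an assumption):

import penman

# Load the predicted graphs and make sure the whole file parses.
graphs = penman.load("data/tmp/amr2.0/pred.amr.txt")
print(len(graphs), "graphs loaded")
# Print the source sentence of the first graph, if stored in its metadata.
print(graphs[0].metadata.get("snt"))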

To reproduce our paper's results, you will also need to run the BLINK entity linking system on the prediction file (data/tmp/amr2.0/pred.amr.txt in the previous code snippet). To do so, you will need to install BLINK and download its models:

git clone https://github.com/facebookresearch/BLINK.git
cd BLINK
pip install -r requirements.txt
sh download_blink_models.sh
cd models
wget http://dl.fbaipublicfiles.com/BLINK//faiss_flat_index.pkl
cd ../..

Then, you will be able to launch the blinkify.py script:

python bin/blinkify.py \
    --datasets data/tmp/amr2.0/pred.amr.txt \
    --out data/tmp/amr2.0/pred.amr.blinkified.txt \
    --device cuda \
    --blink-models-dir BLINK/models

To obtain Smatch scores comparable to those in the literature, you will also need to use the scripts available at https://github.com/mdtux89/amr-evaluation, which report results around 0.3 Smatch points lower than those returned by bin/predict_amrs.py.
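
A sketch of that step, assuming the evaluation.sh entry point that repository provides (check its README for the exact invocation):

git clone https://github.com/mdtux89/amr-evaluation.git
cd amr-evaluation
./evaluation.sh ../data/tmp/amr2.0/pred.amr.blinkified.txt ../data/tmp/amr2.0/gold.amr.txt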

AMR-to-Text

python bin/predict_sentences.py \
    --datasets <AMR-ROOT>/data/amrs/split/test/*.txt \
    --gold-path data/tmp/amr2.0/gold.text.txt \
    --pred-path data/tmp/amr2.0/pred.text.txt \
    --checkpoint runs/<checkpoint>.pt \
    --beam-size 5 \
    --batch-size 500 \
    --device cuda \
    --penman-linearization --use-pointer-tokens

gold.text.txt and pred.text.txt will contain the concatenated gold sentences and the predicted sentences, respectively. For BLEU, chrF++, and METEOR scores to be comparable, you will need to tokenize both gold and predictions using the JAMR tokenizer. To compute BLEU and chrF++, please use bin/eval_bleu.py. For METEOR, use https://www.cs.cmu.edu/~alavie/METEOR/ . For BLEURT, do not tokenize and run the evaluation with https://github.com/google-research/bleurt. Also see the appendix.
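
For a quick sanity check that is not comparable to published numbers (it skips the JAMR tokenization and uses the sacrebleu library instead of bin/eval_bleu.py), a minimal sketch could look like this; it assumes one sentence per line in both files:

from sacrebleu.metrics import BLEU, CHRF

# Read raw (untokenized) hypotheses and references, one sentence per line.
with open("data/tmp/amr2.0/pred.text.txt") as f:
    hyps = [line.strip() for line in f]
with open("data/tmp/amr2.0/gold.text.txt") as f:
    refs = [line.strip() for line in f]

print(BLEU().corpus_score(hyps, [refs]))
print(CHRF(word_order=2).corpus_score(hyps, [refs]))  # word_order=2 gives chrF++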

Linearizations

The commands shown above assume the DFS-based linearization. To use BFS or PENMAN instead, uncomment the relevant lines in configs/config.yaml (for training). As for the evaluation scripts, substitute --penman-linearization --use-pointer-tokens with --use-pointer-tokens for BFS, or with --penman-linearization for PENMAN, as sketched below.
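
Concretely, for the prediction scripts the three variants differ only in these flags (the ... stands for the other arguments shown in the evaluation commands above):

# DFS (default in the snippets above):
python bin/predict_amrs.py ... --penman-linearization --use-pointer-tokens
# BFS:
python bin/predict_amrs.py ... --use-pointer-tokens
# PENMAN:
python bin/predict_amrs.py ... --penman-linearization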

License

This project is released under the CC-BY-NC-SA 4.0 license (see LICENSE). If you use SPRING, please put a link to this repo.

Acknowledgements

The authors gratefully acknowledge the support of the ERC Consolidator Grant MOUSSE No. 726487 and the ELEXIS project No. 731015 under the European Union’s Horizon 2020 research and innovation programme.

This work was supported in part by the MIUR under the grant "Dipartimenti di eccellenza 2018-2022" of the Department of Computer Science of the Sapienza University of Rome.
