
sheng-z / Stog

License: MIT
AMR Parsing as Sequence-to-Graph Transduction

Programming Languages

Python

Projects that are alternatives to or similar to Stog

foo input amr
AMR-NB decoder for foobar2000
Stars: ✭ 15 (-87.8%)
Mutual labels:  amr
bitpit
Open source library for scientific HPC
Stars: ✭ 80 (-34.96%)
Mutual labels:  amr
Rtpdump
Extract audio file from RTP streams in pcap format
Stars: ✭ 54 (-56.1%)
Mutual labels:  amr
grins
Multiphysics Finite Element package built on libMesh
Stars: ✭ 45 (-63.41%)
Mutual labels:  amr
linorobot2
Autonomous mobile robots (2WD, 4WD, Mecanum Drive)
Stars: ✭ 97 (-21.14%)
Mutual labels:  amr
Ffmpegcommand
FFmpegCommand is an FFmpeg command library for Android that handles audio and video processing quickly. Features include: audio/video cutting, transcoding, decoding to raw data, encoding, video-to-image/GIF conversion, watermarking, multi-screen stitching, audio mixing, video brightness/contrast adjustment, audio fade-in/fade-out effects, and more.
Stars: ✭ 394 (+220.33%)
Mutual labels:  amr
amr
Cornell AMR Semantic Parser (Artzi et al., EMNLP 2015)
Stars: ✭ 23 (-81.3%)
Mutual labels:  amr
Trixi.jl
A tree-based numerical simulation framework for hyperbolic PDEs written in Julia
Stars: ✭ 72 (-41.46%)
Mutual labels:  amr
amrlib
A python library that makes AMR parsing, generation and visualization simple.
Stars: ✭ 107 (-13.01%)
Mutual labels:  amr
Mfem
Lightweight, general, scalable C++ library for finite element methods
Stars: ✭ 667 (+442.28%)
Mutual labels:  amr
asap
A scalable bacterial genome assembly, annotation and analysis pipeline
Stars: ✭ 47 (-61.79%)
Mutual labels:  amr
AMPE
Adaptive Mesh Phase-field Evolution
Stars: ✭ 18 (-85.37%)
Mutual labels:  amr
Libmesh
libMesh github repository
Stars: ✭ 450 (+265.85%)
Mutual labels:  amr
amr-wind
AMReX-based structured wind solver
Stars: ✭ 46 (-62.6%)
Mutual labels:  amr
Neuralamr
Sequence-to-sequence models for AMR parsing and generation
Stars: ✭ 60 (-51.22%)
Mutual labels:  amr
MeterLogger
Wireless MeterLogger for Kamstrup heat energy meters and pulse based meters
Stars: ✭ 20 (-83.74%)
Mutual labels:  amr
Libstreaming
A solution for streaming H.264, H.263, AMR, AAC using RTP on Android
Stars: ✭ 3,167 (+2474.8%)
Mutual labels:  amr
Rtlamr
An rtl-sdr receiver for Itron ERT compatible smart meters operating in the 900MHz ISM band.
Stars: ✭ 1,326 (+978.05%)
Mutual labels:  amr
Penman
PENMAN notation (e.g. AMR) in Python
Stars: ✭ 63 (-48.78%)
Mutual labels:  amr
Moose
Multiphysics Object Oriented Simulation Environment
Stars: ✭ 652 (+430.08%)
Mutual labels:  amr

AMR Parsing as Sequence-to-Graph Transduction

Code for the AMR Parser in our ACL 2019 paper "AMR Parsing as Sequence-to-Graph Transduction".
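
For readers new to the task, Abstract Meaning Representation (AMR) encodes a sentence's meaning as a rooted, directed, labeled graph. For example, "The boy wants to go" in PENMAN notation:

```
(w / want-01
   :ARG0 (b / boy)
   :ARG1 (g / go-01
            :ARG0 b))
```

The parser transduces the input token sequence directly into such a graph.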

If you find our code useful, please cite:

@inproceedings{zhang-etal-2019-stog,
    title = "{AMR Parsing as Sequence-to-Graph Transduction}",
    author = "Zhang, Sheng and
      Ma, Xutai and
      Duh, Kevin and
      Van Durme, Benjamin",
    booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = jul,
    year = "2019",
    address = "Florence, Italy",
    publisher = "Association for Computational Linguistics"
}

1. Environment Setup

The code has been tested on Python 3.6 and PyTorch 0.4.1. All other dependencies are listed in requirements.txt.

Via conda:

conda create -n stog python=3.6
source activate stog
pip install -r requirements.txt
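
As a quick sanity check before installing (a convenience snippet, not part of the repo's scripts), you can verify that the interpreter matches the tested version:

```python
import sys

def version_matches(info, expected=(3, 6)):
    """Return True when the interpreter's (major, minor) equals expected."""
    return tuple(info[:2]) == expected

if not version_matches(sys.version_info):
    print("Warning: this code was tested on Python 3.6, "
          "found %d.%d" % tuple(sys.version_info[:2]))
```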

2. Data Preparation

Download Artifacts:

./scripts/download_artifacts.sh

Assuming that you're working on AMR 2.0 (LDC2017T10), extract the corpus to data/AMR/LDC2017T10, and make sure it has the following structure:

(stog)$ tree data/AMR/LDC2017T10 -L 2
data/AMR/LDC2017T10
├── data
│   ├── alignments
│   ├── amrs
│   └── frames
├── docs
│   ├── AMR-alignment-format.txt
│   ├── amr-guidelines-v1.2.pdf
│   ├── file.tbl
│   ├── frameset.dtd
│   ├── PropBank-unification-notes.txt
│   └── README.txt
└── index.html
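
A small helper like the following (hypothetical, not part of the repo) can confirm the layout before running the preparation script:

```python
import os

# Subdirectories that the data preparation step expects under the corpus root.
REQUIRED = ("data/alignments", "data/amrs", "data/frames", "docs")

def corpus_ready(root):
    """Return True if every required subdirectory exists under root."""
    return all(os.path.isdir(os.path.join(root, d)) for d in REQUIRED)

# corpus_ready("data/AMR/LDC2017T10") should be True after extraction.
```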

Prepare training/dev/test data:

./scripts/prepare_data.sh -v 2 -p data/AMR/LDC2017T10

3. Feature Annotation

We use Stanford CoreNLP (version 3.9.2) for lemmatizing, POS tagging, etc.

First, start a CoreNLP server following the API documentation.

Then, annotate AMR sentences:

./scripts/annotate_features.sh data/AMR/amr_2.0
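
Under the hood, annotation talks to the CoreNLP server over its HTTP API. A minimal sketch of how such a request URL is built (annotator names follow the CoreNLP server documentation; the exact set used by annotate_features.sh may differ):

```python
import json
import urllib.parse

def build_annotate_url(host="http://localhost:9000",
                       annotators="tokenize,ssplit,pos,lemma,ner"):
    """Build a CoreNLP server URL carrying the annotation properties."""
    props = {"annotators": annotators, "outputFormat": "json"}
    return host + "/?properties=" + urllib.parse.quote(json.dumps(props))

# The sentence itself is sent as the POST body to this URL.
```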

4. Data Preprocessing

./scripts/preprocess_2.0.sh

5. Training

Make sure that you have at least two GeForce GTX TITAN X GPUs to train the full model.

python -u -m stog.commands.train params/stog_amr_2.0.yaml

6. Prediction

python -u -m stog.commands.predict \
    --archive-file ckpt-amr-2.0 \
    --weights-file ckpt-amr-2.0/best.th \
    --input-file data/AMR/amr_2.0/test.txt.features.preproc \
    --batch-size 32 \
    --use-dataset-reader \
    --cuda-device 0 \
    --output-file test.pred.txt \
    --silent \
    --beam-size 5 \
    --predictor STOG

7. Data Postprocessing

./scripts/postprocess_2.0.sh test.pred.txt

8. Evaluation

Note that the evaluation tool requires Python 2, so make sure python2 is visible in your $PATH.

./scripts/compute_smatch.sh test.pred.txt data/AMR/amr_2.0/test.txt
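
Smatch scores a predicted AMR against the gold AMR via the F1 of matched triples under the best variable mapping. The final score reduces to the computation below (a sketch only; the search over variable mappings is what the python2 tool implements):

```python
def smatch_f1(matched, n_pred, n_gold):
    """F1 of matched triples between predicted and gold AMR graphs."""
    precision = matched / n_pred if n_pred else 0.0
    recall = matched / n_gold if n_gold else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# e.g. 8 matched triples out of 10 predicted and 10 gold gives F1 = 0.8
```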

Pre-trained Models

Here are pre-trained models: ckpt-amr-2.0.tar.gz and ckpt-amr-1.0.tar.gz. To use them for prediction, download and extract them, then run Steps 6-8.

If you only need the pre-trained model's predictions (i.e., test.pred.txt), you can find them in the download.

Acknowledgements

We adopted some modules or code snippets from AllenNLP, OpenNMT-py and NeuroNLP2. Thanks to these open-source projects!

License

MIT
