
xiongma / Transformer Pointer Generator

License: MIT
An Abstractive Summarization Implementation with Transformer and Pointer-Generator

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives to or similar to Transformer Pointer Generator

galerkin-transformer
[NeurIPS 2021] Galerkin Transformer: a linear attention without softmax
Stars: ✭ 111 (-62.63%)
Mutual labels:  transformer
Nlp Interview Notes
This project contains study notes and materials for natural language processing (NLP) interview preparation, compiled from the authors' own interviews and experience; it currently includes accumulated interview questions from the various subfields of NLP.
Stars: ✭ 207 (-30.3%)
Mutual labels:  transformer
Transformer
Easy Attributed String Creator
Stars: ✭ 278 (-6.4%)
Mutual labels:  transformer
Swin-Transformer-Tensorflow
Unofficial implementation of "Swin Transformer: Hierarchical Vision Transformer using Shifted Windows" (https://arxiv.org/abs/2103.14030)
Stars: ✭ 45 (-84.85%)
Mutual labels:  transformer
bert in a flask
A dockerized flask API, serving ALBERT and BERT predictions using TensorFlow 2.0.
Stars: ✭ 32 (-89.23%)
Mutual labels:  transformer
Remi
"Pop Music Transformer: Beat-based Modeling and Generation of Expressive Pop Piano Compositions", ACM Multimedia 2020
Stars: ✭ 273 (-8.08%)
Mutual labels:  transformer
TextPruner
A PyTorch-based model pruning toolkit for pre-trained language models
Stars: ✭ 94 (-68.35%)
Mutual labels:  transformer
Dab
Data Augmentation by Backtranslation (DAB) ヽ( •_-)ᕗ
Stars: ✭ 294 (-1.01%)
Mutual labels:  transformer
ai challenger 2018 sentiment analysis
Fine-grained Sentiment Analysis of User Reviews --- AI CHALLENGER 2018
Stars: ✭ 16 (-94.61%)
Mutual labels:  transformer
Demo Chinese Text Binary Classification With Bert
Stars: ✭ 276 (-7.07%)
Mutual labels:  transformer
uformer-pytorch
Implementation of Uformer, Attention-based Unet, in Pytorch
Stars: ✭ 54 (-81.82%)
Mutual labels:  transformer
AITQA
resources for the IBM Airlines Table-Question-Answering Benchmark
Stars: ✭ 12 (-95.96%)
Mutual labels:  transformer
Keras Transformer
Transformer implemented in Keras
Stars: ✭ 273 (-8.08%)
Mutual labels:  transformer
SwinIR
SwinIR: Image Restoration Using Swin Transformer (official repository)
Stars: ✭ 1,260 (+324.24%)
Mutual labels:  transformer
Viewpagertransition
viewpager with parallax pages, together with vertical sliding (or click) and activity transition
Stars: ✭ 3,017 (+915.82%)
Mutual labels:  transformer
SIGIR2021 Conure
One Person, One Model, One World: Learning Continual User Representation without Forgetting
Stars: ✭ 23 (-92.26%)
Mutual labels:  transformer
Allrank
allRank is a framework for training learning-to-rank neural models based on PyTorch.
Stars: ✭ 269 (-9.43%)
Mutual labels:  transformer
Vedastr
A scene text recognition toolbox based on PyTorch
Stars: ✭ 290 (-2.36%)
Mutual labels:  transformer
Transformer Tensorflow
Implementation of Transformer Model in Tensorflow
Stars: ✭ 286 (-3.7%)
Mutual labels:  transformer
Transformer
Implementation of Transformer model (originally from Attention is All You Need) applied to Time Series.
Stars: ✭ 273 (-8.08%)
Mutual labels:  transformer

An Abstractive Summarization Implementation with Transformer and Pointer-Generator

When I wanted to produce summaries with a neural network, I tried many ways to generate abstractive summaries, but the results were not good. When I heard about the 2018 Byte Cup, I looked up information on it, and the champion's solution attracted me. However, after searching websites such as GitHub and GitLab, I could not find an official implementation, so I decided to implement it myself.

Requirements

  • python==3.x (let's move on to Python 3 if you are still using Python 2)
  • tensorflow==1.12.0
  • tqdm>=4.28.1
  • jieba>=0.3x
  • sumeval>=0.2.0
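
For reference, the dependencies above can be installed in one line (the version pins simply follow the list; adjust the jieba version as needed):

pip install "tensorflow==1.12.0" "tqdm>=4.28.1" jieba "sumeval>=0.2.0"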

Model Structure

Based

My model is based on the papers Attention Is All You Need and Get To The Point: Summarization with Pointer-Generator Networks.

Change

  • The pointer-generator model has two mechanisms: the copy mechanism and the coverage mechanism. The materials I found show that the coverage mechanism does not suit short summaries, so I did not use it and kept only the copy mechanism.
  • The pointer-generator model has a weakness that can make the loss become NaN. I tried several times to fix it but could not. I think the reason is that when the final logits are computed, the distribution is extended from the vocabulary length to the vocabulary-plus-OOV length, which introduces many more zeros. So I removed the mechanism that extends the final logits and kept only decoding from the article and the vocabulary (see the sketch after this list). One more detail: in this model I just use words rather than a large vocabulary, an idea borrowed from BERT.
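
For readers who want to see what the copy mechanism looks like, here is a minimal NumPy sketch of the standard pointer-generator final distribution (the function and argument names are illustrative and are not code from this repository); the scatter of the copy distribution onto an extended vocabulary is exactly the step described above that leaves many zero entries:

import numpy as np

def pointer_generator_dist(p_vocab, attn, p_gen, src_ids, extended_vocab_size):
    # p_vocab: softmax over the fixed vocabulary, shape (vocab_size,)
    # attn:    attention weights over the source tokens, shape (src_len,)
    # p_gen:   scalar in [0, 1], probability of generating from the vocabulary
    # src_ids: ids of the source tokens in the extended vocabulary, shape (src_len,)
    final = np.zeros(extended_vocab_size)
    # Generation part: the vocabulary distribution scaled by p_gen,
    # padded with zeros for the OOV slots.
    final[:p_vocab.shape[0]] = p_gen * p_vocab
    # Copy part: scatter-add the attention weights onto the source token ids.
    # The zero entries left for extended-vocabulary ids that never get probability
    # mass are what the author suspects drove the loss to NaN, and why this
    # extension step was removed in this project.
    np.add.at(final, src_ids, (1.0 - p_gen) * attn)
    return final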

Structure

Training

  • STEP 1. Download the dataset (the extraction password is ayn6). The dataset is a preprocessed version of LCSTS, so its structure looks very different from the original LCSTS files: in each file, every line contains an abstract and an article separated by "," (see the small parsing sketch after these steps). If you worry that the amount of data differs from LCSTS, don't: the amount is the same as LCSTS.
  • STEP 2. Run the following command.
python train.py
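
As a quick illustration of the line format described in STEP 1, a minimal reader sketch (the function name, file path, and encoding are placeholders, not code from this repository):

def load_pairs(path):
    # Read the preprocessed LCSTS file: each line is 'abstract,article'.
    pairs = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            # Split on the first comma only, so commas inside the article survive.
            abstract, article = line.split(",", 1)
            pairs.append((abstract, article))
    return pairs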

Check hparams.py to see which parameters are possible. For example,

python train.py --logdir myLog --batch_size 32 --train myTrain --eval myEval

My code also supports multi-GPU training. If you have more than one GPU, run:

python train.py --logdir myLog --batch_size 32 --train myTrain --eval myEval --gpu_nums=myGPUNums
The main hyperparameters are:

| name | type | detail |
|------|------|--------|
| vocab_size | int | vocabulary size |
| train | str | training dataset directory |
| eval | str | evaluation dataset directory |
| test | str | dataset used to calculate the ROUGE score |
| vocab | str | vocabulary file path |
| batch_size | int | training batch size |
| eval_batch_size | int | evaluation batch size |
| lr | float | learning rate |
| warmup_steps | int | warmup steps for the learning rate schedule |
| logdir | str | log directory |
| num_epochs | int | number of training epochs |
| evaldir | str | evaluation directory |
| d_model | int | hidden dimension of encoder/decoder |
| d_ff | int | hidden dimension of the feed-forward layer |
| num_blocks | int | number of encoder/decoder blocks |
| num_heads | int | number of attention heads |
| maxlen1 | int | maximum length of a source sequence |
| maxlen2 | int | maximum length of a target sequence |
| dropout_rate | float | dropout rate |
| beam_size | int | beam size for decoding |
| gpu_nums | int | number of GPUs to use for training, default 1 |
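
For example, a run that overrides a few of these (the flag names are assumed to match hparams.py, and the values are only illustrative):

python train.py --logdir myLog --train myTrain --eval myEval --vocab myVocab --batch_size 32 --num_epochs 20 --maxlen1 400 --maxlen2 50 --beam_size 4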

Note

Don't change the Transformer hyper-parameters unless you have a good solution; changing them can keep the loss from going down! If you do have a good solution, I hope you will tell me.

Evaluation

Loss

  • Transformer-Pointer generator (loss curve)
  • Transformer (loss curve)

As you can see, the Transformer-Pointer generator model makes the loss go down very quickly!

If you like this project and find it useful, I hope you will star it.
