License: MIT

tf-attentive-conv: Attentive Convolution

Han Xiao [email protected]

What is it?

This is a TensorFlow implementation of Yin Wenpeng's TACL 2018 paper "Attentive Convolution". Wenpeng's original code is written in Theano.

I only implement the light attentive convolution described in Sect. 3.1 of the paper. The authors argue that even this light version of AttConv outperforms some pioneering attentive RNNs in both the intra-context (context = query, i.e. self-attention) and extra-context (context != query) settings. The following figure (from the paper) illustrates the idea:
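As a rough illustration, here is a minimal NumPy sketch (not the repo's code) of what light AttConv computes, under assumed shapes: each query position attends over the context by dot product, its attentive context is concatenated to its own hidden state, and a convolution runs over the result. The function name, window width, and tanh nonlinearity are my assumptions for the sketch.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def light_attentive_conv(query, context, w, b):
    """Sketch of light AttConv (cf. Sect. 3.1), with assumed shapes:
    query:   (n, d)  hidden states of the query sentence
    context: (m, d)  hidden states of the context (== query for self-attention)
    w:       (3, 2*d, d_out) filter for a width-3 convolution window
    b:       (d_out,) bias
    """
    scores = query @ context.T              # (n, m) dot-product attention scores
    weights = softmax(scores, axis=-1)      # row-wise softmax over context positions
    attn_ctx = weights @ context            # (n, d) attentive context per query position
    # concatenate each hidden state with its attentive context
    z = np.concatenate([query, attn_ctx], axis=-1)   # (n, 2*d)
    # width-3 convolution with zero padding on both ends
    zp = np.pad(z, ((1, 1), (0, 0)))
    out = np.stack([
        np.tanh(sum(zp[i + k] @ w[k] for k in range(3)) + b)
        for i in range(len(query))
    ])
    return out  # (n, d_out)
```

Setting `context = query` gives the intra-context (self-attention) case; passing a different sentence gives the extra-context case.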

What did I change?

Nothing major. I added two features:

  1. a dropout-resnet-layernorm block before the output;
  2. causal masking, so that one may use the model for decoding as well.

By default these features are all disabled.
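The two optional features can be sketched in plain NumPy as follows (function names and defaults are my assumptions, not the repo's API): the causal mask zeroes attention to future positions before the softmax, and the output block applies dropout, a residual connection, and layer normalization.

```python
import numpy as np

def causal_attention_weights(scores):
    """Causal masking: position i may only attend to positions j <= i,
    so the layer can be used autoregressively for decoding.
    scores: (n, n) raw self-attention scores."""
    n = scores.shape[0]
    # set future positions to -inf before the softmax
    mask = np.triu(np.ones((n, n), dtype=bool), k=1)
    masked = np.where(mask, -np.inf, scores)
    e = np.exp(masked - masked.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def dropout_resnet_layernorm(x, sublayer_out, drop_prob=0.1, training=False):
    """dropout -> residual connection -> layer norm over the last axis."""
    if training:
        keep = np.random.random(sublayer_out.shape) >= drop_prob
        sublayer_out = sublayer_out * keep / (1.0 - drop_prob)  # inverted dropout
    y = x + sublayer_out                      # residual connection
    mu = y.mean(axis=-1, keepdims=True)
    sigma = y.std(axis=-1, keepdims=True)
    return (y - mu) / (sigma + 1e-6)          # layer normalization
```

With masking enabled, row i of the attention weights is zero for all j > i, which is what makes left-to-right decoding valid.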

Run

Run app.py for a simple test on toy data.
