
Gsunshine / Enjoy Hamburger

License: GPL-3.0
[ICLR 2021] Is Attention Better Than Matrix Decomposition?

Programming Languages

python

Projects that are alternatives of or similar to Enjoy Hamburger

Nlp tensorflow project
Uses TensorFlow to implement several NLP projects, e.g., classification, chatbot, NER, attention, QA, etc.
Stars: ✭ 27 (-60.87%)
Mutual labels:  attention
Attentions
PyTorch implementation of some attentions for Deep Learning Researchers.
Stars: ✭ 39 (-43.48%)
Mutual labels:  attention
Fluence
A deep learning library based on PyTorch, focused on low-resource language research and robustness
Stars: ✭ 54 (-21.74%)
Mutual labels:  attention
Isab Pytorch
An implementation of (Induced) Set Attention Block, from the Set Transformers paper
Stars: ✭ 21 (-69.57%)
Mutual labels:  attention
Attentioncluster
TensorFlow Implementation of "Attention Clusters: Purely Attention Based Local Feature Integration for Video Classification"
Stars: ✭ 33 (-52.17%)
Mutual labels:  attention
Sentences pair similarity calculation siamese lstm
A Keras implementation of Attention-based Siamese Manhattan LSTM
Stars: ✭ 48 (-30.43%)
Mutual labels:  attention
Pytorch Gat
My implementation of the original GAT paper (Veličković et al.). I've additionally included the playground.py file for visualizing the Cora dataset, GAT embeddings, an attention mechanism, and entropy histograms. I've supported both Cora (transductive) and PPI (inductive) examples!
Stars: ✭ 908 (+1215.94%)
Mutual labels:  attention
Global Self Attention Network
A PyTorch implementation of Global Self-Attention Network, a fully-attention backbone for vision tasks
Stars: ✭ 64 (-7.25%)
Mutual labels:  attention
Min Cost Flow Class
C++ solvers for Minimum Cost Flow Problems
Stars: ✭ 36 (-47.83%)
Mutual labels:  optimization-algorithms
Pointer Networks Experiments
Sorting numbers with pointer networks
Stars: ✭ 53 (-23.19%)
Mutual labels:  attention
Banglatranslator
Bangla Machine Translator
Stars: ✭ 21 (-69.57%)
Mutual labels:  attention
Attentive Neural Processes
Implementing "Recurrent Attentive Neural Processes" to forecast power usage (with LSTM baseline and MC Dropout)
Stars: ✭ 33 (-52.17%)
Mutual labels:  attention
Time Attention
Implementation of RNN for Time Series prediction from the paper https://arxiv.org/abs/1704.02971
Stars: ✭ 52 (-24.64%)
Mutual labels:  attention
Mindseye
Neural Networks in Java 8 with CuDNN and Aparapi
Stars: ✭ 8 (-88.41%)
Mutual labels:  optimization-algorithms
Yolov4 Pytorch
This is a pytorch repository of YOLOv4, attentive YOLOv4 and mobilenet YOLOv4 with PASCAL VOC and COCO
Stars: ✭ 1,070 (+1450.72%)
Mutual labels:  attention
Cell Detr
Official and maintained implementation of the paper Attention-Based Transformers for Instance Segmentation of Cells in Microstructures [BIBM 2020].
Stars: ✭ 26 (-62.32%)
Mutual labels:  attention
Biblosa Pytorch
Re-implementation of Bi-Directional Block Self-Attention for Fast and Memory-Efficient Sequence Modeling (T. Shen et al., ICLR 2018) in PyTorch.
Stars: ✭ 43 (-37.68%)
Mutual labels:  attention
Deeplearning Nlp Models
A small, interpretable codebase containing the re-implementation of a few "deep" NLP models in PyTorch. Colab notebooks to run with GPUs. Models: word2vec, CNNs, transformer, gpt.
Stars: ✭ 64 (-7.25%)
Mutual labels:  attention
Attention Over Attention Tf Qa
Implementation of the AoA model from the paper "Attention-over-Attention Neural Networks for Reading Comprehension"
Stars: ✭ 58 (-15.94%)
Mutual labels:  attention
Text Classification Keras
📚 Text classification library with Keras
Stars: ✭ 53 (-23.19%)
Mutual labels:  attention

Enjoy-Hamburger 🍔

Official implementation of Hamburger, Is Attention Better Than Matrix Decomposition? (ICLR 2021)

Under construction.

Introduction

This repo provides the official implementation of Hamburger for further research. We sincerely hope that this paper can bring you some inspiration about the attention mechanism, especially about how low-rankness and optimization-driven methods can help model the so-called global information in deep learning.

We model the global context issue as a low-rank completion problem and show that its optimization algorithms can help design global information blocks. The paper then proposes a series of Hamburgers, in which we employ optimization algorithms for solving matrix decompositions (MDs) to factorize the input representation into sub-matrices and reconstruct a low-rank embedding. Hamburgers with different MDs can perform favorably against the popular global context module, self-attention, when the gradients back-propagated through the MDs are handled carefully.
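
To make the recipe concrete, here is a minimal, self-contained PyTorch sketch of an NMF-based Hamburger: a 1x1-conv "lower bread", a non-negative matrix factorization "ham" solved by multiplicative updates, and a 1x1-conv "upper bread". The module name, hyper-parameters, and initialization below are illustrative assumptions, and the gradient is truncated to the last update step in the spirit of the paper's one-step gradient; the official implementation lives in seg/.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class NMFHamburger(nn.Module):
        """Illustrative sketch of a Hamburger-style global context block."""

        def __init__(self, channels, rank=64, steps=6):
            super().__init__()
            self.rank, self.steps = rank, steps
            self.lower_bread = nn.Conv2d(channels, channels, 1)  # lower bread
            self.upper_bread = nn.Conv2d(channels, channels, 1)  # upper bread

        def forward(self, x):
            b, c, h, w = x.shape
            # ReLU keeps the input matrix non-negative, as NMF requires.
            z = F.relu(self.lower_bread(x)).view(b, c, h * w)
            # Random non-negative init of bases D (b, c, r) and codes S (b, r, hw).
            d = x.new_empty(b, c, self.rank).uniform_(0, 1)
            s = x.new_empty(b, self.rank, h * w).uniform_(0, 1)
            # Multiplicative updates; no gradient is tracked through the iterations.
            with torch.no_grad():
                for _ in range(self.steps):
                    s = s * (d.transpose(1, 2) @ z) / (d.transpose(1, 2) @ d @ s + 1e-6)
                    d = d * (z @ s.transpose(1, 2)) / (d @ s @ s.transpose(1, 2) + 1e-6)
            # One extra update with gradient enabled: the one-step gradient trick.
            s = s * (d.transpose(1, 2) @ z) / (d.transpose(1, 2) @ d @ s + 1e-6)
            ham = (d @ s).view(b, c, h, w)  # low-rank reconstruction of the input
            return x + self.upper_bread(ham)  # residual connection

    # Usage: y = NMFHamburger(512)(torch.randn(2, 512, 32, 32))  # same shape out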


We are working on some exciting topics. Please wait for our new papers!

Enjoy Hamburger, please!

Organization

This section introduces the organization of this repo.

We strongly recommend reading the blog (coming soon) as a supplement to the paper!

  • blog.
    • Some random thoughts about Hamburger and beyond.
    • Possible directions based on Hamburger.
    • FAQ.
  • seg.
    • We provide the PyTorch implementation of Hamburger (V1) described in the paper and an enhanced version (V2) flavored with Cheese. Some experimental features are included in V2+.
    • We release the codebase for systematic research on the PASCAL VOC dataset, including the two-stage training on the trainaug and trainval datasets and the MSFlip test.
    • We offer three checkpoints of HamNet: one reaches 85.90+ (test server link), while the other two reach 85.80+ (test server link 1 and link 2). You can reproduce the test results using the checkpoints combined with the MSFlip test code (a simplified sketch of multi-scale flip testing follows this list).
    • Statistics about HamNet that might ease further research.
  • gan.
    • Official implementation of Hamburger in TensorFlow.
    • Data preprocessing code for using ImageNet in tensorflow-datasets. (Possibly useful if you hope to run the JAX code of BYOL or other ImageNet training code on Cloud TPUs.)
    • Training and evaluation protocol of HamGAN on ImageNet.
    • Checkpoints of HamGAN-strong and HamGAN-baby.
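
For readers curious about the MSFlip evaluation mentioned above, the sketch below shows the usual recipe: average softmax predictions over several scales and their horizontal flips. The function name, the scale set, and the model interface are hypothetical assumptions; the repo's own MSFlip test code in seg/ is the reference.

    import torch
    import torch.nn.functional as F

    @torch.no_grad()
    def msflip_predict(model, image, scales=(0.75, 1.0, 1.25)):
        """Multi-scale + horizontal-flip test for a segmentation model.

        `image` has shape [1, 3, H, W]; `model` returns per-pixel logits.
        """
        _, _, h, w = image.shape
        prob_sum = 0.0
        for scale in scales:
            scaled = F.interpolate(image, scale_factor=scale,
                                   mode='bilinear', align_corners=False)
            for flip in (False, True):
                inp = torch.flip(scaled, dims=[3]) if flip else scaled
                logits = model(inp)
                if flip:  # undo the flip so predictions align spatially
                    logits = torch.flip(logits, dims=[3])
                logits = F.interpolate(logits, size=(h, w),
                                       mode='bilinear', align_corners=False)
                prob_sum = prob_sum + logits.softmax(dim=1)
        return prob_sum.argmax(dim=1)  # [1, H, W] per-pixel class map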

TODO:

  • [ ] README doc for HamGAN.
  • [ ] PyTorch Hamburger with less encapsulation.
  • [ ] Suggestions for using and further developing Hamburger.
  • [ ] Blog in both English and Chinese.
  • [ ] We are also considering adding a collection of popular context modules to this repo. It depends on our time. No guarantee. Perhaps GuGu 🕊️ (Chinese slang for standing someone up).

Citation

If you find our work interesting or helpful to your research, please consider citing Hamburger. :)

@inproceedings{ham,
    title={Is Attention Better Than Matrix Decomposition?},
    author={Zhengyang Geng and Meng-Hao Guo and Hongxu Chen and Xia Li and Ke Wei and Zhouchen Lin},
    booktitle={International Conference on Learning Representations},
    year={2021},
}

Contact

Feel free to contact me if you have additional questions or are interested in collaborating. Please drop me an email at [email protected]. Find me on Twitter. Thank you!

Responses to recent emails may be slightly delayed until March 26th due to the ICLR deadlines. I am sorry, but people are always deadline-driven. QAQ

Acknowledgments

Our research is supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC). It has been a nice and joyful experience with the TFRC program. Thank you!

We would like to sincerely thank EMANet, PyTorch-Encoding, YLG, and TF-GAN for their awesome released code.
