
shaoxiongji / Fed Att

License: MIT
Attentive Federated Learning for Private Neural Language Modeling (NLM)

Programming Languages

python
139,335 projects - #7 most used programming language

Projects that are alternatives to or similar to Fed Att

Bottleneck Transformer Pytorch
Implementation of Bottleneck Transformer in PyTorch
Stars: ✭ 408 (+1100%)
Mutual labels:  attention-mechanism
Moran v2
MORAN: A Multi-Object Rectified Attention Network for Scene Text Recognition
Stars: ✭ 536 (+1476.47%)
Mutual labels:  attention-mechanism
Pytorch Gat
My implementation of the original GAT paper (Veličković et al.). I've additionally included the playground.py file for visualizing the Cora dataset, GAT embeddings, an attention mechanism, and entropy histograms. I've supported both Cora (transductive) and PPI (inductive) examples!
Stars: ✭ 908 (+2570.59%)
Mutual labels:  attention-mechanism
Transformer Tts
A PyTorch implementation of "Neural Speech Synthesis with Transformer Network"
Stars: ✭ 418 (+1129.41%)
Mutual labels:  attention-mechanism
Keras Self Attention
Attention mechanism for processing sequential data that considers the context for each timestamp.
Stars: ✭ 489 (+1338.24%)
Mutual labels:  attention-mechanism
Awesome Bert Nlp
A curated list of NLP resources focused on BERT, attention mechanism, Transformer networks, and transfer learning.
Stars: ✭ 567 (+1567.65%)
Mutual labels:  attention-mechanism
Paperrobot
Code for PaperRobot: Incremental Draft Generation of Scientific Ideas
Stars: ✭ 372 (+994.12%)
Mutual labels:  attention-mechanism
Show Attend And Tell
TensorFlow Implementation of "Show, Attend and Tell"
Stars: ✭ 869 (+2455.88%)
Mutual labels:  attention-mechanism
Nmt Keras
Neural Machine Translation with Keras
Stars: ✭ 501 (+1373.53%)
Mutual labels:  attention-mechanism
Chatbot cn
A chatbot for the finance and judicial domains (with a casual-chat component). Its main modules include information extraction, NLU, NLG, and a knowledge graph; the front-end display is integrated via Django, and RESTful interfaces for the NLP and KG modules have already been wrapped.
Stars: ✭ 791 (+2226.47%)
Mutual labels:  attention-mechanism
Awesome Graph Classification
A collection of important graph embedding, classification and representation learning papers with implementations.
Stars: ✭ 4,309 (+12573.53%)
Mutual labels:  attention-mechanism
Deeplearning.ai Natural Language Processing Specialization
This repository contains my full work and notes on Coursera's Natural Language Processing Specialization, taught by Younes Bensouda Mourri and Łukasz Kaiser and offered by deeplearning.ai
Stars: ✭ 473 (+1291.18%)
Mutual labels:  attention-mechanism
Pointer summarizer
A PyTorch implementation of "Get To The Point: Summarization with Pointer-Generator Networks"
Stars: ✭ 629 (+1750%)
Mutual labels:  attention-mechanism
Pytorch Original Transformer
My implementation of the original transformer model (Vaswani et al.). I've additionally included the playground.py file for visualizing otherwise seemingly hard concepts. Pretrained IWSLT models are currently included.
Stars: ✭ 411 (+1108.82%)
Mutual labels:  attention-mechanism
Ag Cnn
This is a reimplementation of AG-CNN ("Thorax Disease Classification with Attention Guided Convolutional Neural Network", "Diagnose like a Radiologist: Attention Guided Convolutional Neural Network for Thorax Disease Classification")
Stars: ✭ 27 (-20.59%)
Mutual labels:  attention-mechanism
Neural sp
End-to-end ASR/LM implementation with PyTorch
Stars: ✭ 408 (+1100%)
Mutual labels:  attention-mechanism
Performer Pytorch
An implementation of Performer, a linear attention-based transformer, in PyTorch
Stars: ✭ 546 (+1505.88%)
Mutual labels:  attention-mechanism
Isab Pytorch
An implementation of (Induced) Set Attention Block, from the Set Transformers paper
Stars: ✭ 21 (-38.24%)
Mutual labels:  attention-mechanism
Text classification
all kinds of text classification models and more with deep learning
Stars: ✭ 7,179 (+21014.71%)
Mutual labels:  attention-mechanism
Keras Attention
Visualizing RNNs using the attention mechanism
Stars: ✭ 697 (+1950%)
Mutual labels:  attention-mechanism

Attentive Federated Learning

This repository contains the code for the paper Learning Private Neural Language Modeling with Attentive Aggregation, an attentive extension of federated aggregation. A brief introductory blog post is available here.

Further reference: a general-purpose federated learning repository implemented in PyTorch - Federated Learning - PyTorch.
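
For intuition, the sketch below illustrates the layer-wise attentive aggregation idea described in the paper: the server scores each client by the distance between the client's parameters and its own, turns those scores into attention weights with a softmax, and steps the global model toward the clients accordingly. This is a minimal sketch only; the function name, dict-of-tensors layout, and the epsilon default are illustrative assumptions, not the repository's actual code.

import torch

def attentive_aggregate(w_server, w_clients, epsilon=1.2):
    # w_server: dict mapping layer name -> server parameter tensor
    # w_clients: list of such dicts, one per participating client
    w_next = {}
    for layer in w_server:
        # Layer-wise distance between the server and each client
        dists = torch.stack([torch.norm(w_server[layer] - w_k[layer])
                             for w_k in w_clients])
        # Attention weights over clients for this layer
        att = torch.softmax(dists, dim=0)
        # Step the global parameters toward the clients, weighted by attention
        step = sum(a * (w_server[layer] - w_k[layer])
                   for a, w_k in zip(att, w_clients))
        w_next[layer] = w_server[layer] - epsilon * step
    return w_next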

Run

Refer to the README.md under the data folder and download the datasets into their corresponding folders. Then enter the source code folder and run the scripts; arguments are assigned via the argparse package.

cd src
python run.py

See the available configuration options in src/utils/options.py
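
For readers unfamiliar with this pattern, the sketch below shows how such argparse-based configs are typically wired up. The flag names and defaults here are hypothetical, chosen for illustration; the actual options are defined in src/utils/options.py.

import argparse

def args_parser():
    parser = argparse.ArgumentParser()
    # Hypothetical flags for a federated training run
    parser.add_argument('--epochs', type=int, default=10,
                        help='number of communication rounds (hypothetical)')
    parser.add_argument('--lr', type=float, default=0.01,
                        help='client learning rate (hypothetical)')
    parser.add_argument('--agg', type=str, default='att',
                        help='aggregation scheme, e.g. avg or att (hypothetical)')
    return parser.parse_args()

if __name__ == '__main__':
    print(args_parser())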

Requirements

Python 3.6
PyTorch 0.4.1
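
A minimal environment setup, assuming a pip-based install (platform- or CUDA-specific builds may need wheels from the PyTorch download archive instead):

pip install torch==0.4.1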

Cite

@inproceedings{ji2019learning,
  title={Learning Private Neural Language Modeling with Attentive Aggregation},
  author={Ji, Shaoxiong and Pan, Shirui and Long, Guodong and Li, Xue and Jiang, Jing and Huang, Zi},
  booktitle={International Joint Conference on Neural Networks (IJCNN)},
  year={2019}
}