- **FNet-pytorch**: Unofficial implementation of Google's "FNet: Mixing Tokens with Fourier Transforms".
- **graphtrans**: Representing Long-Range Context for Graph Neural Networks with Global Attention.
- **speech-transformer**: Transformer implementation specialized for speech recognition tasks, using PyTorch.
- **Xpersona**: XPersona: Evaluating Multilingual Personalized Chatbot.
- **PDN**: The official PyTorch implementation of "Pathfinder Discovery Networks for Neural Message Passing" (WebConf '21).
- **towhee**: Towhee is a framework dedicated to making neural data processing pipelines simple and fast.
- **transform-graphql**: ⚙️ Transformer function to transform GraphQL directives, e.g. creating a model CRUD directive.
- **YOLOv5-Lite**: 🍅 YOLOv5-Lite: lighter, faster, and easier to deploy. Evolved from YOLOv5; the model is only 930+ KB (int8) or 1.7 MB (fp16) and reaches 10+ FPS on the Raspberry Pi 4B at a 320×320 input size.
- **TransPose**: PyTorch implementation of "TransPose: Keypoint Localization via Transformer", ICCV 2021.
- **transformer**: A PyTorch implementation of "Attention Is All You Need".
- **BossNAS**: (ICCV 2021) BossNAS: Exploring Hybrid CNN-transformers with Block-wisely Self-supervised Neural Architecture Search.
- **OpenPrompt**: An open-source framework for prompt-learning.
- **GTSRB Keras STN**: German Traffic Sign Recognition Benchmark, Keras implementation with Spatial Transformer Networks.
- **cometa**: Corpus of Online Medical EnTities: the cometA corpus.
- **pytorch-gpt-x**: Implementation of an autoregressive language model using an improved Transformer and DeepSpeed pipeline parallelism.
- **NiuTrans.NMT**: A fast neural machine translation system, developed in C++ on top of NiuTensor for fast tensor APIs.
- **YOLOS**: You Only Look at One Sequence (NeurIPS 2021).
- **verseagility**: Ramp up your custom natural language processing (NLP) task: bring your own data, use your preferred frameworks, and take models into production.
- **zero**: Zero, a neural machine translation system.
- **wenet**: Production-first and production-ready end-to-end speech recognition toolkit.
- **dodrio**: Exploring attention weights in transformer-based models with linguistic knowledge.
- **Conformer**: Official code for "Conformer: Local Features Coupling Global Representations for Visual Recognition".
- **AdaSpeech**: AdaSpeech: Adaptive Text to Speech for Custom Voice.
- **Context-Transformer**: Context-Transformer: Tackling Object Confusion for Few-Shot Detection, AAAI 2020.
- **Transformer-Transducer**: PyTorch implementation of "Transformer Transducer: A Streamable Speech Recognition Model with Transformer Encoders and RNN-T Loss" (ICASSP 2020).
- **ICON**: (TPAMI 2022) Salient Object Detection via Integrity Learning.
- **NLP-paper**: 🎨 NLP (natural language processing) tutorial: https://dataxujing.github.io/NLP-paper/
- **SegFormer**: Official PyTorch implementation of SegFormer.
- **CrabNet**: Predict materials properties using only the composition information!
- **TDRG**: Transformer-based Dual Relation Graph for Multi-label Image Recognition, ICCV 2021.
- **sticker2**: Further developed as SyntaxDot: https://github.com/tensordot/syntaxdot
- **Graphormer**: Graphormer is a deep learning package that lets researchers and developers train custom models for molecule modeling tasks, aiming to accelerate research and applications in AI for molecular science, such as material design and drug discovery.
- **KitanaQA**: KitanaQA: Adversarial training and data augmentation for neural question-answering models.
- **h-transformer-1d**: Implementation of H-Transformer-1D, hierarchical attention for sequence learning.
- **php-json-api**: JSON API transformer outputting valid (PSR-7) API responses.
- **DolboNet**: Russian-language Discord chatbot built on the Transformer architecture.
- **TRAR-VQA**: [ICCV 2021] TRAR: Routing the Attention Spans in Transformers for Visual Question Answering, official implementation.