robustness-vit: Contains code for the paper "Vision Transformers are Robust Learners" (AAAI 2022).
Stars: ✭ 78 (+271.43%)
Mutual labels: transformers, self-attention
iPerceive: Applying Common-Sense Reasoning to Multi-Modal Dense Video Captioning and Video Question Answering | Python3 | PyTorch | CNNs | Causality | Reasoning | LSTMs | Transformers | Multi-Head Self Attention | Published in IEEE Winter Conference on Applications of Computer Vision (WACV) 2021
Stars: ✭ 52 (+147.62%)
Mutual labels: transformers, self-attention
Nlp Architect: A model library for exploring state-of-the-art deep learning topologies and techniques for optimizing Natural Language Processing neural networks
Stars: ✭ 2,768 (+13080.95%)
Mutual labels: transformers
naru: Neural Relation Understanding, neural cardinality estimators for tabular data
Stars: ✭ 76 (+261.9%)
Mutual labels: transformers
gpl: Powerful unsupervised domain adaptation method for dense retrieval. Requires only an unlabeled corpus and yields massive improvements: "GPL: Generative Pseudo Labeling for Unsupervised Domain Adaptation of Dense Retrieval" https://arxiv.org/abs/2112.07577
Stars: ✭ 216 (+928.57%)
Mutual labels: transformers
COCO-LM: Correcting and Contrasting Text Sequences for Language Model Pretraining (NeurIPS 2021)
Stars: ✭ 109 (+419.05%)
Mutual labels: transformers
MASTER-pytorch: Code for the paper "MASTER: Multi-Aspect Non-local Network for Scene Text Recognition" (Pattern Recognition 2021)
Stars: ✭ 263 (+1152.38%)
Mutual labels: self-attention
Nn: 🧑‍🏫 50 implementations/tutorials of deep learning papers with side-by-side notes 📝, including transformers (original, XL, Switch, Feedback, ViT, ...), optimizers (Adam, AdaBelief, ...), GANs (CycleGAN, StyleGAN2, ...), 🎮 reinforcement learning (PPO, DQN), CapsNet, distillation, ... 🧠
Stars: ✭ 5,720 (+27138.1%)
Mutual labels: transformers
TransQuest: Transformer-based translation quality estimation
Stars: ✭ 85 (+304.76%)
Mutual labels: transformers
query-selector: Long-Term Series Forecasting with QuerySelector, an Efficient Model of Sparse Attention
Stars: ✭ 63 (+200%)
Mutual labels: self-attention
KB-ALBERT: A Korean ALBERT model specialized for the economics/finance domain, provided by KB Kookmin Bank
Stars: ✭ 215 (+923.81%)
Mutual labels: transformers
seq2seq-pytorch: Sequence-to-Sequence Models in PyTorch
Stars: ✭ 41 (+95.24%)
Mutual labels: self-attention
thermostat: Collection of NLP model explanations and accompanying analysis tools
Stars: ✭ 126 (+500%)
Mutual labels: transformers
nlp-papers: Must-read papers on Natural Language Processing (NLP)
Stars: ✭ 87 (+314.29%)
Mutual labels: transformers
R-MeN: Transformer-based Memory Networks for Knowledge Graph Embeddings (ACL 2020) (PyTorch and TensorFlow)
Stars: ✭ 74 (+252.38%)
Mutual labels: self-attention
Pytorch Sentiment Analysis: Tutorials on getting started with PyTorch and TorchText for sentiment analysis.
Stars: ✭ 3,209 (+15180.95%)
Mutual labels: transformers
Fengshenbang-LM: Fengshenbang-LM (封神榜大模型) is an open-source large-model ecosystem led by the Cognitive Computing and Natural Language Research Center of the IDEA Institute, serving as infrastructure for Chinese AIGC and cognitive intelligence.
Stars: ✭ 1,813 (+8533.33%)
Mutual labels: transformers
SnowflakeNet: Snowflake Point Deconvolution for Point Cloud Completion and Generation with Skip-Transformer (TPAMI 2022)
Stars: ✭ 74 (+252.38%)
Mutual labels: transformers
jax-models: Unofficial JAX implementations of deep learning research papers
Stars: ✭ 108 (+414.29%)
Mutual labels: transformers
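Nearly all of the repositories above carry the transformers or self-attention tags. As a dependency-light reminder of the core operation they build on, here is a minimal NumPy sketch of single-head scaled dot-product self-attention; the function and variable names are illustrative only and are not taken from any listed repository:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence X of shape (n, d)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # (n, n) pairwise attention logits
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ V                  # (n, d_v) context vectors

rng = np.random.default_rng(0)
n, d, d_k = 4, 8, 8
X = rng.standard_normal((n, d))
Wq, Wk, Wv = (rng.standard_normal((d, d_k)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (4, 8)
```

Real transformer implementations (as in the repositories above) extend this with multiple heads, learned projections, masking, and layer normalization, but the attention core is the same weighted sum over value vectors.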