Vit Pytorch - Implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in Pytorch
Stars: ✭ 7,199 (+1421.99%)
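For orientation, a minimal usage sketch of the vit-pytorch interface as shown in its README; the hyperparameter values below are illustrative, not recommendations:

```python
import torch
from vit_pytorch import ViT

# Build a ViT image classifier; all sizes below are illustrative.
model = ViT(
    image_size = 256,    # input resolution (256x256)
    patch_size = 32,     # each image is split into 32x32 patches
    num_classes = 1000,  # output classes
    dim = 1024,          # patch/token embedding dimension
    depth = 6,           # number of transformer encoder blocks
    heads = 16,          # attention heads per block
    mlp_dim = 2048       # hidden width of the feed-forward layers
)

img = torch.randn(1, 3, 256, 256)  # one dummy RGB image
preds = model(img)                 # class logits of shape (1, 1000)
```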
transganformer - Implementation of TransGanFormer, an all-attention GAN that combines the findings from the recent GanFormer and TransGan papers
Stars: ✭ 137 (-71.04%)
long-short-transformer - Implementation of Long-Short Transformer, combining local and global inductive biases for attention over long sequences, in Pytorch
Stars: ✭ 103 (-78.22%)
STAM-pytorch - Implementation of STAM (Space Time Attention Model), a pure and simple attention model that reaches SOTA for video classification
Stars: ✭ 109 (-76.96%)
uniformer-pytorch - Implementation of Uniformer, a simple attention and 3D convolutional net that achieved SOTA on a number of video classification tasks, presented at ICLR 2022
Stars: ✭ 90 (-80.97%)
Dalle Pytorch - Implementation / replication of DALL-E, OpenAI's Text to Image Transformer, in Pytorch
Stars: ✭ 3,661 (+674%)
nuwa-pytorch - Implementation of NÜWA, a state-of-the-art attention network for text-to-video synthesis, in Pytorch
Stars: ✭ 347 (-26.64%)
Reformer Pytorch - Reformer, the efficient Transformer, in Pytorch
Stars: ✭ 1,644 (+247.57%)
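As a reference point, a minimal sketch of the reformer-pytorch language-model interface as documented in its README (settings are illustrative):

```python
import torch
from reformer_pytorch import ReformerLM

# Reformer LM: LSH attention plus reversible layers keep memory manageable
# at long sequence lengths. All hyperparameters here are illustrative.
model = ReformerLM(
    num_tokens = 20000,  # vocabulary size
    dim = 512,           # embedding dimension
    depth = 6,           # number of reversible transformer layers
    max_seq_len = 8192,  # long context enabled by LSH attention
    heads = 8,
    causal = True        # autoregressive (left-to-right) masking
)

tokens = torch.randint(0, 20000, (1, 8192))
logits = model(tokens)  # shape (1, 8192, 20000)
```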
OpenDialog - An open-source package for Chinese open-domain conversational chatbots (a Chinese casual-conversation dialogue system with one-click deployment of a WeChat chatbot)
Stars: ✭ 94 (-80.13%)
hexia - A mid-level PyTorch-based framework for Visual Question Answering.
Stars: ✭ 24 (-94.93%)
Optic-Disc-Unet - Attention U-Net model with post-processing for retinal optic disc segmentation
Stars: ✭ 77 (-83.72%)
ChangeFormer - Official PyTorch implementation of our IGARSS'22 paper: A Transformer-Based Siamese Network for Change Detection
Stars: ✭ 220 (-53.49%)
cineast - Cineast is a multi-feature content-based multimedia retrieval engine. It can retrieve images, audio and video sequences, and 3D models based on edge or color sketches, textual descriptions, and example objects.
Stars: ✭ 51 (-89.22%)
LSTM-Attention - A Comparison of LSTMs and Attention Mechanisms for Forecasting Financial Time Series
Stars: ✭ 53 (-88.79%)
Ask2Transformers - A framework for zero-shot text classification based on textual entailment
Stars: ✭ 102 (-78.44%)
NARRE - Our implementation of NARRE: Neural Attentional Regression with Review-level Explanations
Stars: ✭ 100 (-78.86%)
En-transformer - Implementation of E(n)-Transformer, which extends the ideas of Welling's E(n)-Equivariant Graph Neural Network to attention
Stars: ✭ 131 (-72.3%)
jax-models - Unofficial JAX implementations of deep learning research papers
Stars: ✭ 108 (-77.17%)
axial-attention - Implementation of Axial Attention, for attending to multi-dimensional data efficiently
Stars: ✭ 245 (-48.2%)
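A minimal sketch of the axial-attention interface as documented in its README; the idea is to attend along one axis at a time rather than over the full flattened input (values below are illustrative):

```python
import torch
from axial_attention import AxialAttention

img = torch.randn(1, 3, 256, 256)  # (batch, channels, height, width)

# Attention is applied along height and width separately, which is far
# cheaper than full attention over all 256*256 positions at once.
attn = AxialAttention(
    dim = 3,             # embedding dimension (the channel axis here)
    dim_index = 1,       # which tensor axis holds the embedding
    heads = 1,
    num_dimensions = 2   # number of axial dimensions (height, width)
)

out = attn(img)  # output has the same shape as the input
```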
CIAN - Implementation of the Character-level Intra Attention Network (CIAN) for Natural Language Inference (NLI) on the SNLI and MultiNLI corpora
Stars: ✭ 17 (-96.41%)
amta-net - Asymmetric Multi-Task Attention Network for Prostate Bed Segmentation in CT Images
Stars: ✭ 26 (-94.5%)
Transformer-MM-Explainability - [ICCV 2021 Oral] Official PyTorch implementation of "Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers", a novel method to visualize any Transformer-based network, with examples for DETR and VQA.
Stars: ✭ 484 (+2.33%)
text - Using Transformers from HuggingFace in R
Stars: ✭ 66 (-86.05%)
plexus - Interactive Emotion Visualization based on Social Media
Stars: ✭ 27 (-94.29%)
X-Transformer - Taming Pretrained Transformers for eXtreme Multi-label Text Classification
Stars: ✭ 127 (-73.15%)
question generator - An NLP system for generating reading comprehension questions
Stars: ✭ 188 (-60.25%)
Transformer-in-PyTorch - Transformer / Transformer-XL / R-Transformer examples and explanations
Stars: ✭ 21 (-95.56%)
code-transformer - Implementation of the paper "Language-agnostic representation learning of source code from structure and context".
Stars: ✭ 130 (-72.52%)
image embeddings - Using EfficientNet to provide embeddings for retrieval
Stars: ✭ 107 (-77.38%)
nlp workshop odsc europe20 - Extensive tutorials for the Advanced NLP Workshop at the Open Data Science Conference Europe 2020. We leverage machine learning, deep learning, and deep transfer learning to solve popular NLP tasks including NER, classification, recommendation / information retrieval, summarization, language translation, Q&A and T…
Stars: ✭ 127 (-73.15%)
beir - A Heterogeneous Benchmark for Information Retrieval. Easy to use: evaluate your models across 15+ diverse IR datasets.
Stars: ✭ 738 (+56.03%)
CVPR2020 PADS - (CVPR 2020) Code for "PADS: Policy-Adapted Sampling for Visual Similarity Learning", which proposes learnable triplet mining with reinforcement learning.
Stars: ✭ 57 (-87.95%)
TransQuest - Transformer-based translation quality estimation
Stars: ✭ 85 (-82.03%)
memory-compressed-attention - Implementation of Memory-Compressed Attention, from the paper "Generating Wikipedia by Summarizing Long Sequences"
Stars: ✭ 47 (-90.06%)
tf retrieval baseline - A TensorFlow retrieval (space embedding) baseline. Metric learning baseline on CUB and Stanford Online Products.
Stars: ✭ 39 (-91.75%)
dgcnn - Clean and documented TF2 implementation of "An end-to-end deep learning architecture for graph classification" (M. Zhang et al., 2018).
Stars: ✭ 21 (-95.56%)
palladian - Palladian is a Java-based toolkit with functionality for text processing, classification, information extraction, and data retrieval from the Web.
Stars: ✭ 32 (-93.23%)
rnn-text-classification-tf - TensorFlow implementation of attention-based bidirectional RNN text classification.
Stars: ✭ 26 (-94.5%)
SA-DL - Sentiment analysis with deep learning models, implemented with TensorFlow and Keras.
Stars: ✭ 35 (-92.6%)
naru - Neural Relation Understanding: neural cardinality estimators for tabular data
Stars: ✭ 76 (-83.93%)
awesome-huggingface - 🤗 A list of wonderful open-source projects & applications integrated with Hugging Face libraries.
Stars: ✭ 436 (-7.82%)
Im2LaTeX - An implementation of the Show, Attend and Tell paper in TensorFlow, for the OpenAI Im2LaTeX suggested problem
Stars: ✭ 16 (-96.62%)
shrec17 - Supplementary code for the SHREC 2017 RGB-D Object-to-CAD Retrieval track
Stars: ✭ 27 (-94.29%)
KB-ALBERT - A Korean ALBERT model specialized for the economics/finance domain, provided by KB Kookmin Bank
Stars: ✭ 215 (-54.55%)
LIT - [AAAI 2022] Official PyTorch implementation of "Less is More: Pay Less Attention in Vision Transformers"
Stars: ✭ 79 (-83.3%)
KnowledgeEditor - Code for "Editing Factual Knowledge in Language Models"
Stars: ✭ 86 (-81.82%)
thermostat - A collection of NLP model explanations and accompanying analysis tools
Stars: ✭ 126 (-73.36%)
SnowflakeNet - (TPAMI 2022) Snowflake Point Deconvolution for Point Cloud Completion and Generation with Skip-Transformer
Stars: ✭ 74 (-84.36%)
oreilly-bert-nlp - Code for the O'Reilly Live Online Training for BERT
Stars: ✭ 19 (-95.98%)
clip-italian - CLIP (Contrastive Language–Image Pre-training) for Italian
Stars: ✭ 113 (-76.11%)