VidVRD-helper: To keep up to date with the VRU Grand Challenge, please use https://github.com/NExTplusplus/VidVRD-helper
Stars: ✭ 81 (-67.34%)
ntu-x: NTU-X, an extended version of the popular NTU dataset
Stars: ✭ 55 (-77.82%)
ttslearn: Library for the book "Pythonで学ぶ音声合成" (Text-to-Speech with Python)
Stars: ✭ 158 (-36.29%)
UntrimmedNet: Weakly Supervised Action Recognition and Detection
Stars: ✭ 152 (-38.71%)
Code: ECG Classification
Stars: ✭ 78 (-68.55%)
Caver: a toolkit for multilabel text classification.
Stars: ✭ 38 (-84.68%)
nuwa-pytorch: Implementation of NÜWA, a state-of-the-art attention network for text-to-video synthesis, in PyTorch
Stars: ✭ 347 (+39.92%)
QuantumForest: Fast differentiable forest library with the advantages of both decision trees and neural networks
Stars: ✭ 63 (-74.6%)
PAM: [TPAMI 2020] Parallax Attention for Unsupervised Stereo Correspondence Learning
Stars: ✭ 62 (-75%)
Ntp: End-to-End Differentiable Proving
Stars: ✭ 74 (-70.16%)
Transformer-in-Transformer: An implementation of Transformer in Transformer in TensorFlow for image classification, with attention inside local patches
Stars: ✭ 40 (-83.87%)
Hnatt: Train and visualize Hierarchical Attention Networks
Stars: ✭ 192 (-22.58%)
Robust-Deep-Learning-Pipeline: Deep Convolutional Bidirectional LSTM for Complex Activity Recognition with Missing Data. Human Activity Recognition Challenge, Springer SIST (2020)
Stars: ✭ 20 (-91.94%)
Sarcasm Detection: Detecting sarcasm on Twitter using both traditional machine learning and deep learning techniques.
Stars: ✭ 73 (-70.56%)
FragmentVC: Any-to-any voice conversion by end-to-end extraction and fusion of fine-grained voice fragments with attention
Stars: ✭ 134 (-45.97%)
nystrom-attention: Implementation of Nyström self-attention, from the Nyströmformer paper
Stars: ✭ 83 (-66.53%)
Se3 Transformer Pytorch: Implementation of SE3-Transformers for Equivariant Self-Attention, in PyTorch. This repository is geared toward integration with an eventual AlphaFold2 replication.
Stars: ✭ 73 (-70.56%)
NTUA-slp-nlp: 💻 Speech and Natural Language Processing (SLP & NLP) lab assignments for ECE NTUA
Stars: ✭ 19 (-92.34%)
Ta3n: [ICCV 2019 (Oral)] Temporal Attentive Alignment for Large-Scale Video Domain Adaptation (PyTorch)
Stars: ✭ 217 (-12.5%)
keras-deep-learning: Various implementations and projects involving CNNs, RNNs, LSTMs, GANs, etc.
Stars: ✭ 22 (-91.13%)
Hake Action: Part of the HAKE project; includes the reproduced SOTA models and the corresponding HAKE-enhanced versions (CVPR 2020).
Stars: ✭ 72 (-70.97%)
AoA-pytorch: A PyTorch implementation of the Attention on Attention module (both self and guided variants) for Visual Question Answering
Stars: ✭ 33 (-86.69%)
Seq2seq chatbot new: A TensorFlow implementation of a simple seq2seq-based dialogue system, with embedding, attention, beam search, and other features; the dataset is Cornell Movie Dialogs
Stars: ✭ 144 (-41.94%)
video repres mas: Code for the CVPR 2019 paper "Self-supervised Spatio-temporal Representation Learning for Videos by Predicting Motion and Appearance Statistics"
Stars: ✭ 63 (-74.6%)
OverlapPredator: [CVPR 2021, Oral] PREDATOR: Registration of 3D Point Clouds with Low Overlap.
Stars: ✭ 293 (+18.15%)
SelfAttentive: Implementation of "A Structured Self-attentive Sentence Embedding"
Stars: ✭ 107 (-56.85%)
Bamnet: Code & data accompanying the NAACL 2019 paper "Bidirectional Attentive Memory Networks for Question Answering over Knowledge Bases"
Stars: ✭ 140 (-43.55%)
SANET: Arbitrary Style Transfer with Style-Attentional Networks
Stars: ✭ 105 (-57.66%)
deep-steg: Global NIPS Paper Implementation Challenge entry for "Hiding Images in Plain Sight: Deep Steganography"
Stars: ✭ 43 (-82.66%)
Action recognition zoo: Code for popular action recognition models, verified on the Something-Something dataset.
Stars: ✭ 227 (-8.47%)
Global Self Attention Network: A PyTorch implementation of Global Self-Attention Network, a fully attentional backbone for vision tasks
Stars: ✭ 64 (-74.19%)
Attention: Repository for an attention algorithm
Stars: ✭ 39 (-84.27%)
Document Classifier Lstm: A bidirectional LSTM with attention for multiclass/multilabel text classification.
Stars: ✭ 136 (-45.16%)
3HAN: An original implementation of "3HAN: A Deep Neural Network for Fake News Detection" (ICONIP 2017)
Stars: ✭ 29 (-88.31%)
Learning2run: Our source code for the NIPS 2017 "Learning to Run" challenge
Stars: ✭ 57 (-77.02%)
domain-attention: Code for the paper "Domain Attention Model for Multi-Domain Sentiment Classification"
Stars: ✭ 22 (-91.13%)
Graph attention pool: Attention over nodes in graph neural networks using PyTorch (NeurIPS 2019)
Stars: ✭ 186 (-25%)
abcnn pytorch: Implementation of ABCNN (Attention-Based Convolutional Neural Network) in PyTorch
Stars: ✭ 35 (-85.89%)
Deep Steganography: Hiding images within other images using deep learning
Stars: ✭ 136 (-45.16%)
torch-lrcn: An implementation of LRCN in Torch
Stars: ✭ 85 (-65.73%)
Resgcnv1: ResGCN, an efficient baseline for skeleton-based human action recognition.
Stars: ✭ 50 (-79.84%)
Generative Inpainting Pytorch: A PyTorch reimplementation of the paper "Generative Image Inpainting with Contextual Attention" (https://arxiv.org/abs/1801.07892)
Stars: ✭ 242 (-2.42%)
Aoanet: Code for the paper "Attention on Attention for Image Captioning" (ICCV 2019)
Stars: ✭ 242 (-2.42%)
Ms G3d: [CVPR 2020 Oral] PyTorch implementation of "Disentangling and Unifying Graph Convolutions for Skeleton-Based Action Recognition"
Stars: ✭ 225 (-9.27%)
Lightnetplusplus: LightNet++: Boosted Light-weighted Networks for Real-time Semantic Segmentation
Stars: ✭ 218 (-12.1%)
Step: STEP: Spatio-Temporal Progressive Learning for Video Action Detection (CVPR'19, Oral)
Stars: ✭ 196 (-20.97%)
Hart: Hierarchical Attentive Recurrent Tracking
Stars: ✭ 149 (-39.92%)
Deepaffinity: Protein-compound affinity prediction through a unified RNN-CNN
Stars: ✭ 75 (-69.76%)