FragmentVC - Any-to-any voice conversion by end-to-end extracting and fusing fine-grained voice fragments with attention
Stars: ⭐ 134 (+2.29%)
NLP-paper - 🚨🚨 NLP (Natural Language Processing) tutorial 🚨🚨 https://dataxujing.github.io/NLP-paper/
Stars: ⭐ 23 (-82.44%)
h-transformer-1d - Implementation of H-Transformer-1D, Hierarchical Attention for Sequence Learning
Stars: ⭐ 121 (-7.63%)
OverlapPredator - [CVPR 2021, Oral] PREDATOR: Registration of 3D Point Clouds with Low Overlap.
Stars: ⭐ 293 (+123.66%)
Awesome Bert Nlp - A curated list of NLP resources focused on BERT, attention mechanism, Transformer networks, and transfer learning.
Stars: ⭐ 567 (+332.82%)
Eqtransformer - EQTransformer, a Python package for earthquake signal detection and phase picking using AI.
Stars: ⭐ 95 (-27.48%)
visualization - A collection of visualization functions.
Stars: ⭐ 189 (+44.27%)
linformer - Implementation of Linformer for PyTorch
Stars: ⭐ 119 (-9.16%)
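For context, Linformer makes self-attention linear in sequence length by projecting the key and value sequences down to a fixed length k with learned projections. Below is a minimal single-head sketch of that idea; it is an illustration only, and the class and parameter names are assumptions, not this repository's API.

```python
# Minimal sketch of Linformer-style attention (single head, batch-first);
# illustrative only, not the linformer repository's actual API.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LinformerSelfAttention(nn.Module):
    def __init__(self, dim, seq_len, k=64):
        super().__init__()
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_k = nn.Linear(dim, dim, bias=False)
        self.to_v = nn.Linear(dim, dim, bias=False)
        # Learned projections that compress the sequence axis from n to k
        self.proj_k = nn.Parameter(torch.randn(seq_len, k) / seq_len ** 0.5)
        self.proj_v = nn.Parameter(torch.randn(seq_len, k) / seq_len ** 0.5)
        self.scale = dim ** -0.5

    def forward(self, x):                                  # x: (batch, n, dim)
        q, k, v = self.to_q(x), self.to_k(x), self.to_v(x)
        k = torch.einsum('bnd,nk->bkd', k, self.proj_k)    # (batch, k, dim)
        v = torch.einsum('bnd,nk->bkd', v, self.proj_v)    # (batch, k, dim)
        attn = F.softmax(q @ k.transpose(1, 2) * self.scale, dim=-1)  # (batch, n, k)
        return attn @ v                                    # (batch, n, dim)
```

Because the attention matrix is (n, k) instead of (n, n), both memory and compute scale linearly in n for fixed k.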
Transformer Tts - A PyTorch implementation of "Neural Speech Synthesis with Transformer Network"
Stars: ⭐ 418 (+219.08%)
Nmt Keras - Neural Machine Translation with Keras
Stars: ⭐ 501 (+282.44%)
Eeg Dl - A deep learning library for EEG (signal) classification tasks, based on TensorFlow.
Stars: ⭐ 165 (+25.95%)
pynmt - A simple and complete PyTorch implementation of a neural machine translation system
Stars: ⭐ 13 (-90.08%)
Routing Transformer - Fully featured implementation of Routing Transformer
Stars: ⭐ 149 (+13.74%)
CrabNet - Predict materials properties using only the composition information!
Stars: ⭐ 57 (-56.49%)
Neural sp - End-to-end ASR/LM implementation with PyTorch
Stars: ⭐ 408 (+211.45%)
Transformer - A TensorFlow implementation of the Transformer: Attention Is All You Need
Stars: ⭐ 3,646 (+2683.21%)
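The core operation of the paper this repository implements is scaled dot-product attention, Attention(Q, K, V) = softmax(QKᵀ/√d_k)V. A minimal NumPy sketch of that formula (illustrative only, not this repository's code):

```python
# Scaled dot-product attention from "Attention Is All You Need";
# a self-contained NumPy illustration, not the repository's TensorFlow code.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q: (n_q, d_k), K: (n_k, d_k), V: (n_k, d_v)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                       # (n_q, n_k) logits
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)        # row-wise softmax
    return weights @ V                                    # (n_q, d_v)

Q = np.random.randn(4, 8); K = np.random.randn(6, 8); V = np.random.randn(6, 16)
out = scaled_dot_product_attention(Q, K, V)               # shape (4, 16)
```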
Se3 Transformer Pytorch - Implementation of SE3-Transformers for Equivariant Self-Attention, in PyTorch. This specific repository is geared towards integration with an eventual Alphafold2 replication.
Stars: ⭐ 73 (-44.27%)
enformer-pytorch - Implementation of Enformer, DeepMind's attention network for predicting gene expression, in PyTorch
Stars: ⭐ 146 (+11.45%)
Image-Caption - Using an LSTM or Transformer to solve image captioning in PyTorch
Stars: ⭐ 36 (-72.52%)
Transformer-in-Transformer - An implementation of Transformer in Transformer in TensorFlow for image classification, with attention inside local patches
Stars: ⭐ 40 (-69.47%)
Pytorch Original Transformer - My implementation of the original transformer model (Vaswani et al.). I've additionally included the playground.py file for visualizing otherwise seemingly hard concepts. IWSLT pretrained models are currently included.
Stars: ⭐ 411 (+213.74%)
galerkin-transformer - [NeurIPS 2021] Galerkin Transformer: linear attention without softmax
Stars: ⭐ 111 (-15.27%)
Overlappredator - [CVPR 2021, Oral] PREDATOR: Registration of 3D Point Clouds with Low Overlap.
Stars: ⭐ 106 (-19.08%)
dodrio - Exploring attention weights in transformer-based models with linguistic knowledge.
Stars: ⭐ 233 (+77.86%)
Self Attention Cv - Implementation of various self-attention mechanisms focused on computer vision. Ongoing repository.
Stars: ⭐ 209 (+59.54%)
Sockeye - Sequence-to-sequence framework with a focus on Neural Machine Translation, based on Apache MXNet
Stars: ⭐ 990 (+655.73%)
Linear Attention Transformer - Transformer based on a variant of attention that has linear complexity with respect to sequence length
Stars: ⭐ 205 (+56.49%)
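The usual trick behind linear attention is to replace softmax with a positive feature map φ and regroup (φ(Q)φ(K)ᵀ)V as φ(Q)(φ(K)ᵀV), so cost grows linearly rather than quadratically with sequence length. A sketch using the elu+1 feature map of Katharopoulos et al. (an assumption; the attention variant this repository actually implements differs in detail):

```python
# Sketch of the associativity trick behind linear attention:
# phi(Q) (phi(K)^T V) costs O(n * d * d_v) instead of O(n^2).
# Illustrative only, not this repository's implementation.
import torch

def linear_attention(q, k, v, eps=1e-6):
    """q, k: (batch, n, d); v: (batch, n, d_v)."""
    phi = lambda t: torch.nn.functional.elu(t) + 1        # positive feature map
    q, k = phi(q), phi(k)
    kv = torch.einsum('bnd,bne->bde', k, v)               # (batch, d, d_v), O(n)
    z = 1.0 / (torch.einsum('bnd,bd->bn', q, k.sum(dim=1)) + eps)  # normalizer
    return torch.einsum('bnd,bde,bn->bne', q, kv, z)      # (batch, n, d_v)
```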
Transformers-RL - An easy PyTorch implementation of "Stabilizing Transformers for Reinforcement Learning"
Stars: ⭐ 107 (-18.32%)
sister - SImple SenTence EmbeddeR
Stars: ⭐ 66 (-49.62%)
hexia - Mid-level PyTorch-based framework for Visual Question Answering.
Stars: ⭐ 24 (-81.68%)
TransBTS - This repo provides the official code for: 1) TransBTS: Multimodal Brain Tumor Segmentation Using Transformer (https://arxiv.org/abs/2103.04430), accepted by MICCAI 2021; and 2) TransBTSV2: Towards Better and More Efficient Volumetric Segmentation of Medical Images (https://arxiv.org/abs/2201.12785).
Stars: ⭐ 254 (+93.89%)
kaggle-champs - Code for the CHAMPS Predicting Molecular Properties Kaggle competition
Stars: ⭐ 49 (-62.6%)
transformer - A simple TensorFlow implementation of the Transformer
Stars: ⭐ 25 (-80.92%)
cape - Continuous Augmented Positional Embeddings (CAPE) implementation for PyTorch
Stars: ⭐ 29 (-77.86%)
libai - LiBai (李白): A Toolbox for Large-Scale Distributed Parallel Training
Stars: ⭐ 284 (+116.79%)
Cross-lingual-Summarization - Zero-Shot Cross-Lingual Abstractive Sentence Summarization through Teaching Generation and Attention
Stars: ⭐ 28 (-78.63%)
svelte-jest - Jest Svelte component transformer
Stars: ⭐ 37 (-71.76%)
YaEtl - Yet Another ETL in PHP
Stars: ⭐ 60 (-54.2%)
LSTM-Attention - A Comparison of LSTMs and Attention Mechanisms for Forecasting Financial Time Series
Stars: ⭐ 53 (-59.54%)
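A common shape for this kind of comparison is an LSTM encoder whose hidden states are pooled by a learned attention layer before the forecast head. A minimal sketch follows; the layer sizes and names are assumptions for illustration, not taken from this repository.

```python
# Minimal LSTM + additive attention pooling for a one-step forecast;
# hyperparameters and names are illustrative assumptions.
import torch
import torch.nn as nn

class LSTMAttentionForecaster(nn.Module):
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.score = nn.Linear(hidden, 1)     # scores each timestep
        self.head = nn.Linear(hidden, 1)      # one-step-ahead prediction

    def forward(self, x):                     # x: (batch, time, n_features)
        h, _ = self.lstm(x)                   # (batch, time, hidden)
        w = torch.softmax(self.score(h), dim=1)   # attention weights over time
        context = (w * h).sum(dim=1)          # weighted sum of hidden states
        return self.head(context)             # (batch, 1)
```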
STAM-pytorch - Implementation of STAM (Space Time Attention Model), a pure and simple attention model that reaches SOTA for video classification
Stars: ⭐ 109 (-16.79%)
Neural-Scam-Artist - Web Scraping, Document Deduplication & GPT-2 Fine-tuning with a newly created scam dataset.
Stars: ⭐ 18 (-86.26%)
amta-net - Asymmetric Multi-Task Attention Network for Prostate Bed Segmentation in CT Images
Stars: ⭐ 26 (-80.15%)
php-serializer - Serialize PHP variables, including objects, in any format. Supports unserializing them too.
Stars: ⭐ 47 (-64.12%)
imdb-transformer - A simple neural network for sentiment analysis, embedding sentences using a Transformer network.
Stars: ⭐ 26 (-80.15%)
VideoTransformer-pytorch - PyTorch implementation of a collection of scalable Video Transformer benchmarks.
Stars: ⭐ 159 (+21.37%)
SA-DL - Sentiment Analysis with Deep Learning models. Implemented with TensorFlow and Keras.
Stars: ⭐ 35 (-73.28%)
transformer-ls - Official PyTorch implementation of Long-Short Transformer (NeurIPS 2021).
Stars: ⭐ 201 (+53.44%)
ru-dalle - Generate images from text. In Russian.
Stars: ⭐ 1,606 (+1125.95%)
fastT5 - ⚡ Boost inference speed of T5 models by 5x & reduce the model size by 3x.
Stars: ⭐ 421 (+221.37%)
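Based on the project's documented usage pattern, exporting a T5 model to quantized ONNX and generating with it looks roughly like the sketch below; treat the exact function name export_and_get_onnx_model as an assumption if the API has changed.

```python
# Usage sketch following fastT5's documented pattern; the function name
# export_and_get_onnx_model is assumed from the project's README.
from fastT5 import export_and_get_onnx_model
from transformers import AutoTokenizer

model_name = "t5-small"
model = export_and_get_onnx_model(model_name)   # exports & quantizes to ONNX
tokenizer = AutoTokenizer.from_pretrained(model_name)

batch = tokenizer("translate English to French: cheese", return_tensors="pt")
tokens = model.generate(input_ids=batch["input_ids"],
                        attention_mask=batch["attention_mask"],
                        num_beams=2)
print(tokenizer.decode(tokens.squeeze(), skip_special_tokens=True))
```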
dgcnn - Clean & documented TF2 implementation of "An end-to-end deep learning architecture for graph classification" (M. Zhang et al., 2018).
Stars: ⭐ 21 (-83.97%)
SymmetricRL - Repo for "On Learning Symmetric Locomotion"
Stars: ⭐ 30 (-77.1%)
TabFormer - Code & Data for "Tabular Transformers for Modeling Multivariate Time Series" (ICASSP 2021)
Stars: ⭐ 209 (+59.54%)
Optic-Disc-Unet - Attention U-Net model with post-processing for retinal optic disc segmentation
Stars: ⭐ 77 (-41.22%)