Numpy Ml: Machine learning, in NumPy.
Stars: ✭ 11,100 (+13436.59%)
char-VAE: Inspired by the neural style algorithm from computer vision, this project proposes a high-level language model aimed at adapting linguistic style.
Stars: ✭ 18 (-78.05%)
EBIM-NLI: Enhanced BiLSTM Inference Model for Natural Language Inference.
Stars: ✭ 24 (-70.73%)
Deepjazz: Deep-learning-driven jazz generation using Keras & Theano!
Stars: ✭ 2,766 (+3273.17%)
InpaintNet: Code accompanying the ISMIR'19 paper "Learning to Traverse Latent Spaces for Musical Score Inpainting".
Stars: ✭ 48 (-41.46%)
ntua-slp-semeval2018: Deep-learning models of the NTUA-SLP team submitted to SemEval 2018 Tasks 1, 2, and 3.
Stars: ✭ 79 (-3.66%)
visualization: A collection of visualization functions.
Stars: ✭ 189 (+130.49%)
generative deep learning: Generative Deep Learning sessions led by Anugraha Sinha (Machine Learning Tokyo).
Stars: ✭ 24 (-70.73%)
DiffuseVAE: A combination of VAEs and diffusion models for efficient, controllable, and high-fidelity generation from low-dimensional latents.
Stars: ✭ 81 (-1.22%)
Pytorch Original Transformer: My implementation of the original transformer model (Vaswani et al.). I've additionally included the playground.py file for visualizing otherwise hard-to-grasp concepts. IWSLT pretrained models are currently included.
Stars: ✭ 411 (+401.22%)
Attentionn: All about attention in neural networks. Soft attention, attention maps, local and global attention, and multi-head attention (a minimal sketch of soft attention follows below).
Stars: ✭ 175 (+113.41%)
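Since many entries in this list revolve around attention, a minimal NumPy sketch of soft (scaled dot-product) attention may help orient readers. The shapes and names below are illustrative only and are not taken from any listed repository.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)   # subtract max for stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def soft_attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)       # (n_q, n_k) similarities
    weights = softmax(scores)           # attention map: each row sums to 1
    return weights @ V, weights         # weighted sum of values

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))             # 4 queries of dimension 8
K = rng.normal(size=(6, 8))             # 6 keys
V = rng.normal(size=(6, 8))             # 6 values
out, attn_map = soft_attention(Q, K, V)
print(out.shape, attn_map.shape)        # (4, 8) (4, 6)
```

Multi-head attention applies this same operation in parallel over several learned projections of Q, K, and V and concatenates the results.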
keras-utility-layer-collection: A collection of custom layers and utility functions for Keras that are missing from the main framework.
Stars: ✭ 63 (-23.17%)
Anime4k: A high-quality real-time upscaler for anime video.
Stars: ✭ 14,083 (+17074.39%)
style-vae: Implementation of a VAE with a Style-GAN architecture, achieving state-of-the-art reconstruction.
Stars: ✭ 25 (-69.51%)
datastories-semeval2017-task6: Deep-learning model presented in "DataStories at SemEval-2017 Task 6: Siamese LSTM with Attention for Humorous Text Comparison".
Stars: ✭ 20 (-75.61%)
NTUA-slp-nlp: 💻 Speech and Natural Language Processing (SLP & NLP) lab assignments for ECE NTUA.
Stars: ✭ 19 (-76.83%)
vqvae-2: PyTorch implementation of VQ-VAE-2 from "Generating Diverse High-Fidelity Images with VQ-VAE-2" (the core quantization step is sketched below).
Stars: ✭ 65 (-20.73%)
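The operation VQ-VAE-2 builds on is vector quantization: each encoder output is snapped to its nearest codebook vector. A hedged NumPy sketch of that lookup, with codebook size and dimensions invented for illustration:

```python
import numpy as np

def quantize(z_e, codebook):
    """Map each encoder vector in z_e (n, d) to its nearest codebook entry (K, d)."""
    # squared Euclidean distance between every z_e row and every code
    d2 = ((z_e[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (n, K)
    idx = d2.argmin(axis=1)          # index of the nearest code per vector
    return codebook[idx], idx        # quantized vectors z_q and code indices

rng = np.random.default_rng(0)
codebook = rng.normal(size=(512, 64))   # K=512 codes of dimension 64 (illustrative)
z_e = rng.normal(size=(16, 64))         # 16 encoder output vectors
z_q, idx = quantize(z_e, codebook)
print(z_q.shape, idx[:5])
```

During training, gradients flow through the non-differentiable lookup via a straight-through estimator, and VQ-VAE-2 stacks such codebooks at multiple resolutions.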
Neural sp: End-to-end ASR/LM implementation with PyTorch.
Stars: ✭ 408 (+397.56%)
Awesome Vaes: A curated list of awesome work on VAEs, disentanglement, representation learning, and generative models.
Stars: ✭ 418 (+409.76%)
Performer Pytorch: An implementation of Performer, a linear-attention-based transformer, in PyTorch (the linear-attention idea is sketched below).
Stars: ✭ 546 (+565.85%)
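Performer avoids the quadratic softmax attention matrix by approximating the softmax kernel with positive random features, so attention cost grows linearly in sequence length. A rough NumPy sketch of that idea; the feature count and shapes are invented, and Performer's orthogonal-feature and stabilization details are omitted:

```python
import numpy as np

def favor_features(X, W):
    """Positive random features with phi(q) . phi(k) ~ exp(q . k) in expectation."""
    m = W.shape[1]
    return np.exp(X @ W - (X ** 2).sum(-1, keepdims=True) / 2) / np.sqrt(m)

def linear_attention(Q, K, V, W):
    Qf, Kf = favor_features(Q, W), favor_features(K, W)   # (n, m) each
    KV = Kf.T @ V                        # (m, d): keys/values summarized once
    Z = Qf @ Kf.sum(axis=0)              # (n,): per-query normalizer
    return (Qf @ KV) / Z[:, None]        # never materializes the (n, n) matrix

rng = np.random.default_rng(0)
n, d, m = 128, 16, 64
Q, K, V = (rng.normal(size=(n, d), scale=d ** -0.25) for _ in range(3))
W = rng.normal(size=(d, m))              # i.i.d. Gaussian random projection
out = linear_attention(Q, K, V, W)
print(out.shape)  # (128, 16)
```

Because Kf.T @ V and the normalizer are computed once and reused for all queries, the cost is O(n·m·d) rather than O(n²).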
Gat: Graph Attention Networks (https://arxiv.org/abs/1710.10903); a minimal sketch of the layer appears below.
Stars: ✭ 2,229 (+2618.29%)
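A GAT layer scores each edge with a shared attention mechanism and aggregates neighbor features under the softmax-normalized scores. A minimal single-head NumPy sketch in the spirit of the paper; the graph, sizes, and parameters are invented:

```python
import numpy as np

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

def gat_head(H, A, W, a):
    """One GAT attention head.
    H: (n, f_in) node features; A: (n, n) adjacency with self-loops;
    W: (f_in, f_out) weights; a: (2*f_out,) attention vector."""
    Z = H @ W
    f = Z.shape[1]
    # e_ij = LeakyReLU(a^T [z_i || z_j]) splits into a left and a right term
    e = leaky_relu((Z @ a[:f])[:, None] + (Z @ a[f:])[None, :])
    e = np.where(A > 0, e, -np.inf)                  # attend only over edges
    alpha = np.exp(e - e.max(axis=1, keepdims=True))
    alpha /= alpha.sum(axis=1, keepdims=True)        # softmax per node
    return alpha @ Z                                 # neighborhood aggregation

rng = np.random.default_rng(0)
n, f_in, f_out = 5, 8, 4
A = np.eye(n) + (rng.random((n, n)) < 0.3)           # random edges + self-loops
H, W = rng.normal(size=(n, f_in)), rng.normal(size=(f_in, f_out))
a = rng.normal(size=2 * f_out)
print(gat_head(H, A, W, a).shape)  # (5, 4)
```

The full model runs several such heads in parallel and concatenates (or averages) their outputs.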
Deep Learning With Python: Example projects I completed to understand deep learning techniques with TensorFlow. Please note that I no longer maintain this repository.
Stars: ✭ 134 (+63.41%)
Gam: A PyTorch implementation of "Graph Classification Using Structural Attention" (KDD 2018).
Stars: ✭ 227 (+176.83%)
Pygat: PyTorch implementation of the Graph Attention Network model by Veličković et al. (2017, https://arxiv.org/abs/1710.10903).
Stars: ✭ 1,853 (+2159.76%)
Im2LaTeX: An implementation of the Show, Attend and Tell paper in TensorFlow, for the OpenAI Im2LaTeX suggested problem.
Stars: ✭ 16 (-80.49%)
lstm-attention: Attention-based bidirectional LSTM for classification tasks (ICASSP).
Stars: ✭ 87 (+6.1%)
h-transformer-1d: Implementation of H-Transformer-1D, Hierarchical Attention for Sequence Learning.
Stars: ✭ 121 (+47.56%)
Attentive Neural Processes: An implementation of "recurrent attentive neural processes" to forecast power usage (with an LSTM baseline and MC Dropout).
Stars: ✭ 33 (-59.76%)
Dfc Vae: Variational autoencoder trained with a feature perceptual loss.
Stars: ✭ 74 (-9.76%)
CrabNet: Predict materials properties using only composition information!
Stars: ✭ 57 (-30.49%)
AoA-pytorch: A PyTorch implementation of the Attention on Attention module (both self and guided variants) for visual question answering.
Stars: ✭ 33 (-59.76%)
Smrt: Handle class imbalance intelligently by using variational autoencoders to generate synthetic observations of your minority class.
Stars: ✭ 102 (+24.39%)
Seq2seq Summarizer: Pointer-generator reinforced seq2seq summarization in PyTorch (the pointer-generator mixing step is sketched below).
Stars: ✭ 306 (+273.17%)
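A pointer-generator decoder mixes a vocabulary (generation) distribution with a copy distribution induced by the attention weights, gated by a scalar p_gen, in the spirit of See et al. (2017). A toy NumPy sketch of that mixing step; all values below are made up:

```python
import numpy as np

def final_distribution(p_vocab, attn, src_ids, vocab_size, p_gen):
    """P(w) = p_gen * P_vocab(w) + (1 - p_gen) * attention mass on w's source positions."""
    p_copy = np.zeros(vocab_size)
    np.add.at(p_copy, src_ids, attn)    # scatter attention onto source token ids
    return p_gen * p_vocab + (1 - p_gen) * p_copy

vocab_size = 10
p_vocab = np.full(vocab_size, 1 / vocab_size)   # uniform, for illustration
src_ids = np.array([3, 7, 3, 2])                # source token ids (3 repeats)
attn = np.array([0.1, 0.4, 0.3, 0.2])           # attention over source positions
p = final_distribution(p_vocab, attn, src_ids, vocab_size, p_gen=0.6)
print(p.sum(), p[3])   # 1.0, with extra mass on the repeated source token
```

The copy path lets the model emit rare or out-of-vocabulary source words that the vocabulary distribution alone would miss.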
Torchgan: A research framework for easy and efficient training of GANs, based on PyTorch.
Stars: ✭ 1,156 (+1309.76%)
Sentence Vae: PyTorch re-implementation of "Generating Sentences from a Continuous Space" by Bowman et al. (2015, https://arxiv.org/abs/1511.06349); the VAE's sampling trick is sketched below.
Stars: ✭ 462 (+463.41%)
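The trick this and the other VAE entries share is reparameterized sampling of the latent code plus a KL penalty toward the standard-normal prior. A minimal NumPy sketch of those two pieces, with illustrative shapes:

```python
import numpy as np

def reparameterize(mu, logvar, rng):
    """z = mu + sigma * eps keeps sampling differentiable w.r.t. mu and logvar."""
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def kl_to_standard_normal(mu, logvar):
    """KL( N(mu, sigma^2) || N(0, I) ), summed over latent dimensions."""
    return -0.5 * np.sum(1 + logvar - mu ** 2 - np.exp(logvar), axis=-1)

rng = np.random.default_rng(0)
mu = rng.normal(size=(2, 16))          # encoder means for a batch of 2
logvar = rng.normal(size=(2, 16))      # encoder log-variances
z = reparameterize(mu, logvar, rng)    # latent codes fed to the decoder
print(z.shape, kl_to_standard_normal(mu, logvar))
```

The training loss is the reconstruction term plus this KL term, often with the KL weight annealed from zero for text models.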
Awesome Bert Nlp: A curated list of NLP resources focused on BERT, attention mechanisms, Transformer networks, and transfer learning.
Stars: ✭ 567 (+591.46%)
srVAE: VAE with a RealNVP prior and Super-Resolution VAE in PyTorch. Code release for https://arxiv.org/abs/2006.05218.
Stars: ✭ 56 (-31.71%)
Isab Pytorch: An implementation of the (Induced) Set Attention Block from the Set Transformer paper.
Stars: ✭ 21 (-74.39%)
Pytorch Gat: My implementation of the original GAT paper (Veličković et al.). I've additionally included the playground.py file for visualizing the Cora dataset, GAT embeddings, an attention mechanism, and entropy histograms. Both Cora (transductive) and PPI (inductive) examples are supported!
Stars: ✭ 908 (+1007.32%)
Time Attention: Implementation of an RNN for time-series prediction from the paper https://arxiv.org/abs/1704.02971.
Stars: ✭ 52 (-36.59%)
Generative Models: A collection of generative models, e.g. GAN and VAE, in PyTorch and TensorFlow.
Stars: ✭ 6,701 (+8071.95%)
Pytorch Seq2seq: Tutorials on implementing a few sequence-to-sequence (seq2seq) models with PyTorch and TorchText.
Stars: ✭ 3,418 (+4068.29%)
Codesearchnet: Datasets, tools, and benchmarks for representation learning of code.
Stars: ✭ 1,378 (+1580.49%)
Attention: Code for several different attention mechanisms.
Stars: ✭ 17 (-79.27%)
Tf Rnn Attention: TensorFlow implementation of an attention mechanism for text classification tasks (the pooling pattern is sketched below).
Stars: ✭ 735 (+796.34%)
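For text classification, attention over RNN states usually reduces to scoring each timestep, softmax-normalizing the scores, and pooling the states into one sentence vector. A small NumPy sketch of that pattern; this is not the repository's TensorFlow code, and the dimensions are invented:

```python
import numpy as np

def attention_pool(H, w):
    """H: (T, d) RNN outputs; w: (d,) learned scoring vector.
    Returns a (d,) sentence vector and the per-timestep weights."""
    scores = np.tanh(H) @ w                     # one score per timestep
    alpha = np.exp(scores - scores.max())
    alpha = alpha / alpha.sum()                 # softmax over timesteps
    return alpha @ H, alpha                     # weighted average of states

rng = np.random.default_rng(0)
H = rng.normal(size=(20, 32))      # 20 timesteps of 32-dim BiLSTM outputs
w = rng.normal(size=32)
sent_vec, alpha = attention_pool(H, w)
print(sent_vec.shape, alpha.sum())  # (32,) 1.0
```

The pooled vector then feeds a standard classifier head, and the weights alpha double as an interpretable saliency map over tokens.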
Vae protein function: Protein function prediction using a variational autoencoder.
Stars: ✭ 57 (-30.49%)
Global Self Attention Network: A PyTorch implementation of the Global Self-Attention Network, a fully-attentional backbone for vision tasks.
Stars: ✭ 64 (-21.95%)