Seq2seq Summarizer: Pointer-generator reinforced seq2seq summarization in PyTorch
Stars: ✭ 306 (-51.35%)
galerkin-transformer: [NeurIPS 2021] Galerkin Transformer, a linear attention without softmax
Stars: ✭ 111 (-82.35%)
Keras Gat: Keras implementation of the graph attention networks (GAT) by Veličković et al. (2017; https://arxiv.org/abs/1710.10903)
Stars: ✭ 334 (-46.9%)
transganformer: Implementation of TransGanFormer, an all-attention GAN that combines the findings from the recent GanFormer and TransGan papers
Stars: ✭ 137 (-78.22%)
Pytorch Original Transformer: My implementation of the original transformer model (Vaswani et al.). I've additionally included the playground.py file for visualizing otherwise seemingly hard concepts. IWSLT pretrained models are currently included.
Stars: ✭ 411 (-34.66%)
query-focused-sum: Official code repository for "Exploring Neural Models for Query-Focused Summarization".
Stars: ✭ 17 (-97.3%)
Mirnet: Official repository for "Learning Enriched Features for Real Image Restoration and Enhancement" (ECCV 2020). SOTA results for image denoising, super-resolution, and image enhancement.
Stars: ✭ 247 (-60.73%)
summary-explorer: Summary Explorer is a tool to visually explore the state of the art in text summarization.
Stars: ✭ 34 (-94.59%)
Awesome Graph Classification: A collection of important graph embedding, classification, and representation learning papers with implementations.
Stars: ✭ 4,309 (+585.06%)
FYP-AutoTextSum: Automatic Text Summarization with Machine Learning
Stars: ✭ 16 (-97.46%)
Yolo Multi Backbones Attention: Model compression for YOLOv3 with multiple lightweight backbones (ShuffleNetV2, HuaWei GhostNet), attention, pruning, and quantization
Stars: ✭ 317 (-49.6%)
vista-net: Code for the paper "VistaNet: Visual Aspect Attention Network for Multimodal Sentiment Analysis", AAAI'19
Stars: ✭ 67 (-89.35%)
Nmt Keras: Neural Machine Translation with Keras
Stars: ✭ 501 (-20.35%)
Attention: Code for several different attention mechanisms
Stars: ✭ 17 (-97.3%)
Alphafold2: To eventually become an unofficial PyTorch implementation / replication of AlphaFold2, as details of the architecture are released
Stars: ✭ 298 (-52.62%)
Neural sp: End-to-end ASR/LM implementation with PyTorch
Stars: ✭ 408 (-35.14%)
Multi Scale Attention: Code for our paper "Multi-scale Guided Attention for Medical Image Segmentation"
Stars: ✭ 281 (-55.33%)
textdigester: TextDigester, a Java library for document summarization
Stars: ✭ 23 (-96.34%)
Da Rnn: 📃 **Unofficial** PyTorch implementation of DA-RNN (arXiv:1704.02971)
Stars: ✭ 256 (-59.3%)
Simgnn: A PyTorch implementation of "SimGNN: A Neural Network Approach to Fast Graph Similarity Computation" (WSDM 2019).
Stars: ✭ 351 (-44.2%)
TextRank-node: No description or website provided.
Stars: ✭ 21 (-96.66%)
Transformer: A TensorFlow implementation of the Transformer ("Attention Is All You Need")
Stars: ✭ 3,646 (+479.65%)
Headlines: Automatically generate headlines for short articles
Stars: ✭ 516 (-17.97%)
linformer: Implementation of Linformer for PyTorch
Stars: ✭ 119 (-81.08%)
Statsbase.jl: Basic statistics for Julia
Stars: ✭ 326 (-48.17%)
sidenet: SideNet, neural extractive summarization with side information
Stars: ✭ 52 (-91.73%)
Transformer Tts: A PyTorch implementation of "Neural Speech Synthesis with Transformer Network"
Stars: ✭ 418 (-33.55%)
ADL2019: Applied Deep Learning (2019 Spring) @ NTU
Stars: ✭ 20 (-96.82%)
Seq2seq chatbot: A TensorFlow implementation of a simple dialogue system based on a seq2seq model, with embedding, attention, beam_search, and other features; the dataset is Cornell Movie Dialogs
Stars: ✭ 308 (-51.03%)
Performer Pytorch: An implementation of Performer, a linear attention-based transformer, in PyTorch
Stars: ✭ 546 (-13.2%)
pynmt: A simple and complete PyTorch implementation of a neural machine translation system
Stars: ✭ 13 (-97.93%)
ttslearn: Library accompanying the book Pythonで学ぶ音声合成 (Text-to-Speech with Python)
Stars: ✭ 158 (-74.88%)
co-attention: PyTorch implementation of "Dynamic Coattention Networks For Question Answering"
Stars: ✭ 54 (-91.41%)
Adaptiveattention: Implementation of "Knowing When to Look: Adaptive Attention via A Visual Sentinel for Image Captioning"
Stars: ✭ 303 (-51.83%)
text2text: Text2Text, a cross-lingual natural language processing and generation toolkit
Stars: ✭ 188 (-70.11%)
Keras Self Attention: Attention mechanism for processing sequential data that considers the context for each timestamp.
Stars: ✭ 489 (-22.26%)
MoChA-pytorch: PyTorch implementation of "Monotonic Chunkwise Attention" (ICLR 2018)
Stars: ✭ 65 (-89.67%)
Vit Pytorch: Implementation of the Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in PyTorch
Stars: ✭ 7,199 (+1044.52%)
video-summarizer: Summarizes videos into much shorter videos. Ideal for long lecture videos.
Stars: ✭ 92 (-85.37%)
Paperrobot: Code for "PaperRobot: Incremental Draft Generation of Scientific Ideas"
Stars: ✭ 372 (-40.86%)
Timesformer Pytorch: Implementation of TimeSformer from Facebook AI, a pure attention-based solution for video classification
Stars: ✭ 225 (-64.23%)
Awesome Bert Nlp: A curated list of NLP resources focused on BERT, attention mechanisms, Transformer networks, and transfer learning.
Stars: ✭ 567 (-9.86%)
Moran v2: MORAN, a Multi-Object Rectified Attention Network for Scene Text Recognition
Stars: ✭ 536 (-14.79%)
Transformer: A PyTorch implementation of "Attention is All You Need" and "Weighted Transformer Network for Machine Translation"
Stars: ✭ 271 (-56.92%)