Abstractive Summarization
Implementation of abstractive summarization using an LSTM encoder-decoder architecture with local attention.
Stars: ✭ 128 (+82.86%)
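To make the local attention mechanism mentioned above concrete, here is a minimal PyTorch sketch in which the decoder state scores only the encoder states inside a fixed window around a chosen source position. The function and argument names (local_attention, center, window) are illustrative assumptions, not the repository's API.

```python
import torch
import torch.nn.functional as F

def local_attention(dec_state, enc_states, center, window=5):
    # dec_state: (hidden,) current decoder state; enc_states: (seq_len, hidden)
    # Hypothetical helper: only encoder states within `window` of `center`
    # are scored, which is the defining idea of local (vs. global) attention.
    seq_len = enc_states.size(0)
    lo, hi = max(0, center - window), min(seq_len, center + window + 1)
    local = enc_states[lo:hi]            # (win, hidden) windowed slice
    scores = local @ dec_state           # dot-product alignment scores
    weights = F.softmax(scores, dim=0)   # attention weights over the window
    context = weights @ local            # weighted sum -> (hidden,) context
    return context, weights

# usage: 10 encoder states of size 8, attend around source position 4
ctx, w = local_attention(torch.randn(8), torch.randn(10, 8), center=4)
```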
Da Rnn
📃 **Unofficial** PyTorch Implementation of DA-RNN (arXiv:1704.02971)
Stars: ✭ 256 (+265.71%)
Yolov3 Point
Learn YOLOv3 from scratch: a tutorial with annotated code, plus attention modules (SE, SPP, RFB, etc.)
Stars: ✭ 119 (+70%)
Graph attention pool
Attention over nodes in Graph Neural Networks using PyTorch (NeurIPS 2019)
Stars: ✭ 186 (+165.71%)
Triplet Attention
Official PyTorch Implementation for "Rotate to Attend: Convolutional Triplet Attention Module." [WACV 2021]
Stars: ✭ 222 (+217.14%)
Pytorch Original Transformer
My implementation of the original transformer model (Vaswani et al.). I've additionally included the playground.py file for visualizing otherwise seemingly hard concepts. IWSLT pretrained models are currently included.
Stars: ✭ 411 (+487.14%)
Linear Attention Recurrent Neural Network
A recurrent attention module consisting of an LSTM cell that can query its own past cell states by means of windowed multi-head attention. The formulas are derived from the BN-LSTM and the Transformer Network. The LARNN cell with attention can easily be used inside a loop on the cell state, just like any other RNN. (LARNN)
Stars: ✭ 119 (+70%)
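A minimal PyTorch sketch of the LARNN idea described above, assuming an LSTMCell whose cell state attends over a window of its own past cell states via nn.MultiheadAttention; the class name, the residual mixing step, and all hyperparameters are illustrative assumptions, not the repository's actual formulas or interface.

```python
import torch
import torch.nn as nn

class WindowedAttentionRNN(nn.Module):
    """Sketch: an LSTM cell whose cell state queries a window of its
    own past cell states with multi-head attention (hypothetical names)."""

    def __init__(self, input_size, hidden_size, window=8, num_heads=4):
        super().__init__()
        self.cell = nn.LSTMCell(input_size, hidden_size)
        self.attn = nn.MultiheadAttention(hidden_size, num_heads, batch_first=True)
        self.window = window

    def forward(self, x):
        # x: (batch, seq_len, input_size)
        batch = x.size(0)
        h = x.new_zeros(batch, self.cell.hidden_size)
        c = x.new_zeros(batch, self.cell.hidden_size)
        past_c, outputs = [], []
        for t in range(x.size(1)):
            h, c = self.cell(x[:, t], (h, c))
            past_c = (past_c + [c])[-self.window:]    # keep a window of cell states
            memory = torch.stack(past_c, dim=1)       # (batch, <=window, hidden)
            attended, _ = self.attn(c.unsqueeze(1), memory, memory)
            c = c + attended.squeeze(1)               # residual mix of attended past
            outputs.append(h)
        return torch.stack(outputs, dim=1)            # (batch, seq_len, hidden)

# usage: batch of 4 sequences, length 20, feature size 16 -> hidden size 32
out = WindowedAttentionRNN(16, 32)(torch.randn(4, 20, 16))
```

Like any other RNN cell, this runs inside an explicit loop over time steps; the attention only changes how the cell state is updated at each step.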
Adaptiveattention
Implementation of "Knowing When to Look: Adaptive Attention via A Visual Sentinel for Image Captioning"
Stars: ✭ 303 (+332.86%)
Attention is all you need
A Chainer implementation of the Transformer from "Attention Is All You Need" (Vaswani et al. 2017).
Stars: ✭ 303 (+332.86%)
Csa Inpainting
Coherent Semantic Attention for image inpainting (ICCV 2019)
Stars: ✭ 202 (+188.57%)
Pytorch Gat
My implementation of the original GAT paper (Veličković et al.). I've additionally included the playground.py file for visualizing the Cora dataset, GAT embeddings, an attention mechanism, and entropy histograms. I've supported both Cora (transductive) and PPI (inductive) examples!
Stars: ✭ 908 (+1197.14%)
Tensorflow
This repository contains TensorFlow tutorials.
Stars: ✭ 68 (-2.86%)
Cnn Interpretability
🏥 Visualizing Convolutional Networks for MRI-based Diagnosis of Alzheimer’s Disease
Stars: ✭ 68 (-2.86%)
Etl with python
ETL with Python - taught at the DWH course 2017 (TAU)
Stars: ✭ 68 (-2.86%)
Ml101
Intro to machine learning - reverse engineering phenomena
Stars: ✭ 69 (-1.43%)
Dlwithpytorch
Code to accompany my upcoming book "Deep Learning with PyTorch" from Packt
Stars: ✭ 69 (-1.43%)
Fab Net
PyTorch code for a BMVC 2018 paper
Stars: ✭ 69 (-1.43%)
Backdrop
Implementation and demonstration of backdrop in PyTorch, plus code and a demonstration of a GP dataset generator.
Stars: ✭ 68 (-2.86%)
Vkapi Course
A Python course on working with the VK API
Stars: ✭ 68 (-2.86%)
Dsb17 Walkthrough
An end-to-end walkthrough of the winning submission by grt123 for the Kaggle Data Science Bowl 2017
Stars: ✭ 69 (-1.43%)
Timeflow
TensorFlow for Time Series Applications
Stars: ✭ 68 (-2.86%)
Nyumath2048
NYU Math-GA 2048: Scientific Computing in Finance
Stars: ✭ 69 (-1.43%)
Red bag
Automatic sign-in and reward collection for Alipay red packets / Taobao cat coins / Xueqiu red packets / Suning.com / JD / Taobao gold coins
Stars: ✭ 68 (-2.86%)
Handson Ml2
https://github.com/ageron/handson-ml2
Stars: ✭ 70 (+0%)
Equalareacartogram
Converts a Shapefile, GeoJSON, or CSV to an equal area cartogram
Stars: ✭ 68 (-2.86%)
Datacamp
🍧 A repository that contains courses I have taken on DataCamp
Stars: ✭ 69 (-1.43%)
Gds env
A containerised platform for Geographic Data Science
Stars: ✭ 68 (-2.86%)
Pynq
Python Productivity for ZYNQ
Stars: ✭ 1,152 (+1545.71%)
Impulcifer
Measurement and processing of binaural impulse responses for personalized surround virtualization on headphones.
Stars: ✭ 70 (+0%)
Starter Academic
🎓 Easily create a beautiful academic résumé or educational website using Hugo, GitHub, and Netlify
Stars: ✭ 1,158 (+1554.29%)
Equivariant Transformers
Equivariant Transformer (ET) layers are image-to-image mappings that incorporate prior knowledge on invariances with respect to continuous transformation groups (ICML 2019). Paper: https://arxiv.org/abs/1901.11399
Stars: ✭ 68 (-2.86%)
Kalman Filters
Kalman filtering, also known as linear quadratic estimation (LQE), is an algorithm that uses a series of measurements observed over time, containing statistical noise and other inaccuracies, and produces estimates of unknown variables that tend to be more accurate than those based on a single measurement alone. It does so by estimating a joint probability distribution over the variables for each timeframe. The filter is named after Rudolf E. Kálmán, one of the primary developers of its theory.
Stars: ✭ 69 (-1.43%)
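To illustrate the predict/update cycle behind that description, here is a minimal one-dimensional Kalman filter sketch in Python, assuming a constant underlying state; the process and measurement noise variances q and r are illustrative values, not anything taken from the repository.

```python
import numpy as np

def kalman_1d(measurements, q=1e-4, r=0.1):
    """Sketch of a 1D Kalman filter: q and r are assumed process and
    measurement noise variances (hypothetical choices)."""
    x, p = 0.0, 1.0            # state estimate and its variance
    estimates = []
    for z in measurements:
        # predict: the state is modeled as constant, so only uncertainty grows
        p += q
        # update: blend the prediction with measurement z via the Kalman gain
        k = p / (p + r)        # gain: how much to trust the new measurement
        x += k * (z - x)
        p *= (1 - k)           # incorporating a measurement shrinks uncertainty
        estimates.append(x)
    return estimates

# noisy readings of a constant true value of 1.0
noisy = 1.0 + 0.1 * np.random.randn(50)
print(kalman_1d(noisy)[-1])    # the estimate converges toward 1.0
```

Each estimate combines every measurement seen so far, weighted by its uncertainty, which is why the filtered value is more accurate than any single reading.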
Qpga
Simulations of photonic quantum programmable gate arrays
Stars: ✭ 68 (-2.86%)
Recommender
A recommendation system using TensorFlow
Stars: ✭ 69 (-1.43%)
Python
Python Tutorials
Stars: ✭ 69 (-1.43%)
Puzzlemix
Official PyTorch implementation of "Puzzle Mix: Exploiting Saliency and Local Statistics for Optimal Mixup" (ICML'20)
Stars: ✭ 67 (-4.29%)
Covid 19 Dataviz
Simple data visualization of Covid-19 data using Pandas and Google Colaboratory
Stars: ✭ 68 (-2.86%)
E2e Ml App Pytorch
🚀 An end-to-end ML application using PyTorch, W&B, FastAPI, Docker, Streamlit and Heroku → https://e2e-ml-app-pytorch.herokuapp.com/ (may take a few minutes to spin up occasionally).
Stars: ✭ 68 (-2.86%)