TS3000 TheChatBOT: It's a social networking chat-bot trained on a Reddit dataset. It supports open-bounded queries and was developed on the concept of Neural Machine Translation. Beware of it being sarcastic, just like its creator 😝 BTW, it uses the PyTorch framework and Python 3.
Stars: ✭ 20 (-9.09%)
Noahv: An efficient front-end application framework based on Vue.js
Stars: ✭ 593 (+2595.45%)
memory-compressed-attention: Implementation of Memory-Compressed Attention, from the paper "Generating Wikipedia by Summarizing Long Sequences"
Stars: ✭ 47 (+113.64%)
Cista: Simple C++ serialization & reflection.
Stars: ✭ 535 (+2331.82%)
efficient-attention: An implementation of the efficient attention module.
Stars: ✭ 191 (+768.18%)
Runany: 【RunAny】A once-and-for-all quick-launch tool, with three-key launch, one-key direct access, batch search, global hotkeys, phrase output, hotkey mapping, script plugins, and more
Stars: ✭ 456 (+1972.73%)
exificient: Java implementation of EXI
Stars: ✭ 49 (+122.73%)
Lizard: Lizard (formerly LZ5) is an efficient compressor with very fast decompression. It achieves compression ratios comparable to zip/zlib and zstd/brotli (at low and medium compression levels) at decompression speeds of 1000 MB/s and faster.
Stars: ✭ 408 (+1754.55%)
Mockolo: Efficient mock generator for Swift
Stars: ✭ 327 (+1386.36%)
STAM-pytorch: Implementation of STAM (Space Time Attention Model), a pure and simple attention model that reaches SOTA for video classification
Stars: ✭ 109 (+395.45%)
sabotage: A radical and experimental distribution based on musl libc and BusyBox
Stars: ✭ 502 (+2181.82%)
SuperParticles: Amazing CPU-friendly particle network animations
Stars: ✭ 32 (+45.45%)
ElegantRL: Scalable and elastic deep reinforcement learning using PyTorch. Please star. 🔥
Stars: ✭ 2,074 (+9327.27%)
En-transformer: Implementation of E(n)-Transformer, which extends the ideas of Welling's E(n)-Equivariant Graph Neural Network to attention
Stars: ✭ 131 (+495.45%)
QuickNotes: A simple, lightweight, and efficient Android note-taking and bookkeeping app
Stars: ✭ 19 (-13.64%)
Mimir: 📱 A simple & efficient iOS logging framework for high-usage apps
Stars: ✭ 13 (-40.91%)
SequenceToSequence: A seq2seq-with-attention dialogue/MT model implemented in TensorFlow.
Stars: ✭ 11 (-50%)
jazzle: An innovative, fast transpiler for ECMAScript 2015 and later
Stars: ✭ 65 (+195.45%)
Nfancurve: A small and lightweight POSIX script for using a custom fan curve on Linux for those with an Nvidia GPU.
Stars: ✭ 180 (+718.18%)
Slot Attention: Implementation of Slot Attention from Google AI
Stars: ✭ 168 (+663.64%)
Aoanet: Code for the paper "Attention on Attention for Image Captioning" (ICCV 2019)
Stars: ✭ 242 (+1000%)
dgcnn: Clean & documented TF2 implementation of "An end-to-end deep learning architecture for graph classification" (M. Zhang et al., 2018).
Stars: ✭ 21 (-4.55%)
Self Attention Cv: Implementation of various self-attention mechanisms focused on computer vision. Ongoing repository.
Stars: ✭ 209 (+850%)
neutron-language: A simple, extensible, and efficient programming language based on C and Python
Stars: ✭ 32 (+45.45%)
X Transformers: A simple but complete full-attention transformer with a set of promising experimental features from various papers
Stars: ✭ 211 (+859.09%)
Brain-Tumor-Segmentation: Attention-guided version of 2D U-Net for automatic brain tumor segmentation
Stars: ✭ 125 (+468.18%)
Lightnetplusplus: LightNet++: Boosted lightweight networks for real-time semantic segmentation
Stars: ✭ 218 (+890.91%)
question-generation: Neural models for key phrase detection and question generation
Stars: ✭ 29 (+31.82%)
gnuboy: Latest version of the original Laguna source, with a handful of fixes for modern compilers and systems
Stars: ✭ 70 (+218.18%)
Guided Attention Inference Network: Contains an implementation of the Guided Attention Inference Network (GAIN) presented in "Tell Me Where to Look" (CVPR 2018). This repository aims to apply GAIN to the FCN8 architecture used for segmentation.
Stars: ✭ 204 (+827.27%)
DARNN: A Dual-Stage Attention-Based Recurrent Neural Network for Time Series Prediction
Stars: ✭ 90 (+309.09%)
RETRO-pytorch: Implementation of RETRO, DeepMind's retrieval-based attention net, in PyTorch
Stars: ✭ 473 (+2050%)
Sca Cnn.cvpr17: Image caption generation with spatial and channel-wise attention
Stars: ✭ 198 (+800%)
PyGLM: Fast OpenGL Mathematics (GLM) for Python
Stars: ✭ 167 (+659.09%)
Hnatt: Train and visualize Hierarchical Attention Networks
Stars: ✭ 192 (+772.73%)
hexia: Mid-level PyTorch-based framework for Visual Question Answering.
Stars: ✭ 24 (+9.09%)
Graph attention pool: Attention over nodes in Graph Neural Networks using PyTorch (NeurIPS 2019)
Stars: ✭ 186 (+745.45%)
h-transformer-1d: Implementation of H-Transformer-1D, Hierarchical Attention for Sequence Learning
Stars: ✭ 121 (+450%)
S2v: ICLR 2018 Quick-Thought vectors
Stars: ✭ 191 (+768.18%)
Eeg Dl: A deep learning library for EEG signal classification tasks, based on TensorFlow.
Stars: ✭ 165 (+650%)
LSTM-Attention: A comparison of LSTMs and attention mechanisms for forecasting financial time series
Stars: ✭ 53 (+140.91%)
Pyeco: Python implementation of Efficient Convolution Operators for tracking
Stars: ✭ 150 (+581.82%)
Gat: Graph Attention Networks (https://arxiv.org/abs/1710.10903)
Stars: ✭ 2,229 (+10031.82%)
Picanet Implementation: PyTorch implementation of PiCANet: Learning Pixel-wise Contextual Attention for Saliency Detection
Stars: ✭ 157 (+613.64%)
axial-attention: Implementation of axial attention, attending to multi-dimensional data efficiently
Stars: ✭ 245 (+1013.64%)
Optic-Disc-Unet: Attention U-Net model with post-processing for retinal optic disc segmentation
Stars: ✭ 77 (+250%)
Efficientnet: Implementation of the EfficientNet model, for Keras and TensorFlow Keras.
Stars: ✭ 1,920 (+8627.27%)
Sinkhorn Transformer: Practical implementation of Sparse Sinkhorn Attention
Stars: ✭ 156 (+609.09%)
reasoning attention: Unofficial implementations of attention-model algorithms on the SNLI dataset
Stars: ✭ 34 (+54.55%)
Borer: Efficient CBOR and JSON (de)serialization in Scala
Stars: ✭ 131 (+495.45%)