
272 Open source projects that are alternatives of or similar to Enjoy Hamburger

Residual Attention Network
Residual Attention Network for Image Classification
Stars: ✭ 525 (+660.87%)
Mutual labels:  attention
Ensmallen
A header-only C++ library for numerical optimization
Stars: ✭ 436 (+531.88%)
Mutual labels:  optimization-algorithms
Pyswarms
A research toolkit for particle swarm optimization in Python
Stars: ✭ 742 (+975.36%)
Mutual labels:  optimization-algorithms
Speech Transformer
A PyTorch implementation of Speech Transformer, an End-to-End ASR with Transformer network on Mandarin Chinese.
Stars: ✭ 565 (+718.84%)
Mutual labels:  attention
Text Classification Models Pytorch
Implementation of State-of-the-art Text Classification Models in Pytorch
Stars: ✭ 379 (+449.28%)
Mutual labels:  attention
Nlp tensorflow project
Uses TensorFlow to implement several NLP projects, e.g. classification, chatbot, NER, attention, QA, etc.
Stars: ✭ 27 (-60.87%)
Mutual labels:  attention
Rnn Nlu
A TensorFlow implementation of Recurrent Neural Networks for Sequence Classification and Sequence Labeling
Stars: ✭ 463 (+571.01%)
Mutual labels:  attention
Attentions
PyTorch implementation of some attentions for Deep Learning Researchers.
Stars: ✭ 39 (-43.48%)
Mutual labels:  attention
Optim
OptimLib: a lightweight C++ library of numerical optimization methods for nonlinear functions
Stars: ✭ 411 (+495.65%)
Mutual labels:  optimization-algorithms
Text Classification
Implementation of papers for text classification task on DBpedia
Stars: ✭ 682 (+888.41%)
Mutual labels:  attention
Simplecvreproduction
Reproductions of simple CV projects, including attention modules, classification, object detection, segmentation, keypoint detection, tracking 😄, etc.
Stars: ✭ 602 (+772.46%)
Mutual labels:  attention
Fmin
Unconstrained function minimization in JavaScript
Stars: ✭ 317 (+359.42%)
Mutual labels:  optimization-algorithms
Isab Pytorch
An implementation of the (Induced) Set Attention Block from the Set Transformer paper
Stars: ✭ 21 (-69.57%)
Mutual labels:  attention
Solid
🎯 A comprehensive gradient-free optimization framework written in Python
Stars: ✭ 546 (+691.3%)
Mutual labels:  optimization-algorithms
Sentences pair similarity calculation siamese lstm
A Keras implementation of an attention-based Siamese Manhattan LSTM
Stars: ✭ 48 (-30.43%)
Mutual labels:  attention
Awesome Robotics
A curated list of awesome links and software libraries that are useful for robots.
Stars: ✭ 478 (+592.75%)
Mutual labels:  optimization-algorithms
Pytorch Gat
My implementation of the original GAT paper (Veličković et al.). I've additionally included the playground.py file for visualizing the Cora dataset, GAT embeddings, an attention mechanism, and entropy histograms. I've supported both Cora (transductive) and PPI (inductive) examples!
Stars: ✭ 908 (+1215.94%)
Mutual labels:  attention
Ban Vqa
Bilinear attention networks for visual question answering
Stars: ✭ 449 (+550.72%)
Mutual labels:  attention
Fluence
A deep learning library based on PyTorch, focused on low-resource language research and robustness
Stars: ✭ 54 (-21.74%)
Mutual labels:  attention
Recurrent Visual Attention
A PyTorch Implementation of "Recurrent Models of Visual Attention"
Stars: ✭ 414 (+500%)
Mutual labels:  attention
Nlp paper study
Close readings of top-conference papers, with reproductions of their code
Stars: ✭ 691 (+901.45%)
Mutual labels:  attention
Deep learning nlp
Keras, PyTorch, and NumPy Implementations of Deep Learning Architectures for NLP
Stars: ✭ 407 (+489.86%)
Mutual labels:  attention
Attentioncluster
TensorFlow Implementation of "Attention Clusters: Purely Attention Based Local Feature Integration for Video Classification"
Stars: ✭ 33 (-52.17%)
Mutual labels:  attention
Ojalgo
oj! Algorithms
Stars: ✭ 336 (+386.96%)
Mutual labels:  optimization-algorithms
Cppnumericalsolvers
a lightweight C++17 library of numerical optimization methods for nonlinear functions (Including L-BFGS-B for TensorFlow)
Stars: ✭ 638 (+824.64%)
Mutual labels:  optimization-algorithms
Dist Keras
Distributed Deep Learning, with a focus on distributed training, using Keras and Apache Spark.
Stars: ✭ 613 (+788.41%)
Mutual labels:  optimization-algorithms
Crnn attention ocr chinese
CRNN with attention for OCR, with added Chinese recognition
Stars: ✭ 315 (+356.52%)
Mutual labels:  attention
Banglatranslator
Bangla Machine Translator
Stars: ✭ 21 (-69.57%)
Mutual labels:  attention
Attention Is All You Need Pytorch
A PyTorch implementation of the Transformer model in "Attention is All You Need".
Stars: ✭ 6,070 (+8697.1%)
Mutual labels:  attention
Time Attention
Implementation of an RNN for time-series prediction from the paper https://arxiv.org/abs/1704.02971
Stars: ✭ 52 (-24.64%)
Mutual labels:  attention
Performer Pytorch
An implementation of Performer, a linear attention-based transformer, in PyTorch
Stars: ✭ 546 (+691.3%)
Mutual labels:  attention
Mindseye
Neural Networks in Java 8 with CuDNN and Aparapi
Stars: ✭ 8 (-88.41%)
Mutual labels:  optimization-algorithms
Pagmo2
A C++ platform to perform parallel computations of optimisation tasks (global and local) via the asynchronous generalized island model.
Stars: ✭ 540 (+682.61%)
Mutual labels:  optimization-algorithms
Yolov4 Pytorch
A PyTorch repository of YOLOv4, attentive YOLOv4, and MobileNet YOLOv4 with PASCAL VOC and COCO
Stars: ✭ 1,070 (+1450.72%)
Mutual labels:  attention
Punctuator2
A bidirectional recurrent neural network model with attention mechanism for restoring missing punctuation in unsegmented text
Stars: ✭ 483 (+600%)
Mutual labels:  attention
Cell Detr
Official and maintained implementation of the paper Attention-Based Transformers for Instance Segmentation of Cells in Microstructures [BIBM 2020].
Stars: ✭ 26 (-62.32%)
Mutual labels:  attention
Chinesenre
Chinese entity-relation extraction; PyTorch; BiLSTM + attention
Stars: ✭ 463 (+571.01%)
Mutual labels:  attention
Biblosa Pytorch
Re-implementation of Bi-Directional Block Self-Attention for Fast and Memory-Efficient Sequence Modeling (T. Shen et al., ICLR 2018) in PyTorch
Stars: ✭ 43 (-37.68%)
Mutual labels:  attention
Structured Self Attention
A Structured Self-attentive Sentence Embedding
Stars: ✭ 459 (+565.22%)
Mutual labels:  attention
Spatial Transformer Network
A Tensorflow implementation of Spatial Transformer Networks.
Stars: ✭ 794 (+1050.72%)
Mutual labels:  attention
Mac Network
Implementation for the paper "Compositional Attention Networks for Machine Reasoning" (Hudson and Manning, ICLR 2018)
Stars: ✭ 444 (+543.48%)
Mutual labels:  attention
Global Self Attention Network
A PyTorch implementation of Global Self-Attention Network, a fully attentional backbone for vision tasks
Stars: ✭ 64 (-7.25%)
Mutual labels:  attention
Gansformer
Generative Adversarial Transformers
Stars: ✭ 421 (+510.14%)
Mutual labels:  attention
Tf Rnn Attention
Tensorflow implementation of attention mechanism for text classification tasks.
Stars: ✭ 735 (+965.22%)
Mutual labels:  attention
Pytorch Original Transformer
My implementation of the original transformer model (Vaswani et al.). I've additionally included the playground.py file for visualizing otherwise seemingly hard concepts. Currently included IWSLT pretrained models.
Stars: ✭ 411 (+495.65%)
Mutual labels:  attention
Min Cost Flow Class
C++ solvers for Minimum Cost Flow Problems
Stars: ✭ 36 (-47.83%)
Mutual labels:  optimization-algorithms
Neural sp
End-to-end ASR/LM implementation with PyTorch
Stars: ✭ 408 (+491.3%)
Mutual labels:  attention
Captainblackboard
The captain's summaries and notes on machine learning, computer vision, and engineering
Stars: ✭ 693 (+904.35%)
Mutual labels:  optimization-algorithms
Nlp Tutorials
Simple implementations of NLP models. Tutorials are written in Chinese on my website https://mofanpy.com
Stars: ✭ 394 (+471.01%)
Mutual labels:  attention
Pointer Networks Experiments
Sorting numbers with pointer networks
Stars: ✭ 53 (-23.19%)
Mutual labels:  attention
Ner Bert
BERT-NER (nert-bert) with Google BERT: https://github.com/google-research.
Stars: ✭ 339 (+391.3%)
Mutual labels:  attention
Gaft
A Genetic Algorithm Framework in Python
Stars: ✭ 651 (+843.48%)
Mutual labels:  optimization-algorithms
Transformer Tensorflow
TensorFlow implementation of 'Attention Is All You Need (2017. 6)'
Stars: ✭ 319 (+362.32%)
Mutual labels:  attention
Attentive Neural Processes
implementing "recurrent attentive neural processes" to forecast power usage (w. LSTM baseline, MCDropout)
Stars: ✭ 33 (-52.17%)
Mutual labels:  attention
Awesome Fast Attention
list of efficient attention modules
Stars: ✭ 627 (+808.7%)
Mutual labels:  attention
Deeplearning Nlp Models
A small, interpretable codebase containing the re-implementation of a few "deep" NLP models in PyTorch. Colab notebooks to run with GPUs. Models: word2vec, CNNs, transformer, gpt.
Stars: ✭ 64 (-7.25%)
Mutual labels:  attention
Attention Over Attention Tf Qa
Implementation of the AoA model from the paper "Attention-over-Attention Neural Networks for Reading Comprehension"
Stars: ✭ 58 (-15.94%)
Mutual labels:  attention
Text Classification Keras
📚 Text classification library with Keras
Stars: ✭ 53 (-23.19%)
Mutual labels:  attention
Defactonlp
DeFactoNLP: An Automated Fact-checking System that uses Named Entity Recognition, TF-IDF vector comparison and Decomposable Attention models.
Stars: ✭ 30 (-56.52%)
Mutual labels:  attention
Vad
Voice activity detection (VAD) toolkit including DNN, bDNN, LSTM and ACAM based VAD. We also provide our directly recorded dataset.
Stars: ✭ 622 (+801.45%)
Mutual labels:  attention
1-60 of 272 similar projects