
280 open-source projects that are alternatives to, or similar to, free-lunch-saliency

mmn
Moore Machine Networks (MMN): Learning Finite-State Representations of Recurrent Policy Networks
Stars: ✭ 39 (+160%)
Mutual labels:  atari, interpretability
DeepMove
Code for the WWW'18 paper "DeepMove: Predicting Human Mobility with Attentional Recurrent Network"
Stars: ✭ 120 (+700%)
Mutual labels:  attention
tensorflow-chatbot-chinese
Web chatbot | TensorFlow implementation of a seq2seq model with Bahdanau attention and pretrained Word2Vec embeddings
Stars: ✭ 50 (+233.33%)
Mutual labels:  attention
thermostat
Collection of NLP model explanations and accompanying analysis tools
Stars: ✭ 126 (+740%)
Mutual labels:  interpretability
xai-iml-sota
Interesting resources related to Explainable Artificial Intelligence, Interpretable Machine Learning, Interactive Machine Learning, Human-in-the-Loop, and Visual Analytics.
Stars: ✭ 51 (+240%)
Mutual labels:  interpretability
Recurrent-Independent-Mechanisms
Implementation of the paper Recurrent Independent Mechanisms (https://arxiv.org/pdf/1909.10893.pdf)
Stars: ✭ 90 (+500%)
Mutual labels:  attention
breakout-Deep-Q-Network
Reinforcement Learning | TensorFlow implementation of DQN, Dueling DQN, and Double DQN applied to Atari Breakout
Stars: ✭ 69 (+360%)
Mutual labels:  atari
mllp
The code of AAAI 2020 paper "Transparent Classification with Multilayer Logical Perceptrons and Random Binarization".
Stars: ✭ 15 (+0%)
Mutual labels:  interpretability
flow1d
[ICCV 2021 Oral] High-Resolution Optical Flow from 1D Attention and Correlation
Stars: ✭ 91 (+506.67%)
Mutual labels:  attention
seq2seq-pytorch
Sequence to Sequence Models in PyTorch
Stars: ✭ 41 (+173.33%)
Mutual labels:  attention
Cgnl Network.pytorch
Compact Generalized Non-local Network (NIPS 2018)
Stars: ✭ 252 (+1580%)
Mutual labels:  attention
bert attn viz
Visualize BERT's self-attention layers on text classification tasks
Stars: ✭ 41 (+173.33%)
Mutual labels:  attention
hierarchical-dnn-interpretations
Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019)
Stars: ✭ 110 (+633.33%)
Mutual labels:  interpretability
ALPS 2021
XAI Tutorial for the Explainable AI track in the ALPS winter school 2021
Stars: ✭ 55 (+266.67%)
Mutual labels:  interpretability
h-transformer-1d
Implementation of H-Transformer-1D, Hierarchical Attention for Sequence Learning
Stars: ✭ 121 (+706.67%)
Mutual labels:  attention
Im2LaTeX
An implementation of the Show, Attend and Tell paper in TensorFlow, for the OpenAI Im2LaTeX suggested problem
Stars: ✭ 16 (+6.67%)
Mutual labels:  attention
Transformer-MM-Explainability
[ICCV 2021 Oral] Official PyTorch implementation of "Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers", a novel method to visualize any Transformer-based network, including examples for DETR and VQA.
Stars: ✭ 484 (+3126.67%)
Mutual labels:  interpretability
attention-ocr
A PyTorch implementation of attention-based OCR
Stars: ✭ 44 (+193.33%)
Mutual labels:  attention
transformers-interpret
Model explainability that works seamlessly with 🤗 transformers. Explain your transformers model in just 2 lines of code.
Stars: ✭ 861 (+5640%)
Mutual labels:  interpretability
dreyeve
[TPAMI 2018] Predicting the Driver’s Focus of Attention: the DR(eye)VE Project. A deep neural network learnt to reproduce the human driver focus of attention (FoA) in a variety of real-world driving scenarios.
Stars: ✭ 88 (+486.67%)
Mutual labels:  attention
adversarial-robustness-public
Code for AAAI 2018 accepted paper: "Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing their Input Gradients"
Stars: ✭ 49 (+226.67%)
Mutual labels:  interpretability
MGAN
Exploiting Coarse-to-Fine Task Transfer for Aspect-level Sentiment Classification (AAAI'19)
Stars: ✭ 44 (+193.33%)
Mutual labels:  attention
TRAR-VQA
[ICCV 2021] TRAR: Routing the Attention Spans in Transformers for Visual Question Answering -- Official Implementation
Stars: ✭ 49 (+226.67%)
Mutual labels:  attention
concept-based-xai
Library implementing state-of-the-art Concept-based and Disentanglement Learning methods for Explainable AI
Stars: ✭ 41 (+173.33%)
Mutual labels:  interpretability
Pytorch Seq2seq
Tutorials on implementing a few sequence-to-sequence (seq2seq) models with PyTorch and TorchText.
Stars: ✭ 3,418 (+22686.67%)
Mutual labels:  attention
Long Range Arena
Long Range Arena for Benchmarking Efficient Transformers
Stars: ✭ 235 (+1466.67%)
Mutual labels:  attention
natural-language-joint-query-search
Search photos on Unsplash with OpenAI's CLIP model, supporting joint image+text queries and attention visualization.
Stars: ✭ 143 (+853.33%)
Mutual labels:  attention
RecycleNet
Attentional Learning of Trash Classification
Stars: ✭ 23 (+53.33%)
Mutual labels:  attention
Fruit-API
A Universal Deep Reinforcement Learning Framework
Stars: ✭ 61 (+306.67%)
Mutual labels:  atari
ConceptBottleneck
Concept Bottleneck Models, ICML 2020
Stars: ✭ 91 (+506.67%)
Mutual labels:  interpretability
AiR
Official Repository for ECCV 2020 paper "AiR: Attention with Reasoning Capability"
Stars: ✭ 41 (+173.33%)
Mutual labels:  attention
meg
Molecular Explanation Generator
Stars: ✭ 14 (-6.67%)
Mutual labels:  interpretability
keras-utility-layer-collection
Collection of custom layers and utility functions for Keras that are missing from the main framework.
Stars: ✭ 63 (+320%)
Mutual labels:  attention
6502.Net
A .Net-based Cross-Assembler for Several 8-Bit Microprocessors
Stars: ✭ 44 (+193.33%)
Mutual labels:  atari
pytorch-attention-augmented-convolution
A PyTorch implementation of Attention Augmented Convolutional Networks (https://arxiv.org/abs/1904.09925)
Stars: ✭ 20 (+33.33%)
Mutual labels:  attention
glcapsnet
Global-Local Capsule Network (GLCapsNet) is a capsule-based architecture able to provide context-based eye fixation prediction for several autonomous driving scenarios, while offering interpretability both globally and locally.
Stars: ✭ 33 (+120%)
Mutual labels:  interpretability
chinese ancient poetry
seq2seq, attention, TensorFlow, TextRank, context
Stars: ✭ 30 (+100%)
Mutual labels:  attention
interpretable-ml
Techniques & resources for training interpretable ML models, explaining ML models, and debugging ML models.
Stars: ✭ 17 (+13.33%)
Mutual labels:  interpretability
kernel-mod
NeurIPS 2018. Linear-time model comparison tests.
Stars: ✭ 17 (+13.33%)
Mutual labels:  interpretability
fastbasic
FastBasic - Fast BASIC interpreter for the Atari 8-bit computers
Stars: ✭ 108 (+620%)
Mutual labels:  atari
adaptive-wavelets
Adaptive, interpretable wavelets across domains (NeurIPS 2021)
Stars: ✭ 58 (+286.67%)
Mutual labels:  interpretability
DQN-pytorch
A PyTorch implementation of Human-Level Control through Deep Reinforcement Learning
Stars: ✭ 23 (+53.33%)
Mutual labels:  atari
how attentive are gats
Code for the paper "How Attentive are Graph Attention Networks?" (ICLR'2022)
Stars: ✭ 200 (+1233.33%)
Mutual labels:  attention
DeepLearningReading
Deep Learning and Machine Learning mini-projects. Current project: DeepMind Attentive Reader (rc-data)
Stars: ✭ 78 (+420%)
Mutual labels:  attention
awesome-list
Awesome Lists of retrocomputing resources (6502, Apple 2, Atari, ...)
Stars: ✭ 38 (+153.33%)
Mutual labels:  atari
External-Attention-pytorch
🍀 PyTorch implementation of various attention mechanisms, MLPs, re-parameterization, and convolutions, helpful for further understanding the papers. ⭐⭐⭐
Stars: ✭ 7,344 (+48860%)
Mutual labels:  attention
lstm-attention
Attention-based bidirectional LSTM for Classification Task (ICASSP)
Stars: ✭ 87 (+480%)
Mutual labels:  attention
ArenaR
Data generator for Arena - interactive XAI dashboard
Stars: ✭ 28 (+86.67%)
Mutual labels:  interpretability
atari-leaderboard
A leaderboard of human and machine performance on the Arcade Learning Environment (ALE).
Stars: ✭ 22 (+46.67%)
Mutual labels:  atari
EBIM-NLI
Enhanced BiLSTM Inference Model for Natural Language Inference
Stars: ✭ 24 (+60%)
Mutual labels:  attention
Astgcn
⚠️ [Deprecated] No longer maintained; please use the code at https://github.com/guoshnBJTU/ASTGCN-r-pytorch
Stars: ✭ 246 (+1540%)
Mutual labels:  attention
EgoCNN
Code for "Distributed, Egocentric Representations of Graphs for Detecting Critical Structures" (ICML 2019)
Stars: ✭ 16 (+6.67%)
Mutual labels:  interpretability
Ai law
All kinds of baseline models for long text classification (text categorization)
Stars: ✭ 243 (+1520%)
Mutual labels:  attention
AttnSleep
[IEEE TNSRE] "An Attention-based Deep Learning Approach for Sleep Stage Classification with Single-Channel EEG"
Stars: ✭ 76 (+406.67%)
Mutual labels:  attention
salvador
A free, open-source compressor for the ZX0 format
Stars: ✭ 35 (+133.33%)
Mutual labels:  atari
learningspoons
NLP lecture notes and source code
Stars: ✭ 29 (+93.33%)
Mutual labels:  attention
retrore
A curated list of original and reverse-engineered vintage 6502 game sourcecode.
Stars: ✭ 22 (+46.67%)
Mutual labels:  atari
reasoning attention
Unofficial implementations of attention models on the SNLI dataset
Stars: ✭ 34 (+126.67%)
Mutual labels:  attention
Jddc solution 4th
4th-place solution for the 2018 JDDC competition
Stars: ✭ 235 (+1466.67%)
Mutual labels:  attention
Virtual-Jaguar-Rx
Virtual Jaguar, an Atari Jaguar emulator, with integrated debugger
Stars: ✭ 35 (+133.33%)
Mutual labels:  atari
1-60 of 280 similar projects