79 open-source projects that are alternatives to or similar to Alibi

meg
Molecular Explanation Generator
Stars: ✭ 14 (-98.48%)
Mutual labels:  interpretability
ALPS 2021
XAI Tutorial for the Explainable AI track in the ALPS winter school 2021
Stars: ✭ 55 (-94.05%)
Mutual labels:  interpretability
neuron-importance-zsl
[ECCV 2018] code for Choose Your Neuron: Incorporating Domain Knowledge Through Neuron Importance
Stars: ✭ 56 (-93.94%)
Mutual labels:  interpretability
interpretable-ml
Techniques & resources for training interpretable ML models, explaining ML models, and debugging ML models.
Stars: ✭ 17 (-98.16%)
Mutual labels:  interpretability
Awesome Explainable Ai
A collection of research materials on explainable AI/ML
Stars: ✭ 186 (-79.87%)
Mutual labels:  interpretability
removal-explanations
A lightweight implementation of removal-based explanations for ML models.
Stars: ✭ 46 (-95.02%)
Mutual labels:  interpretability
ArenaR
Data generator for Arena - interactive XAI dashboard
Stars: ✭ 28 (-96.97%)
Mutual labels:  interpretability
Tcav
Code for the TCAV ML interpretability project
Stars: ✭ 442 (-52.16%)
Mutual labels:  interpretability
Torch Cam
Class activation maps for your PyTorch models (CAM, Grad-CAM, Grad-CAM++, Smooth Grad-CAM++, Score-CAM, SS-CAM, IS-CAM, XGrad-CAM)
Stars: ✭ 249 (-73.05%)
Mutual labels:  interpretability
zennit
Zennit is a high-level Python framework built on PyTorch for explaining and exploring neural networks with attribution methods such as LRP.
Stars: ✭ 57 (-93.83%)
Mutual labels:  interpretability
transformers-interpret
Model explainability that works seamlessly with 🤗 transformers. Explain your transformers model in just 2 lines of code.
Stars: ✭ 861 (-6.82%)
Mutual labels:  interpretability
Shap
A game theoretic approach to explain the output of any machine learning model.
Stars: ✭ 14,917 (+1514.39%)
Mutual labels:  interpretability
Facet
Human-explainable AI.
Stars: ✭ 269 (-70.89%)
Mutual labels:  interpretability
mmn
Moore Machine Networks (MMN): Learning Finite-State Representations of Recurrent Policy Networks
Stars: ✭ 39 (-95.78%)
Mutual labels:  interpretability
Interpretable machine learning with python
Examples of techniques for training interpretable ML models, explaining ML models, and debugging ML models for accuracy, discrimination, and security.
Stars: ✭ 530 (-42.64%)
Mutual labels:  interpretability
Transformer-MM-Explainability
[ICCV 2021, Oral] Official PyTorch implementation of Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers, a novel method to visualize any Transformer-based network. Includes examples for DETR and VQA.
Stars: ✭ 484 (-47.62%)
Mutual labels:  interpretability
shapeshop
Towards Understanding Deep Learning Representations via Interactive Experimentation
Stars: ✭ 16 (-98.27%)
Mutual labels:  interpretability
concept-based-xai
Library implementing state-of-the-art Concept-based and Disentanglement Learning methods for Explainable AI
Stars: ✭ 41 (-95.56%)
Mutual labels:  interpretability
Awesome Federated Learning
Federated Learning Library: https://fedml.ai
Stars: ✭ 624 (-32.47%)
Mutual labels:  interpretability
adaptive-wavelets
Adaptive, interpretable wavelets across domains (NeurIPS 2021)
Stars: ✭ 58 (-93.72%)
Mutual labels:  interpretability
sage
For calculating global feature importance using Shapley values.
Stars: ✭ 129 (-86.04%)
Mutual labels:  interpretability
Captum
Model interpretability and understanding for PyTorch
Stars: ✭ 2,830 (+206.28%)
Mutual labels:  interpretability
Mli Resources
H2O.ai Machine Learning Interpretability Resources
Stars: ✭ 428 (-53.68%)
Mutual labels:  interpretability
Pyss3
A Python package implementing a new machine learning model for text classification with visualization tools for Explainable AI
Stars: ✭ 191 (-79.33%)
Mutual labels:  interpretability
Visualizing-CNNs-for-monocular-depth-estimation
Official implementation of "Visualization of Convolutional Neural Networks for Monocular Depth Estimation"
Stars: ✭ 120 (-87.01%)
Mutual labels:  interpretability
free-lunch-saliency
Code for "Free-Lunch Saliency via Attention in Atari Agents"
Stars: ✭ 15 (-98.38%)
Mutual labels:  interpretability
Modelstudio
📍 Interactive Studio for Explanatory Model Analysis
Stars: ✭ 163 (-82.36%)
Mutual labels:  interpretability
Interpret
Fit interpretable models. Explain blackbox machine learning.
Stars: ✭ 4,352 (+371.00%)
Mutual labels:  interpretability
ConceptBottleneck
Concept Bottleneck Models, ICML 2020
Stars: ✭ 91 (-90.15%)
Mutual labels:  interpretability
Xai resources
Interesting resources related to XAI (Explainable Artificial Intelligence)
Stars: ✭ 553 (-40.15%)
Mutual labels:  interpretability
mllp
The code of AAAI 2020 paper "Transparent Classification with Multilayer Logical Perceptrons and Random Binarization".
Stars: ✭ 15 (-98.38%)
Mutual labels:  interpretability
diabetes use case
Sample use case for Xavier AI in Healthcare conference: https://www.xavierhealth.org/ai-summit-day2/
Stars: ✭ 22 (-97.62%)
Mutual labels:  interpretability
hierarchical-dnn-interpretations
Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019)
Stars: ✭ 110 (-88.1%)
Mutual labels:  interpretability
Ad examples
A collection of anomaly detection methods (i.i.d./point-based, graph, and time series), including active learning for anomaly detection/discovery, Bayesian rule mining, and descriptions for diversity/explanation/interpretability. Also analyzes incorporating label feedback with ensemble and tree-based detectors, and includes adversarial attacks with a Graph Convolutional Network.
Stars: ✭ 641 (-30.63%)
Mutual labels:  interpretability
glcapsnet
Global-Local Capsule Network (GLCapsNet) is a capsule-based architecture able to provide context-based eye fixation prediction for several autonomous driving scenarios, while offering interpretability both globally and locally.
Stars: ✭ 33 (-96.43%)
Mutual labels:  interpretability
SPINE
Code for SPINE - Sparse Interpretable Neural Embeddings. Jhamtani H.*, Pruthi D.*, Subramanian A.*, Berg-Kirkpatrick T., Hovy E. AAAI 2018
Stars: ✭ 44 (-95.24%)
Mutual labels:  interpretability
adversarial-robustness-public
Code for AAAI 2018 accepted paper: "Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing their Input Gradients"
Stars: ✭ 49 (-94.7%)
Mutual labels:  interpretability
Deeplift
Public-facing DeepLift repo
Stars: ✭ 512 (-44.59%)
Mutual labels:  interpretability
EgoCNN
Code for "Distributed, Egocentric Representations of Graphs for Detecting Critical Structures" (ICML 2019)
Stars: ✭ 16 (-98.27%)
Mutual labels:  interpretability
knowledge-neurons
A library for finding knowledge neurons in pretrained transformer models.
Stars: ✭ 72 (-92.21%)
Mutual labels:  interpretability
xai-iml-sota
Interesting resources related to Explainable Artificial Intelligence, Interpretable Machine Learning, Interactive Machine Learning, Human-in-the-Loop, and Visual Analytics.
Stars: ✭ 51 (-94.48%)
Mutual labels:  interpretability
Dalex
moDel Agnostic Language for Exploration and eXplanation
Stars: ✭ 795 (-13.96%)
Mutual labels:  interpretability
kernel-mod
NeurIPS 2018. Linear-time model comparison tests.
Stars: ✭ 17 (-98.16%)
Mutual labels:  interpretability
summit
🏔️ Summit: Scaling Deep Learning Interpretability by Visualizing Activation and Attribution Summarizations
Stars: ✭ 95 (-89.72%)
Mutual labels:  interpretability
thermostat
Collection of NLP model explanations and accompanying analysis tools
Stars: ✭ 126 (-86.36%)
Mutual labels:  interpretability
Lucid
A collection of infrastructure and tools for research in neural network interpretability.
Stars: ✭ 4,344 (+370.13%)
Mutual labels:  interpretability
Aspect Based Sentiment Analysis
💭 Aspect-Based-Sentiment-Analysis: Transformer & Explainable ML (TensorFlow)
Stars: ✭ 206 (-77.71%)
Mutual labels:  interpretability
yggdrasil-decision-forests
A collection of state-of-the-art algorithms for the training, serving and interpretation of Decision Forest models.
Stars: ✭ 156 (-83.12%)
Mutual labels:  interpretability
Explainx
Explainable AI framework for data scientists. Explain & debug any blackbox machine learning model with a single line of code.
Stars: ✭ 196 (-78.79%)
Mutual labels:  interpretability
Xai
XAI - An eXplainability toolbox for machine learning
Stars: ✭ 596 (-35.5%)
Mutual labels:  interpretability
Imodels
Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling (sklearn-compatible).
Stars: ✭ 194 (-79%)
Mutual labels:  interpretability
ProtoTree
ProtoTrees: Neural Prototype Trees for Interpretable Fine-grained Image Recognition, published at CVPR 2021
Stars: ✭ 47 (-94.91%)
Mutual labels:  interpretability
Awesome Machine Learning Interpretability
A curated list of awesome machine learning interpretability resources.
Stars: ✭ 2,404 (+160.17%)
Mutual labels:  interpretability
Neural Backed Decision Trees
Making decision trees competitive with neural networks on CIFAR10, CIFAR100, TinyImagenet200, Imagenet
Stars: ✭ 411 (-55.52%)
Mutual labels:  interpretability
deep-explanation-penalization
Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" https://arxiv.org/abs/1909.13584
Stars: ✭ 110 (-88.1%)
Mutual labels:  interpretability
Grad Cam
[ICCV 2017] Torch code for Grad-CAM
Stars: ✭ 891 (-3.57%)
Mutual labels:  interpretability
Tf Explain
Interpretability methods for tf.keras models with TensorFlow 2.x
Stars: ✭ 780 (-15.58%)
Mutual labels:  interpretability
Flashtorch
Visualization toolkit for neural networks in PyTorch!
Stars: ✭ 561 (-39.29%)
Mutual labels:  interpretability
Awesome deep learning interpretability
Highly cited and top-conference papers from recent years on the interpretability of deep neural network models (with code)
Stars: ✭ 401 (-56.6%)
Mutual labels:  interpretability
partial dependence
Python package to visualize and cluster partial dependence.
Stars: ✭ 23 (-97.51%)
Mutual labels:  interpretability
1-60 of 79 similar projects