
305 open source projects that are alternatives to or similar to concept-based-xai

mllp
The code of AAAI 2020 paper "Transparent Classification with Multilayer Logical Perceptrons and Random Binarization".
Stars: ✭ 15 (-63.41%)
Interpret
Fit interpretable models. Explain black-box machine learning models.
Stars: ✭ 4,352 (+10514.63%)
zennit
Zennit is a high-level Python framework built on PyTorch for explaining and exploring neural networks with attribution methods such as LRP.
Stars: ✭ 57 (+39.02%)
ArenaR
Data generator for Arena - interactive XAI dashboard
Stars: ✭ 28 (-31.71%)
Mutual labels:  interpretability, xai, explainability
hierarchical-dnn-interpretations
Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019)
Stars: ✭ 110 (+168.29%)
adaptive-wavelets
Adaptive, interpretable wavelets across domains (NeurIPS 2021)
Stars: ✭ 58 (+41.46%)
Mutual labels:  interpretability, xai, explainability
deep-explanation-penalization
Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" https://arxiv.org/abs/1909.13584
Stars: ✭ 110 (+168.29%)
awesome-graph-explainability-papers
Papers about explainability of GNNs
Stars: ✭ 153 (+273.17%)
Mutual labels:  explainable-ai, xai, explainability
Transformer-MM-Explainability
[ICCV 2021, Oral] Official PyTorch implementation of "Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers", a novel method to visualize any Transformer-based network. Includes examples for DETR and VQA.
Stars: ✭ 484 (+1080.49%)
ProtoTree
ProtoTrees: Neural Prototype Trees for Interpretable Fine-grained Image Recognition, published at CVPR 2021
Stars: ✭ 47 (+14.63%)
disent
🧶 Modular VAE disentanglement framework for python built with PyTorch Lightning ▸ Including metrics and datasets ▸ With strongly supervised, weakly supervised and unsupervised methods ▸ Easily configured and run with Hydra config ▸ Inspired by disentanglement_lib
Stars: ✭ 41 (+0%)
meg
Molecular Explanation Generator
Stars: ✭ 14 (-65.85%)
Mutual labels:  interpretability, explainable-ai
CARLA
CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms
Stars: ✭ 166 (+304.88%)
Mutual labels:  explainable-ai, explainability
GNNLens2
Visualization tool for Graph Neural Networks
Stars: ✭ 155 (+278.05%)
Mutual labels:  xai, explainability
interpretable-ml
Techniques & resources for training interpretable ML models, explaining ML models, and debugging ML models.
Stars: ✭ 17 (-58.54%)
Mutual labels:  interpretability, xai
xai-iml-sota
Interesting resources related to Explainable Artificial Intelligence, Interpretable Machine Learning, Interactive Machine Learning, Human in Loop and Visual Analytics.
Stars: ✭ 51 (+24.39%)
Mutual labels:  interpretability, xai
mindsdb server
MindsDB server allows you to consume and expose MindsDB workflows over HTTP.
Stars: ✭ 3 (-92.68%)
Mutual labels:  explainable-ai, xai
sage
For calculating global feature importance using Shapley values.
Stars: ✭ 129 (+214.63%)
Mutual labels:  interpretability, explainability
dlime experiments
In this work, we propose a deterministic version of Local Interpretable Model-Agnostic Explanations (LIME); experimental results on three different medical datasets show the superiority of Deterministic Local Interpretable Model-Agnostic Explanations (DLIME).
Stars: ✭ 21 (-48.78%)
Mutual labels:  explainable-ai, xai
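The core idea behind LIME-style explainers (and the deterministic-sampling variant DLIME proposes) can be sketched in a few lines: perturb the input around the point of interest, weight the perturbations by proximity, and fit a weighted linear surrogate whose slope is the local feature effect. The 1-D `model` below and the grid-based (deterministic) sampling are illustrative assumptions, not the library's actual API:

```python
import math

# Black-box model we want to explain locally (illustrative assumption).
def model(x):
    return x * x

def local_linear_explanation(model, x0, half_width=0.5, n=11, kernel_width=0.25):
    """LIME-style local surrogate in 1-D: evaluate the model on a fixed
    grid of perturbations around x0 (deterministic, unlike vanilla LIME's
    random sampling), weight points by an exponential proximity kernel,
    and fit a weighted least-squares line. The slope is the explanation."""
    xs = [x0 - half_width + 2 * half_width * i / (n - 1) for i in range(n)]
    ys = [model(x) for x in xs]
    ws = [math.exp(-((x - x0) ** 2) / kernel_width ** 2) for x in xs]
    sw = sum(ws)
    mx = sum(w * x for w, x in zip(ws, xs)) / sw  # weighted mean of x
    my = sum(w * y for w, y in zip(ws, ys)) / sw  # weighted mean of y
    slope = sum(w * (x - mx) * (y - my) for w, x, y in zip(ws, xs, ys)) \
          / sum(w * (x - mx) ** 2 for w, x in zip(ws, xs))
    return slope

# Near x0 = 1 the local slope of x^2 is 2 (d/dx x^2 = 2x).
print(local_linear_explanation(model, x0=1.0))
```

Because the grid and kernel are fixed, re-running the explainer gives the same answer every time, which is the reproducibility property DLIME is after.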
responsible-ai-toolbox
This project provides responsible AI user interfaces for Fairlearn, interpret-community, and Error Analysis, as well as foundational building blocks that they rely on.
Stars: ✭ 615 (+1400%)
Mutual labels:  explainable-ai, explainability
thermostat
Collection of NLP model explanations and accompanying analysis tools
Stars: ✭ 126 (+207.32%)
Mutual labels:  interpretability, explainability
removal-explanations
A lightweight implementation of removal-based explanations for ML models.
Stars: ✭ 46 (+12.2%)
Mutual labels:  interpretability, explainability
expmrc
ExpMRC: Explainability Evaluation for Machine Reading Comprehension
Stars: ✭ 58 (+41.46%)
Mutual labels:  explainable-ai, xai
ml-fairness-framework
FairPut - Machine Learning Fairness Framework with LightGBM — Explainability, Robustness, Fairness (by @firmai)
Stars: ✭ 59 (+43.9%)
Mutual labels:  explainable-ai, xai
WeFEND-AAAI20
Dataset for the paper "Weak Supervision for Fake News Detection via Reinforcement Learning", published at AAAI 2020.
Stars: ✭ 67 (+63.41%)
ASTRA
Self-training with Weak Supervision (NAACL 2021)
Stars: ✭ 127 (+209.76%)
ALPS 2021
XAI Tutorial for the Explainable AI track in the ALPS winter school 2021
Stars: ✭ 55 (+34.15%)
Mutual labels:  interpretability, explainability
knodle
A PyTorch-based open-source framework that provides methods for improving weakly annotated data and allows researchers to efficiently develop and compare their own methods.
Stars: ✭ 76 (+85.37%)
trove
Weakly supervised medical named entity classification
Stars: ✭ 55 (+34.15%)
fastshap
Fast approximate Shapley values in R
Stars: ✭ 79 (+92.68%)
Mutual labels:  explainable-ai, xai
linguistic-style-transfer-pytorch
Implementation of "Disentangled Representation Learning for Non-Parallel Text Style Transfer" (ACL 2019) in PyTorch
Stars: ✭ 55 (+34.15%)
weasel
Weakly Supervised End-to-End Learning (NeurIPS 2021)
Stars: ✭ 117 (+185.37%)
Learning-From-Rules
Implementation of experiments from the paper "Learning from Rules Generalizing Labeled Exemplars" (ICLR 2020, https://openreview.net/forum?id=SkeuexBtDr)
Stars: ✭ 46 (+12.2%)
diabetes use case
Sample use case for Xavier AI in Healthcare conference: https://www.xavierhealth.org/ai-summit-day2/
Stars: ✭ 22 (-46.34%)
Mutual labels:  interpretability, xai
Awesome Production Machine Learning
A curated list of awesome open source libraries to deploy, monitor, version and scale your machine learning
Stars: ✭ 10,504 (+25519.51%)
Mutual labels:  interpretability, explainability
Shap
A game theoretic approach to explain the output of any machine learning model.
Stars: ✭ 14,917 (+36282.93%)
Mutual labels:  interpretability, explainability
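Several projects above (Shap, sage, fastshap, removal-explanations) build on the same game-theoretic quantity: the Shapley value, i.e. a feature's average marginal contribution over all subsets of the other features. A minimal exact computation for a tiny toy model can make this concrete; the model `f` and the zero baseline here are illustrative assumptions, not the shap library's API:

```python
from itertools import combinations
from math import factorial

# Toy model: an interaction between x0 and x1 plus a main effect of x2.
def f(x):
    return x[0] * x[1] + x[2]

def shapley_values(f, x, baseline):
    """Exact Shapley values: for each feature i, average its marginal
    contribution f(S + {i}) - f(S) over all subsets S of the remaining
    features, replacing absent features with the baseline."""
    n = len(x)

    def value(subset):
        z = [x[i] if i in subset else baseline[i] for i in range(n)]
        return f(z)

    phi = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        total = 0.0
        for k in range(n):
            for s in combinations(others, k):
                # Shapley weight |S|! (n - |S| - 1)! / n!
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += w * (value(set(s) | {i}) - value(set(s)))
        phi.append(total)
    return phi

phi = shapley_values(f, x=[2.0, 3.0, 1.0], baseline=[0.0, 0.0, 0.0])
# Efficiency: the values sum to f(x) - f(baseline) = 7 - 0 = 7,
# with the 6-unit interaction split evenly between x0 and x1.
print(phi)
```

The exact computation is exponential in the number of features, which is why the libraries above use model-specific or sampling-based approximations.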
DisCont
Code for the paper "DisCont: Self-Supervised Visual Attribute Disentanglement using Context Vectors".
Stars: ✭ 13 (-68.29%)
transformers-interpret
Model explainability that works seamlessly with 🤗 transformers. Explain your transformers model in just 2 lines of code.
Stars: ✭ 861 (+2000%)
Mutual labels:  interpretability, explainable-ai
Awesome Machine Learning Interpretability
A curated list of awesome machine learning interpretability resources.
Stars: ✭ 2,404 (+5763.41%)
Mutual labels:  interpretability, xai
wrench
WRENCH: Weak supeRvision bENCHmark
Stars: ✭ 185 (+351.22%)
reef
Automatically labeling training data
Stars: ✭ 102 (+148.78%)
EfficientMORL
EfficientMORL (ICML'21)
Stars: ✭ 22 (-46.34%)
Mutual labels:  vae
language-models
Keras implementations of three language models: character-level RNN, word-level RNN and Sentence VAE (Bowman, Vilnis et al., 2016).
Stars: ✭ 39 (-4.88%)
Mutual labels:  vae
Advances-in-Label-Noise-Learning
A curated (most recent) list of resources for Learning with Noisy Labels
Stars: ✭ 360 (+778.05%)
MCIS wsss
Code for ECCV 2020 paper (oral): Mining Cross-Image Semantics for Weakly Supervised Semantic Segmentation
Stars: ✭ 151 (+268.29%)
datafsm
Machine Learning Finite State Machine Models from Data with Genetic Algorithms
Stars: ✭ 14 (-65.85%)
Mutual labels:  explainable-ai
CS-DisMo
[ICCVW 2021] Rethinking Content and Style: Exploring Bias for Unsupervised Disentanglement
Stars: ✭ 20 (-51.22%)
TreeView
"TreeView - sub-cells simplified" (c). Enable subcells in UITableView with a single drop-in extension. CocoaPod:
Stars: ✭ 54 (+31.71%)
Mutual labels:  concept
pyCeterisParibus
Python library for Ceteris Paribus Plots (What-if plots)
Stars: ✭ 19 (-53.66%)
Mutual labels:  explainable-ai
vae-concrete
Keras implementation of a Variational Autoencoder with a Concrete latent distribution
Stars: ✭ 51 (+24.39%)
Mutual labels:  vae
benchmark VAE
Unifying Variational Autoencoder (VAE) implementations in PyTorch (NeurIPS 2022)
Stars: ✭ 1,211 (+2853.66%)
Mutual labels:  vae
functional-programming-babelfish
A cheat sheet for finding similar concepts and operators in different functional languages
Stars: ✭ 34 (-17.07%)
Mutual labels:  concept
DeFMO
[CVPR 2021] DeFMO: Deblurring and Shape Recovery of Fast Moving Objects
Stars: ✭ 144 (+251.22%)
kernel-mod
NeurIPS 2018. Linear-time model comparison tests.
Stars: ✭ 17 (-58.54%)
Mutual labels:  interpretability
MIDI-VAE
No description or website provided.
Stars: ✭ 56 (+36.59%)
Mutual labels:  vae
spatio-temporal-brain
A Deep Graph Neural Network Architecture for Modelling Spatio-temporal Dynamics in rs-fMRI Data
Stars: ✭ 22 (-46.34%)
Mutual labels:  explainability
self critical vqa
Code for NeurIPS 2019 paper ``Self-Critical Reasoning for Robust Visual Question Answering''
Stars: ✭ 39 (-4.88%)
Mutual labels:  explainable-ai
DeepSSM SysID
Official PyTorch implementation of "Deep State Space Models for Nonlinear System Identification", 2020.
Stars: ✭ 62 (+51.22%)
Mutual labels:  vae
Cleanlab
The standard package for machine learning with noisy labels, finding mislabeled data, and uncertainty quantification. Works with most datasets and models.
Stars: ✭ 2,526 (+6060.98%)
Mutual labels:  weak-supervision
Snorkel
A system for quickly generating training data with weak supervision
Stars: ✭ 4,953 (+11980.49%)
Mutual labels:  weak-supervision
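The weak-supervision pattern that Snorkel popularized (and that wrench, knodle and ASTRA benchmark or extend) starts from hand-written labeling functions that vote or abstain on each example, which are then aggregated into training labels. A minimal sketch of the idea, using a simple majority vote instead of Snorkel's learned label model; the task, function names and rules are all illustrative, not Snorkel's actual API:

```python
# Three hand-written labeling functions for a toy spam task. Each returns
# 1 (spam), 0 (not spam), or None (abstain). Rules are illustrative.
def lf_has_link(text):
    return 1 if "http" in text else None

def lf_all_caps(text):
    return 1 if text.isupper() else None

def lf_greeting(text):
    return 0 if text.lower().startswith(("hi", "hello")) else None

def weak_label(text, lfs):
    """Combine labeling-function votes by majority, ignoring abstains;
    return None when no function fires or the vote is tied."""
    votes = [v for v in (lf(text) for lf in lfs) if v is not None]
    if not votes:
        return None
    ones = sum(votes)
    if ones * 2 == len(votes):
        return None  # tie: leave the example unlabeled
    return 1 if ones * 2 > len(votes) else 0

lfs = [lf_has_link, lf_all_caps, lf_greeting]
print(weak_label("CLICK http://spam.example NOW", lfs))  # -> 1
print(weak_label("hello, lunch tomorrow?", lfs))         # -> 0
```

Snorkel itself replaces the majority vote with a generative label model that estimates each function's accuracy and correlations, producing probabilistic labels for downstream training.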
1-60 of 305 similar projects