
101 Open source projects that are alternatives of or similar to Interpretability By Parts

DCGAN-CelebA-PyTorch-CPP
DCGAN Implementation using PyTorch in both C++ and Python
Stars: ✭ 14 (-84.09%)
Mutual labels:  celeba
summit
🏔️ Summit: Scaling Deep Learning Interpretability by Visualizing Activation and Attribution Summarizations
Stars: ✭ 95 (+7.95%)
Mutual labels:  interpretability
Interpretable machine learning with python
Examples of techniques for training interpretable ML models, explaining ML models, and debugging ML models for accuracy, discrimination, and security.
Stars: ✭ 530 (+502.27%)
Mutual labels:  interpretability
Beta Vae
Pytorch implementation of β-VAE
Stars: ✭ 326 (+270.45%)
Mutual labels:  celeba
deep-explanation-penalization
Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" https://arxiv.org/abs/1909.13584
Stars: ✭ 110 (+25%)
Mutual labels:  interpretability
Awesome Federated Learning
Federated Learning Library: https://fedml.ai
Stars: ✭ 624 (+609.09%)
Mutual labels:  interpretability
SPINE
Code for SPINE - Sparse Interpretable Neural Embeddings. Jhamtani H.*, Pruthi D.*, Subramanian A.*, Berg-Kirkpatrick T., Hovy E. AAAI 2018
Stars: ✭ 44 (-50%)
Mutual labels:  interpretability
Contrastiveexplanation
Contrastive Explanation (Foil Trees), developed at TNO/Utrecht University
Stars: ✭ 36 (-59.09%)
Mutual labels:  interpretability
ProtoTree
ProtoTrees: Neural Prototype Trees for Interpretable Fine-grained Image Recognition, published at CVPR2021
Stars: ✭ 47 (-46.59%)
Mutual labels:  interpretability
Lucid
A collection of infrastructure and tools for research in neural network interpretability.
Stars: ✭ 4,344 (+4836.36%)
Mutual labels:  interpretability
Interpret
Fit interpretable models. Explain blackbox machine learning.
Stars: ✭ 4,352 (+4845.45%)
Mutual labels:  interpretability
ConceptBottleneck
Concept Bottleneck Models, ICML 2020
Stars: ✭ 91 (+3.41%)
Mutual labels:  interpretability
Tf Explain
Interpretability methods for tf.keras models with TensorFlow 2.x
Stars: ✭ 780 (+786.36%)
Mutual labels:  interpretability
Alae
[CVPR2020] Adversarial Latent Autoencoders
Stars: ✭ 3,178 (+3511.36%)
Mutual labels:  celeba
Text nn
Text classification models. Used as a submodule in other projects.
Stars: ✭ 55 (-37.5%)
Mutual labels:  interpretability
diabetes use case
Sample use case for Xavier AI in Healthcare conference: https://www.xavierhealth.org/ai-summit-day2/
Stars: ✭ 22 (-75%)
Mutual labels:  interpretability
Flashtorch
Visualization toolkit for neural networks in PyTorch!
Stars: ✭ 561 (+537.5%)
Mutual labels:  interpretability
knowledge-neurons
A library for finding knowledge neurons in pretrained transformer models.
Stars: ✭ 72 (-18.18%)
Mutual labels:  interpretability
Celebamask Hq
A large-scale face dataset for face parsing, recognition, generation and editing.
Stars: ✭ 1,156 (+1213.64%)
Mutual labels:  celeba
yggdrasil-decision-forests
A collection of state-of-the-art algorithms for the training, serving and interpretation of Decision Forest models.
Stars: ✭ 156 (+77.27%)
Mutual labels:  interpretability
Tf.gans Comparison
Implementations of (theoretical) generative adversarial networks and comparison without cherry-picking
Stars: ✭ 477 (+442.05%)
Mutual labels:  celeba
style-vae
Implementation of VAE and StyleGAN architectures achieving state-of-the-art reconstruction
Stars: ✭ 25 (-71.59%)
Mutual labels:  celeba
Alibi
Algorithms for monitoring and explaining machine learning models
Stars: ✭ 924 (+950%)
Mutual labels:  interpretability
free-lunch-saliency
Code for "Free-Lunch Saliency via Attention in Atari Agents"
Stars: ✭ 15 (-82.95%)
Mutual labels:  interpretability
Neural Backed Decision Trees
Making decision trees competitive with neural networks on CIFAR10, CIFAR100, TinyImageNet200, and ImageNet
Stars: ✭ 411 (+367.05%)
Mutual labels:  interpretability
Pytorch Mnist Celeba Gan Dcgan
Pytorch implementation of Generative Adversarial Networks (GAN) and Deep Convolutional Generative Adversarial Networks (DCGAN) for MNIST and CelebA datasets
Stars: ✭ 363 (+312.5%)
Mutual labels:  celeba
interpretable-ml
Techniques & resources for training interpretable ML models, explaining ML models, and debugging ML models.
Stars: ✭ 17 (-80.68%)
Mutual labels:  interpretability
Dalex
moDel Agnostic Language for Exploration and eXplanation
Stars: ✭ 795 (+803.41%)
Mutual labels:  interpretability
Pycadl
Python package with source code from the course "Creative Applications of Deep Learning w/ TensorFlow"
Stars: ✭ 356 (+304.55%)
Mutual labels:  celeba
Adversarial Explainable Ai
💡 A curated list of adversarial attacks on model explanations
Stars: ✭ 56 (-36.36%)
Mutual labels:  interpretability
Pytorch Mnist Celeba Cgan Cdcgan
Pytorch implementation of conditional Generative Adversarial Networks (cGAN) and conditional Deep Convolutional Generative Adversarial Networks (cDCGAN) for the MNIST and CelebA datasets
Stars: ✭ 290 (+229.55%)
Mutual labels:  celeba
Ad examples
A collection of anomaly detection methods (IID/point-based, graph, and time series), including active learning for anomaly detection/discovery, Bayesian rule mining, and description for diversity/explanation/interpretability. Includes analysis of incorporating label feedback with ensemble and tree-based detectors, as well as adversarial attacks with a Graph Convolutional Network.
Stars: ✭ 641 (+628.41%)
Mutual labels:  interpretability
Facet
Human-explainable AI.
Stars: ✭ 269 (+205.68%)
Mutual labels:  interpretability
Cnn Interpretability
🏥 Visualizing Convolutional Networks for MRI-based Diagnosis of Alzheimer’s Disease
Stars: ✭ 68 (-22.73%)
Mutual labels:  interpretability
Tensorflow DCGAN
Study-friendly implementation of DCGAN in TensorFlow
Stars: ✭ 22 (-75%)
Mutual labels:  celeba
Xai
XAI - An eXplainability toolbox for machine learning
Stars: ✭ 596 (+577.27%)
Mutual labels:  interpretability
removal-explanations
A lightweight implementation of removal-based explanations for ML models.
Stars: ✭ 46 (-47.73%)
Mutual labels:  interpretability
Trelawney
General Interpretability Package
Stars: ✭ 55 (-37.5%)
Mutual labels:  interpretability
shapeshop
Towards Understanding Deep Learning Representations via Interactive Experimentation
Stars: ✭ 16 (-81.82%)
Mutual labels:  interpretability
Xai resources
Interesting resources related to XAI (Explainable Artificial Intelligence)
Stars: ✭ 553 (+528.41%)
Mutual labels:  interpretability
neuron-importance-zsl
[ECCV 2018] code for Choose Your Neuron: Incorporating Domain Knowledge Through Neuron Importance
Stars: ✭ 56 (-36.36%)
Mutual labels:  interpretability
Celeba Hq Modified
Modified h5tool.py to make it easier for users to obtain CelebA-HQ
Stars: ✭ 84 (-4.55%)
Mutual labels:  celeba
sage
For calculating global feature importance using Shapley values.
Stars: ✭ 129 (+46.59%)
Mutual labels:  interpretability
Deeplift
Public-facing DeepLIFT repository
Stars: ✭ 512 (+481.82%)
Mutual labels:  interpretability
zennit
Zennit is a high-level Python framework built on PyTorch for explaining and exploring neural networks with attribution methods such as LRP.
Stars: ✭ 57 (-35.23%)
Mutual labels:  interpretability
Symbolic Metamodeling
Codebase for "Demystifying Black-box Models with Symbolic Metamodels", NeurIPS 2019.
Stars: ✭ 29 (-67.05%)
Mutual labels:  interpretability
Visualizing-CNNs-for-monocular-depth-estimation
Official implementation of "Visualization of Convolutional Neural Networks for Monocular Depth Estimation"
Stars: ✭ 120 (+36.36%)
Mutual labels:  interpretability
Tcav
Code for the TCAV ML interpretability project
Stars: ✭ 442 (+402.27%)
Mutual labels:  interpretability
gan-error-avoidance
Learning to Avoid Errors in GANs by Input Space Manipulation (Code for paper)
Stars: ✭ 23 (-73.86%)
Mutual labels:  celeba
Awesome Production Machine Learning
A curated list of awesome open-source libraries to deploy, monitor, version, and scale your machine learning models
Stars: ✭ 10,504 (+11836.36%)
Mutual labels:  interpretability
partial dependence
Python package to visualize and cluster partial dependence.
Stars: ✭ 23 (-73.86%)
Mutual labels:  interpretability
Mli Resources
H2O.ai Machine Learning Interpretability Resources
Stars: ✭ 428 (+386.36%)
Mutual labels:  interpretability
transformers-interpret
Model explainability that works seamlessly with 🤗 transformers. Explain your transformers model in just 2 lines of code.
Stars: ✭ 861 (+878.41%)
Mutual labels:  interpretability
Began Tensorflow
Tensorflow implementation of "BEGAN: Boundary Equilibrium Generative Adversarial Networks"
Stars: ✭ 904 (+927.27%)
Mutual labels:  celeba
Awesome deep learning interpretability
Highly cited and top-conference papers from recent years on the interpretability of deep neural network models (with code)
Stars: ✭ 401 (+355.68%)
Mutual labels:  interpretability
Cxplain
Causal Explanation (CXPlain) is a method for explaining the predictions of any machine-learning model.
Stars: ✭ 84 (-4.55%)
Mutual labels:  interpretability
Reverse Engineering Neural Networks
A collection of tools for reverse engineering neural networks.
Stars: ✭ 78 (-11.36%)
Mutual labels:  interpretability
Athena
Automatic equation building and curve fitting. Runs on TensorFlow. Built for academia and research.
Stars: ✭ 57 (-35.23%)
Mutual labels:  interpretability
Grad Cam
[ICCV 2017] Torch code for Grad-CAM
Stars: ✭ 891 (+912.5%)
Mutual labels:  interpretability
Disentangling Vae
Experiments for understanding disentanglement in VAE latent representations
Stars: ✭ 398 (+352.27%)
Mutual labels:  celeba
Showing 1–60 of 101 similar projects