summit: 🏔️ Scaling Deep Learning Interpretability by Visualizing Activation and Attribution Summarizations
Stars: ✭ 95 (+7.95%)
Interpretable machine learning with python: Examples of techniques for training interpretable ML models, explaining ML models, and debugging ML models for accuracy, discrimination, and security.
Stars: ✭ 530 (+502.27%)
Beta Vae: PyTorch implementation of β-VAE
Stars: ✭ 326 (+270.45%)
deep-explanation-penalization: Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" (https://arxiv.org/abs/1909.13584)
Stars: ✭ 110 (+25%)
SPINE: Code for SPINE - Sparse Interpretable Neural Embeddings. Jhamtani H.*, Pruthi D.*, Subramanian A.*, Berg-Kirkpatrick T., Hovy E. AAAI 2018
Stars: ✭ 44 (-50%)
Contrastiveexplanation: Contrastive Explanation (Foil Trees), developed at TNO/Utrecht University
Stars: ✭ 36 (-59.09%)
ProtoTree: Neural Prototype Trees for Interpretable Fine-grained Image Recognition, published at CVPR 2021
Stars: ✭ 47 (-46.59%)
Lucid: A collection of infrastructure and tools for research in neural network interpretability.
Stars: ✭ 4,344 (+4836.36%)
Interpret: Fit interpretable models. Explain black-box machine learning.
Stars: ✭ 4,352 (+4845.45%)
Tf Explain: Interpretability methods for tf.keras models with TensorFlow 2.x
Stars: ✭ 780 (+786.36%)
Alae: [CVPR 2020] Adversarial Latent Autoencoders
Stars: ✭ 3,178 (+3511.36%)
Text nn: Text classification models. Used as a submodule for other projects.
Stars: ✭ 55 (-37.5%)
diabetes use case: Sample use case for the Xavier AI in Healthcare conference: https://www.xavierhealth.org/ai-summit-day2/
Stars: ✭ 22 (-75%)
Flashtorch: Visualization toolkit for neural networks in PyTorch!
Stars: ✭ 561 (+537.5%)
knowledge-neurons: A library for finding knowledge neurons in pretrained transformer models.
Stars: ✭ 72 (-18.18%)
Celebamask Hq: A large-scale face dataset for face parsing, recognition, generation, and editing.
Stars: ✭ 1,156 (+1213.64%)
yggdrasil-decision-forests: A collection of state-of-the-art algorithms for the training, serving, and interpretation of Decision Forest models.
Stars: ✭ 156 (+77.27%)
Tf.gans Comparison: Implementations of (theoretical) generative adversarial networks and comparison without cherry-picking
Stars: ✭ 477 (+442.05%)
style-vae: Implementation of VAE and Style-GAN architecture achieving state-of-the-art reconstruction
Stars: ✭ 25 (-71.59%)
Alibi: Algorithms for monitoring and explaining machine learning models
Stars: ✭ 924 (+950%)
free-lunch-saliency: Code for "Free-Lunch Saliency via Attention in Atari Agents"
Stars: ✭ 15 (-82.95%)
Neural Backed Decision Trees: Making decision trees competitive with neural networks on CIFAR10, CIFAR100, TinyImagenet200, and ImageNet
Stars: ✭ 411 (+367.05%)
Pytorch Mnist Celeba Gan Dcgan: PyTorch implementation of Generative Adversarial Networks (GAN) and Deep Convolutional Generative Adversarial Networks (DCGAN) for the MNIST and CelebA datasets
Stars: ✭ 363 (+312.5%)
interpretable-ml: Techniques & resources for training interpretable ML models, explaining ML models, and debugging ML models.
Stars: ✭ 17 (-80.68%)
Dalex: moDel Agnostic Language for Exploration and eXplanation
Stars: ✭ 795 (+803.41%)
Pycadl: Python package with source code from the course "Creative Applications of Deep Learning w/ TensorFlow"
Stars: ✭ 356 (+304.55%)
Pytorch Mnist Celeba Cgan Cdcgan: PyTorch implementation of conditional Generative Adversarial Networks (cGAN) and conditional Deep Convolutional Generative Adversarial Networks (cDCGAN) for the MNIST dataset
Stars: ✭ 290 (+229.55%)
Ad examples: A collection of anomaly detection methods (i.i.d./point-based, graph, and time series), including active learning for anomaly detection/discovery, Bayesian rule mining, and description for diversity/explanation/interpretability. Also analyzes incorporating label feedback with ensemble and tree-based detectors, and includes adversarial attacks with a Graph Convolutional Network.
Stars: ✭ 641 (+628.41%)
Facet: Human-explainable AI.
Stars: ✭ 269 (+205.68%)
Cnn Interpretability: 🏥 Visualizing Convolutional Networks for MRI-based Diagnosis of Alzheimer's Disease
Stars: ✭ 68 (-22.73%)
Tensorflow DCGAN: Study-friendly implementation of DCGAN in TensorFlow
Stars: ✭ 22 (-75%)
Xai: An eXplainability toolbox for machine learning
Stars: ✭ 596 (+577.27%)
removal-explanations: A lightweight implementation of removal-based explanations for ML models.
Stars: ✭ 46 (-47.73%)
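Removal-based explanations share one core idea: score a feature by how much the model's prediction changes when that feature is "removed", typically by replacing it with a baseline value. A minimal, library-free sketch of that idea (the toy linear model and zero baseline below are illustrative assumptions, not the repo's API):

```python
def removal_scores(predict, x, baseline):
    """Score each feature by the prediction drop when it is replaced
    by its baseline value (a simple removal-based explanation)."""
    full = predict(x)
    scores = []
    for i in range(len(x)):
        x_removed = list(x)
        x_removed[i] = baseline[i]  # "remove" feature i
        scores.append(full - predict(x_removed))
    return scores

# Toy linear model: prediction = 2*x0 + 0*x1 + 1*x2
predict = lambda v: 2 * v[0] + 0 * v[1] + 1 * v[2]
print(removal_scores(predict, [1.0, 5.0, 3.0], [0.0, 0.0, 0.0]))
# → [2.0, 0.0, 3.0]  (x1 has zero weight, so removing it changes nothing)
```

Real libraries generalize this by varying how features are removed (marginalizing, conditioning) and how removals are aggregated.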
Trelawney: General Interpretability Package
Stars: ✭ 55 (-37.5%)
shapeshop: Towards Understanding Deep Learning Representations via Interactive Experimentation
Stars: ✭ 16 (-81.82%)
Xai resources: Interesting resources related to XAI (Explainable Artificial Intelligence)
Stars: ✭ 553 (+528.41%)
neuron-importance-zsl: [ECCV 2018] Code for "Choose Your Neuron: Incorporating Domain Knowledge Through Neuron Importance"
Stars: ✭ 56 (-36.36%)
Celeba Hq Modified: Modified h5tool.py to make getting CelebA-HQ easier
Stars: ✭ 84 (-4.55%)
sage: Calculates global feature importance using Shapley values.
Stars: ✭ 129 (+46.59%)
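A Shapley value credits each feature with its marginal contribution averaged over all orderings of the features. For intuition only, here is an exact brute-force computation on a tiny cooperative game (exponential in the number of features; sage itself uses sampling-based estimators, and the toy value function below is an illustrative assumption):

```python
from itertools import permutations

def shapley_values(value, n_features):
    """Exact Shapley values: average each feature's marginal
    contribution value(S ∪ {f}) - value(S) over all orderings.
    value() maps a frozenset of feature indices to a payoff."""
    totals = [0.0] * n_features
    orders = list(permutations(range(n_features)))
    for order in orders:
        coalition = frozenset()
        for f in order:
            with_f = coalition | {f}
            totals[f] += value(with_f) - value(coalition)
            coalition = with_f
    return [t / len(orders) for t in totals]

# Toy game: features 0 and 1 are only useful together; feature 2 is inert.
def value(S):
    return 1.0 if {0, 1} <= S else 0.0

print(shapley_values(value, 3))  # → [0.5, 0.5, 0.0]
```

The symmetric split between features 0 and 1, and the zero for the inert feature 2, illustrate the fairness axioms that make Shapley values attractive for feature importance.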
Deeplift: Public-facing DeepLIFT repo
Stars: ✭ 512 (+481.82%)
zennit: A high-level framework in Python, built on PyTorch, for explaining/exploring neural networks with attribution methods like LRP.
Stars: ✭ 57 (-35.23%)
Symbolic Metamodeling: Codebase for "Demystifying Black-box Models with Symbolic Metamodels", NeurIPS 2019.
Stars: ✭ 29 (-67.05%)
Tcav: Code for the TCAV ML interpretability project
Stars: ✭ 442 (+402.27%)
gan-error-avoidance: Learning to Avoid Errors in GANs by Input Space Manipulation (code for the paper)
Stars: ✭ 23 (-73.86%)
partial dependence: Python package to visualize and cluster partial dependence.
Stars: ✭ 23 (-73.86%)
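Partial dependence of a model on one feature is estimated by fixing that feature at a grid value, averaging the model's predictions over the rest of the dataset, and repeating across the grid. A minimal sketch of the estimator (the toy model and data are illustrative assumptions, not the package's API):

```python
def partial_dependence(predict, data, feature, grid):
    """For each grid value v, fix `feature` to v in every row and
    average the predictions: PD(v) = mean over rows x of f(x with x[feature]=v)."""
    pd = []
    for v in grid:
        preds = []
        for row in data:
            modified = list(row)
            modified[feature] = v  # clamp the feature of interest
            preds.append(predict(modified))
        pd.append(sum(preds) / len(preds))
    return pd

# Toy model: f(x) = 3*x0 + x1, so PD on feature 0 has slope 3.
predict = lambda x: 3 * x[0] + x[1]
data = [[0.0, 1.0], [1.0, 2.0], [2.0, 3.0]]
print(partial_dependence(predict, data, feature=0, grid=[0.0, 1.0]))
# → [2.0, 5.0]
```

Plotting PD(v) against the grid gives the familiar partial dependence curve; the package above additionally clusters the per-row (individual conditional expectation) curves that this average collapses.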
Mli Resources: H2O.ai Machine Learning Interpretability Resources
Stars: ✭ 428 (+386.36%)
transformers-interpret: Model explainability that works seamlessly with 🤗 transformers. Explain your transformers model in just 2 lines of code.
Stars: ✭ 861 (+878.41%)
Began Tensorflow: TensorFlow implementation of "BEGAN: Boundary Equilibrium Generative Adversarial Networks"
Stars: ✭ 904 (+927.27%)
Cxplain: Causal Explanation (CXPlain) is a method for explaining the predictions of any machine-learning model.
Stars: ✭ 84 (-4.55%)
Athena: Automatic equation building and curve fitting. Runs on TensorFlow. Built for academia and research.
Stars: ✭ 57 (-35.23%)
Grad Cam: [ICCV 2017] Torch code for Grad-CAM
Stars: ✭ 891 (+912.5%)
Disentangling Vae: Experiments for understanding disentanglement in VAE latent representations
Stars: ✭ 398 (+352.27%)