COVID-CXNet: Diagnosing COVID-19 in Frontal Chest X-ray Images using Deep Learning. Preprint available on arXiv: https://arxiv.org/abs/2006.13807
Stars: ✭ 48 (-98.74%)
self critical vqa: Code for the NeurIPS 2019 paper "Self-Critical Reasoning for Robust Visual Question Answering"
Stars: ✭ 39 (-98.98%)
interpretable-ml: Techniques & resources for training interpretable ML models, explaining ML models, and debugging ML models.
Stars: ✭ 17 (-99.55%)
Interpret: Fit interpretable models. Explain blackbox machine learning.
Stars: ✭ 4,352 (+14.11%)
Captum: Model interpretability and understanding for PyTorch
Stars: ✭ 2,830 (-25.8%)
WhiteBox-Part1: In this part, I've introduced and experimented with ways to interpret and evaluate models in the image domain. (PyTorch)
Stars: ✭ 34 (-99.11%)
zennit: Zennit is a high-level Python framework built on PyTorch for explaining/exploring neural networks using attribution methods such as LRP.
Stars: ✭ 57 (-98.51%)
mllp: Code for the AAAI 2020 paper "Transparent Classification with Multilayer Logical Perceptrons and Random Binarization".
Stars: ✭ 15 (-99.61%)
deep-explanation-penalization: Code for CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" (https://arxiv.org/abs/1909.13584)
Stars: ✭ 110 (-97.12%)
ProtoTree: "Neural Prototype Trees for Interpretable Fine-grained Image Recognition", published at CVPR 2021
Stars: ✭ 47 (-98.77%)
InterpretDL: Interpretation of Deep Learning Models; a model-interpretability algorithm library based on PaddlePaddle ("飞桨").
Stars: ✭ 121 (-96.83%)
neuron-importance-zsl: [ECCV 2018] Code for "Choose Your Neuron: Incorporating Domain Knowledge Through Neuron Importance"
Stars: ✭ 56 (-98.53%)
Tcav: Code for the TCAV ML interpretability project
Stars: ✭ 442 (-88.41%)
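At its core, TCAV tests whether a class prediction is sensitive to a human-defined concept: it measures the fraction of examples whose gradient points in the direction of a concept activation vector (CAV). Below is a minimal pure-Python sketch of that score, with toy gradients and a hypothetical CAV; it is an illustration of the idea, not the tcav library's API.

```python
# Sketch of the TCAV score: the fraction of examples whose directional
# derivative along a concept activation vector (CAV) is positive.
# All data below is toy/hypothetical, not drawn from the tcav project.

def tcav_score(gradients, cav):
    """Fraction of examples with positive concept sensitivity."""
    def dot(u, v):
        return sum(ui * vi for ui, vi in zip(u, v))
    positive = sum(1 for g in gradients if dot(g, cav) > 0)
    return positive / len(gradients)

# Toy gradients of a class logit w.r.t. a layer's activations, and a
# hypothetical CAV (normal of a linear concept-vs-random classifier).
grads = [[0.5, -0.2], [0.1, 0.3], [-0.4, -0.1], [0.2, 0.2]]
cav = [1.0, 0.5]
print(tcav_score(grads, cav))  # → 0.75
```

In the real method, the CAV is learned by training a linear classifier to separate concept examples from random examples in a layer's activation space.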
Text nn: Text classification models. Used as a submodule for other projects.
Stars: ✭ 55 (-98.56%)
Mli Resources: H2O.ai Machine Learning Interpretability Resources
Stars: ✭ 428 (-88.78%)
Contrastiveexplanation: Contrastive Explanation (Foil Trees), developed at TNO/Utrecht University
Stars: ✭ 36 (-99.06%)
Bqplot: Plotting library for IPython/Jupyter notebooks
Stars: ✭ 3,203 (-16.02%)
Facet: Human-explainable AI.
Stars: ✭ 269 (-92.95%)
Alibi: Algorithms for monitoring and explaining machine learning models
Stars: ✭ 924 (-75.77%)
diabetes use case: Sample use case for the Xavier AI in Healthcare conference: https://www.xavierhealth.org/ai-summit-day2/
Stars: ✭ 22 (-99.42%)
SViTE: [NeurIPS '21] "Chasing Sparsity in Vision Transformers: An End-to-End Exploration" by Tianlong Chen, Yu Cheng, Zhe Gan, Lu Yuan, Lei Zhang, Zhangyang Wang
Stars: ✭ 50 (-98.69%)
Kui: A hybrid command-line/UI development experience for cloud-native development
Stars: ✭ 2,052 (-46.2%)
Cxplain: Causal Explanation (CXPlain) is a method for explaining the predictions of any machine-learning model.
Stars: ✭ 84 (-97.8%)
Dalex: moDel Agnostic Language for Exploration and eXplanation
Stars: ✭ 795 (-79.16%)
SPINE: Code for SPINE (Sparse Interpretable Neural Embeddings). Jhamtani H.*, Pruthi D.*, Subramanian A.*, Berg-Kirkpatrick T., Hovy E. AAAI 2018
Stars: ✭ 44 (-98.85%)
Deeplift: Public-facing DeepLIFT repo
Stars: ✭ 512 (-86.58%)
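DeepLIFT attributes a prediction to inputs by comparing activations against a reference input and propagating the difference. The sketch below shows the idea for a single linear layer followed by a nonlinearity, using a rescale-style multiplier (Δoutput/Δpre-activation); it is a simplified pure-Python illustration under toy assumptions, not the deeplift repo's implementation.

```python
# Simplified DeepLIFT-style contributions for y = activation(w · x),
# relative to a reference input x_ref. Toy illustration only.

def deeplift_rescale(x, x_ref, w, activation):
    """Each input's linear contribution w_i*(x_i - x_ref_i) is scaled by
    the ratio of the output change to the pre-activation change."""
    pre = sum(wi * xi for wi, xi in zip(w, x))
    pre_ref = sum(wi * xi for wi, xi in zip(w, x_ref))
    delta_pre = pre - pre_ref
    multiplier = ((activation(pre) - activation(pre_ref)) / delta_pre
                  if delta_pre != 0 else 0.0)
    return [wi * (xi - ri) * multiplier for wi, xi, ri in zip(w, x, x_ref)]

relu = lambda z: max(0.0, z)
contribs = deeplift_rescale([2.0, 1.0], [0.0, 0.0], [1.0, -0.5], relu)
print(contribs)       # contributions sum to y(x) - y(x_ref)
```

A useful property visible here: the contributions sum exactly to the difference between the output at the input and at the reference (completeness).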
Lucid: A collection of infrastructure and tools for research in neural network interpretability.
Stars: ✭ 4,344 (+13.9%)
Breakdown: Model-agnostic breakDown plots
Stars: ✭ 93 (-97.56%)
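breakDown-style plots attribute a prediction sequentially: starting from a baseline, features are fixed to their observed values one at a time, and each step's change in the prediction becomes that feature's contribution. A minimal pure-Python sketch of that procedure (toy model and values; not the breakDown package's API):

```python
# Sketch of sequential variable attributions, breakDown-style.
# Model, inputs, and baseline below are toy/hypothetical.

def break_down(model, x, baseline, order):
    """Fix features one at a time (in `order`), recording the change in
    the model's prediction at each step."""
    current = list(baseline)
    prev = model(current)
    contributions = {}
    for i in order:
        current[i] = x[i]
        now = model(current)
        contributions[i] = now - prev
        prev = now
    return contributions

model = lambda x: x[0] * x[1] + x[2]
contribs = break_down(model, [2.0, 3.0, 1.0], [0.0, 0.0, 0.0], order=[0, 1, 2])
print(contribs)  # → {0: 0.0, 1: 6.0, 2: 1.0}
```

Note that for interacting features the result depends on the order in which they are fixed; the real package addresses this with ordering heuristics.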
Neural Backed Decision Trees: Making decision trees competitive with neural networks on CIFAR10, CIFAR100, TinyImagenet200, and Imagenet
Stars: ✭ 411 (-89.22%)
Trelawney: General interpretability package
Stars: ✭ 55 (-98.56%)
Kibana: Your window into the Elastic Stack
Stars: ✭ 16,820 (+341.01%)
Symbolic Metamodeling: Codebase for "Demystifying Black-box Models with Symbolic Metamodels", NeurIPS 2019.
Stars: ✭ 29 (-99.24%)
Interpretability By Parts: Code repository for "Interpretable and Accurate Fine-grained Recognition via Region Grouping", CVPR 2020 (Oral)
Stars: ✭ 88 (-97.69%)
removal-explanations: A lightweight implementation of removal-based explanations for ML models.
Stars: ✭ 46 (-98.79%)
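Removal-based explanations score a feature by how much the prediction changes when that feature is removed (e.g., replaced by a baseline value). The simplest member of this family is leave-one-out, sketched below in pure Python with a toy linear model; the function name and baseline choice are illustrative, not the repo's API.

```python
# Leave-one-out sketch of a removal-based explanation: attribute each
# feature the prediction drop when it is replaced by a baseline value.
# Model and baseline are toy/hypothetical.

def leave_one_out(model, x, baseline):
    """Return one attribution per feature: f(x) - f(x with feature i removed)."""
    full = model(x)
    attributions = []
    for i in range(len(x)):
        x_removed = list(x)
        x_removed[i] = baseline[i]  # "remove" feature i
        attributions.append(full - model(x_removed))
    return attributions

model = lambda x: 2 * x[0] + 3 * x[1] - x[2]
attrs = leave_one_out(model, [1.0, 2.0, 3.0], [0.0, 0.0, 0.0])
print(attrs)  # → [2.0, 6.0, -3.0]
```

More elaborate members of the family (e.g., Shapley values) average such removal effects over many feature subsets rather than removing one feature at a time.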
Grad Cam: [ICCV 2017] Torch code for Grad-CAM
Stars: ✭ 891 (-76.64%)
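Grad-CAM's core computation is small: global-average-pool the gradients of a class score over each feature-map channel to get channel weights, then take the ReLU of the weighted channel sum. The sketch below shows exactly that arithmetic on toy 2x2 feature maps in pure Python; a real use would pull activations and gradients from a CNN's last convolutional layer.

```python
# Core Grad-CAM arithmetic: ReLU of the gradient-weighted sum of
# feature-map channels. Activations/gradients below are toy values.

def grad_cam(activations, gradients):
    """activations, gradients: K channels, each an HxW grid (nested lists)."""
    K = len(activations)
    H, W = len(activations[0]), len(activations[0][0])
    # alpha_k: global-average-pooled gradient for channel k
    alphas = [sum(sum(row) for row in gradients[k]) / (H * W) for k in range(K)]
    # ReLU of the weighted combination, per spatial position
    return [[max(0.0, sum(alphas[k] * activations[k][i][j] for k in range(K)))
             for j in range(W)] for i in range(H)]

# Toy 2-channel, 2x2 feature maps and gradients
acts = [[[1.0, 0.0], [0.0, 1.0]], [[0.0, 2.0], [2.0, 0.0]]]
grads = [[[1.0, 1.0], [1.0, 1.0]], [[-1.0, -1.0], [-1.0, -1.0]]]
cam = grad_cam(acts, grads)
print(cam)  # → [[1.0, 0.0], [0.0, 1.0]]
```

The ReLU keeps only regions that positively influence the class score; the resulting map is then upsampled to the input resolution for the familiar heatmap overlay.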
Faster-Grad-CAM: Faster and more precise than Grad-CAM
Stars: ✭ 33 (-99.13%)
Lrp for lstm: Layer-wise Relevance Propagation (LRP) for LSTMs
Stars: ✭ 152 (-96.01%)
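LRP redistributes a network's output relevance backward layer by layer, splitting each neuron's relevance among its inputs in proportion to their contributions. Below is a pure-Python sketch of the epsilon rule for one linear layer; it is a toy illustration of the propagation rule, not the repo's LSTM-specific implementation.

```python
# Epsilon-rule LRP for one linear layer z_j = sum_i x_i * W[i][j].
# Redistributes output relevance to inputs proportionally to each
# input's contribution z_ij = x_i * W[i][j]. Toy values below.

def lrp_epsilon(x, W, relevance_out, eps=1e-6):
    n_in, n_out = len(W), len(W[0])
    z = [sum(x[i] * W[i][j] for i in range(n_in)) for j in range(n_out)]
    R_in = [0.0] * n_in
    for j in range(n_out):
        # epsilon stabilizes the denominator near zero
        denom = z[j] + (eps if z[j] >= 0 else -eps)
        for i in range(n_in):
            R_in[i] += x[i] * W[i][j] / denom * relevance_out[j]
    return R_in

x = [1.0, 2.0]
W = [[1.0, 0.0], [0.0, 1.0]]
R = lrp_epsilon(x, W, [3.0, 1.0])
print(R)        # per-input relevance
print(sum(R))   # approximately conserves total relevance (4.0)
```

The near-conservation of total relevance across layers is what makes LRP heatmaps interpretable as a decomposition of the output score; applying such rules through an LSTM's gates is the harder part this repo addresses.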
tutorials: Git repo for articles on the Ergo Sum blog and the YouTube channel https://www.youtube.com/channel/UCiie9CN--dazA7iT2sry5FA
Stars: ✭ 42 (-98.9%)
Tf Explain: Interpretability methods for tf.keras models with TensorFlow 2.x
Stars: ✭ 780 (-79.55%)
medialytics: A basic, free tool that shows information about Plex Media Server content
Stars: ✭ 31 (-99.19%)
Ad examples: A collection of anomaly detection methods (i.i.d./point-based, graph, and time series), including active learning for anomaly detection/discovery, Bayesian rule mining, and descriptions for diversity/explanation/interpretability. Analyzes incorporating label feedback with ensemble and tree-based detectors. Includes adversarial attacks with Graph Convolutional Networks.
Stars: ✭ 641 (-83.19%)
CNN-Units-in-NLP: ✂️ Repository for our ICLR 2019 paper "Discovery of Natural Language Concepts in Individual Units of CNNs"
Stars: ✭ 26 (-99.32%)
callgraph: Magic to display dynamic call graphs of Python function calls
Stars: ✭ 53 (-98.61%)
shapeshop: Towards Understanding Deep Learning Representations via Interactive Experimentation
Stars: ✭ 16 (-99.58%)
Visual Attribution: PyTorch implementation of recent visual attribution methods for model interpretability
Stars: ✭ 127 (-96.67%)
Cnn Interpretability: 🏥 Visualizing Convolutional Networks for MRI-based Diagnosis of Alzheimer's Disease
Stars: ✭ 68 (-98.22%)
Pytorch Cnn Visualizations: PyTorch implementation of convolutional neural network visualization techniques
Stars: ✭ 6,167 (+61.69%)
Xai: XAI, an eXplainability toolbox for machine learning
Stars: ✭ 596 (-84.37%)
knowledge-neurons: A library for finding knowledge neurons in pretrained transformer models.
Stars: ✭ 72 (-98.11%)
htm-school-viz: Visualizations supporting HTM School
Stars: ✭ 57 (-98.51%)