Text nn: Text classification models. Used as a submodule for other projects.
Stars: ✭ 55 (+223.53%)
Torch Cam: Class activation maps for your PyTorch models (CAM, Grad-CAM, Grad-CAM++, Smooth Grad-CAM++, Score-CAM, SS-CAM, IS-CAM, XGrad-CAM)
Stars: ✭ 249 (+1364.71%)
Ad examples: A collection of anomaly detection methods (iid/point-based, graph, and time series), including active learning for anomaly detection/discovery, Bayesian rule-mining, and descriptions for diversity/explanation/interpretability. Analysis of incorporating label feedback with ensemble and tree-based detectors. Includes adversarial attacks with Graph Convolutional Networks.
Stars: ✭ 641 (+3670.59%)
delay-discounting-analysis: Hierarchical Bayesian estimation and hypothesis testing for delay discounting tasks
Stars: ✭ 20 (+17.65%)
Cnn Interpretability: 🏥 Visualizing Convolutional Networks for MRI-based Diagnosis of Alzheimer’s Disease
Stars: ✭ 68 (+300%)
supervised-random-projections: Python implementation of supervised PCA, supervised random projections, and their kernel counterparts.
Stars: ✭ 19 (+11.76%)
Alibi: Algorithms for monitoring and explaining machine learning models
Stars: ✭ 924 (+5335.29%)
Lrp for lstm: Layer-wise Relevance Propagation (LRP) for LSTMs
Stars: ✭ 152 (+794.12%)
Xai resources: Interesting resources related to XAI (Explainable Artificial Intelligence)
Stars: ✭ 553 (+3152.94%)
hypothetical: Hypothesis and statistical testing in Python
Stars: ✭ 49 (+188.24%)
Pycebox: ⬛ Python Individual Conditional Expectation Plot Toolbox
Stars: ✭ 101 (+494.12%)
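An ICE plot varies one feature over a grid while holding each instance's other features fixed, producing one prediction curve per instance. A minimal pure-Python sketch of that underlying computation (the toy model and data are illustrative, not Pycebox's API):

```python
# Sketch of the computation behind an ICE (Individual Conditional
# Expectation) plot: for each instance, sweep one feature over a grid
# and record the model's prediction at every grid point.
# The model and data below are illustrative, not part of Pycebox.

def toy_model(x0, x1):
    # A simple nonlinear model standing in for any fitted predictor.
    return x0 * x0 + 2.0 * x1

def ice_curves(rows, feature_index, grid):
    """Return one prediction curve per row, varying one feature over grid."""
    curves = []
    for row in rows:
        curve = []
        for value in grid:
            modified = list(row)
            modified[feature_index] = value
            curve.append(toy_model(*modified))
        curves.append(curve)
    return curves

rows = [(1.0, 0.0), (1.0, 1.0)]
grid = [0.0, 1.0, 2.0]
curves = ice_curves(rows, 0, grid)
# Each curve shows how the prediction responds to feature 0 for that instance.
print(curves)  # [[0.0, 1.0, 4.0], [2.0, 3.0, 6.0]]
```

Plotting all curves together (rather than only their average, as a partial dependence plot does) reveals heterogeneous effects across instances.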
KernelKnn: Kernel k-nearest neighbors in R
Stars: ✭ 14 (-17.65%)
Cxplain: Causal Explanation (CXPlain) is a method for explaining the predictions of any machine-learning model.
Stars: ✭ 84 (+394.12%)
Athena: Automatic equation building and curve fitting. Runs on TensorFlow. Built for academia and research.
Stars: ✭ 57 (+235.29%)
distfit: A Python library for probability density fitting.
Stars: ✭ 250 (+1370.59%)
Contrastiveexplanation: Contrastive Explanation (Foil Trees), developed at TNO/Utrecht University
Stars: ✭ 36 (+111.76%)
Captum: Model interpretability and understanding for PyTorch
Stars: ✭ 2,830 (+16547.06%)
Dalex: moDel Agnostic Language for Exploration and eXplanation
Stars: ✭ 795 (+4576.47%)
graphkit-learn: A Python package for graph kernels, graph edit distances, and the graph pre-image problem.
Stars: ✭ 87 (+411.76%)
Xai: XAI, an eXplainability toolbox for machine learning
Stars: ✭ 596 (+3405.88%)
Pyss3: A Python package implementing a new machine learning model for text classification, with visualization tools for Explainable AI
Stars: ✭ 191 (+1023.53%)
Modelstudio: 📍 Interactive Studio for Explanatory Model Analysis
Stars: ✭ 163 (+858.82%)
Interpretable machine learning with python: Examples of techniques for training interpretable ML models, explaining ML models, and debugging ML models for accuracy, discrimination, and security.
Stars: ✭ 530 (+3017.65%)
Stellargraph: StellarGraph, machine learning on graphs
Stars: ✭ 2,235 (+13047.06%)
kafbox: A MATLAB benchmarking toolbox for kernel adaptive filtering
Stars: ✭ 70 (+311.76%)
Visual Attribution: PyTorch implementation of recent visual attribution methods for model interpretability
Stars: ✭ 127 (+647.06%)
Breakdown: Model-agnostic breakDown plots
Stars: ✭ 93 (+447.06%)
thermostat: Collection of NLP model explanations and accompanying analysis tools
Stars: ✭ 126 (+641.18%)
Interpretability By Parts: Code repository for "Interpretable and Accurate Fine-grained Recognition via Region Grouping", CVPR 2020 (Oral)
Stars: ✭ 88 (+417.65%)
deep-significance: Enabling easy statistical significance testing for deep neural networks.
Stars: ✭ 266 (+1464.71%)
frp: Fast Random Projections (FRP)
Stars: ✭ 40 (+135.29%)
hyppo: Python package for multivariate hypothesis testing
Stars: ✭ 144 (+747.06%)
Trelawney: General interpretability package
Stars: ✭ 55 (+223.53%)
Symbolic Metamodeling: Codebase for "Demystifying Black-box Models with Symbolic Metamodels", NeurIPS 2019.
Stars: ✭ 29 (+70.59%)
kernel-ep: UAI 2015. Kernel-based just-in-time learning for expectation propagation
Stars: ✭ 16 (-5.88%)
Grad Cam: [ICCV 2017] Torch code for Grad-CAM
Stars: ✭ 891 (+5141.18%)
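Grad-CAM's core step is simple: weight each convolutional activation map by the spatial average of the class score's gradient with respect to that map, sum the weighted maps, and apply ReLU. A pure-Python sketch of just that combination step (toy 2x2 maps stand in for a real network's activations and gradients; this is a concept illustration, not the repository's Torch code):

```python
# Sketch of the core Grad-CAM combination step: weight each activation
# map by the spatial mean of its gradient, sum across channels, apply ReLU.
# Toy 2x2 maps stand in for a real conv layer's activations/gradients.

def grad_cam(activations, gradients):
    """activations/gradients: list of channels, each a 2D list of floats."""
    h, w = len(activations[0]), len(activations[0][0])
    cam = [[0.0] * w for _ in range(h)]
    for act, grad in zip(activations, gradients):
        # alpha_k: global-average-pooled gradient for channel k
        alpha = sum(sum(row) for row in grad) / (h * w)
        for i in range(h):
            for j in range(w):
                cam[i][j] += alpha * act[i][j]
    # ReLU keeps only regions with positive influence on the class score
    return [[max(0.0, v) for v in row] for row in cam]

activations = [[[1.0, 0.0], [0.0, 1.0]], [[0.0, 2.0], [2.0, 0.0]]]
gradients = [[[1.0, 1.0], [1.0, 1.0]], [[-1.0, -1.0], [-1.0, -1.0]]]
heatmap = grad_cam(activations, gradients)
print(heatmap)  # [[1.0, 0.0], [0.0, 1.0]]
```

The second channel's negative average gradient suppresses its map, so only the diagonal of the first (positively weighted) channel survives the ReLU.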
Explainx: Explainable AI framework for data scientists. Explain and debug any black-box machine learning model with a single line of code.
Stars: ✭ 196 (+1052.94%)
Tf Explain: Interpretability methods for tf.keras models with TensorFlow 2.x
Stars: ✭ 780 (+4488.24%)
Awesome Graph Classification: A collection of important graph embedding, classification, and representation learning papers with implementations.
Stars: ✭ 4,309 (+25247.06%)
Imodels: Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling (sklearn-compatible).
Stars: ✭ 194 (+1041.18%)
Flashtorch: Visualization toolkit for neural networks in PyTorch!
Stars: ✭ 561 (+3200%)
interpretable-test: NeurIPS 2016. Linear-time interpretable nonparametric two-sample test.
Stars: ✭ 58 (+241.18%)
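A two-sample test asks whether two samples were drawn from the same distribution. As a point of reference for what such tests compute, here is a generic permutation test on the difference of means in pure Python; note this brute-force approach is not interpretable-test's linear-time kernel method, just the classical baseline idea:

```python
# Sketch of a permutation two-sample test (difference of means).
# This is the classical nonparametric baseline, NOT interpretable-test's
# linear-time method; values below are illustrative toy data.
import random

def permutation_test(sample_a, sample_b, n_permutations=1000, seed=0):
    """Approximate p-value for H0: both samples share a distribution."""
    rng = random.Random(seed)
    observed = abs(sum(sample_a) / len(sample_a) - sum(sample_b) / len(sample_b))
    pooled = list(sample_a) + list(sample_b)
    count = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)  # reassign group labels at random
        a, b = pooled[:len(sample_a)], pooled[len(sample_a):]
        stat = abs(sum(a) / len(a) - sum(b) / len(b))
        if stat >= observed:
            count += 1
    # +1 correction keeps the p-value strictly positive
    return (count + 1) / (n_permutations + 1)

# Clearly separated samples should yield a small p-value.
p = permutation_test([0.1, 0.2, 0.1, 0.3, 0.2, 0.1],
                     [2.1, 2.0, 2.2, 1.9, 2.0, 2.1])
print(p)
```

The permutation approach costs O(n) per shuffle times the number of permutations; the linear-time tests in interpretable-test avoid this resampling loop entirely.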
MachineLearning: Machine learning for beginners (data science enthusiasts)
Stars: ✭ 104 (+511.76%)
adaptive-wavelets: Adaptive, interpretable wavelets across domains (NeurIPS 2021)
Stars: ✭ 58 (+241.18%)
DSPKM: The page for the book Digital Signal Processing with Kernel Methods.
Stars: ✭ 32 (+88.24%)
Shap: A game-theoretic approach to explain the output of any machine learning model.
Stars: ✭ 14,917 (+87647.06%)
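SHAP's attributions are grounded in Shapley values from cooperative game theory: each feature's attribution is its marginal contribution to the prediction, averaged over all orderings in which features could be added. A brute-force exact computation for a tiny two-feature model makes the idea concrete (this is a conceptual sketch, not the shap library's optimized API):

```python
# Brute-force exact Shapley values for a toy model with an interaction
# term, illustrating the game-theoretic idea behind SHAP. This is a
# conceptual sketch, not the shap library's (optimized) implementation.
from itertools import permutations

def shapley_values(value_fn, n_features):
    """Average each feature's marginal contribution over all orderings."""
    phi = [0.0] * n_features
    orderings = list(permutations(range(n_features)))
    for order in orderings:
        present = set()
        for feature in order:
            before = value_fn(present)
            present.add(feature)
            phi[feature] += value_fn(present) - before
    return [p / len(orderings) for p in phi]

def value_fn(features):
    # Model output when only `features` are "present" (others at baseline 0).
    x = [1.0 if i in features else 0.0 for i in range(2)]
    return 3.0 * x[0] + 2.0 * x[1] + x[0] * x[1]  # interaction term

phi = shapley_values(value_fn, 2)
print(phi)  # [3.5, 2.5]: the interaction term is split evenly
```

The factorial number of orderings is why exact Shapley values are intractable for many features; shap's estimators (e.g. for tree ensembles) exist precisely to approximate or exploit structure in this computation.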