Dalex - moDel Agnostic Language for Exploration and eXplanation
Stars: ✭ 795 (+387.73%)
Xai resources - Interesting resources related to XAI (Explainable Artificial Intelligence)
Stars: ✭ 553 (+239.26%)
Ad examples - A collection of anomaly detection methods (i.i.d./point-based, graph, and time series), including active learning for anomaly detection/discovery, Bayesian rule mining, and description for diversity/explanation/interpretability. Analyzes incorporating label feedback with ensemble and tree-based detectors; includes adversarial attacks with Graph Convolutional Networks.
Stars: ✭ 641 (+293.25%)
Lucid - A collection of infrastructure and tools for research in neural network interpretability.
Stars: ✭ 4,344 (+2565.03%)
Symbolic Metamodeling - Codebase for "Demystifying Black-box Models with Symbolic Metamodels", NeurIPS 2019.
Stars: ✭ 29 (-82.21%)
Retentioneering Tools - Retentioneering: product analytics, data-driven customer journey map optimization, marketing analytics, web analytics, transaction analytics, graph visualization, and behavioral segmentation with customer segments in Python. Open-source analytics, predictive analytics over clickstream, sentiment analysis, A/B tests, machine learning, and Markov Chain Monte Carlo simulations, extending Pandas, NetworkX, and sklearn.
Stars: ✭ 291 (+78.53%)
removal-explanations - A lightweight implementation of removal-based explanations for ML models.
Stars: ✭ 46 (-71.78%)
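For context, removal-based explanations of the kind this library implements share one recipe: score a feature by how much the model's output changes when that feature is withheld (here simulated by replacing it with its dataset mean). A minimal NumPy sketch under that assumption — all names below are illustrative, not the library's API:

```python
import numpy as np

def removal_scores(model, X, x):
    """Score each feature of instance x by the output drop when that
    feature is 'removed', i.e. replaced by its dataset mean."""
    baseline = X.mean(axis=0)
    full = model(x)
    scores = np.empty(len(x))
    for j in range(len(x)):
        x_removed = x.copy()
        x_removed[j] = baseline[j]      # simulate removing feature j
        scores[j] = full - model(x_removed)
    return scores

# Toy linear model: feature 0 matters most, feature 2 not at all.
weights = np.array([3.0, 1.0, 0.0])
model = lambda x: float(x @ weights)

X = np.array([[0.0, 0.0, 0.0], [2.0, 2.0, 2.0]])  # feature means: [1, 1, 1]
x = np.array([2.0, 2.0, 2.0])
scores = removal_scores(model, X, x)
# For a linear model the score reduces to w_j * (x_j - mean_j): [3.0, 1.0, 0.0]
```

Real removal-based methods differ mainly in how they remove (marginalize, retrain, impute) and how they aggregate feature subsets; mean-replacement of single features is the simplest instance.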
Athena - Automatic equation building and curve fitting. Runs on TensorFlow. Built for academia and research.
Stars: ✭ 57 (-65.03%)
Xai - XAI, an eXplainability toolbox for machine learning
Stars: ✭ 596 (+265.64%)
Breakdown - Model-agnostic breakDown plots
Stars: ✭ 93 (-42.94%)
Deeplift - Public-facing DeepLIFT repo
Stars: ✭ 512 (+214.11%)
Trelawney - General interpretability package
Stars: ✭ 55 (-66.26%)
Neural Backed Decision Trees - Making decision trees competitive with neural networks on CIFAR10, CIFAR100, TinyImagenet200, and ImageNet
Stars: ✭ 411 (+152.15%)
Visual Attribution - PyTorch implementation of recent visual attribution methods for model interpretability
Stars: ✭ 127 (-22.09%)
Openchem - OpenChem: deep learning toolkit for computational chemistry and drug design research
Stars: ✭ 356 (+118.4%)
Contrastiveexplanation - Contrastive Explanation (Foil Trees), developed at TNO/Utrecht University
Stars: ✭ 36 (-77.91%)
Facet - Human-explainable AI.
Stars: ✭ 269 (+65.03%)
Cxplain - Causal Explanation (CXPlain) is a method for explaining the predictions of any machine learning model.
Stars: ✭ 84 (-48.47%)
Pyncov 19 - Pyncov-19: learn and predict the spread of COVID-19
Stars: ✭ 20 (-87.73%)
ZS-Data-Science-Challenge - A data science challenge, "Mekktronix Sales Forecasting", organised by ZS through the HackerEarth platform. Rank: 223 out of 4743.
Stars: ✭ 21 (-87.12%)
fireTS - A Python multivariate time series prediction library that works with sklearn
Stars: ✭ 62 (-61.96%)
shapeshop - Towards Understanding Deep Learning Representations via Interactive Experimentation
Stars: ✭ 16 (-90.18%)
Tf Explain - Interpretability methods for tf.keras models with TensorFlow 2.x
Stars: ✭ 780 (+378.53%)
Pycebox - ⬛ Python Individual Conditional Expectation Plot Toolbox
Stars: ✭ 101 (-38.04%)
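As background, an Individual Conditional Expectation (ICE) plot of the kind this toolbox draws traces, for each instance, the model's prediction as one feature sweeps over a grid while the other features stay fixed. A minimal NumPy sketch of that computation — illustrative only, not pycebox's API:

```python
import numpy as np

def ice_curves(predict, X, feature, grid):
    """Return an (n_instances, n_grid) matrix: row i holds instance i's
    predictions as X[i, feature] is swept over `grid`."""
    curves = np.empty((X.shape[0], len(grid)))
    for k, value in enumerate(grid):
        X_mod = X.copy()
        X_mod[:, feature] = value        # sweep the chosen feature
        curves[:, k] = predict(X_mod)
    return curves

# Toy model with an interaction: prediction = x0 * x1.
predict = lambda X: X[:, 0] * X[:, 1]

X = np.array([[1.0, 1.0], [1.0, 2.0]])
grid = np.array([0.0, 1.0, 2.0])
curves = ice_curves(predict, X, feature=0, grid=grid)
# Row 0 (x1=1) rises with slope 1; row 1 (x1=2) with slope 2 -- the
# differing slopes are exactly the heterogeneity ICE plots reveal.
```

Averaging the rows of `curves` recovers the classic partial dependence curve, which is why ICE plots are often drawn with the PD line overlaid.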
Flashtorch - Visualization toolkit for neural networks in PyTorch!
Stars: ✭ 561 (+244.17%)
Data Science Wg - SF Brigade's Data Science Working Group.
Stars: ✭ 135 (-17.18%)
Interpretable machine learning with python - Examples of techniques for training interpretable ML models, explaining ML models, and debugging ML models for accuracy, discrimination, and security.
Stars: ✭ 530 (+225.15%)
Text nn - Text classification models. Used as a submodule for other projects.
Stars: ✭ 55 (-66.26%)
Tcav - Code for the TCAV ML interpretability project
Stars: ✭ 442 (+171.17%)
Micromlp - A micro neural network multilayer perceptron for MicroPython (used on ESP32 and Pycom modules)
Stars: ✭ 92 (-43.56%)
Mli Resources - H2O.ai Machine Learning Interpretability resources
Stars: ✭ 428 (+162.58%)
Stellargraph - StellarGraph: machine learning on graphs
Stars: ✭ 2,235 (+1271.17%)
Interpret - Fit interpretable models. Explain black-box machine learning.
Stars: ✭ 4,352 (+2569.94%)
Mlj.jl - A Julia machine learning framework
Stars: ✭ 982 (+502.45%)
Pytorch Cortexnet - PyTorch implementation of the CortexNet predictive model
Stars: ✭ 349 (+114.11%)
Interpretability By Parts - Code repository for "Interpretable and Accurate Fine-grained Recognition via Region Grouping", CVPR 2020 (Oral)
Stars: ✭ 88 (-46.01%)
Islr Python - An Introduction to Statistical Learning (James, Witten, Hastie, Tibshirani, 2013): Python code
Stars: ✭ 3,344 (+1951.53%)
Skater - Python library for model interpretation/explanations
Stars: ✭ 973 (+496.93%)
diabetes use case - Sample use case for the Xavier AI in Healthcare conference: https://www.xavierhealth.org/ai-summit-day2/
Stars: ✭ 22 (-86.5%)
Mlr - Machine Learning in R
Stars: ✭ 1,542 (+846.01%)
ML2017FALL - Machine Learning (EE 5184) at NTU
Stars: ✭ 66 (-59.51%)
Alibi - Algorithms for monitoring and explaining machine learning models
Stars: ✭ 924 (+466.87%)
SPINE - Code for SPINE: Sparse Interpretable Neural Embeddings. Jhamtani H.*, Pruthi D.*, Subramanian A.*, Berg-Kirkpatrick T., Hovy E. AAAI 2018
Stars: ✭ 44 (-73.01%)
Lrp for lstm - Layer-wise Relevance Propagation (LRP) for LSTMs
Stars: ✭ 152 (-6.75%)
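As background on the technique, Layer-wise Relevance Propagation redistributes a layer's output relevance back onto its inputs in proportion to each input's contribution to the pre-activation, keeping the total relevance approximately conserved. A NumPy sketch of the epsilon rule for a single dense layer — an illustration of the general rule, not this repo's LSTM-specific code:

```python
import numpy as np

def lrp_epsilon_dense(a, W, R_out, eps=1e-6):
    """Epsilon-rule LRP for one dense layer with pre-activations z = a @ W.
    a: input activations (n_in,); W: weights (n_in, n_out);
    R_out: relevance assigned to the outputs (n_out,).
    Returns the relevance redistributed onto the inputs (n_in,)."""
    z = a @ W                                   # pre-activations
    s = R_out / (z + eps * np.sign(z))          # stabilized ratio
    return a * (W @ s)                          # each input gets its share

a = np.array([1.0, 2.0, 0.5])
W = np.array([[1.0, -1.0],
              [0.5,  0.5],
              [2.0,  0.5]])
R_out = np.array([1.0, 1.0])
R_in = lrp_epsilon_dense(a, W, R_out)
# Conservation (up to the eps leak): R_in.sum() is close to R_out.sum()
```

LSTM-specific LRP variants mainly add rules for the multiplicative gates, but the per-layer redistribution above is the shared core.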
Anndotnet - ANNdotNET, a deep learning tool on the .NET platform.
Stars: ✭ 109 (-33.13%)
Cnn Interpretability - 🏥 Visualizing Convolutional Networks for MRI-based Diagnosis of Alzheimer's Disease
Stars: ✭ 68 (-58.28%)
Grad Cam - [ICCV 2017] Torch code for Grad-CAM
Stars: ✭ 891 (+446.63%)
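For orientation, the core of Grad-CAM is small: global-average-pool the gradients of the target score over each channel of a convolutional layer to get channel weights, take the weighted sum of the activation maps, and ReLU the result. A NumPy sketch of that computation on hand-made tensors (a sketch of the technique, not this repo's Torch code):

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heatmap from a conv layer's activations (C, H, W) and
    the gradients of the target score w.r.t. them (same shape)."""
    weights = gradients.mean(axis=(1, 2))            # global-average-pool the grads
    cam = np.tensordot(weights, activations, axes=1) # weighted sum over channels
    return np.maximum(cam, 0)                        # ReLU keeps positive evidence

# Toy example: channel 0 fires top-left, channel 1 bottom-right;
# the gradients say only channel 0 supports the target class.
acts = np.zeros((2, 4, 4))
acts[0, :2, :2] = 1.0
acts[1, 2:, 2:] = 1.0
grads = np.stack([np.full((4, 4), 1.0), np.full((4, 4), -1.0)])
cam = grad_cam(acts, grads)
# The heatmap highlights only the top-left region.
```

In practice the heatmap is then bilinearly upsampled to the input resolution and overlaid on the image; the negative-weight channels suppressed by the ReLU are what distinguish Grad-CAM from a plain activation average.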