ahmedmalaa / Symbolic Metamodeling
Codebase for "Demystifying Black-box Models with Symbolic Metamodels", NeurIPS 2019.
Stars: ⭐ 29
Projects that are alternatives to or similar to Symbolic Metamodeling
Cnn Interpretability
Visualizing Convolutional Networks for MRI-based Diagnosis of Alzheimer's Disease
Stars: ⭐ 68 (+134.48%)
Mutual labels: jupyter-notebook, interpretability
Shap
A game theoretic approach to explain the output of any machine learning model.
Stars: ⭐ 14,917 (+51337.93%)
Mutual labels: jupyter-notebook, interpretability
Reverse Engineering Neural Networks
A collection of tools for reverse engineering neural networks.
Stars: ⭐ 78 (+168.97%)
Mutual labels: jupyter-notebook, interpretability
Pycebox
Python Individual Conditional Expectation Plot Toolbox
Stars: ⭐ 101 (+248.28%)
Mutual labels: jupyter-notebook, interpretability
Mli Resources
H2O.ai Machine Learning Interpretability Resources
Stars: ⭐ 428 (+1375.86%)
Mutual labels: jupyter-notebook, interpretability
Athena
Automatic equation building and curve fitting. Runs on TensorFlow. Built for academia and research.
Stars: ⭐ 57 (+96.55%)
Mutual labels: jupyter-notebook, interpretability
Visual Attribution
PyTorch implementation of recent visual attribution methods for model interpretability
Stars: ⭐ 127 (+337.93%)
Mutual labels: jupyter-notebook, interpretability
Text nn
Text classification models. Used as a submodule in other projects.
Stars: ⭐ 55 (+89.66%)
Mutual labels: jupyter-notebook, interpretability
Facet
Human-explainable AI.
Stars: ⭐ 269 (+827.59%)
Mutual labels: jupyter-notebook, interpretability
Explainx
Explainable AI framework for data scientists. Explain and debug any black-box machine learning model with a single line of code.
Stars: ⭐ 196 (+575.86%)
Mutual labels: jupyter-notebook, interpretability
Imodels
Interpretable ML package for concise, transparent, and accurate predictive modeling (sklearn-compatible).
Stars: ⭐ 194 (+568.97%)
Mutual labels: jupyter-notebook, interpretability
Tcav
Code for the TCAV ML interpretability project
Stars: ⭐ 442 (+1424.14%)
Mutual labels: jupyter-notebook, interpretability
Lucid
A collection of infrastructure and tools for research in neural network interpretability.
Stars: ⭐ 4,344 (+14879.31%)
Mutual labels: jupyter-notebook, interpretability
Interpretable machine learning with python
Examples of techniques for training interpretable ML models, explaining ML models, and debugging ML models for accuracy, discrimination, and security.
Stars: ⭐ 530 (+1727.59%)
Mutual labels: jupyter-notebook, interpretability
Personal History Archive
An experiment in creating a dump of your personal browser history for analysis.
Stars: ⭐ 28 (-3.45%)
Mutual labels: jupyter-notebook
Linguistic and stylistic complexity
Linguistic and stylistic complexity measures for (literary) texts
Stars: ⭐ 28 (-3.45%)
Mutual labels: jupyter-notebook
Financial Machine Learning Articles
Code for my financial machine learning articles.
Stars: ⭐ 29 (+0%)
Mutual labels: jupyter-notebook
Python plotting snippets
Tips and tricks for plotting in Python.
Stars: ⭐ 28 (-3.45%)
Mutual labels: jupyter-notebook
Symbolic Metamodeling
Code for the NeurIPS 2019 paper "Demystifying Black-box Models with Symbolic Metamodels".
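The paper's core idea is to approximate a trained black-box model with a closed-form symbolic expression (the paper parameterizes its expressions with Meijer G-functions). As a hedged illustration of that idea only, and not the repository's actual API, the sketch below fits a symbolic surrogate to an opaque prediction function using an assumed polynomial basis and ordinary least squares:

```python
import numpy as np

# Toy sketch of the metamodeling idea (NOT this repository's method or API):
# approximate a "black-box" prediction function with a closed-form expression
# by fitting the coefficients of a small symbolic basis via least squares.
# The paper uses Meijer G-functions; a cubic polynomial basis stands in here.

def black_box(x):
    # stand-in for an opaque model's prediction function
    return np.sin(x) + 0.5 * x

# sample the black box on its input domain
x = np.linspace(-2.0, 2.0, 200)
y = black_box(x)

# symbolic basis: 1, x, x^2, x^3
basis = np.vstack([np.ones_like(x), x, x**2, x**3]).T
coeffs, *_ = np.linalg.lstsq(basis, y, rcond=None)

# the fitted coefficients define an interpretable closed-form surrogate
metamodel = basis @ coeffs
print("max abs error of surrogate:", np.max(np.abs(metamodel - y)))
```

The resulting coefficient vector is itself the "explanation": a human-readable formula whose fit error quantifies how faithfully it mimics the black box on the sampled domain.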
If you use our code in your research, please cite:
@inproceedings{SM2019,
  author    = {Ahmed M. Alaa and Mihaela van der Schaar},
  title     = {Demystifying Black-box Models with Symbolic Metamodels},
  booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
  year      = {2019}
}