
ahmedmalaa / Symbolic Metamodeling

Codebase for "Demystifying Black-box Models with Symbolic Metamodels", NeurIPS 2019.

Projects that are alternatives to or similar to Symbolic Metamodeling

Cnn Interpretability
🏥 Visualizing Convolutional Networks for MRI-based Diagnosis of Alzheimer’s Disease
Stars: ✭ 68 (+134.48%)
Mutual labels:  jupyter-notebook, interpretability
Shap
A game theoretic approach to explain the output of any machine learning model.
Stars: ✭ 14,917 (+51337.93%)
Mutual labels:  jupyter-notebook, interpretability
Reverse Engineering Neural Networks
A collection of tools for reverse engineering neural networks.
Stars: ✭ 78 (+168.97%)
Mutual labels:  jupyter-notebook, interpretability
Pycebox
⬛ Python Individual Conditional Expectation Plot Toolbox
Stars: ✭ 101 (+248.28%)
Mutual labels:  jupyter-notebook, interpretability
Mli Resources
H2O.ai Machine Learning Interpretability Resources
Stars: ✭ 428 (+1375.86%)
Mutual labels:  jupyter-notebook, interpretability
Athena
Automatic equation building and curve fitting. Runs on Tensorflow. Built for academia and research.
Stars: ✭ 57 (+96.55%)
Mutual labels:  jupyter-notebook, interpretability
Visual Attribution
Pytorch Implementation of recent visual attribution methods for model interpretability
Stars: ✭ 127 (+337.93%)
Mutual labels:  jupyter-notebook, interpretability
Text nn
Text classification models. Used as a submodule for other projects.
Stars: ✭ 55 (+89.66%)
Mutual labels:  jupyter-notebook, interpretability
Facet
Human-explainable AI.
Stars: ✭ 269 (+827.59%)
Mutual labels:  jupyter-notebook, interpretability
Explainx
Explainable AI framework for data scientists. Explain & debug any blackbox machine learning model with a single line of code.
Stars: ✭ 196 (+575.86%)
Mutual labels:  jupyter-notebook, interpretability
Imodels
Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling (sklearn-compatible).
Stars: ✭ 194 (+568.97%)
Mutual labels:  jupyter-notebook, interpretability
Tcav
Code for the TCAV ML interpretability project
Stars: ✭ 442 (+1424.14%)
Mutual labels:  jupyter-notebook, interpretability
Lucid
A collection of infrastructure and tools for research in neural network interpretability.
Stars: ✭ 4,344 (+14879.31%)
Mutual labels:  jupyter-notebook, interpretability
Interpretable machine learning with python
Examples of techniques for training interpretable ML models, explaining ML models, and debugging ML models for accuracy, discrimination, and security.
Stars: ✭ 530 (+1727.59%)
Mutual labels:  jupyter-notebook, interpretability
Deep learning projects
Stars: ✭ 28 (-3.45%)
Mutual labels:  jupyter-notebook
Gpufilter
GPU Recursive Filtering
Stars: ✭ 28 (-3.45%)
Mutual labels:  jupyter-notebook
Personal History Archive
An experiment in creating a dump of your personal browser history for analysis
Stars: ✭ 28 (-3.45%)
Mutual labels:  jupyter-notebook
Linguistic and stylistic complexity
Linguistic and stylistic complexity measures for (literary) texts
Stars: ✭ 28 (-3.45%)
Mutual labels:  jupyter-notebook
Financial Machine Learning Articles
Contains the code for my financial machine learning articles
Stars: ✭ 29 (+0%)
Mutual labels:  jupyter-notebook
Python plotting snippets
Tips and tricks for plotting in python
Stars: ✭ 28 (-3.45%)
Mutual labels:  jupyter-notebook

Symbolic Metamodeling

Code for the NeurIPS 2019 paper "Demystifying Black-box Models with Symbolic Metamodels".
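
The idea in the paper is to fit a transparent symbolic surrogate (a metamodel, parameterized in the paper with Meijer G-functions) to the predictions of an already-trained black-box model, so that the surrogate rather than the black box is what gets inspected. As a purely illustrative sketch of that "model of a model" workflow, and not the API of this repository, one could distill a black box into a simple polynomial surrogate with scikit-learn:

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)

# Synthetic ground truth (unknown to the metamodel): y = x0^2 + 2*x1
X = rng.uniform(-1, 1, size=(500, 2))
y = X[:, 0] ** 2 + 2 * X[:, 1]

# 1. Train the black-box model we want to explain.
black_box = GradientBoostingRegressor().fit(X, y)

# 2. Query the black box on fresh inputs and fit a transparent surrogate
#    (a degree-2 polynomial here, standing in for the paper's Meijer
#    G-function metamodel) to its predictions.
X_query = rng.uniform(-1, 1, size=(2000, 2))
y_bb = black_box.predict(X_query)
metamodel = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
metamodel.fit(X_query, y_bb)

# 3. Read off the symbolic form of the surrogate.
poly = metamodel.named_steps["polynomialfeatures"]
lin = metamodel.named_steps["linearregression"]
for name, coef in zip(poly.get_feature_names_out(["x0", "x1"]), lin.coef_):
    if abs(coef) > 1e-2:
        print(f"{coef:+.3f} * {name}")

The dominant printed terms should be close to +2.000 * x1 and +1.000 * x0^2, i.e. a human-readable approximation of what the black box computes. Symbolic metamodels generalize this idea: Meijer G-functions cover a much richer family of closed-form expressions than fixed-degree polynomials, and their parameters are optimized by gradient descent.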

If you use our code in your research, please cite:

@inproceedings{SM2019,
	author = {Ahmed M. Alaa and Mihaela van der Schaar},
	title = {Demystifying Black-box Models with Symbolic Metamodels},
	booktitle = {Neural Information Processing Systems},
	year = {2019}
}