* Awesome Interpretable Machine Learning [[https://awesome.re][https://awesome.re/badge.svg]]

Opinionated list of resources facilitating model interpretability (introspection, simplification, visualization, explanation).

** Interpretable Models

** Feature Importance

** Feature Selection

** Model Explanations

*** Philosophy

+ Magnets by R. P. Feynman
  + https://www.youtube.com/watch?v=wMFPe-DwULM

+ (2002) Looking Inside the Black Box, presentation by Leo Breiman
  + https://www.stat.berkeley.edu/users/breiman/wald2002-2.pdf

+ (2011) To Explain or to Predict? by Galit Shmueli
  + https://arxiv.org/pdf/1101.0891
  + https://dx.doi.org/10.1214/10-STS330

+ (2016) The Mythos of Model Interpretability by Zachary C. Lipton
  + https://arxiv.org/pdf/1606.03490
  + https://www.youtube.com/watch?v=mvzBQci04qA

+ (2017) Towards A Rigorous Science of Interpretable Machine Learning by Finale Doshi-Velez, Been Kim
  + https://arxiv.org/pdf/1702.08608

+ (2017) The Promise and Peril of Human Evaluation for Model Interpretability by Bernease Herman
  + https://arxiv.org/pdf/1711.07414

+ (2018) [[http://bayes.cs.ucla.edu/WHY/why-intro.pdf][The Book of Why: The New Science of Cause and Effect]] by Judea Pearl

+ (2018) Please Stop Doing "Explainable" ML by Cynthia Rudin
  + Video (starts 17:30, lasts 10 min): https://zoom.us/recording/play/0y-iI9HamgyDzzP2k_jiTu6jB7JgVVXnjWZKDMbnyRTn3FsxTDZy6Wkrj3_ekx4J
  + Linked at: https://users.cs.duke.edu/~cynthia/mediatalks.html

+ (2018) Explaining Explanations: An Approach to Evaluating Interpretability of Machine Learning by Leilani H. Gilpin, David Bau, Ben Z. Yuan, Ayesha Bajwa, Michael Specter, Lalana Kagal
  + https://arxiv.org/pdf/1806.00069

+ (2019) Interpretable machine learning: definitions, methods, and applications by W. James Murdoch, Chandan Singh, Karl Kumbier, Reza Abbasi-Asl, Bin Yu
  + https://arxiv.org/pdf/1901.04592

+ (2019) On Explainable Machine Learning Misconceptions & A More Human-Centered Machine Learning by Patrick Hall
  + https://github.com/jphall663/xai_misconceptions/blob/master/xai_misconceptions.pdf
  + https://github.com/jphall663/xai_misconceptions

+ (2019) An Introduction to Machine Learning Interpretability. An Applied Perspective on Fairness, Accountability, Transparency, and Explainable AI by Patrick Hall and Navdeep Gill
  + https://www.h2o.ai/wp-content/uploads/2019/08/An-Introduction-to-Machine-Learning-Interpretability-Second-Edition.pdf

*** Model Agnostic Explanations

+ (2009) How to Explain Individual Classification Decisions by David Baehrens, Timon Schroeter, Stefan Harmeling, Motoaki Kawanabe, Katja Hansen, Klaus-Robert Mueller
  + https://arxiv.org/pdf/0912.1128

+ (2013) Peeking Inside the Black Box: Visualizing Statistical Learning with Plots of Individual Conditional Expectation by Alex Goldstein, Adam Kapelner, Justin Bleich, Emil Pitkin
  + https://arxiv.org/pdf/1309.6392
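
A minimal sketch of the ICE computation described above, assuming a fitted
scikit-learn-style model =model= with a =predict= method and a feature matrix
=X= (both placeholders):

#+BEGIN_SRC python
import numpy as np

def ice_curves(model, X, feature_idx, grid_size=50):
    """Individual Conditional Expectation: for every instance, sweep one
    feature over a grid while holding the other features fixed, and record
    the model prediction at each grid point."""
    col = X[:, feature_idx]
    grid = np.linspace(col.min(), col.max(), grid_size)
    curves = np.empty((X.shape[0], grid_size))
    for j, value in enumerate(grid):
        X_mod = X.copy()
        X_mod[:, feature_idx] = value        # fix the feature for all rows
        curves[:, j] = model.predict(X_mod)  # one prediction per instance
    return grid, curves

# Plotting one matplotlib line per instance reveals heterogeneity that the
# classical partial dependence plot (the average of the ICE curves) hides:
# grid, curves = ice_curves(model, X, feature_idx=0)
# plt.plot(grid, curves.T, color="gray", alpha=0.3)
# plt.plot(grid, curves.mean(axis=0), lw=3)
#+END_SRC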

+ (2016) "Why Should I Trust You?": Explaining the Predictions of Any Classifier by Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin
  + https://arxiv.org/pdf/1602.04938
  + Code: https://github.com/marcotcr/lime
  + https://github.com/marcotcr/lime-experiments
  + https://www.youtube.com/watch?v=bCgEP2zuYxI
  + Introduces the LIME method (Local Interpretable Model-agnostic Explanations)
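
A minimal tabular usage sketch for the lime package linked above; the fitted
classifier =clf= (anything exposing =predict_proba=), the arrays =X_train= and
=X_test=, and the name lists are placeholders:

#+BEGIN_SRC python
from lime.lime_tabular import LimeTabularExplainer

explainer = LimeTabularExplainer(
    X_train,                      # training data defines the perturbation
    feature_names=feature_names,  # distribution around an instance
    class_names=class_names,
    mode="classification",
)

# Fit a local, sparse linear surrogate model around one prediction.
exp = explainer.explain_instance(X_test[0], clf.predict_proba, num_features=5)
print(exp.as_list())  # (feature condition, weight) pairs
#+END_SRC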

+ (2016) A Model Explanation System: Latest Updates and Extensions by Ryan Turner
  + https://arxiv.org/pdf/1606.09517
  + http://www.blackboxworkshop.org/pdf/Turner2015_MES.pdf

+ (2017) Understanding Black-box Predictions via Influence Functions by Pang Wei Koh, Percy Liang
  + https://arxiv.org/pdf/1703.04730

+ (2017) A Unified Approach to Interpreting Model Predictions by Scott Lundberg, Su-In Lee
  + https://arxiv.org/pdf/1705.07874
  + Code: https://github.com/slundberg/shap
  + Introduces the SHAP method (SHapley Additive exPlanations), generalizing LIME
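
A minimal usage sketch for the shap package linked above, assuming a fitted
tree-ensemble =model= (e.g. a random forest or gradient boosting) and a
feature matrix =X=, both placeholders:

#+BEGIN_SRC python
import shap

# TreeExplainer computes exact SHAP values efficiently for tree models;
# shap.KernelExplainer is the slower model-agnostic alternative.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Each row additively decomposes one prediction into per-feature
# contributions that sum to (prediction - expected value).
shap.summary_plot(shap_values, X)
#+END_SRC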

+ (2018) Anchors: High-Precision Model-Agnostic Explanations by Marco Ribeiro, Sameer Singh, Carlos Guestrin
  + https://homes.cs.washington.edu/~marcotcr/aaai18.pdf
  + Code: https://github.com/marcotcr/anchor-experiments

+ (2018) Learning to Explain: An Information-Theoretic Perspective on Model Interpretation by Jianbo Chen, Le Song, Martin J. Wainwright, Michael I. Jordan
  + https://arxiv.org/pdf/1802.07814

+ (2018) Explanations of model predictions with live and breakDown packages by Mateusz Staniak, Przemyslaw Biecek
  + https://arxiv.org/pdf/1804.01955
  + Docs: https://mi2datalab.github.io/live/
  + Code: https://github.com/MI2DataLab/live
  + Docs: https://pbiecek.github.io/breakDown
  + Code: https://github.com/pbiecek/breakDown

+ (2018) Interpretable Machine Learning. A Guide for Making Black Box Models Explainable, a review book by Christoph Molnar
  + https://christophm.github.io/interpretable-ml-book/

+ (2018) Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead by Cynthia Rudin
  + https://arxiv.org/pdf/1811.10154

+ (2019) Quantifying Interpretability of Arbitrary Machine Learning Models Through Functional Decomposition by Christoph Molnar, Giuseppe Casalicchio, Bernd Bischl
  + https://arxiv.org/pdf/1904.03867

*** Model Specific Explanations - Neural Networks

+ (2013) Visualizing and Understanding Convolutional Networks by Matthew D Zeiler, Rob Fergus
  + https://arxiv.org/pdf/1311.2901

+ (2013) Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps by Karen Simonyan, Andrea Vedaldi, Andrew Zisserman
  + https://arxiv.org/pdf/1312.6034
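
A minimal sketch of the gradient-based saliency map from this paper, written
in PyTorch as an implementation choice (not the authors' code); =model= and
the input image are placeholders:

#+BEGIN_SRC python
import torch

def saliency_map(model, image, target_class):
    """Saliency = magnitude of the class score's gradient with respect to
    the input pixels, for a single image of shape [1, C, H, W]."""
    model.eval()
    image = image.clone().requires_grad_(True)
    score = model(image)[0, target_class]  # unnormalized class score
    score.backward()                       # d(score) / d(pixels)
    # Max over color channels gives one importance value per pixel.
    return image.grad.abs().max(dim=1)[0].squeeze(0)
#+END_SRC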

+ (2015) Understanding Neural Networks Through Deep Visualization by Jason Yosinski, Jeff Clune, Anh Nguyen, Thomas Fuchs, Hod Lipson
  + https://arxiv.org/pdf/1506.06579
  + https://github.com/yosinski/deep-visualization-toolbox

+ (2016) Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization by Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, Dhruv Batra
  + https://arxiv.org/pdf/1610.02391
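
A compact Grad-CAM sketch in PyTorch (an implementation choice, not the
authors' release); =model=, its last convolutional layer =target_layer=, and
the input batch are placeholders:

#+BEGIN_SRC python
import torch
import torch.nn.functional as F

def grad_cam(model, target_layer, image, target_class):
    acts, grads = {}, {}
    h1 = target_layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
    h2 = target_layer.register_full_backward_hook(
        lambda m, gi, go: grads.update(g=go[0]))
    model.zero_grad()
    model(image)[0, target_class].backward()
    h1.remove(); h2.remove()
    # Weight each feature map by its spatially averaged gradient,
    # sum over channels, and keep only the positive evidence.
    weights = grads["g"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * acts["a"]).sum(dim=1, keepdim=True))
    # Upsample the coarse map to input resolution for overlaying.
    return F.interpolate(cam, size=image.shape[2:], mode="bilinear")
#+END_SRC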

+ (2016) Generating Visual Explanations by Lisa Anne Hendricks, Zeynep Akata, Marcus Rohrbach, Jeff Donahue, Bernt Schiele, Trevor Darrell
  + https://arxiv.org/pdf/1603.08507

+ (2016) Rationalizing Neural Predictions by Tao Lei, Regina Barzilay, Tommi Jaakkola
  + https://arxiv.org/pdf/1606.04155
  + https://people.csail.mit.edu/taolei/papers/emnlp16_rationale_slides.pdf
  + Code: https://github.com/taolei87/rcnn/tree/master/code/rationale

+ (2016) Gradients of Counterfactuals by Mukund Sundararajan, Ankur Taly, Qiqi Yan
  + https://arxiv.org/pdf/1611.02639

+ Pixel entropy can be used to detect relevant picture regions (for ConvNets)
  + See Visualization section and Fig. 5 of the paper
    + (2017) High-Resolution Breast Cancer Screening with Multi-View Deep Convolutional Neural Networks by Krzysztof J. Geras, Stacey Wolfson, Yiqiu Shen, Nan Wu, S. Gene Kim, Eric Kim, Laura Heacock, Ujas Parikh, Linda Moy, Kyunghyun Cho
      + https://arxiv.org/pdf/1703.07047
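
A short sketch of the pixel-entropy heuristic using scikit-image; =gray= is a
placeholder 2-D grayscale image with values in [0, 1]:

#+BEGIN_SRC python
from skimage import img_as_ubyte
from skimage.filters.rank import entropy
from skimage.morphology import disk

# Low local entropy marks near-uniform regions (background) that contribute
# little to a ConvNet's prediction; high-entropy regions are candidates for
# cropping or closer attention.
entropy_map = entropy(img_as_ubyte(gray), disk(5))  # 5-pixel neighborhood
mask = entropy_map > entropy_map.mean()             # keep informative regions
#+END_SRC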

+ (2017) SVCCA: Singular Vector Canonical Correlation Analysis for Deep Learning Dynamics and Interpretability by Maithra Raghu, Justin Gilmer, Jason Yosinski, Jascha Sohl-Dickstein
  + https://arxiv.org/pdf/1706.05806
  + https://research.googleblog.com/2017/11/interpreting-deep-neural-networks-with.html

+ (2017) Visual Explanation by Interpretation: Improving Visual Feedback Capabilities of Deep Neural Networks by Jose Oramas, Kaili Wang, Tinne Tuytelaars
  + https://arxiv.org/pdf/1712.06302

+ (2017) Axiomatic Attribution for Deep Networks by Mukund Sundararajan, Ankur Taly, Qiqi Yan
  + https://arxiv.org/pdf/1703.01365
  + Code: https://github.com/ankurtaly/Integrated-Gradients
  + Proposes Integrated Gradients Method
  + See also: Gradients of Counterfactuals https://arxiv.org/pdf/1611.02639.pdf
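
A from-scratch sketch of the Integrated Gradients formula in PyTorch (the
official code is linked above); =model= and the input =x= are placeholders:

#+BEGIN_SRC python
import torch

def integrated_gradients(model, x, target_class, steps=50):
    """Attribute a prediction by accumulating input gradients along the
    straight-line path from a baseline (all zeros here) to the input x,
    then scaling by (x - baseline)."""
    baseline = torch.zeros_like(x)
    total_grad = torch.zeros_like(x)
    for alpha in torch.linspace(0.0, 1.0, steps):
        point = (baseline + alpha * (x - baseline)).requires_grad_(True)
        score = model(point)[0, target_class]
        grad, = torch.autograd.grad(score, point)
        total_grad += grad
    # Riemann-sum approximation of the path integral of gradients.
    return (x - baseline) * total_grad / steps
#+END_SRC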

+ (2017) Learning Important Features Through Propagating Activation Differences by Avanti Shrikumar, Peyton Greenside, Anshul Kundaje
  + https://arxiv.org/pdf/1704.02685
  + Proposes the DeepLIFT method
  + Code: https://github.com/kundajelab/deeplift
  + Videos: https://www.youtube.com/playlist?list=PLJLjQOkqSRTP3cLB2cOOi_bQFw6KPGKML

+ (2017) The (Un)reliability of saliency methods by Pieter-Jan Kindermans, Sara Hooker, Julius Adebayo, Maximilian Alber, Kristof T. Schütt, Sven Dähne, Dumitru Erhan, Been Kim
+ https://arxiv.org/pdf/1711.00867
  + Review of failures for methods extracting most important pixels for prediction

+ (2018) Classifier-agnostic saliency map extraction by Konrad Zolna, Krzysztof J. Geras, Kyunghyun Cho
  + https://arxiv.org/pdf/1805.08249
  + Code: https://github.com/kondiz/casme

+ (2018) A Benchmark for Interpretability Methods in Deep Neural Networks by Sara Hooker, Dumitru Erhan, Pieter-Jan Kindermans, Been Kim
  + https://arxiv.org/pdf/1806.10758

+ (2018) The Building Blocks of Interpretability by Chris Olah, Arvind Satyanarayan, Ian Johnson, Shan Carter, Ludwig Schubert, Katherine Ye, Alexander Mordvintsev
  + https://dx.doi.org/10.23915/distill.00010
+ Has some embedded links to notebooks
  + Uses Lucid library https://github.com/tensorflow/lucid
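
A feature-visualization quickstart along the lines of Lucid's own tutorial
(Lucid targets TensorFlow 1.x, so treat this as a sketch for that era):

#+BEGIN_SRC python
import lucid.modelzoo.vision_models as models
from lucid.optvis import render

# Load a pretrained InceptionV1 graph and optimize an input image to
# maximally activate one channel of an intermediate layer.
model = models.InceptionV1()
model.load_graphdef()
_ = render.render_vis(model, "mixed4a_pre_relu:476")
#+END_SRC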

+ (2018) Hierarchical interpretations for neural network predictions by Chandan Singh, W. James Murdoch, Bin Yu
  + https://arxiv.org/pdf/1806.05337
  + Code: https://github.com/csinva/hierarchical_dnn_interpretations

+ (2018) iNNvestigate neural networks! by Maximilian Alber, Sebastian Lapuschkin, Philipp Seegerer, Miriam Hägele, Kristof T. Schütt, Grégoire Montavon, Wojciech Samek, Klaus-Robert Müller, Sven Dähne, Pieter-Jan Kindermans
  + https://arxiv.org/pdf/1808.04260
  + Code: https://github.com/albermax/innvestigate

+ (2018) YASENN: Explaining Neural Networks via Partitioning Activation Sequences by Yaroslav Zharov, Denis Korzhenkov, Pavel Shvechikov, Alexander Tuzhilin
  + https://arxiv.org/pdf/1811.02783

+ (2019) Attention is not Explanation by Sarthak Jain, Byron C. Wallace
  + https://arxiv.org/pdf/1902.10186

+ (2019) Attention Interpretability Across NLP Tasks by Shikhar Vashishth, Shyam Upadhyay, Gaurav Singh Tomar, Manaal Faruqui
  + https://arxiv.org/pdf/1909.11218

+ (2019) GRACE: Generating Concise and Informative Contrastive Sample to Explain Neural Network Model's Prediction by Thai Le, Suhang Wang, Dongwon Lee
  + https://arxiv.org/pdf/1911.02042
  + Code: https://github.com/lethaiq/GRACE_KDD20

** Extracting Interpretable Models From Complex Ones

** Model Visualization

** Selected Review Talks and Tutorials

** Venues

** Other Resources
