H2O.ai Machine Learning Interpretability Resources

Projects that are alternatives to or similar to Mli Resources

Interpretable machine learning with python
Examples of techniques for training interpretable ML models, explaining ML models, and debugging ML models for accuracy, discrimination, and security.
Stars: ✭ 530 (+23.83%)
Mutual labels:  transparency, jupyter-notebook, data-science, data-mining, interpretability, h2o
Gwu data mining
Materials for GWU DNSC 6279 and DNSC 6290.
Stars: ✭ 217 (-49.3%)
Mutual labels:  jupyter-notebook, data-science, data-mining, h2o
Awesome Machine Learning Interpretability
A curated list of awesome machine learning interpretability resources.
Stars: ✭ 2,404 (+461.68%)
Mutual labels:  transparency, data-science, data-mining, interpretability
diabetes use case
Sample use case for Xavier AI in Healthcare conference: https://www.xavierhealth.org/ai-summit-day2/
Stars: ✭ 22 (-94.86%)
Mutual labels:  data-mining, xgboost, transparency, interpretability
Python Machine Learning Book
The "Python Machine Learning (1st edition)" book code repository and info resource
Stars: ✭ 11,428 (+2570.09%)
Mutual labels:  jupyter-notebook, data-science, data-mining
Machine learning for good
Machine learning fundamentals lesson in interactive notebooks
Stars: ✭ 142 (-66.82%)
Mutual labels:  jupyter-notebook, data-science, data-mining
Data Science Resources
👨🏽‍🏫You can learn about what data science is and why it's important in today's modern world. Are you interested in data science?🔋
Stars: ✭ 171 (-60.05%)
Mutual labels:  jupyter-notebook, data-science, data-mining
Eli5
A library for debugging/inspecting machine learning classifiers and explaining their predictions
Stars: ✭ 2,477 (+478.74%)
Mutual labels:  jupyter-notebook, data-science, xgboost
Sci Pype
A Machine Learning API with native redis caching and export + import using S3. Analyze entire datasets using an API for building, training, testing, analyzing, extracting, importing, and archiving. This repository can run from a docker container or from the repository.
Stars: ✭ 90 (-78.97%)
Mutual labels:  jupyter-notebook, data-science, xgboost
Imodels
Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling (sklearn-compatible).
Stars: ✭ 194 (-54.67%)
Mutual labels:  jupyter-notebook, data-science, interpretability
Amazing Feature Engineering
Feature engineering is the process of using domain knowledge to extract features from raw data via data mining techniques. These features can be used to improve the performance of machine learning algorithms. Feature engineering can be considered as applied machine learning itself.
Stars: ✭ 218 (-49.07%)
Mutual labels:  jupyter-notebook, data-science, data-mining
Benchmarks
Comparison tools
Stars: ✭ 139 (-67.52%)
Mutual labels:  jupyter-notebook, xgboost, h2o
Sigmoidal ai
Python, Data Science, Machine Learning, and Deep Learning tutorials - Sigmoidal
Stars: ✭ 103 (-75.93%)
Mutual labels:  jupyter-notebook, data-science, xgboost
Fantasy Basketball
Scraping statistics, predicting NBA player performance with neural networks and boosting algorithms, and optimising lineups for Draft Kings with genetic algorithm. Capstone Project for Machine Learning Engineer Nanodegree by Udacity.
Stars: ✭ 146 (-65.89%)
Mutual labels:  jupyter-notebook, data-science, data-mining
H2o Tutorials
Tutorials and training material for the H2O Machine Learning Platform
Stars: ✭ 1,305 (+204.91%)
Mutual labels:  jupyter-notebook, data-science, h2o
Explainx
Explainable AI framework for data scientists. Explain & debug any blackbox machine learning model with a single line of code.
Stars: ✭ 196 (-54.21%)
Mutual labels:  transparency, jupyter-notebook, interpretability
interpretable-ml
Techniques & resources for training interpretable ML models, explaining ML models, and debugging ML models.
Stars: ✭ 17 (-96.03%)
Mutual labels:  data-mining, transparency, interpretability
Spring2017 proffosterprovost
Introduction to Data Science
Stars: ✭ 18 (-95.79%)
Mutual labels:  jupyter-notebook, data-science, data-mining
Allstate capstone
Allstate Kaggle Competition ML Capstone Project
Stars: ✭ 72 (-83.18%)
Mutual labels:  jupyter-notebook, data-science, xgboost
Facet
Human-explainable AI.
Stars: ✭ 269 (-37.15%)
Mutual labels:  jupyter-notebook, data-science, interpretability

Machine Learning Interpretability (MLI)

Machine learning algorithms can create more accurate models than linear models, but any increase in accuracy over more traditional, better-understood, and more easily explainable techniques is of little practical value to those who must explain their models to regulators or customers. For many decades, the models created by machine learning algorithms were generally taken to be black boxes. However, a recent flurry of research has introduced credible techniques for interpreting complex, machine-learned models. Materials presented here illustrate applications or adaptations of these techniques for practicing data scientists.
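
One such technique is partial dependence, which summarizes the average behavior of a complex model with respect to a single input. A minimal sketch, assuming a fitted model with a scikit-learn-style predict() method and a pandas DataFrame of inputs (the names here are illustrative, not taken from this repo's notebooks):

    # Minimal partial dependence sketch (illustrative only, not code from this repo).
    import numpy as np

    def partial_dependence(model, X, feature, grid_points=20):
        """Average the model's predictions while sweeping one input across its range."""
        grid = np.linspace(X[feature].min(), X[feature].max(), grid_points)
        averaged = []
        for value in grid:
            X_temp = X.copy()
            X_temp[feature] = value  # hold all other inputs at their observed values
            averaged.append(float(model.predict(X_temp).mean()))
        return grid, np.array(averaged)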

Want to contribute your own content? Just make a pull request.

Want to use the content in this repo? Just cite the H2O.ai machine learning interpretability team or the original author(s) as appropriate.

Contents

Practical MLI examples

(A Dockerfile is provided that will construct a container with all necessary dependencies to run the examples here.)

Installation of Examples

Dockerfile

A Dockerfile is provided to build a docker container with all necessary packages and dependencies. This is the easiest way to use these examples if you are on Mac OS X, *nix, or Windows 10. To do so:

  1. Install and start docker. The remaining steps are run from a terminal.
  2. Create a directory for the Dockerfile. $ mkdir anaconda_py36_h2o_xgboost_graphviz
  3. Fetch the Dockerfile from the mli-resources repo. $ curl https://raw.githubusercontent.com/h2oai/mli-resources/master/anaconda_py36_h2o_xgboost_graphviz/Dockerfile > anaconda_py36_h2o_xgboost_graphviz/Dockerfile
  4. Build a docker image from the Dockerfile. For this and the other docker commands below, you may need to use sudo. $ docker build --no-cache anaconda_py36_h2o_xgboost_graphviz
  5. Display docker image IDs. You are probably interested in the most recently created image. $ docker images
  6. Start the docker image and the Jupyter notebook server. $ docker run -i -t -p 8888:8888 <image_id> /bin/bash -c "/opt/conda/bin/conda install jupyter -y --quiet && /opt/conda/bin/jupyter notebook --notebook-dir=/mli-resources --ip='*' --port=8888 --no-browser --allow-root"
  7. List docker containers. $ docker ps
  8. Copy the sample data into the Docker container. Refer to GetData.md to obtain the datasets needed for the notebooks. $ docker cp path/to/train.csv <container_id>:/mli-resources/data/train.csv
  9. Navigate to the port Jupyter directs you to on your machine. The URL will likely include a token.
Manual

Install:

  1. Anaconda Python 5.1.0 from the Anaconda archives.
  2. Java.
  3. The latest stable h2o Python package.
  4. Git.
  5. XGBoost with Python bindings.
  6. GraphViz.

Anaconda Python, Java, Git, and GraphViz must be added to your system path.
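
Once these are installed, a quick sanity check in Python can confirm that the core packages import and that a local H2O instance will start. A minimal sketch, assuming the packages above installed cleanly (reported versions will vary by environment):

    # Sanity-check the manual installation (illustrative sketch).
    import h2o      # latest stable h2o Python package
    import xgboost  # XGBoost with Python bindings

    print('h2o version:', h2o.__version__)
    print('xgboost version:', xgboost.__version__)

    h2o.init()               # starting H2O requires Java on the system path
    h2o.cluster().shutdown()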

From a terminal:

  1. Clone the mli-resources repository with examples. $ git clone https://github.com/h2oai/mli-resources.git
  2. Enter the repository directory. $ cd mli-resources
  3. Copy the sample data into the mli-resources repo directory. Refer to GetData.md to obtain datasets needed for notebooks. $ cp path/to/train.csv ./data
  4. Start the Jupyter notebook server. $ jupyter notebook
  5. Navigate to the port Jupyter directs you to on your machine.

Additional Code Examples

The notebooks in this repo have been revamped and refined many times. Other versions with different, and potentially interesting, details are available at these locations:

Testing Explanations

One way to test generated explanations for accuracy is to use simulated data with known characteristics. For instance, models trained on totally random data with no relationship between the input variables and the prediction target should not give strong weight to any input variable, nor generate compelling local explanations or reason codes. Conversely, simulated data with a known signal-generating function can be used to test that explanations accurately represent that known function. Detailed examples of testing explanations with simulated data are available here. A summary of these results is available here.
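
A minimal sketch of both checks, assuming numpy, pandas, and XGBoost are installed as described above (the variable names and signal-generating function are illustrative, not taken from this repo's tests):

    # Check that importances behave sensibly on simulated data (illustrative sketch).
    import numpy as np
    import pandas as pd
    import xgboost as xgb

    np.random.seed(12345)
    n, p = 10000, 8
    X = pd.DataFrame(np.random.uniform(size=(n, p)),
                     columns=['x%d' % i for i in range(p)])

    # Case 1: target with no relationship to any input.
    y_random = np.random.uniform(size=n)

    # Case 2: known signal-generating function -- only x0 and x1 carry signal.
    y_signal = 2 * X['x0'] + X['x1'] ** 2 + np.random.normal(scale=0.1, size=n)

    for label, y in [('random target', y_random), ('known signal', y_signal)]:
        model = xgb.XGBRegressor(n_estimators=100, max_depth=3).fit(X, y)
        importance = pd.Series(model.feature_importances_, index=X.columns)
        print(label)
        print(importance.sort_values(ascending=False).round(3))

    # Expect diffuse, small importances for the random target, and importance
    # concentrated on x0 and x1 for the known signal-generating function.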

Webinars/Videos

Booklets

Conference Presentations

Miscellaneous Resources

General References
