
kohpangwei / Influence Release

License: MIT

Projects that are alternatives to or similar to Influence Release

Bandits
Python library for Multi-Armed Bandits
Stars: ✭ 547 (-2.15%)
Mutual labels:  jupyter-notebook
Deepnlp Course
Deep NLP Course
Stars: ✭ 551 (-1.43%)
Mutual labels:  jupyter-notebook
Log Progress
https://habr.com/ru/post/276725/
Stars: ✭ 556 (-0.54%)
Mutual labels:  jupyter-notebook
Gan
Tooling for GANs in TensorFlow
Stars: ✭ 547 (-2.15%)
Mutual labels:  jupyter-notebook
Curve Text Detector
This repository provides train & test code, dataset, detection & recognition annotations, evaluation script, annotation tool, and ranking.
Stars: ✭ 551 (-1.43%)
Mutual labels:  jupyter-notebook
Ada Build
The Ada Developers Academy Jump Start program, which is intended for anyone who is interested in beginning their journey into coding.
Stars: ✭ 551 (-1.43%)
Mutual labels:  jupyter-notebook
Pdpbox
Python partial dependence plot toolbox
Stars: ✭ 544 (-2.68%)
Mutual labels:  jupyter-notebook
Data Science Portfolio
Portfolio of data science projects completed by me for academic, self-learning, and hobby purposes.
Stars: ✭ 559 (+0%)
Mutual labels:  jupyter-notebook
Competitive Data Science
Materials for "How to Win a Data Science Competition: Learn from Top Kagglers" course
Stars: ✭ 551 (-1.43%)
Mutual labels:  jupyter-notebook
Tf Dann
Domain-Adversarial Neural Network in TensorFlow
Stars: ✭ 556 (-0.54%)
Mutual labels:  jupyter-notebook
Neural Collage
Collaging on Internal Representations: An Intuitive Approach for Semantic Transfiguration
Stars: ✭ 549 (-1.79%)
Mutual labels:  jupyter-notebook
Go Profiler Notes
felixge's notes on the various Go profiling methods that are available.
Stars: ✭ 525 (-6.08%)
Mutual labels:  jupyter-notebook
Torch Residual Networks
This is a Torch implementation of ["Deep Residual Learning for Image Recognition", Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun](http://arxiv.org/abs/1512.03385), the winners of the 2015 ILSVRC and COCO challenges.
Stars: ✭ 553 (-1.07%)
Mutual labels:  jupyter-notebook
Pythoncode Tutorials
The Python Code Tutorials
Stars: ✭ 544 (-2.68%)
Mutual labels:  jupyter-notebook
Gs Quant
Python toolkit for quantitative finance
Stars: ✭ 556 (-0.54%)
Mutual labels:  jupyter-notebook
Fuzzingbook
Project page for "The Fuzzing Book"
Stars: ✭ 549 (-1.79%)
Mutual labels:  jupyter-notebook
Reinforce
Reinforcement Learning Algorithm Package & PuckWorld, GridWorld Gym environments
Stars: ✭ 552 (-1.25%)
Mutual labels:  jupyter-notebook
Data Analysis And Machine Learning Projects
Repository of teaching materials, code, and data for my data analysis and machine learning projects.
Stars: ✭ 5,166 (+824.15%)
Mutual labels:  jupyter-notebook
Qs ledger
Quantified Self Personal Data Aggregator and Data Analysis
Stars: ✭ 559 (+0%)
Mutual labels:  jupyter-notebook
Numerical Tours
Numerical Tours of Signal Processing
Stars: ✭ 553 (-1.07%)
Mutual labels:  jupyter-notebook

Understanding Black-box Predictions via Influence Functions

This code replicates the experiments from the following paper:

Pang Wei Koh and Percy Liang. Understanding Black-box Predictions via Influence Functions. International Conference on Machine Learning (ICML), 2017.

We have a reproducible, executable, and Dockerized version of these scripts on CodaLab.

The datasets for the experiments can also be found at the CodaLab link.

Dependencies:

  • NumPy/SciPy/scikit-learn/pandas
  • TensorFlow (tested on v1.1.0)
  • Keras (tested on v2.0.4)
  • spaCy (tested on v1.8.2)
  • h5py (tested on v2.7.0)
  • Matplotlib/Seaborn (for visualizations)

A Dockerfile with these dependencies can be found here: https://hub.docker.com/r/pangwei/tf1.1/


In this paper, we use influence functions --- a classic technique from robust statistics --- to trace a model's prediction through the learning algorithm and back to its training data, thereby identifying training points most responsible for a given prediction. To scale up influence functions to modern machine learning settings, we develop a simple, efficient implementation that requires only oracle access to gradients and Hessian-vector products. We show that even on non-convex and non-differentiable models where the theory breaks down, approximations to influence functions can still provide valuable information. On linear models and convolutional neural networks, we demonstrate that influence functions are useful for multiple purposes: understanding model behavior, debugging models, detecting dataset errors, and even creating visually-indistinguishable training-set attacks.
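For intuition, the core quantity is the influence score I(z, z_test) = -grad L(z_test)^T H^{-1} grad L(z), which can be computed without ever materializing the Hessian H. Below is a minimal sketch of that computation for an L2-regularized logistic regression, using SciPy's conjugate-gradient solver; it is an illustration under assumed model and hyperparameters, not the implementation in this repository.

    # Minimal influence-score sketch for L2-regularized logistic regression
    # (illustrative only; not this repository's code). Labels are in {-1, +1};
    # the regularization strength l2 and the CG solver are assumptions.
    import numpy as np
    from scipy.sparse.linalg import LinearOperator, cg

    def sigmoid(t):
        return 1.0 / (1.0 + np.exp(-t))

    def grad_loss(theta, x, y, l2=1e-2):
        # Gradient of one example's regularized logistic loss.
        return -y * sigmoid(-y * (x @ theta)) * x + l2 * theta

    def hvp(theta, X, Y, v, l2=1e-2):
        # Hessian-vector product of the mean training loss; H is never formed.
        s = sigmoid(Y * (X @ theta))
        return X.T @ (s * (1 - s) * (X @ v)) / len(Y) + l2 * v

    def influence(theta, X, Y, z_train, z_test):
        # I(z, z_test) = -grad L(z_test)^T H^{-1} grad L(z).
        # Solve H s = grad L(z_test) with conjugate gradient; the
        # regularized Hessian is positive definite, so CG applies.
        x, y = z_train
        x_test, y_test = z_test
        n = len(theta)
        H = LinearOperator((n, n), matvec=lambda v: hvp(theta, X, Y, v))
        s_test, info = cg(H, grad_loss(theta, x_test, y_test))
        return -s_test @ grad_loss(theta, x, y)

Under the paper's sign convention, a positive score means upweighting the training point would increase the test loss (a harmful point) and a negative score means it would decrease it (a helpful one); ranking training points by this score is what underlies the debugging and dataset-error experiments.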

If you have questions, please contact Pang Wei Koh ([email protected]).
