

Interp-Parts

Code repository for our paper "Interpretable and Accurate Fine-grained Recognition via Region Grouping" in CVPR 2020 (Oral Presentation).

[Project Page] [Paper]

The repository includes full training and evaluation code for the CelebA and CUB-200-2011 datasets.

Dependencies

  • p7zip (used for decompression)
  • Python 3
  • PyTorch 1.4.0+
  • OpenCV-Python
  • NumPy
  • SciPy
  • Matplotlib
  • scikit-learn
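
If you manage the Python packages with pip, they can be installed along these lines (a sketch; pick the PyTorch build that matches your CUDA and Python setup, and install p7zip through your system package manager, e.g. apt-get install p7zip-full on Debian/Ubuntu):

pip install "torch>=1.4.0" opencv-python numpy scipy matplotlib scikit-learn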

Dataset

CelebA

You will need to download both the aligned and unaligned face images (JPEG format) of the CelebA dataset from http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html. Make sure your data/celeba folder is structured as follows:

├── img_align_celeba.zip
├── img_celeba.7z.001
├── ...
├── img_celeba.7z.014
└── annotation.zip
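
Before unpacking, you can quickly confirm that all the archives are in place, for example:

cd ./data/celeba
ls img_align_celeba.zip img_celeba.7z.0* annotation.zip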

We provide a bash script to unpack all the images. You can run:

cd ./data/celeba
sh data_processing.sh

It might take more than 30 minutes to uncompress all the data.
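
Once the script finishes, a rough sanity check is to count the extracted images; each CelebA set contains 202,599 face images. The output folder names below depend on data_processing.sh and are shown for illustration only:

find ./img_celeba -name "*.jpg" | wc -l
find ./img_align_celeba -name "*.jpg" | wc -l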

CUB 200

For the Caltech-UCSD Birds-200-2011 (CUB-200) dataset, you will need to manually download the dataset from http://www.vision.caltech.edu/visipedia/CUB-200-2011.html and extract the tgz file into data/cub200. Make sure your data/cub200 folder is structured as follows:

├── CUB_200_2011/
|   ├── images/
|   ├── parts/
|   ├── attributes/
├── train_test_split.txt
├── ...
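
CUB-200-2011 contains 11,788 images, so after extraction you can sanity-check the count with, for example:

find data/cub200/CUB_200_2011/images -name "*.jpg" | wc -l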

CelebA Example

Display the help message for the training parameters:

cd src/celeba
python train.py --config-help

Training (Unaligned CelebA from SCOPS)

Training (You can specify the desired settings in celeba_res101.json):

cd src/celeba
python train.py --config ../../celeba_res101.json

The code will create three folders for model checkpoints (./checkpoint), log files (./log), and TensorBoard logs (./tensorboard_log).
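
To monitor training, you can point TensorBoard at that log directory (assuming the tensorboard package is installed), for example:

cd src/celeba
tensorboard --logdir ./tensorboard_log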

Visualization and Evaluation (Unaligned CelebA from SCOPS)

Visualization of the results (assuming a ResNet-101 model trained with 9 parts):

cd src/celeba
python visualize.py --load celeba_res101_p9

The code will create a new folder (./visualization) for output images (25 by default).

Evaluating interpretability using part localization (assuming a ResNet-101 model trained with 9 parts):

cd src/celeba
python eval_interp.py --load celeba_res101_p9

This should reproduce our results in Table 2.

Evaluating accuracy (assuming a ResNet-101 model trained with 9 parts):

cd src/celeba
python eval_acc.py --load celeba_res101_p9

This will report the classification accuracy (mean class accuracy) on the test set of the SCOPS split.

Reproduce Results in Table 1 (Aligned CelebA)

Training (You need to change the split setting to accuracy in celeba_res101.json):

cd src/celeba
python train.py --config ../../celeba_res101.json

Evaluation:

cd src/celeba
python eval_acc.py --load celeba_res101_p9

CUB-200 Example

Display the help message for the training parameters:

cd src/cub200
python train.py --config-help

Training

Training (You can specify the desired settings in cub_res101.json. The default configuration differs slightly from the paper to reduce GPU memory usage, so that the code can run on a single high-end graphics card.):

cd src/cub200
python train.py --config ../../cub_res101.json

The code will create three folders for model checkpoints (./checkpoint), log files (./log), and TensorBoard logs (./tensorboard_log).

Visualization and Evaluation

Visualization of the results (assuming a ResNet-101 model trained with 5 parts):

cd src/cub200
python visualize.py --load cub_res101_p5

The code will create a new folder (./visualization) for output images (25 by default).

Evaluating interpretability using part localization (assuming a ResNet-101 model trained with 5 parts):

cd src/cub200
python eval_interp.py --load cub_res101_p5

Evaluating accuracy (assuming a ResNet-101 model trained with 5 parts):

cd src/cub200
python eval_acc.py --load cub_res101_p5

This will report the classification accuracy on the test set of CUB-200.

References

If you are using our code, please consider citing our paper.

@InProceedings{Huang_2020_CVPR,
author = {Huang, Zixuan and Li, Yin},
title = {Interpretable and Accurate Fine-grained Recognition via Region Grouping},
booktitle = {The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2020}
}

If you are using the CelebA dataset, please cite

@inproceedings{liu2015faceattributes,
 title = {Deep Learning Face Attributes in the Wild},
 author = {Liu, Ziwei and Luo, Ping and Wang, Xiaogang and Tang, Xiaoou},
 booktitle = {Proceedings of International Conference on Computer Vision (ICCV)},
 month = {December},
 year = {2015}
}

If you are using the CUB-200 dataset, please cite

@techreport{WahCUB_200_2011,
Title = {{The Caltech-UCSD Birds-200-2011 Dataset}},
Author = {Wah, C. and Branson, S. and Welinder, P. and Perona, P. and Belongie, S.},
Year = {2011},
Institution = {California Institute of Technology},
Number = {CNS-TR-2011-001}
}