
wittawatj / kernel-mod

License: MIT
NeurIPS 2018. Linear-time model comparison tests.

Programming Languages

  • Jupyter Notebook
  • Python
  • Shell

Projects that are alternatives of or similar to kernel-mod

delay-discounting-analysis
Hierarchical Bayesian estimation and hypothesis testing for delay discounting tasks
Stars: ✭ 20 (+17.65%)
Mutual labels:  hypothesis-testing
supervised-random-projections
Python implementation of supervised PCA, supervised random projections, and their kernel counterparts.
Stars: ✭ 19 (+11.76%)
Mutual labels:  kernel-methods
distfit
distfit is a python library for probability density fitting.
Stars: ✭ 250 (+1370.59%)
Mutual labels:  hypothesis-testing
hypothetical
Hypothesis and statistical testing in Python
Stars: ✭ 49 (+188.24%)
Mutual labels:  hypothesis-testing
graphkit-learn
A python package for graph kernels, graph edit distances, and graph pre-image problem.
Stars: ✭ 87 (+411.76%)
Mutual labels:  kernel-methods
KernelKnn
Kernel k Nearest Neighbors in R
Stars: ✭ 14 (-17.65%)
Mutual labels:  kernel-methods
R-stats-machine-learning
Misc Statistics and Machine Learning codes in R
Stars: ✭ 33 (+94.12%)
Mutual labels:  hypothesis-testing
TimeseriesSurrogates.jl
A Julia package for generating timeseries surrogates
Stars: ✭ 35 (+105.88%)
Mutual labels:  hypothesis-testing
kernel-ep
UAI 2015. Kernel-based just-in-time learning for expectation propagation
Stars: ✭ 16 (-5.88%)
Mutual labels:  kernel-methods
Awesome Graph Classification
A collection of important graph embedding, classification and representation learning papers with implementations.
Stars: ✭ 4,309 (+25247.06%)
Mutual labels:  kernel-methods
pytest tutorial
No description or website provided.
Stars: ✭ 20 (+17.65%)
Mutual labels:  hypothesis-testing
interpretable-test
NeurIPS 2016. Linear-time interpretable nonparametric two-sample test.
Stars: ✭ 58 (+241.18%)
Mutual labels:  kernel-methods
kafbox
A Matlab benchmarking toolbox for kernel adaptive filtering
Stars: ✭ 70 (+311.76%)
Mutual labels:  kernel-methods
bioinf-commons
Bioinformatics library in Kotlin
Stars: ✭ 21 (+23.53%)
Mutual labels:  hypothesis-testing
thermostat
Collection of NLP model explanations and accompanying analysis tools
Stars: ✭ 126 (+641.18%)
Mutual labels:  interpretability
deep-significance
Enabling easy statistical significance testing for deep neural networks.
Stars: ✭ 266 (+1464.71%)
Mutual labels:  hypothesis-testing
frp
FRP: Fast Random Projections
Stars: ✭ 40 (+135.29%)
Mutual labels:  kernel-methods
MachineLearning
Machine learning for beginner(Data Science enthusiast)
Stars: ✭ 104 (+511.76%)
Mutual labels:  hypothesis-testing
adaptive-wavelets
Adaptive, interpretable wavelets across domains (NeurIPS 2021)
Stars: ✭ 58 (+241.18%)
Mutual labels:  interpretability
DSPKM
This is the page for the book Digital Signal Processing with Kernel Methods.
Stars: ✭ 32 (+88.24%)
Mutual labels:  kernel-methods

kmod


This repository contains a Python 3.6 implementation of the nonparametric linear-time relative goodness-of-fit tests (i.e., Rel-UME and Rel-FSSD) described in our paper

Informative Features for Model Comparison
Wittawat Jitkrittum, Heishiro Kanagawa, Patsorn Sangkloy, James Hays, Bernhard Schölkopf, Arthur Gretton
NeurIPS 2018
https://arxiv.org/abs/1810.11630

How to install?

If you plan to reproduce the experimental results, you will probably want to modify our code. In that case, it is best to install as follows:

  1. Clone the repository: git clone git@github.com:wittawatj/kernel-mod.git

  2. cd into the cloned folder and install the package in editable mode with

    pip install -e .

Alternatively, if you only want to use the package, you can do the following without cloning the repository:

pip install git+https://github.com/wittawatj/kernel-mod.git

Either way, once installed, you should be able to do import kmod without any error.

Dependency

autograd, matplotlib, numpy, scipy, PyTorch 0.4.1, and the following two packages: freqopttest and kgof.

In Python, make sure you can import freqopttest and import kgof without any error.

Demo

To get started, check demo_kmod.ipynb, a Jupyter notebook that will guide you through from the beginning. There are many other Jupyter notebooks in the ipynb/ folder demonstrating the other implemented tests. Be sure to check them if you would like to explore further.

Reproduce experimental results

Experiments on test powers

All experiments which involve test powers can be found in kmod/ex/ex1_vary_n.py, kmod/ex/ex2_prob_params.py, and kmod/ex/ex3_real_images.py. Each file is runnable with a command-line argument. For example, ex1_vary_n.py checks the test power of each testing algorithm as a function of the sample size n; the script takes a dataset name as its argument. See run_ex1.sh, a standalone Bash script, for how to execute ex1_vary_n.py.

We used the independent-jobs package to parallelize our experiments over a Slurm cluster (the package is not needed if you only want to use our tests). For example, in ex1_vary_n.py, a job is created for each combination of

(dataset, test algorithm, n, trial)
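The job grid above can be sketched as a standard Cartesian product. The concrete dataset names, algorithm labels, and sample sizes below are placeholders for illustration; the real values are defined inside the experiment scripts:

```python
import itertools

# Hypothetical values -- the actual datasets, tests, and sample sizes
# are defined inside kmod/ex/ex1_vary_n.py.
datasets = ["dataset_a", "dataset_b"]
algorithms = ["Rel-UME", "Rel-FSSD"]
sample_sizes = [100, 200]
trials = range(3)

# One job per (dataset, test algorithm, n, trial) combination.
jobs = list(itertools.product(datasets, algorithms, sample_sizes, trials))
print(len(jobs))  # 2 * 2 * 2 * 3 = 24 jobs
```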

If you do not use Slurm, you can change the line

engine = SlurmComputationEngine(batch_parameters)

to

engine = SerialComputationEngine()

which instructs the computation engine to use a normal for-loop on a single machine (this will take a long time). Other computation engines provided by independent-jobs may also work. Running a simulation creates many result files (one for each tuple above), saved as Pickle files. The independent-jobs package also requires a scratch folder to save temporary files for communication among computing nodes. The path to the folder containing the saved results can be specified in kmod/config.py by changing the value of expr_results_path:

# Full path to the directory to store experimental results.
'expr_results_path': '/full/path/to/where/you/want/to/save/results/',

The scratch folder needed by the independent-jobs package can be specified in the same file by changing the value of scratch_path:

# Full path to the directory to store temporary files when running experiments
'scratch_path': '/full/path/to/a/temporary/folder/',

To plot the results, see the experiment's corresponding Jupyter notebook in the ipynb/ folder. For example, for ex1_vary_n.py see ipynb/ex1_results.ipynb to plot the results.

Experiments on images

  • Preprocessing scripts for the CelebA and CIFAR-10 data can be found under preprocessing/. See the readme files in the sub-folders under preprocessing/.

  • The CNN feature extractor (used to define the kernel) in our MNIST experiment is trained with kmod/mnist/classify.py.

  • Many GAN variants we used (i.e., in experiment 5 in the main text and in the appendix) were trained using the code from https://github.com/janesjanes/GAN_training_code.

  • Trained GAN models (PyTorch 0.4.1) used in this work can be found at http://ftp.tuebingen.mpg.de/pub/is/wittawat/kmod_share/. The readme files in the sub-folders under preprocessing/ explain how to download the model files for the purpose of reproducing the results.

Coding guideline

  • Use autograd.numpy instead of numpy; part of the code relies on autograd for automatic differentiation. Use np.dot(X, Y) instead of X.dot(Y), since autograd cannot differentiate the latter. Likewise, avoid in-place updates such as x += ...; write x = x + ... instead.

If you have questions or comments about anything related to this work, please do not hesitate to contact Wittawat Jitkrittum and Heishiro Kanagawa.
