
mikigom / large-scale-OT-mapping-TF

Licence: other
TensorFlow implementation of "Large-Scale Optimal Transport and Mapping Estimation" (ICLR 2018 / NIPS 2017 OTML workshop)


Projects that are alternatives of or similar to large-scale-OT-mapping-TF

Deep Steganography
Hiding Images within other images using Deep Learning
Stars: ✭ 136 (+655.56%)
Mutual labels:  nips-2017
MongeAmpereFlow
Continuous-time gradient flow for generative modeling and variational inference
Stars: ✭ 29 (+61.11%)
Mutual labels:  optimal-transport
deep-steg
Global NIPS Paper Implementation Challenge of "Hiding Images in Plain Sight: Deep Steganography"
Stars: ✭ 43 (+138.89%)
Mutual labels:  nips-2017
Attentionalpoolingaction
Code/Model release for NIPS 2017 paper "Attentional Pooling for Action Recognition"
Stars: ✭ 248 (+1277.78%)
Mutual labels:  nips-2017
sinkhorn-label-allocation
Sinkhorn Label Allocation is a label assignment method for semi-supervised self-training algorithms. The SLA algorithm is described in full in this ICML 2021 paper: https://arxiv.org/abs/2102.08622.
Stars: ✭ 49 (+172.22%)
Mutual labels:  optimal-transport
pytorch-deep-sets
PyTorch re-implementation of parts of "Deep Sets" (NIPS 2017)
Stars: ✭ 60 (+233.33%)
Mutual labels:  nips-2017
Spherenet
Implementation for <Deep Hyperspherical Learning> in NIPS'17.
Stars: ✭ 111 (+516.67%)
Mutual labels:  nips-2017
infnet-spen
TensorFlow implementation [ICLR 18] "Learning Approximate Inference Networks for Structured Prediction"
Stars: ✭ 30 (+66.67%)
Mutual labels:  iclr2018
progressive-growing-of-gans.pytorch
Unofficial PyTorch implementation of "Progressive Growing of GANs for Improved Quality, Stability, and Variation".
Stars: ✭ 51 (+183.33%)
Mutual labels:  optimal-transport
NIPS-Global-Paper-Implementation-Challenge
Selective Classification For Deep Neural Networks.
Stars: ✭ 11 (-38.89%)
Mutual labels:  nips-2017
OptimalTransport.jl
Optimal transport algorithms for Julia
Stars: ✭ 64 (+255.56%)
Mutual labels:  optimal-transport
nips rl
Code for NIPS 2017 learning to run challenge
Stars: ✭ 37 (+105.56%)
Mutual labels:  nips-2017
Wasserstein2GenerativeNetworks
PyTorch implementation of "Wasserstein-2 Generative Networks" (ICLR 2021)
Stars: ✭ 38 (+111.11%)
Mutual labels:  optimal-transport
Dynamic routing between capsules
Implementation of Dynamic Routing Between Capsules, Sara Sabour, Nicholas Frosst, Geoffrey E Hinton, NIPS 2017
Stars: ✭ 202 (+1022.22%)
Mutual labels:  nips-2017
MNIST-multitask
6️⃣6️⃣6️⃣ Reproduce ICLR '18 under-reviewed paper "MULTI-TASK LEARNING ON MNIST IMAGE DATASETS"
Stars: ✭ 34 (+88.89%)
Mutual labels:  iclr2018
Prototypical Networks Tensorflow
Tensorflow implementation of NIPS 2017 Paper "Prototypical Networks for Few-shot Learning"
Stars: ✭ 122 (+577.78%)
Mutual labels:  nips-2017
pred-rnn
PredRNN: Recurrent Neural Networks for Predictive Learning using Spatiotemporal LSTMs
Stars: ✭ 115 (+538.89%)
Mutual labels:  nips-2017
gan-qp.pytorch
Unofficial PyTorch implementation of "GAN-QP: A Novel GAN Framework without Gradient Vanishing and Lipschitz Constraint"
Stars: ✭ 26 (+44.44%)
Mutual labels:  optimal-transport
stance
Learned string similarity for entity names using optimal transport.
Stars: ✭ 27 (+50%)
Mutual labels:  optimal-transport
MongeAmpere
Solve large instance of semi-discrete optimal transport problems and other Monge-Ampere equations
Stars: ✭ 18 (+0%)
Mutual labels:  optimal-transport

large-scale-OT-mapping-TF

TensorFlow implementation of the following paper:

Title: Large-Scale Optimal Transport and Mapping Estimation
Authors: Vivien Seguy, Bharath Bhushan Damodaran, Rémi Flamary, Nicolas Courty, Antoine Rolet, Mathieu Blondel
Publication: eprint arXiv:1711.02283
Publication Date: 11/2017
Origin: arXiv
Keywords: Statistics - Machine Learning
Comment: 10 pages, 4 figures
Bibliographic Code: 2017arXiv171102283S

on arXiv

on OpenReview
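
For context, the paper's two-stage procedure is, roughly: first learn dual potentials u, v of a regularized OT problem by stochastic gradient methods, then regress a Monge map onto the barycentric projection of the resulting plan. In the paper's notation (source μ, target ν, ground cost c, regularization strength ε), stage 1 maximizes:

```latex
% Regularized dual OT objective, maximized over the potentials u, v
\max_{u, v}\;
  \mathbb{E}_{x \sim \mu}[u(x)] + \mathbb{E}_{y \sim \nu}[v(y)]
  - \mathbb{E}_{(x, y) \sim \mu \otimes \nu}
      \big[ R_\varepsilon\big(u(x) + v(y) - c(x, y)\big) \big]
```

with R_ε(t) = ε e^{t/ε} for entropic regularization and R_ε(t) = t_+² / (4ε) for L2 regularization. Stage 2 then fits a map f by minimizing E[H_ε(x, y) ‖f(x) − y‖²], where H_ε is the density of the regularized plan (for L2, H_ε = (u + v − c)_+ / (2ε)).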

Some notes

  • This repository does not implement all of the paper's experiments. Instead, it verifies the paper's core algorithm on a small toy example.

  • Unlike the original paper, full batch-wise optimization is not implemented, but I believe this makes little difference.

  • To run experiments, run run.sh.

  • In my experiments, L2 regularization generally produces better-looking results than entropic regularization.

  • Epsilon is a quite sensitive and important hyperparameter; in my toy example, eps = 0.01 looks like a reasonable choice. A minimal sketch of how eps enters the L2 dual loss follows this list.
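
The sketch below is mine, not the repository's exact code: a TensorFlow 2-style training step for the L2-regularized dual problem on mini-batches. The network sizes, names (u_net, v_net, dual_step), and the squared-Euclidean cost are illustrative assumptions.

```python
# Minimal sketch (assumed names and sizes), following Seguy et al. (2018):
# stochastic optimization of the L2-regularized dual OT objective.
import tensorflow as tf

eps = 0.01  # regularization strength; small values sharpen the plan

def make_potential():
    # u, v: scalar dual potentials parameterized as small MLPs
    return tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1),
    ])

u_net, v_net = make_potential(), make_potential()
dual_opt = tf.keras.optimizers.Adam(1e-3)

@tf.function
def dual_step(x, y):  # x: source batch (B, d), y: target batch (B, d)
    with tf.GradientTape() as tape:
        u = u_net(x)                    # (B, 1)
        v = tf.transpose(v_net(y))      # (1, B)
        # squared-Euclidean ground cost c(x_i, y_j), all pairs in the batch
        cost = tf.reduce_sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
        slack = u + v - cost            # u(x_i) + v(y_j) - c(x_i, y_j)
        # L2 penalty: E[(slack)_+^2] / (4 * eps)
        penalty = tf.reduce_mean(tf.nn.relu(slack) ** 2) / (4.0 * eps)
        # maximize E[u] + E[v] - penalty  <=>  minimize its negation
        loss = -(tf.reduce_mean(u) + tf.reduce_mean(v) - penalty)
    variables = u_net.trainable_variables + v_net.trainable_variables
    dual_opt.apply_gradients(zip(tape.gradient(loss, variables), variables))
    return loss
```

Because eps divides the penalty term, it directly scales the gradient signal, which is consistent with the note above that it is a sensitive hyperparameter.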

Requirements

python3
tensorflow
matplotlib
seaborn
...

Results (with L2 regularization)

Source and Target

source_and_target

Source points are green and target points are red.

Monge Map Estimation

monge_map_estimation

Source points are green and transported points are blue.
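
For illustration, the second stage can be sketched as a weighted regression of a map network onto the target points, with weights given by the density of the regularized plan. This reuses u_net, v_net, and eps from the sketch above; f_net and its architecture are assumptions, not the repository's code.

```python
# Stage-2 sketch (assumed names): fit f to the barycentric projection of
# the L2-regularized plan, whose density is H(x, y) = (slack)_+ / (2 eps).
f_net = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(2),  # output dimension = data dimension (2-D toy)
])
map_opt = tf.keras.optimizers.Adam(1e-3)

@tf.function
def map_step(x, y):
    # plan density under the learned potentials (no gradient needed here)
    u = u_net(x)
    v = tf.transpose(v_net(y))
    cost = tf.reduce_sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
    h = tf.nn.relu(u + v - cost) / (2.0 * eps)   # (B, B) weights
    with tf.GradientTape() as tape:
        fx = f_net(x)                            # (B, d)
        sq = tf.reduce_sum((fx[:, None, :] - y[None, :, :]) ** 2, axis=-1)
        loss = tf.reduce_mean(h * sq)            # weighted regression loss
    map_opt.apply_gradients(
        zip(tape.gradient(loss, f_net.trainable_variables),
            f_net.trainable_variables))
    return loss
```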

KDE on transported distribution

kde_on_transported_distribution
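
A figure like this can be produced with standard seaborn/matplotlib calls; the snippet below is a guess at the plotting step, with x_source standing in for the full set of source points:

```python
import matplotlib.pyplot as plt
import seaborn as sns

transported = f_net(x_source).numpy()  # x_source: (N, 2) source points (assumed)
sns.kdeplot(x=transported[:, 0], y=transported[:, 1], fill=True)
plt.title("KDE of the transported distribution")
plt.show()
```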

Author

@mikigom (Junghoon Seo, Satrec Initiative)

[email protected]
