
rcalland / deep-INFOMAX

Licence: other
Chainer implementation of deep-INFOMAX

Programming Languages

python

Projects that are alternatives to, or similar to, deep-INFOMAX

chainer-ADDA
Adversarial Discriminative Domain Adaptation in Chainer
Stars: ✭ 24 (-25%)
Mutual labels:  chainer
DRNET
PyTorch implementation of the NIPS 2017 paper - Unsupervised Learning of Disentangled Representations from Video
Stars: ✭ 45 (+40.63%)
Mutual labels:  unsupervised-learning
Indoor-SfMLearner
[ECCV'20] Patch-match and Plane-regularization for Unsupervised Indoor Depth Estimation
Stars: ✭ 115 (+259.38%)
Mutual labels:  unsupervised-learning
graph-nvp
GraphNVP: An Invertible Flow Model for Generating Molecular Graphs
Stars: ✭ 69 (+115.63%)
Mutual labels:  chainer
neural style synthesizer
No description or website provided.
Stars: ✭ 15 (-53.12%)
Mutual labels:  chainer
awesome-contrastive-self-supervised-learning
A comprehensive list of awesome contrastive self-supervised learning papers.
Stars: ✭ 748 (+2237.5%)
Mutual labels:  unsupervised-learning
spear
SPEAR: Programmatically label and build training data quickly.
Stars: ✭ 81 (+153.13%)
Mutual labels:  unsupervised-learning
dads
Code for 'Dynamics-Aware Unsupervised Discovery of Skills' (DADS). Enables skill discovery without supervision, which can be combined with model-based control.
Stars: ✭ 138 (+331.25%)
Mutual labels:  unsupervised-learning
machine-learning-course
Machine Learning Course @ Santa Clara University
Stars: ✭ 17 (-46.87%)
Mutual labels:  unsupervised-learning
chainer-DenseNet
Densely Connected Convolutional Network implementation by Chainer
Stars: ✭ 39 (+21.88%)
Mutual labels:  chainer
SimCLR-in-TensorFlow-2
(Minimally) implements SimCLR (https://arxiv.org/abs/2002.05709) in TensorFlow 2.
Stars: ✭ 75 (+134.38%)
Mutual labels:  unsupervised-learning
LinearCorex
Fast, linear version of CorEx for covariance estimation, dimensionality reduction, and subspace clustering with very under-sampled, high-dimensional data
Stars: ✭ 39 (+21.88%)
Mutual labels:  unsupervised-learning
chainer-dense-fusion
Chainer implementation of Dense Fusion
Stars: ✭ 21 (-34.37%)
Mutual labels:  chainer
music-recommendation-system
A simple Music Recommendation System
Stars: ✭ 38 (+18.75%)
Mutual labels:  unsupervised-learning
NMFADMM
A sparsity aware implementation of "Alternating Direction Method of Multipliers for Non-Negative Matrix Factorization with the Beta-Divergence" (ICASSP 2014).
Stars: ✭ 39 (+21.88%)
Mutual labels:  unsupervised-learning
Improved-Wasserstein-GAN-application-on-MRI-images
Improved Wasserstein GAN (WGAN-GP) application on medical (MRI) images
Stars: ✭ 23 (-28.12%)
Mutual labels:  unsupervised-learning
catgan pytorch
Unsupervised and Semi-supervised Learning with Categorical Generative Adversarial Networks
Stars: ✭ 50 (+56.25%)
Mutual labels:  unsupervised-learning
BaySMM
Model for learning document embeddings along with their uncertainties
Stars: ✭ 25 (-21.87%)
Mutual labels:  unsupervised-learning
KD3A
Here is the official implementation of the model KD3A in paper "KD3A: Unsupervised Multi-Source Decentralized Domain Adaptation via Knowledge Distillation".
Stars: ✭ 63 (+96.88%)
Mutual labels:  unsupervised-learning
Discovery
Mining Discourse Markers for Unsupervised Sentence Representation Learning
Stars: ✭ 48 (+50%)
Mutual labels:  unsupervised-learning

deep-INFOMAX

Chainer implementation of "Learning deep representations by mutual information estimation and maximization" (Hjelm et al., ICLR 2019).

Example of clustering result on CIFAR10

[Figure: CIFAR10 nearest-neighbour retrieval grid]

A car-class image was taken from the CIFAR10 test set, and the L1 distance to it in encoder feature space was calculated for all remaining test images. The top row shows the test image, the middle row shows the 10 closest images by L1 distance, and the bottom row shows the 10 furthest images.
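
The retrieval step described above can be reproduced in a few lines. A minimal sketch, assuming the encoder features for the test set are already available as a NumPy array (the feature matrix and query index below are toy stand-ins, not this repository's variables):

import numpy as np

def rank_by_l1(query_feat, feats):
    # Indices of `feats` sorted by ascending L1 distance to `query_feat`.
    dists = np.abs(feats - query_feat).sum(axis=1)
    return np.argsort(dists)

feats = np.random.randn(10000, 64).astype(np.float32)  # toy (N, D) feature matrix
query = 0                                              # index of the car image
order = rank_by_l1(feats[query], feats)
closest = order[1:11]        # 10 nearest neighbours (index 0 is the query itself)
furthest = order[::-1][:10]  # 10 furthest images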

Usage

To train, run:

$ python train.py -g 0 -o output_directory --alpha X --beta Y --gamma Z

Replace X, Y, and Z with the hyperparameter values you want (see the paper for details). Training for 1000 epochs takes roughly 24 hours.
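
In the paper, alpha weights the global mutual-information term, beta the local mutual-information term, and gamma the prior-matching term of the objective. A schematic sketch of how the three weights combine; the estimator stubs below are illustrative placeholders, not this repository's functions:

def global_mi(y, x):
    return 0.0  # stub: MI lower bound between input x and global feature y

def local_mi(c, y):
    return 0.0  # stub: MI lower bound between local feature maps c and y

def prior_match(y):
    return 0.0  # stub: divergence pushing y toward a chosen prior

def dim_loss(alpha, beta, gamma, x, c, y):
    # Maximizing the two MI terms while matching the prior is equivalent
    # to minimizing this weighted negative sum.
    return -(alpha * global_mi(y, x) + beta * local_mi(c, y)) + gamma * prior_match(y)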

To perform some simple clustering, run:

$ python cluster.py -g 0 -i output_directory/encoder_epoch_1000

This will output an image similar to the one above.
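
To reuse the trained encoder outside cluster.py, a snapshot saved with Chainer's standard serializers can typically be reloaded as below. A minimal sketch: the `net` module and `Encoder` class names are assumptions about this repository's layout, not confirmed API:

import numpy as np
import chainer
from net import Encoder  # hypothetical module/class names for this repo

encoder = Encoder()
chainer.serializers.load_npz('output_directory/encoder_epoch_1000', encoder)

# Encode a batch of CIFAR10-shaped images (NCHW, float32) without autograd.
images = np.zeros((10, 3, 32, 32), dtype=np.float32)
with chainer.using_config('train', False), chainer.no_backprop_mode():
    feats = encoder(images)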

Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].