
openai / Glow

License: MIT
Code for reproducing results in "Glow: Generative Flow with Invertible 1x1 Convolutions"

Programming Languages

python
139335 projects - #7 most used programming language

Labels

paper

Projects that are alternatives to or similar to Glow

Islands
A spigot plugin for creating customisable home islands with different biomes. https://www.spigotmc.org/resources/islands-home-islands-system.84303/
Stars: ✭ 18 (-99.37%)
Mutual labels:  paper
Restoring-Extremely-Dark-Images-In-Real-Time
The project is the official implementation of our CVPR 2021 paper, "Restoring Extremely Dark Images in Real Time"
Stars: ✭ 79 (-97.24%)
Mutual labels:  paper
Awesome-Computer-Vision-Paper-List
This repository contains all the papers accepted in top conference of computer vision, with convenience to search related papers.
Stars: ✭ 248 (-91.33%)
Mutual labels:  paper
Hyperverse
A Minecraft world management plugin
Stars: ✭ 53 (-98.15%)
Mutual labels:  paper
CHyVAE
Code for our paper -- Hyperprior Induced Unsupervised Disentanglement of Latent Representations (AAAI 2019)
Stars: ✭ 18 (-99.37%)
Mutual labels:  paper
MoCo
A pytorch reimplement of paper "Momentum Contrast for Unsupervised Visual Representation Learning"
Stars: ✭ 41 (-98.57%)
Mutual labels:  paper
Facial-Recognition-Attendance-System
An attendance system which uses facial recognition to detect which people are present in any image.
Stars: ✭ 48 (-98.32%)
Mutual labels:  paper
GuidedLabelling
Exploiting Saliency for Object Segmentation from Image Level Labels, CVPR'17
Stars: ✭ 35 (-98.78%)
Mutual labels:  paper
paperback
Paper backup generator suitable for long-term storage.
Stars: ✭ 517 (-81.92%)
Mutual labels:  paper
vehicle-trajectory-prediction
Behavior Prediction in Autonomous Driving
Stars: ✭ 23 (-99.2%)
Mutual labels:  paper
TMNet
The official pytorch implemention of the CVPR paper "Temporal Modulation Network for Controllable Space-Time Video Super-Resolution".
Stars: ✭ 77 (-97.31%)
Mutual labels:  paper
fake-news-detection
This repo is a collection of AWESOME things about fake news detection, including papers, code, etc.
Stars: ✭ 34 (-98.81%)
Mutual labels:  paper
awesome-internals
A curated list of awesome resources and learning materials in the field of X internals
Stars: ✭ 78 (-97.27%)
Mutual labels:  paper
Lottery Ticket Hypothesis-TensorFlow 2
Implementing "The Lottery Ticket Hypothesis" paper by "Jonathan Frankle, Michael Carbin"
Stars: ✭ 28 (-99.02%)
Mutual labels:  paper
sympy-paper
Repo for the paper "SymPy: symbolic computing in python"
Stars: ✭ 42 (-98.53%)
Mutual labels:  paper
Movecraft
The original movement plugin for Bukkit. Reloaded. Again.
Stars: ✭ 79 (-97.24%)
Mutual labels:  paper
ocbnn-public
General purpose library for BNNs, and implementation of OC-BNNs in our 2020 NeurIPS paper.
Stars: ✭ 31 (-98.92%)
Mutual labels:  paper
paper-reading
Paragraph-by-paragraph close readings of classic and new deep learning papers
Stars: ✭ 6,633 (+132%)
Mutual labels:  paper
3PU pytorch
pytorch implementation of "Patch-based Progressive 3D Point Set Upsampling"
Stars: ✭ 61 (-97.87%)
Mutual labels:  paper
PublicWeaklySupervised
(Machine) Learning to Do More with Less
Stars: ✭ 13 (-99.55%)
Mutual labels:  paper

Status: Archive (code is provided as-is, no updates expected)

Glow

Code for reproducing results in "Glow: Generative Flow with Invertible 1x1 Convolutions"

To use the pretrained CelebA-HQ model, build your own manipulation vectors, and run our interactive demo, see the demo folder.

Requirements

  • TensorFlow (tested with v1.8.0)
  • Horovod (tested with v0.13.8) and (Open)MPI

Run

pip install -r requirements.txt

To set up (Open)MPI, follow the instructions on the Horovod GitHub page.
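
As a rough sketch (not from this repo's instructions), pinning the tested versions on a machine that already has CUDA and a working MPI installation could look like the following; the specific PyPI package names are an assumption on my part, not something this README specifies:

pip install tensorflow-gpu==1.8.0    # sketch only: or tensorflow==1.8.0 for a CPU-only setup
pip install horovod==0.13.8          # builds against the MPI already installed on your system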

Download datasets

For small-scale experiments, use MNIST/CIFAR-10 (downloaded directly by train.py via Keras).
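
For reference, a minimal Python sketch of the kind of Keras call train.py relies on for these small datasets (illustrative only, not the repo's actual data-loading code):

from keras.datasets import cifar10, mnist

# The first call downloads the data to ~/.keras/datasets and caches it for later runs.
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
# (x_train, y_train), (x_test, y_test) = mnist.load_data()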

For larger-scale experiments, the datasets are hosted at https://openaipublic.azureedge.net/glow-demo/data/{dataset_name}-tfr.tar. The dataset names are listed below; we note the exact preprocessing / downsampling method for each so that likelihoods can be compared correctly.

Quantitative results

  • imagenet-oord - 20GB. Unconditional ImageNet 32x32 and 64x64, as described in PixelRNN/RealNVP papers (we downloaded this processed version).
  • lsun_realnvp - 140GB. LSUN 96x96. Random 64x64 crops taken at processing time, as described in RealNVP.

Qualitative results

  • celeba - 4GB. CelebA-HQ 256x256 dataset, as described in Progressive Growing of GANs. For the 1024x1024 version (120GB), use celeba-full-tfr.tar when downloading.
  • imagenet - 20GB. ImageNet 32x32 and 64x64 with class labels. Centre cropped, area downsampled.
  • lsun - 700GB. LSUN 256x256. Centre cropped, area downsampled.

To download and extract the CelebA data, for example, run

wget https://openaipublic.azureedge.net/glow-demo/data/celeba-tfr.tar
tar -xvf celeba-tfr.tar

Change hps.data_dir in train.py to point to the extracted folder (or pass the --data_dir flag when running train.py).
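
For example, a single-GPU smoke test against the extracted CelebA data could combine flags that appear elsewhere in this README (the path celeba-tfr assumes you extracted the tar in the current directory):

CUDA_VISIBLE_DEVICES=0 python train.py --problem celeba --image_size 256 --data_dir celeba-tfr --depth 1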

For lsun, since the download can be quite large, you can instead follow the instructions in data_loaders/generate_tfr/lsun.py to generate the tfr files directly from LSUN images. church_outdoor is the smallest category.

Simple Train with 1 GPU

Run with a small depth to test:

CUDA_VISIBLE_DEVICES=0 python train.py --depth 1

Train with multiple GPUs using MPI and Horovod

Run the default training script with 8 GPUs:

mpiexec -n 8 python train.py

Ablation experiments

mpiexec -n 8 python train.py --problem cifar10 --image_size 32 --n_level 3 --depth 32 --flow_permutation [0/1/2] --flow_coupling [0/1] --seed [0/1/2] --learntop --lr 0.001
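
One way to fill in the bracketed choices, using the same permutation/coupling flags as the quantitative runs below (which the pretrained-model tar naming suggests is the 1x1 / affine configuration):

mpiexec -n 8 python train.py --problem cifar10 --image_size 32 --n_level 3 --depth 32 --flow_permutation 2 --flow_coupling 1 --seed 0 --learntop --lr 0.001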

Pretrained models, logs and samples

wget https://openaipublic.azureedge.net/glow-demo/logs/abl-[reverse/shuffle/1x1]-[add/aff].tar
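
For example, substituting one option from each bracket (here the 1x1/affine combination; this is just one instantiation of the pattern above, and the extraction step mirrors the dataset example earlier):

wget https://openaipublic.azureedge.net/glow-demo/logs/abl-1x1-aff.tar
tar -xvf abl-1x1-aff.tar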

CIFAR-10 Quantitative result

mpiexec -n 8 python train.py --problem cifar10 --image_size 32 --n_level 3 --depth 32 --flow_permutation 2 --flow_coupling 1 --seed 0 --learntop --lr 0.001 --n_bits_x 8

ImageNet 32x32 Quantitative result

mpiexec -n 8 python train.py --problem imagenet-oord --image_size 32 --n_level 3 --depth 48 --flow_permutation 2 --flow_coupling 1 --seed 0 --learntop --lr 0.001 --n_bits_x 8

ImageNet 64x64 Quantitative result

mpiexec -n 8 python train.py --problem imagenet-oord --image_size 64 --n_level 4 --depth 48 --flow_permutation 2 --flow_coupling 1 --seed 0 --learntop --lr 0.001 --n_bits_x 8

LSUN 64x64 Quantitative result

mpiexec -n 8 python train.py --problem lsun_realnvp --category [bedroom/church_outdoor/tower] --image_size 64 --n_level 3 --depth 48 --flow_permutation 2 --flow_coupling 1 --seed 0 --learntop --lr 0.001 --n_bits_x 8

Pretrained models, logs and samples

wget https://openaipublic.azureedge.net/glow-demo/logs/lsun-rnvp-[bdr/crh/twr].tar

CelebA-HQ 256x256 Qualitative result

mpiexec -n 40 python train.py --problem celeba --image_size 256 --n_level 6 --depth 32 --flow_permutation 2 --flow_coupling 0 --seed 0 --learntop --lr 0.001 --n_bits_x 5

LSUN 96x96 and 128x128 Qualitative result

mpiexec -n 40 python train.py --problem lsun --category [bedroom/church_outdoor/tower] --image_size [96/128] --n_level 5 --depth 64 --flow_permutation 2 --flow_coupling 0 --seed 0 --learntop --lr 0.001 --n_bits_x 5

Logs and samples

wget https://openaipublic.azureedge.net/glow-demo/logs/lsun-bdr-[96/128].tar

Conditional CIFAR-10 Qualitative result

mpiexec -n 8 python train.py --problem cifar10 --image_size 32 --n_level 3 --depth 32 --flow_permutation 2 --flow_coupling 0 --seed 0 --learntop --lr 0.001 --n_bits_x 5 --ycond --weight_y=0.01

Conditional ImageNet 32x32 Qualitative result

mpiexec -n 8 python train.py --problem imagenet --image_size 32 --n_level 3 --depth 48 --flow_permutation 2 --flow_coupling 0 --seed 0 --learntop --lr 0.001 --n_bits_x 5 --ycond --weight_y=0.01