
pathak22 / Context Encoder

Licence: other
[CVPR 2016] Unsupervised Feature Learning by Image Inpainting using GANs

Programming Languages

lua
6591 projects

Projects that are alternatives of or similar to Context Encoder

catgan pytorch
Unsupervised and Semi-supervised Learning with Categorical Generative Adversarial Networks
Stars: ✭ 50 (-93.16%)
Mutual labels:  generative-adversarial-network, gan, dcgan, unsupervised-learning
Igan
Interactive Image Generation via Generative Adversarial Networks
Stars: ✭ 3,845 (+425.99%)
Mutual labels:  computer-graphics, gan, generative-adversarial-network, dcgan
Pix2pix
Image-to-image translation with conditional adversarial nets
Stars: ✭ 8,765 (+1099.04%)
Mutual labels:  computer-graphics, gan, generative-adversarial-network, dcgan
Pytorch Cyclegan And Pix2pix
Image-to-Image Translation in PyTorch
Stars: ✭ 16,477 (+2154.04%)
Mutual labels:  computer-graphics, gan, generative-adversarial-network
All About The Gan
All About the GANs(Generative Adversarial Networks) - Summarized lists for GAN
Stars: ✭ 630 (-13.82%)
Mutual labels:  gan, generative-adversarial-network, unsupervised-learning
Cyclegan
Software that can generate photos from paintings, turn horses into zebras, perform style transfer, and more.
Stars: ✭ 10,933 (+1395.62%)
Mutual labels:  computer-graphics, gan, generative-adversarial-network
GAN-Project-2018
GAN in Tensorflow to be run via Linux command line
Stars: ✭ 21 (-97.13%)
Mutual labels:  generative-adversarial-network, gan, dcgan
Hidt
Official repository for the paper "High-Resolution Daytime Translation Without Domain Labels" (CVPR2020, Oral)
Stars: ✭ 513 (-29.82%)
Mutual labels:  gan, generative-adversarial-network, unsupervised-learning
DLSS
Deep Learning Super Sampling with Deep Convolutional Generative Adversarial Networks.
Stars: ✭ 88 (-87.96%)
Mutual labels:  generative-adversarial-network, gan, dcgan
Pix2pixhd
Synthesizing and manipulating 2048x1024 images with conditional GANs
Stars: ✭ 5,553 (+659.64%)
Mutual labels:  computer-graphics, gan, generative-adversarial-network
Dcgan
The Simplest DCGAN Implementation
Stars: ✭ 286 (-60.88%)
Mutual labels:  gan, generative-adversarial-network, dcgan
Awesome Gans
Awesome Generative Adversarial Networks with tensorflow
Stars: ✭ 585 (-19.97%)
Mutual labels:  gan, generative-adversarial-network, dcgan
St Cgan
Dataset and Code for our CVPR'18 paper ST-CGAN: "Stacked Conditional Generative Adversarial Networks for Jointly Learning Shadow Detection and Shadow Removal"
Stars: ✭ 13 (-98.22%)
Mutual labels:  computer-graphics, gan, generative-adversarial-network
Anogan Tf
Unofficial Tensorflow Implementation of AnoGAN (Anomaly GAN)
Stars: ✭ 218 (-70.18%)
Mutual labels:  gan, generative-adversarial-network, dcgan
Gan Sandbox
Vanilla GAN implemented on top of keras/tensorflow enabling rapid experimentation & research. Branches correspond to implementations of stable GAN variations (i.e. ACGan, InfoGAN) and other promising variations of GANs like conditional and Wasserstein.
Stars: ✭ 210 (-71.27%)
Mutual labels:  gan, generative-adversarial-network, unsupervised-learning
Iseebetter
iSeeBetter: Spatio-Temporal Video Super Resolution using Recurrent-Generative Back-Projection Networks | Python3 | PyTorch | GANs | CNNs | ResNets | RNNs | Published in Springer Journal of Computational Visual Media, September 2020, Tsinghua University Press
Stars: ✭ 202 (-72.37%)
Mutual labels:  gan, generative-adversarial-network, unsupervised-learning
Image generator
DCGAN image generator 🖼️.
Stars: ✭ 173 (-76.33%)
Mutual labels:  gan, generative-adversarial-network, dcgan
Dragan
A stable algorithm for GAN training
Stars: ✭ 189 (-74.15%)
Mutual labels:  gan, generative-adversarial-network, unsupervised-learning
UEGAN
[TIP2020] Pytorch implementation of "Towards Unsupervised Deep Image Enhancement with Generative Adversarial Network"
Stars: ✭ 68 (-90.7%)
Mutual labels:  generative-adversarial-network, gan, unsupervised-learning
Pytorch Mnist Celeba Gan Dcgan
Pytorch implementation of Generative Adversarial Networks (GAN) and Deep Convolutional Generative Adversarial Networks (DCGAN) for MNIST and CelebA datasets
Stars: ✭ 363 (-50.34%)
Mutual labels:  gan, generative-adversarial-network, dcgan

Context Encoders: Feature Learning by Inpainting

CVPR 2016

[Project Website] [Imagenet Results]

Sample results on held-out images:

[teaser image: sample inpainting results]

This is the training code for our CVPR 2016 paper on Context Encoders, which learn deep feature representations in an unsupervised manner by image inpainting. Context Encoders are trained jointly with a reconstruction loss and an adversarial loss. This repo contains a quick demo, training/testing code for center-region inpainting, and training/testing code for arbitrary random-region inpainting. The code is adapted from an initial fork of Soumith's DCGAN implementation. Scroll down to try out a quick demo or train your own inpainting models!
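
For reference, the joint objective from the paper has the following form (in LaTeX notation; the wtl2 flag used in train.lua below appears to correspond to \lambda_{rec}):

\mathcal{L} = \lambda_{rec}\, \mathcal{L}_{rec} + \lambda_{adv}\, \mathcal{L}_{adv},
\qquad
\mathcal{L}_{rec}(x) = \left\lVert \hat{M} \odot \left( x - F\big((1 - \hat{M}) \odot x\big) \right) \right\rVert_2^2

where F is the context encoder, \hat{M} is the binary mask over the dropped region, and \mathcal{L}_{adv} is the standard adversarial loss on the predicted region.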

If you find Context Encoders useful in your research, please cite:

@inproceedings{pathakCVPR16context,
    Author = {Pathak, Deepak and Kr\"ahenb\"uhl, Philipp and Donahue, Jeff and Darrell, Trevor and Efros, Alexei},
    Title = {Context Encoders: Feature Learning by Inpainting},
    Booktitle = {Computer Vision and Pattern Recognition ({CVPR})},
    Year = {2016}
}

Contents

  1. Semantic Inpainting Demo
  2. Train Context Encoders
  3. Download Features Caffemodel
  4. TensorFlow Implementation
  5. Project Website
  6. Paris Street-View Dataset

1) Semantic Inpainting Demo

  1. Install Torch: http://torch.ch/docs/getting-started.html#_

  2. Clone the repository

git clone https://github.com/pathak22/context-encoder.git
  3. Demo
cd context-encoder
bash ./models/scripts/download_inpaintCenter_models.sh
# This will populate the `./models/` folder with trained models.

net=models/inpaintCenter/paris_inpaintCenter.t7 name=paris_result imDir=images/paris overlapPred=4 manualSeed=222 batchSize=21 gpu=1 th demo.lua
net=models/inpaintCenter/imagenet_inpaintCenter.t7 name=imagenet_result imDir=images/imagenet overlapPred=4 manualSeed=222 batchSize=21 gpu=1 th demo.lua
net=models/inpaintCenter/paris_inpaintCenter.t7 name=ucberkeley_result imDir=images/ucberkeley overlapPred=4 manualSeed=222 batchSize=4 gpu=1 th demo.lua
# Note: If you are running on cpu, use gpu=0
# Note: samples given in ./images/* are held-out images
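
As a rough illustration of what demo.lua does for a single image, here is a minimal Lua/Torch sketch of center-region inpainting. The file name, the zero-fill of the dropped region, and CPU-only inference are assumptions for illustration; the actual demo fills the hole with per-channel dataset means, handles the overlapPred border, and batches images.

require 'torch'
require 'nn'
require 'image'

-- Load a downloaded center-inpainting model (convert it to float first, or run
-- with cunn, if the checkpoint was saved as a CUDA model).
local net = torch.load('models/inpaintCenter/paris_inpaintCenter.t7')
net:evaluate()

local img = image.load('images/paris/example.png', 3, 'float')  -- placeholder file name
img = image.scale(img, 128, 128)
img = img:mul(2):add(-1)                       -- map [0,1] to [-1,1]

local input = img:clone()
input[{ {}, {33, 96}, {33, 96} }] = 0          -- drop the central 64x64 region
local pred = net:forward(input:view(1, 3, 128, 128))  -- expected: 1x3x64x64 patch

local out = img:clone()
out[{ {}, {33, 96}, {33, 96} }] = pred[1]      -- paste the predicted patch back
image.save('inpainted.png', out:add(1):div(2)) -- back to [0,1] and save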

2) Train Context Encoders

If the above demo ran successfully, follow the steps below to train your own context encoder model for image inpainting.

  1. [Optional] Install the display package as follows. If you don't want to install it, set display=0 in train.lua.
luarocks install https://raw.githubusercontent.com/szym/display/master/display-scm-0.rockspec
cd ~
th -ldisplay.start 8000
# if working on server machine create tunnel: ssh -f -L 8000:localhost:8000 -N server_address.com
# on client side, open in browser: http://localhost:8000/
  2. Make the dataset folder.
mkdir -p /path_to_wherever_you_want/mydataset/train/images/
# put all training images inside mydataset/train/images/
mkdir -p /path_to_wherever_you_want/mydataset/val/images/
# put all val images inside mydataset/val/images/
cd context-encoder/
ln -sf /path_to_wherever_you_want/mydataset dataset
  3. Train the model
# For training center region inpainting model, run:
DATA_ROOT=dataset/train display_id=11 name=inpaintCenter overlapPred=4 wtl2=0.999 nBottleneck=4000 niter=500 loadSize=350 fineSize=128 gpu=1 th train.lua

# For training random region inpainting model, run:
DATA_ROOT=dataset/train display_id=11 name=inpaintRandomNoOverlap useOverlapPred=0 wtl2=0.999 nBottleneck=4000 niter=500 loadSize=350 fineSize=128 gpu=1 th train_random.lua
# or use fineSize=64 to train to generate 64x64 sized image (results are better):
DATA_ROOT=dataset/train display_id=11 name=inpaintRandomNoOverlap useOverlapPred=0 wtl2=0.999 nBottleneck=4000 niter=500 loadSize=350 fineSize=64 gpu=1 th train_random.lua
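
The wtl2 flag above weights the L2 reconstruction term against the adversarial term in the generator objective. A toy Lua snippet with dummy tensors (not the repo's variable names; train.lua applies the weighting to gradient tensors during the generator update) illustrating the combination:

require 'torch'
require 'nn'

local wtl2 = 0.999
local criterionAdv = nn.BCECriterion()     -- adversarial (real/fake) loss
local criterionRec = nn.MSECriterion()     -- L2 reconstruction loss

local discOut   = torch.Tensor{0.4}        -- dummy discriminator score in (0,1)
local realLabel = torch.Tensor{1}          -- generator wants the patch judged "real"
local fakePatch = torch.rand(3, 64, 64)    -- dummy predicted center patch
local realPatch = torch.rand(3, 64, 64)    -- dummy ground-truth center patch

local errAdv = criterionAdv:forward(discOut, realLabel)
local errRec = criterionRec:forward(fakePatch, realPatch)
local errG   = (1 - wtl2) * errAdv + wtl2 * errRec   -- reconstruction term dominates
print(errG)
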
  4. Test the model
# For testing the center region inpainting model, run:
DATA_ROOT=dataset/val net=checkpoints/inpaintCenter_500_net_G.t7 name=test_patch overlapPred=4 manualSeed=222 batchSize=30 loadSize=350 gpu=1 th test.lua
DATA_ROOT=dataset/val net=checkpoints/inpaintCenter_500_net_G.t7 name=test_full overlapPred=4 manualSeed=222 batchSize=30 loadSize=129 gpu=1 th test.lua

# For testing the random region inpainting model, run (with fineSize=64 or 128, same as used in training):
DATA_ROOT=dataset/val net=checkpoints/inpaintRandomNoOverlap_500_net_G.t7 name=test_patch_random useOverlapPred=0 manualSeed=222 batchSize=30 loadSize=350 gpu=1 th test_random.lua
DATA_ROOT=dataset/val net=checkpoints/inpaintRandomNoOverlap_500_net_G.t7 name=test_full_random useOverlapPred=0 manualSeed=222 batchSize=30 loadSize=129 gpu=1 th test_random.lua

3) Download Features Caffemodel

Features for context encoder trained with reconstruction loss.

4) TensorFlow Implementation

Check out the TensorFlow implementation of our paper by Taeksoo here. Note, however, that it does not implement the full functionality of our paper.

5) Project Website

Click here.

6) Paris Street-View Dataset

Please email me if you need the dataset and I will share a private link with you. I cannot post a public link to this dataset due to policy restrictions from Google Street View.
