
egorzakharov / PerceptualGAN

License: GPL-3.0
PyTorch implementation of the paper "Image Manipulation with Perceptual Discriminators"

Programming Languages

python
139335 projects - #7 most used programming language
shell
77523 projects

Projects that are alternatives of or similar to PerceptualGAN

Exprgan
Facial Expression Editing with Controllable Expression Intensity
Stars: ✭ 98 (-17.65%)
Mutual labels:  gan, image-manipulation
Combogan
Stars: ✭ 134 (+12.61%)
Mutual labels:  gan, image-manipulation
Cyclegan
Software that can generate photos from paintings, turn horses into zebras, perform style transfer, and more.
Stars: ✭ 10,933 (+9087.39%)
Mutual labels:  gan, image-manipulation
Pix2pix
Image-to-image translation with conditional adversarial nets
Stars: ✭ 8,765 (+7265.55%)
Mutual labels:  gan, image-manipulation
Starnet
StarNet
Stars: ✭ 141 (+18.49%)
Mutual labels:  gan, image-manipulation
Man
Multinomial Adversarial Networks for Multi-Domain Text Classification (NAACL 2018)
Stars: ✭ 72 (-39.5%)
Mutual labels:  gan, adversarial-networks
Gandissect
Pytorch-based tools for visualizing and understanding the neurons of a GAN. https://gandissect.csail.mit.edu/
Stars: ✭ 1,700 (+1328.57%)
Mutual labels:  gan, image-manipulation
All About The Gan
All About the GANs (Generative Adversarial Networks) - Summarized lists for GAN
Stars: ✭ 630 (+429.41%)
Mutual labels:  gan, adversarial-networks
Tsit
[ECCV 2020 Spotlight] A Simple and Versatile Framework for Image-to-Image Translation
Stars: ✭ 141 (+18.49%)
Mutual labels:  gan, image-manipulation
Focal Frequency Loss
Focal Frequency Loss for Generative Models
Stars: ✭ 141 (+18.49%)
Mutual labels:  gan, image-manipulation
Image To Image Papers
🦓<->🦒 🌃<->🌆 A collection of image to image papers with code (constantly updating)
Stars: ✭ 949 (+697.48%)
Mutual labels:  gan, image-manipulation
Distancegan
Pytorch implementation of "One-Sided Unsupervised Domain Mapping" NIPS 2017
Stars: ✭ 180 (+51.26%)
Mutual labels:  gan, image-manipulation
Adversarialnetspapers
Awesome paper list with code about generative adversarial nets
Stars: ✭ 6,219 (+5126.05%)
Mutual labels:  gan, adversarial-networks
Lggan
[CVPR 2020] Local Class-Specific and Global Image-Level Generative Adversarial Networks for Semantic-Guided Scene Generation
Stars: ✭ 97 (-18.49%)
Mutual labels:  gan, image-manipulation
Adversarial video generation
A TensorFlow Implementation of "Deep Multi-Scale Video Prediction Beyond Mean Square Error" by Mathieu, Couprie & LeCun.
Stars: ✭ 662 (+456.3%)
Mutual labels:  gan, adversarial-networks
Electra
Chinese pre-trained ELECTRA model: a Chinese model pretrained with adversarial learning
Stars: ✭ 132 (+10.92%)
Mutual labels:  gan, adversarial-networks
Anycost Gan
[CVPR 2021] Anycost GANs for Interactive Image Synthesis and Editing
Stars: ✭ 367 (+208.4%)
Mutual labels:  gan, image-manipulation
Igan
Interactive Image Generation via Generative Adversarial Networks
Stars: ✭ 3,845 (+3131.09%)
Mutual labels:  gan, image-manipulation
Oneshottranslation
Pytorch implementation of "One-Shot Unsupervised Cross Domain Translation" NIPS 2018
Stars: ✭ 135 (+13.45%)
Mutual labels:  gan, image-manipulation
Deblurgan
Image Deblurring using Generative Adversarial Networks
Stars: ✭ 2,033 (+1608.4%)
Mutual labels:  gan, image-manipulation

PerceptualGAN

This is a PyTorch implementation of the paper "Image Manipulation with Perceptual Discriminators".

Diana Sungatullina¹, Egor Zakharov¹, Dmitry Ulyanov¹, Victor Lempitsky¹,²
¹ Skolkovo Institute of Science and Technology, ² Samsung Research

European Conference on Computer Vision, 2018

Project page

Dependencies

Usage

1. Cloning the repository

$ git clone https://github.com/egorzakharov/PerceptualGAN.git
$ cd PerceptualGAN/

2. Downloading the paper datasets

Please follow the guidelines from official repositories:

CelebA-HQ

monet2photo, apple2orange
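
For the monet2photo and apple2orange datasets, one option is the download helper shipped with the official pytorch-CycleGAN-and-pix2pix repository; a minimal sketch, assuming that repository's script layout (the script belongs to that project, not to this one):

$ git clone https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix.git
$ cd pytorch-CycleGAN-and-pix2pix
$ bash ./datasets/download_cyclegan_dataset.sh monet2photo
$ bash ./datasets/download_cyclegan_dataset.sh apple2orange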

3. Setting up TensorBoard for PyTorch

All training data (including intermediate results) is displayed via TensorBoard.

Follow the installation instructions in the corresponding repository.
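
A minimal install sketch, assuming the tensorboardX package is used as the PyTorch-side logger (an assumption; check the repository's own instructions for the exact requirements):

$ pip install tensorboard tensorboardX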

To launch it, run the following command in the repository folder:

tensorboard --logdir runs

4. Training

Example usage:

$ ./scripts/celebahq_256p_pretrain.sh
$ ./scripts/celebahq_256p_smile.sh

To achieve the best results, you first need to pretrain the network as an autoencoder.

To do so, use the scripts with the pretrain suffix for the appropriate dataset. After pretraining, you can launch the main training script.

You also need to set the following options within the scripts:

images_path: for CelebA-HQ this should point to the folder with images; otherwise it can be ignored.

train/test_img_A/B_path: should point either to a txt list of image names (for CelebA-HQ) or to image folders (for the CycleGAN datasets).

pretrained_gen_path: once pretraining is finished, this should point to the folder containing the latest_gen_B.pkl file; by default it can be set to:

--pretrained_gen_path runs/<model name>/checkpoints

For a detailed description of the other options, refer to:

train.py
models/translation_generator.py
models/discriminator.py

You can easily train the model on your own dataset by changing the data paths and specifying the input image size and transformations; see the example scripts for reference and the sketch below.
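
A hypothetical sketch of a custom-dataset training call, modeled on the options described above (the data paths are placeholders, and the exact flag spellings and any additional required options should be checked against train.py and the example scripts):

python train.py --train_img_A_path data/my_dataset/trainA --train_img_B_path data/my_dataset/trainB \
--test_img_A_path data/my_dataset/testA --test_img_B_path data/my_dataset/testB \
--image_size 256 --pretrained_gen_path runs/my_dataset_pretrain/checkpoints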

5. Testing

To test, run the command below with input_path set to the folder with images (optionally, also set img_list to a list with a subset of these image names); specify the scaling by setting image_size (required for CelebA-HQ), the file with the network weights (net_path), and the output directory (output_path).

Example usage:

python test.py --input_path data/celeba_hq --img_list data/lists_hq/smile_test.txt --image_size 256 \
--net_path runs/celebahq_256p_smile/checkpoints/latest_gen_B.pkl --output_path results/smile_test

6. Pretrained models

Models are accessible via the link.

If you want to use the finetuned VGG for better results, you can download it and put it in the repository folder. You will also have to set the enc_type option:

--enc_type vgg19_pytorch_modified

The default PyTorch VGG network is used in the example scripts.


Acknowledgements

This work has been supported by the Ministry of Education and Science of the Russian Federation (grant 14.756.31.0001).
