
yiranran / Apdrawinggan

Code for APDrawingGAN: Generating Artistic Portrait Drawings from Face Photos with Hierarchical GANs (CVPR 2019 Oral)

Projects that are alternatives of or similar to Apdrawinggan

Cyclegan
Software that can generate photos from paintings, turn horses into zebras, perform style transfer, and more.
Stars: ✭ 10,933 (+2043.73%)
Mutual labels:  computer-graphics, gan, image-generation
Pix2pix
Image-to-image translation with conditional adversarial nets
Stars: ✭ 8,765 (+1618.63%)
Mutual labels:  computer-graphics, gan, image-generation
Anycost Gan
[CVPR 2021] Anycost GANs for Interactive Image Synthesis and Editing
Stars: ✭ 367 (-28.04%)
Mutual labels:  computer-graphics, gan, image-generation
Pytorch Cyclegan And Pix2pix
Image-to-Image Translation in PyTorch
Stars: ✭ 16,477 (+3130.78%)
Mutual labels:  computer-graphics, gan, image-generation
AsymmetricGAN
[ACCV 2018 Oral] Dual Generator Generative Adversarial Networks for Multi-Domain Image-to-Image Translation
Stars: ✭ 42 (-91.76%)
Mutual labels:  gan, image-generation
lecam-gan
Regularizing Generative Adversarial Networks under Limited Data (CVPR 2021)
Stars: ✭ 127 (-75.1%)
Mutual labels:  gan, image-generation
Inpainting gmcnn
Image Inpainting via Generative Multi-column Convolutional Neural Networks, NeurIPS2018
Stars: ✭ 256 (-49.8%)
Mutual labels:  gan, image-generation
Anime Face Dataset
🖼 A collection of high-quality anime faces.
Stars: ✭ 272 (-46.67%)
Mutual labels:  gan, image-generation
automatic-manga-colorization
Use keras.js and cyclegan-keras to colorize manga automatically. All computation in browser. Demo is online:
Stars: ✭ 20 (-96.08%)
Mutual labels:  gan, image-generation
Face Generator
Generate human faces with neural networks
Stars: ✭ 266 (-47.84%)
Mutual labels:  gan, face
Consingan
PyTorch implementation of "Improved Techniques for Training Single-Image GANs" (WACV-21)
Stars: ✭ 294 (-42.35%)
Mutual labels:  gan, image-generation
MNIST-invert-color
Invert the color of MNIST images with PyTorch
Stars: ✭ 13 (-97.45%)
Mutual labels:  gan, image-generation
ADL2019
Applied Deep Learning (2019 Spring) @ NTU
Stars: ✭ 20 (-96.08%)
Mutual labels:  gan, image-generation
Awesome-ICCV2021-Low-Level-Vision
A Collection of Papers and Codes for ICCV2021 Low Level Vision and Image Generation
Stars: ✭ 163 (-68.04%)
Mutual labels:  gan, image-generation
Deep-Exemplar-based-Video-Colorization
The source code of CVPR 2019 paper "Deep Exemplar-based Video Colorization".
Stars: ✭ 180 (-64.71%)
Mutual labels:  gan, image-generation
Flame Fitting
Example code for the FLAME 3D head model. The code demonstrates how to sample 3D heads from the model, fit the model to 3D keypoints and 3D scans.
Stars: ✭ 269 (-47.25%)
Mutual labels:  computer-graphics, face
Few Shot Patch Based Training
The official implementation of our SIGGRAPH 2020 paper Interactive Video Stylization Using Few-Shot Patch-Based Training
Stars: ✭ 313 (-38.63%)
Mutual labels:  gan, image-generation
Selectiongan
[CVPR 2019 Oral] Multi-Channel Attention Selection GAN with Cascaded Semantic Guidance for Cross-View Image Translation
Stars: ✭ 366 (-28.24%)
Mutual labels:  computer-graphics, image-generation
Igan
Interactive Image Generation via Generative Adversarial Networks
Stars: ✭ 3,845 (+653.92%)
Mutual labels:  computer-graphics, gan
Semantic Pyramid for Image Generation
PyTorch reimplementation of the paper: "Semantic Pyramid for Image Generation" [CVPR 2020].
Stars: ✭ 45 (-91.18%)
Mutual labels:  gan, image-generation

APDrawingGAN

We provide PyTorch implementations for our CVPR 2019 paper "APDrawingGAN: Generating Artistic Portrait Drawings from Face Photos with Hierarchical GANs".

This project generates artistic portrait drawings from face photos using a hierarchical GAN model. The Preprocessing Steps and Training/Test Tips sections below contain useful information for preparing your data and running the code.

[Paper] [Demo]

Our Proposed Framework

Sample Results

Up: input, Down: output

Citation

If you use this code for your research, please cite our paper.

@inproceedings{YiLLR19,
  title     = {{APDrawingGAN}: Generating Artistic Portrait Drawings from Face Photos with Hierarchical GANs},
  author    = {Yi, Ran and Liu, Yong-Jin and Lai, Yu-Kun and Rosin, Paul L},
  booktitle = {{IEEE} Conference on Computer Vision and Pattern Recognition (CVPR '19)},
  pages     = {10743--10752},
  year      = {2019}
}

Prerequisites

  • Linux or macOS
  • Python 2.7
  • CPU or NVIDIA GPU + CUDA CuDNN

Getting Started

Installation

pip install -r requirements.txt
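The repository's requirements.txt is authoritative; as a rough guide only (an assumption based on the pytorch-CycleGAN-and-pix2pix lineage this code acknowledges), a project like this typically depends on packages such as:

```
torch
torchvision
dominate
visdom
numpy
Pillow
```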

Quick Start (Apply a Pre-trained Model)

python test.py --dataroot dataset/data/test_single --name formal_author --model test --dataset_mode single --norm batch --use_local --which_epoch 300

The test results will be saved to an HTML file at ./results/formal_author/test_300/index.html.

  • If you want to test on your own data, first align your pictures and prepare their facial landmarks and background masks according to the tutorial in Preprocessing Steps, then run
python test.py --dataroot {path_to_aligned_photos} --name formal_author --model test --dataset_mode single --norm batch --use_local --which_epoch 300
  • We also provide an online demo at https://face.lol (an optimized model trained on 120 pairs), which is easier to use if you want to test more photos.
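To illustrate the landmark files the preprocessing tutorial asks for, here is a minimal sketch that writes one text file per photo with one `x y` coordinate pair per line. The five-point layout (two eyes, nose tip, two mouth corners) and the exact file format are assumptions here; follow Preprocessing Steps for the authoritative format.

```python
import os

def write_landmark_file(out_dir, image_name, points):
    """Write facial landmarks as one 'x y' pair per line (assumed format)."""
    if not os.path.isdir(out_dir):
        os.makedirs(out_dir)
    path = os.path.join(out_dir, os.path.splitext(image_name)[0] + ".txt")
    with open(path, "w") as f:
        f.write("\n".join("%d %d" % (x, y) for x, y in points))
    return path

# Hypothetical five-point example: left eye, right eye, nose tip, mouth corners.
p = write_landmark_file("landmark_demo", "face.png",
                        [(186, 244), (326, 244), (256, 340), (202, 404), (310, 404)])
```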

Train

  • Download our APDrawing dataset and copy its content to the dataset folder.
  • Download the pre-training and auxiliary network models (for fast distance transform and line detection) from https://cg.cs.tsinghua.edu.cn/people/~Yongjin/APDrawingGAN-Models2.zip (Model2).
  • Run python -m visdom.server
  • Train a model (with pre-training as initialization): first copy the "pre-training" models into the checkpoints directory of the current experiment (checkpoints/[name], e.g. checkpoints/formal), and copy the "auxiliary" models into checkpoints/auxiliary.
python train.py --dataroot dataset/data --name formal --continue_train --use_local --discriminator_local --niter 300 --niter_decay 0 --save_epoch_freq 25
  • Train a model (without initialization): first copy the auxiliary network models into checkpoints/auxiliary.
python train.py --dataroot dataset/data --name formal_noinit --use_local --discriminator_local --niter 300 --niter_decay 0 --save_epoch_freq 25
  • To view training results and loss plots, open http://localhost:8097. To see more intermediate results, check ./checkpoints/formal/web/index.html.
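The copy steps above can be sketched as a small shell script. The source directory names for the unpacked model zip are assumptions; adjust them to wherever you extracted APDrawingGAN-Models2.zip.

```shell
# Set up the checkpoint layout expected before training (sketch).
set -e
mkdir -p checkpoints/formal checkpoints/auxiliary
# Copy the downloaded models into place -- the source paths below are assumed
# names for the unpacked APDrawingGAN-Models2.zip contents; adjust as needed.
# cp pre-training/*.pth checkpoints/formal/
# cp auxiliary/*.pth    checkpoints/auxiliary/
ls -d checkpoints/formal checkpoints/auxiliary
```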

Test

  • Test the model on test set:
python test.py --dataroot dataset/data --name formal --use_local --which_epoch 250

The test results will be saved to an HTML file at ./results/formal/test_250/index.html.

  • Test the model on images without paired ground truth (please use --model test, --dataset_mode single and --norm batch):
python test.py --dataroot dataset/data/test_single --name formal --model test --dataset_mode single --norm batch --use_local --which_epoch 250

These scripts can also be found in the scripts directory.
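With --save_epoch_freq 25, checkpoints are saved every 25 epochs, so you can compare several of them on the test set. A hedged sketch (a dry run that only prints the commands; the epoch numbers are illustrative):

```shell
# Print a test command for each saved epoch you want to compare.
for epoch in 250 275 300; do
  cmd="python test.py --dataroot dataset/data --name formal --use_local --which_epoch $epoch"
  echo "$cmd"
done
```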

Preprocessing Steps

Preprocessing steps for your own data (either for testing or training).

Training/Test Tips

Best practice for training and testing your models.

For any questions, contact [email protected].

Acknowledgments

Our code is inspired by pytorch-CycleGAN-and-pix2pix.
