
VITA-Group / DeepPS

License: MIT
[ECCV 2020] "Deep Plastic Surgery: Robust and Controllable Image Editing with Human-Drawn Sketches"

Programming Languages

python

Projects that are alternatives of or similar to DeepPS

SRResCycGAN
Code repo for "Deep Cyclic Generative Adversarial Residual Convolutional Networks for Real Image Super-Resolution" (ECCVW AIM2020).
Stars: ✭ 47 (-25.4%)
Mutual labels:  eccv2020
Point2Mesh
Meshing Point Clouds with Predicted Intrinsic-Extrinsic Ratio Guidance (ECCV2020)
Stars: ✭ 61 (-3.17%)
Mutual labels:  eccv2020
Sketch-Resize-Commands
A Sketch plugin that lets you resize and reposition objects by using simple arithmetic commands, like `b+20,lr+20`. Multiple objects are supported.
Stars: ✭ 32 (-49.21%)
Mutual labels:  sketch
uiLogos-sketch-plugin
Sketch plugin to Insert professionally designed dummy logos of companies and 190+ country flag into SketchApp
Stars: ✭ 26 (-58.73%)
Mutual labels:  sketch
deep-atrous-guided-filter
Deep Atrous Guided Filter for Image Restoration in Under Display Cameras (UDC Challenge, ECCV 2020).
Stars: ✭ 32 (-49.21%)
Mutual labels:  eccv2020
sketch-data-cn
Chinese-language dummy data for Sketch, including: Chinese names, phone numbers, provinces, cities, districts, company names, bank names, days of the week, street addresses, postal codes, email addresses, colors, advertising copy, and more.
Stars: ✭ 39 (-38.1%)
Mutual labels:  sketch
dualFace
dualFace: Two-Stage Drawing Guidance for Freehand Portrait Sketching (CVMJ)
Stars: ✭ 46 (-26.98%)
Mutual labels:  sketch
Sketch-Highlighter
Sketch plugin that generates highlights for selected text layers
Stars: ✭ 41 (-34.92%)
Mutual labels:  sketch
library-styles-sync
sync shared styles from a Sketch Library to the current document
Stars: ✭ 70 (+11.11%)
Mutual labels:  sketch
IAST-ECCV2020
IAST: Instance Adaptive Self-training for Unsupervised Domain Adaptation (ECCV 2020) https://teacher.bupt.edu.cn/zhuchuang/en/index.htm
Stars: ✭ 84 (+33.33%)
Mutual labels:  eccv2020
Anime2Sketch
A sketch extractor for anime/illustration.
Stars: ✭ 1,623 (+2476.19%)
Mutual labels:  sketch
public
Some public files that I can link to: icons, screenshots, etc.
Stars: ✭ 29 (-53.97%)
Mutual labels:  sketch
sketch-pages-to-folders
Sketch plugin that exports all the artboards of a Sketch file into folders, which are based on the pages of the Sketch file.
Stars: ✭ 56 (-11.11%)
Mutual labels:  sketch
library-replacer-sketchplugin
A Sketch plugin that allows you to replace a library in a Sketch file
Stars: ✭ 19 (-69.84%)
Mutual labels:  sketch
PiP-Planning-informed-Prediction
(ECCV 2020) PiP: Planning-informed Trajectory Prediction for Autonomous Driving
Stars: ✭ 101 (+60.32%)
Mutual labels:  eccv2020
sketch-crowdin
Connect your Sketch and Crowdin projects together
Stars: ✭ 35 (-44.44%)
Mutual labels:  sketch
gql-sketch
💎 minimal graphql client for Sketch
Stars: ✭ 29 (-53.97%)
Mutual labels:  sketch
SAN
[ECCV 2020] Scale Adaptive Network: Learning to Learn Parameterized Classification Networks for Scalable Input Images
Stars: ✭ 41 (-34.92%)
Mutual labels:  eccv2020
stark-sketch-plugin
Ensure your design is accessible and high contrast for every type of color blindness
Stars: ✭ 45 (-28.57%)
Mutual labels:  sketch
Sketch-Navigator
"Sketch Navigator lets you quickly jump to any specific artboard without having to scan the all too easily cluttered Layers List in the app’s left-hand pane." - Khoi Vinh
Stars: ✭ 160 (+153.97%)
Mutual labels:  sketch

Deep Plastic Surgery

Teaser figure: (a) controllable face synthesis; (b) controllable face editing; (c) adjusting refinement level l.

Our framework allows users to (a) synthesize and (b) edit photos based on hand-drawn sketches. (c) Our model works robustly on various sketches by setting the refinement level l adaptively to the quality of the input sketch, i.e., a higher l for poorer sketches, thus tolerating drawing errors and providing control over sketch faithfulness. Note that our model requires no real sketches for training.

This is a PyTorch implementation of the paper:

Shuai Yang, Zhangyang Wang, Jiaying Liu and Zongming Guo. Deep Plastic Surgery: Robust and Controllable Image Editing with Human-Drawn Sketches, accepted by European Conference on Computer Vision (ECCV), 2020.

[Project] | [Paper] | [Human-Drawn Facial Sketches]

Please consider citing our paper if you find the software useful for your work.

Usage:

Prerequisites

  • Python 2.7
  • PyTorch 1.2.0
  • matplotlib
  • scipy
  • Pillow
  • torchsample

Install

  • Clone this repo:
git clone https://github.com/TAMU-VITA/DeepPS.git
cd DeepPS/src

Testing Example

  • Download pre-trained models from [Google Drive] | [Baidu Cloud](code:oieu) to ../save/
  • Sketch-to-photo translation
    • Set --l to 1.0 to use refinement level 1.0
    • Set --l to -1 (the default) to test with multiple levels in [0, 1], with step l_step (default l_step = 0.25)
    • Results can be found in ../output/
python test.py --l 1.0

python test.py
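When --l is -1, test.py sweeps the refinement level over [0, 1] in steps of l_step. The grid covered with the default l_step = 0.25 can be reproduced with the short Python sketch below (how test.py enumerates the levels internally is an assumption; this only illustrates the resulting grid):

```python
# Reproduce the default sweep of refinement levels: [0, 1] with step l_step.
l_step = 0.25
levels = []
l = 0.0
while l <= 1.0 + 1e-8:   # small epsilon guards against float round-off
    levels.append(round(l, 2))
    l += l_step
print(levels)  # [0.0, 0.25, 0.5, 0.75, 1.0]
```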

  • Face editing with refinement levels 0.0, 0.25, 0.5, 0.75 and 1.0
    • Use --model_task to specify the task: SYN for synthesis, EDT for editing
    • Specify the input image filename and the model filenames for F and G, respectively
    • Results can be found in ../output/
python test.py --model_task EDT --input_name ../data/EDT/4.png \
--load_F_name ../save/ECCV-EDT-celebaHQ-F256.ckpt --model_name ECCV-EDT-celebaHQ
  • Use --help to view more testing options
python test.py --help

Training Examples

  • Download pre-trained model F from [Google Drive] | [Baidu Cloud](code:oieu) to ../save/
  • Prepare your data in ../data/dataset/train/ in the form of image pairs (I, S), where I is the photo and S is the corresponding sketch:
    • Please refer to pix2pix for more details
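If photos and sketches are stored as separate files, a pix2pix-style paired training image can be assembled by concatenating the two side by side. The snippet below is a minimal sketch using Pillow (already a prerequisite); the 256*256 resolution and the left-photo/right-sketch order are assumptions here — check the pix2pix data format before committing to an ordering:

```python
from PIL import Image

def make_pair(photo, sketch, size=256):
    """Concatenate a photo I and its sketch S side by side, pix2pix-style.

    Both inputs are resized to size x size; the result is 2*size wide.
    """
    photo = photo.resize((size, size))
    sketch = sketch.resize((size, size))
    pair = Image.new("RGB", (2 * size, size))
    pair.paste(photo, (0, 0))       # left half: photo I
    pair.paste(sketch, (size, 0))   # right half: sketch S
    return pair

# Example with blank placeholder images:
pair = make_pair(Image.new("RGB", (300, 300), "white"),
                 Image.new("RGB", (300, 300), "black"))
# pair.save("../data/dataset/train/0001.png")
```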

Training on image synthesis task

  • Train G with default parameters on 256*256 images
    • Progressively train G64, G128 and G256 on 64*64, 128*128 and 256*256 images, as in pix2pixHD.
      • step 1: at each resolution, G is first trained with a fixed l = 1 to learn the greatest refinement level, for 30 epochs (--epoch_pre)
      • step 2: G is then trained with l ∈ {i/K}, i = 0,...,K, where K = 20 (i.e. --max_dilate 21), for 200 epochs (--epoch)
python train.py --save_model_name PSGAN-SYN

Saved model can be found at ../save/
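The relation between --max_dilate and the level grid used in step 2 can be made concrete: with --max_dilate 21 the sketch-dilation diameter runs from 1 to 21, giving K = 20 and levels l ∈ {i/K}. In the sketch below, the mapping from l to a dilation diameter is a simple linear interpolation for illustration, not necessarily the exact formula used in the code:

```python
# Level grid for step 2: l ∈ {i/K}, i = 0..K, with K = max_dilate - 1.
max_dilate = 21                 # --max_dilate 21
K = max_dilate - 1              # K = 20
levels = [i / K for i in range(K + 1)]          # 0.0, 0.05, ..., 1.0

# Illustrative linear mapping from level l to a dilation diameter:
# l = 0 -> diameter 1 (sharpest), l = 1 -> diameter 21 (coarsest).
diameters = [1 + round(l * (max_dilate - 1)) for l in levels]
print(levels[0], diameters[0], levels[-1], diameters[-1])
```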

  • Train G with default parameters on 64*64 images
    • Prepare your dataset in ../data/dataset64/train/ (for example, the data provided by ContextualGAN)
    • Prepare your network F pretrained on 64*64 images and save it as ../save/ECCV-SYN-celeba-F64.ckpt
    • Set --max_level 1 to train only on level 1 (levels 1, 2 and 3 correspond to image resolutions 64*64, 128*128 and 256*256)
    • Set --use_F_level 1 so that network F is used on level 1
    • Specify the maximum dilation diameter, the training level, and the image size of the F model
    • --AtoB means images are prepared in the form (S, I)
python train.py --train_path ../data/dataset64/ \
--max_dilate 9 --max_level 1 --use_F_level 1 \
--load_F_name ../save/ECCV-SYN-celeba-F64.ckpt --img_size 64 \
--save_model_name PSGAN-SYN-64 --AtoB

Training on image editing task

  • Train G with default parameters on 256*256 images
    • Progressively train G64, G128 and G256 on 64*64, 128*128 and 256*256 images, as in pix2pixHD.
      • step 1: at each resolution, G is first trained with a fixed l = 1 to learn the greatest refinement level, for 30 epochs (--epoch_pre)
      • step 2: G is then trained with l ∈ {i/K}, i = 0,...,K, where K = 20 (i.e. --max_dilate 21), for 200 epochs (--epoch)
python train.py --model_task EDT \
--load_F_name ../save/ECCV-EDT-celebaHQ-F256.ckpt --save_model_name PSGAN-EDT

Saved model can be found at ../save/

  • Use --help to view more training options
python train.py --help

Contact

Shuai Yang

[email protected]
