chaofengc / PSFR-GAN

License: other
PyTorch codes for "Progressive Semantic-Aware Style Transformation for Blind Face Restoration"

Programming Languages

Python
139,335 projects - #7 most used programming language

Projects that are alternatives to or similar to PSFR-GAN

Waveletsrnet
A PyTorch implementation of the paper "Wavelet-SRNet: A Wavelet-Based CNN for Multi-Scale Face Super Resolution"
Stars: ✭ 186 (+0.54%)
Mutual labels:  super-resolution, face
Face And Image Super Resolution
Stars: ✭ 174 (-5.95%)
Mutual labels:  super-resolution, face
A Pytorch Tutorial To Super Resolution
Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network | a PyTorch Tutorial to Super-Resolution
Stars: ✭ 157 (-15.14%)
Mutual labels:  super-resolution
Mzsr
Meta-Transfer Learning for Zero-Shot Super-Resolution (CVPR, 2020)
Stars: ✭ 181 (-2.16%)
Mutual labels:  super-resolution
Dockerface
Face detection using deep learning.
Stars: ✭ 173 (-6.49%)
Mutual labels:  face
Waifu2x
PyTorch on Super Resolution
Stars: ✭ 156 (-15.68%)
Mutual labels:  super-resolution
Gpufit
GPU-accelerated Levenberg-Marquardt curve fitting in CUDA
Stars: ✭ 174 (-5.95%)
Mutual labels:  super-resolution
Frvsr
Frame-Recurrent Video Super-Resolution (official repository)
Stars: ✭ 157 (-15.14%)
Mutual labels:  super-resolution
Anime4k
A High-Quality Real Time Upscaler for Anime Video
Stars: ✭ 14,083 (+7512.43%)
Mutual labels:  super-resolution
Knead proj
An implementation of in-game face customization (face sculpting)
Stars: ✭ 169 (-8.65%)
Mutual labels:  face
Pytorch Zssr
PyTorch implementation of "Zero-Shot" Super-Resolution Using Deep Internal Learning (arXiv:1712.06087)
Stars: ✭ 180 (-2.7%)
Mutual labels:  super-resolution
Video Super Resolution
Video super-resolution implemented in PyTorch
Stars: ✭ 169 (-8.65%)
Mutual labels:  super-resolution
Dpir
Plug-and-Play Image Restoration with Deep Denoiser Prior (PyTorch)
Stars: ✭ 159 (-14.05%)
Mutual labels:  super-resolution
Cognitive Face Windows
Windows SDK for the Microsoft Face API, part of Cognitive Services
Stars: ✭ 175 (-5.41%)
Mutual labels:  face
Tenet
Official PyTorch implementation of "Trinity of Pixel Enhancement: a Joint Solution for Demosaicing, Denoising and Super-Resolution"
Stars: ✭ 157 (-15.14%)
Mutual labels:  super-resolution
Fbrecog
An unofficial Python wrapper for the Facebook face recognition endpoint
Stars: ✭ 184 (-0.54%)
Mutual labels:  face
Mmediting
OpenMMLab Image and Video Editing Toolbox
Stars: ✭ 2,618 (+1315.14%)
Mutual labels:  super-resolution
3klcon
Automation recon tool which works with large and medium scopes. It performs more than 20 tasks and returns all results in separate files.
Stars: ✭ 189 (+2.16%)
Mutual labels:  face
Anime Face Gan Keras
A DCGAN that generates anime faces using a custom-mined dataset
Stars: ✭ 161 (-12.97%)
Mutual labels:  face
Super resolution with cnns and gans
Image Super-Resolution Using SRCNN, DRRN, SRGAN, CGAN in PyTorch
Stars: ✭ 176 (-4.86%)
Mutual labels:  super-resolution

PSFR-GAN in PyTorch

We only provide test code at this time.

Progressive Semantic-Aware Style Transformation for Blind Face Restoration
Chaofeng Chen, Xiaoming Li, Lingbo Yang, Xianhui Lin, Lei Zhang, Kwan-Yee K. Wong

Getting Started

Prerequisites and Installation

  • Ubuntu 18.04
  • CUDA 10.1
  • Clone this repository
    git clone https://github.com/chaofengc/PSFR-GAN.git
    cd PSFR-GAN
    
  • Python 3.7; install the required packages with pip3 install -r requirements.txt (a quick sanity check follows this list)
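
After installation, you can verify the environment. This is a minimal sketch, assuming PyTorch is among the packages installed from requirements.txt:

    # Minimal environment check; assumes PyTorch was installed via requirements.txt.
    import sys
    import torch

    print(f"Python {sys.version.split()[0]}")              # expect 3.7.x
    print(f"PyTorch {torch.__version__}")
    print(f"CUDA available: {torch.cuda.is_available()}")  # expect True with CUDA 10.1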

Download Pretrained Models and Dataset

Download the pretrained models from the following link and put them into ./pretrain_models

Test single image

Run the following script to enhance the face(s) in a single input image:

python test_enhance_single_unalign.py --test_img_path ./test_dir/test_hzgg.jpg --results_dir test_hzgg_results --gpus 1

This script does the following:

  • Crop and align all faces from the input image, stored in results_dir/LQ_faces
  • Parse these faces and then enhance them, with results stored in results_dir/ParseMaps and results_dir/HQ
  • Paste the enhanced faces back into the original image and save it as results_dir/hq_final.jpg
  • Use --gpus to specify how many GPUs to use; a value <= 0 means running on the CPU. The program uses the GPU with the most available memory, so set CUDA_VISIBLE_DEVICES to choose a specific GPU if you do not want automatic selection (a sketch of this selection logic follows this list).
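
The automatic GPU selection described above can be sketched as follows. This is an illustration only, not the repository's actual code; pick_device is a hypothetical helper, and torch.cuda.mem_get_info requires a reasonably recent PyTorch:

    # Hypothetical sketch of "use the GPU with the most available memory".
    import torch

    def pick_device(gpus: int) -> torch.device:
        if gpus <= 0 or not torch.cuda.is_available():
            return torch.device('cpu')
        # mem_get_info returns (free_bytes, total_bytes) for a device index.
        free = [torch.cuda.mem_get_info(i)[0] for i in range(torch.cuda.device_count())]
        return torch.device(f'cuda:{free.index(max(free))}')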

Test image folder

To test multiple images, we first crop out all the faces and align them using the following script:

python align_and_crop_dir.py --src_dir test_dir --results_dir test_dir_align_results

For images containing multiple faces (e.g. multiface_test.jpg), the aligned faces will be stored as multiface_test_{face_index}.jpg.
Then parse the aligned faces and enhance them with:

python test_enhance_dir_align.py --src_dir test_dir_align_results --results_dir test_dir_enhance_results

Results will be saved to three folders: results_dir/lq, results_dir/parse, and results_dir/hq.
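
The aligned-face naming convention above can be illustrated with a short sketch; aligned_name is a hypothetical helper, not part of the repository:

    # Mirrors the "{stem}_{face_index}{suffix}" naming described above.
    from pathlib import Path

    def aligned_name(src_path: str, face_index: int) -> str:
        p = Path(src_path)
        return f"{p.stem}_{face_index}{p.suffix}"

    print(aligned_name("multiface_test.jpg", 0))  # multiface_test_0.jpg
    print(aligned_name("multiface_test.jpg", 1))  # multiface_test_1.jpg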

Additional test script

For your convenience, we also provide a script to test multiple unaligned images and paste the enhanced results back. Note that the paste-back operation can be quite slow for large images containing many faces.

python test_enhance_dir_unalign.py --src_dir test_dir --results_dir test_unalign_results

This script does essentially the same thing as test_enhance_single_unalign.py for each image in src_dir.
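
As a rough sketch of that equivalence (an assumption-laden illustration, not how the script is actually implemented; it presumes .jpg inputs and per-image result folders):

    # Hypothetical equivalent: run the single-image script once per file.
    import subprocess
    from pathlib import Path

    for img in sorted(Path("test_dir").glob("*.jpg")):
        subprocess.run([
            "python", "test_enhance_single_unalign.py",
            "--test_img_path", str(img),
            "--results_dir", f"test_unalign_results/{img.stem}",
        ], check=True)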

Citation

@article{ChenPSFRGAN,
    author = {Chen, Chaofeng and Li, Xiaoming and Lin, Xianhui and Yang, Lingbo and Zhang, Lei and Wong, Kwan-Yee K.},
    title = {Progressive Semantic-Aware Style Transformation for Blind Face Restoration},
    journal = {arXiv preprint arXiv:2009.08709},
    year = {2020}
}

License

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Acknowledgement

This work is inspired by SPADE and closely related to DFDNet and HiFaceGAN. Our code largely benefits from CycleGAN.
