
zhangmozhe / Deep-Exemplar-based-Video-Colorization

License: MIT License
The source code of CVPR 2019 paper "Deep Exemplar-based Video Colorization".

Programming Languages

Python

Projects that are alternatives of or similar to Deep-Exemplar-based-Video-Colorization

Pix2pix
Image-to-image translation with conditional adversarial nets
Stars: ✭ 8,765 (+4769.44%)
Mutual labels:  generative-adversarial-network, gan, image-generation
Lggan
[CVPR 2020] Local Class-Specific and Global Image-Level Generative Adversarial Networks for Semantic-Guided Scene Generation
Stars: ✭ 97 (-46.11%)
Mutual labels:  generative-adversarial-network, gan, image-generation
Dcgan Tensorflow
A Tensorflow implementation of Deep Convolutional Generative Adversarial Networks trained on Fashion-MNIST, CIFAR-10, etc.
Stars: ✭ 70 (-61.11%)
Mutual labels:  generative-adversarial-network, gan, image-generation
Few Shot Patch Based Training
The official implementation of our SIGGRAPH 2020 paper Interactive Video Stylization Using Few-Shot Patch-Based Training
Stars: ✭ 313 (+73.89%)
Mutual labels:  generative-adversarial-network, gan, image-generation
Focal Frequency Loss
Focal Frequency Loss for Generative Models
Stars: ✭ 141 (-21.67%)
Mutual labels:  generative-adversarial-network, gan, image-generation
Anycost Gan
[CVPR 2021] Anycost GANs for Interactive Image Synthesis and Editing
Stars: ✭ 367 (+103.89%)
Mutual labels:  generative-adversarial-network, gan, image-generation
Semantic Pyramid for Image Generation
PyTorch reimplementation of the paper: "Semantic Pyramid for Image Generation" [CVPR 2020].
Stars: ✭ 45 (-75%)
Mutual labels:  generative-adversarial-network, gan, image-generation
Ganspace
Discovering Interpretable GAN Controls [NeurIPS 2020]
Stars: ✭ 1,224 (+580%)
Mutual labels:  generative-adversarial-network, gan, image-generation
Unetgan
Official Implementation of the paper "A U-Net Based Discriminator for Generative Adversarial Networks" (CVPR 2020)
Stars: ✭ 139 (-22.78%)
Mutual labels:  generative-adversarial-network, gan, image-generation
Cyclegan
Software that can generate photos from paintings, turn horses into zebras, perform style transfer, and more.
Stars: ✭ 10,933 (+5973.89%)
Mutual labels:  generative-adversarial-network, gan, image-generation
MNIST-invert-color
Invert the color of MNIST images with PyTorch
Stars: ✭ 13 (-92.78%)
Mutual labels:  generative-adversarial-network, gan, image-generation
Arbitrary Text To Image Papers
A collection of arbitrary text to image papers with code (constantly updating)
Stars: ✭ 196 (+8.89%)
Mutual labels:  generative-adversarial-network, gan, image-generation
ADL2019
Applied Deep Learning (2019 Spring) @ NTU
Stars: ✭ 20 (-88.89%)
Mutual labels:  generative-adversarial-network, gan, image-generation
Hidt
Official repository for the paper "High-Resolution Daytime Translation Without Domain Labels" (CVPR2020, Oral)
Stars: ✭ 513 (+185%)
Mutual labels:  generative-adversarial-network, gan, image-generation
Mlds2018spring
Machine Learning and having it Deep and Structured (MLDS) in 2018 spring
Stars: ✭ 124 (-31.11%)
Mutual labels:  generative-adversarial-network, gan, image-generation
Tsit
[ECCV 2020 Spotlight] A Simple and Versatile Framework for Image-to-Image Translation
Stars: ✭ 141 (-21.67%)
Mutual labels:  generative-adversarial-network, gan, image-generation
Pytorch Cyclegan And Pix2pix
Image-to-Image Translation in PyTorch
Stars: ✭ 16,477 (+9053.89%)
Mutual labels:  generative-adversarial-network, gan, image-generation
StyleGANCpp
Unofficial implementation of StyleGAN's generator
Stars: ✭ 25 (-86.11%)
Mutual labels:  generative-adversarial-network, gan
Anime2Sketch
A sketch extractor for anime/illustration.
Stars: ✭ 1,623 (+801.67%)
Mutual labels:  generative-adversarial-network, image-generation
GAN-auto-write
Generative Adversarial Network that learns to generate handwritten digits. (Learning Purposes)
Stars: ✭ 18 (-90%)
Mutual labels:  generative-adversarial-network, gan

Deep Exemplar-based Video Colorization (PyTorch Implementation)

Paper | Pretrained Model | YouTube video 🔥 | Colab demo

Deep Exemplar-based Video Colorization, CVPR 2019

Bo Zhang¹,³, Mingming He¹,⁵, Jing Liao², Pedro V. Sander¹, Lu Yuan⁴, Amine Bermak¹, Dong Chen³
¹Hong Kong University of Science and Technology, ²City University of Hong Kong, ³Microsoft Research Asia, ⁴Microsoft Cloud&AI, ⁵USC Institute for Creative Technologies

Prerequisites

  • Python 3.6+
  • NVIDIA GPU + CUDA, cuDNN

Installation

First use the following commands to prepare the environment:

conda create -n ColorVid python=3.6
source activate ColorVid
pip install -r requirements.txt
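
After installing the requirements, you can optionally verify that PyTorch can see your GPU (a quick sanity check, not part of the original instructions):

python -c "import torch; print(torch.cuda.is_available())"

If this prints False, double-check your CUDA and cuDNN installation before proceeding.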

Then, download the pretrained models from this link, unzip the file and place the files into the corresponding folders:

  • video_moredata_l1 under the checkpoints folder
  • vgg19_conv.pth and vgg19_gray.pth under the data folder
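
After unzipping, the relevant parts of the repository should look roughly like this (a sketch based on the two items above; the exact contents of video_moredata_l1 depend on the downloaded archive):

checkpoints/
    video_moredata_l1/        (pretrained colorization model)
data/
    vgg19_conv.pth
    vgg19_gray.pth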

Data Preparation

To colorize your own video, you need to extract the video frames and provide a reference image as an exemplar (see the extraction sketch after the list below).

  • Place your video frames into one folder, e.g., ./sample_videos/v32_180
  • Place your reference images into another folder, e.g., ./sample_videos/v32
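
If your footage is still a video file rather than individual frames, a minimal extraction sketch with OpenCV might look like the following. This helper is not part of the repository, and the zero-padded naming scheme is an assumption; adjust it to match your frame ordering:

import os
import cv2  # pip install opencv-python

def extract_frames(video_path, out_dir):
    # Dump every frame of the video as a numbered PNG (hypothetical helper).
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imwrite(os.path.join(out_dir, "%05d.png" % idx), frame)
        idx += 1
    cap.release()

extract_frames("legacy_clip.mp4", "./sample_videos/v32_180")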

If you want to automatically retrieve color images, you can try the retrieval algorithm from this link, which retrieves semantically similar images from the ImageNet dataset; alternatively, you can try this link to search your own image database.

Test

python test.py --image-size [image-size] \
               --clip_path [path-to-video-frames] \
               --ref_path [path-to-reference] \
               --output_path [path-to-output]

We provide several sample video clips with corresponding references. For example, one can colorize a sample legacy video using:

python test.py --clip_path ./sample_videos/clips/v32 \
               --ref_path ./sample_videos/ref/v32 \
               --output_path ./sample_videos/output

Note that we use 216×384 images for training, which corresponds to an aspect ratio of 16:9. During inference, we scale the input to this size and then rescale the output back to the original resolution.
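
To illustrate the scaling described above, the pre/post-processing is roughly equivalent to the following PIL sketch. This is an illustration of the stated behavior, not the repository's actual code; note that PIL sizes are given as (width, height), so 216×384 training images are 384 pixels wide:

from PIL import Image

frame = Image.open("input_frame.png")
orig_size = frame.size                         # (width, height)
small = frame.resize((384, 216), Image.BICUBIC)
# ... run the colorization model on `small` ...
restored = small.resize(orig_size, Image.BICUBIC)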

Train

We also provide training code for reference. The training can be started by running:

python train.py --data_root [root of video samples] \
       --data_root_imagenet [root of image samples] \
       --gpu_ids [gpu ids]
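
For example, with hypothetical local dataset folders (the paths below are placeholders, not paths shipped with the repository):

python train.py --data_root ./dataset/videos \
       --data_root_imagenet ./dataset/imagenet \
       --gpu_ids 0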

We do not provide the full video dataset due to copyright issues. For image samples, we retrieve semantically similar images from ImageNet using this repository. Still, one can refer to our code to understand the detailed procedure of augmenting the image dataset to mimic video frames.

Comparison with State-of-the-Art Methods

More results

Please check our YouTube demo for results of video colorization.

Citation

If you use this code for your research, please cite our paper.

@inproceedings{zhang2019deep,
  title={Deep exemplar-based video colorization},
  author={Zhang, Bo and He, Mingming and Liao, Jing and Sander, Pedro V and Yuan, Lu and Bermak, Amine and Chen, Dong},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  pages={8052--8061},
  year={2019}
}

Old Photo Restoration 🔥

If you are also interested in restoring artifacts in legacy photos, please check our recent work, Bringing Old Photos Back to Life.

@inproceedings{wan2020bringing,
  title={Bringing Old Photos Back to Life},
  author={Wan, Ziyu and Zhang, Bo and Chen, Dongdong and Zhang, Pan and Chen, Dong and Liao, Jing and Wen, Fang},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={2747--2757},
  year={2020}
}

License

This project is licensed under the MIT license.
