aimagelab / art2real

Licence: other
Art2Real: Unfolding the Reality of Artworks via Semantically-Aware Image-to-Image Translation. CVPR 2019

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to art2real

Pix2pix
Image-to-image translation with conditional adversarial nets
Stars: ✭ 8,765 (+12245.07%)
Mutual labels:  image-to-image-translation
GazeCorrection
Unsupervised High-Resolution Portrait Gaze Correction and Animation (TIP 2022)
Stars: ✭ 174 (+145.07%)
Mutual labels:  image-to-image-translation
unsup_temp_embed
Unsupervised learning of action classes with continuous temporal embedding (CVPR'19)
Stars: ✭ 62 (-12.68%)
Mutual labels:  cvpr2019
Deblurgan
Image Deblurring using Generative Adversarial Networks
Stars: ✭ 2,033 (+2763.38%)
Mutual labels:  image-to-image-translation
SMIT
PyTorch implementation of Stochastic Multi-Label Image-to-image Translation (SMIT), ICCV Workshops 2019.
Stars: ✭ 37 (-47.89%)
Mutual labels:  image-to-image-translation
Generative-Adversarial-Network-
Different Generative Adversarial Networks
Stars: ✭ 23 (-67.61%)
Mutual labels:  image-to-image-translation
Stargan
StarGAN - Official PyTorch Implementation (CVPR 2018)
Stars: ✭ 4,946 (+6866.2%)
Mutual labels:  image-to-image-translation
AffineGAN
PyTorch Implementation of "Facial Image-to-Video Translation by a Hidden Affine Transformation" in MM'19.
Stars: ✭ 46 (-35.21%)
Mutual labels:  image-to-image-translation
pix2pix
This project uses a conditional generative adversarial network (cGAN) named Pix2Pix for the image-to-image translation task.
Stars: ✭ 28 (-60.56%)
Mutual labels:  image-to-image-translation
2019-CVPR-AIC-Track-1-UWIPL
Repository for 2019 CVPR AI City Challenge Track 1 from IPL@UW
Stars: ✭ 19 (-73.24%)
Mutual labels:  cvpr2019
Stargan V2
StarGAN v2 - Official PyTorch Implementation (CVPR 2020)
Stars: ✭ 2,700 (+3702.82%)
Mutual labels:  image-to-image-translation
CompenNet
[CVPR'19] End-to-end Projector Photometric Compensation
Stars: ✭ 35 (-50.7%)
Mutual labels:  cvpr2019
visual-compatibility
Context-Aware Visual Compatibility Prediction (https://arxiv.org/abs/1902.03646)
Stars: ✭ 92 (+29.58%)
Mutual labels:  cvpr2019
Pi Rec
🔥 PI-REC: Progressive Image Reconstruction Network With Edge and Color Domain. 🔥 Image translation, conditional GAN, AI painting
Stars: ✭ 1,619 (+2180.28%)
Mutual labels:  image-to-image-translation
PanoDR
Code and models for "PanoDR: Spherical Panorama Diminished Reality for Indoor Scenes" presented at the OmniCV workshop of CVPR21.
Stars: ✭ 22 (-69.01%)
Mutual labels:  image-to-image-translation
Pix2pixhd
Synthesizing and manipulating 2048x1024 images with conditional GANs
Stars: ✭ 5,553 (+7721.13%)
Mutual labels:  image-to-image-translation
LLVIP
LLVIP: A Visible-infrared Paired Dataset for Low-light Vision
Stars: ✭ 438 (+516.9%)
Mutual labels:  image-to-image-translation
Guided-I2I-Translation-Papers
Guided Image-to-Image Translation Papers
Stars: ✭ 117 (+64.79%)
Mutual labels:  image-to-image-translation
DeepSIM
Official PyTorch implementation of the paper: "DeepSIM: Image Shape Manipulation from a Single Augmented Training Sample" (ICCV 2021 Oral)
Stars: ✭ 389 (+447.89%)
Mutual labels:  image-to-image-translation
obman
[cvpr19] Hands+Objects synthetic dataset, instructions to download and code to load the dataset
Stars: ✭ 120 (+69.01%)
Mutual labels:  cvpr2019

Art2Real

This repository contains the reference code for the paper Art2Real: Unfolding the Reality of Artworks via Semantically-Aware Image-to-Image Translation (CVPR 2019).

Please cite with the following BibTeX:

@inproceedings{tomei2019art2real,
  title={{Art2Real: Unfolding the Reality of Artworks via Semantically-Aware Image-to-Image Translation}},
  author={Tomei, Matteo and Cornia, Marcella and Baraldi, Lorenzo and Cucchiara, Rita},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2019}
}

[Art2Real sample figure]

Requirements

This code is built on top of the CycleGAN source code.

The required Python packages are:

  • torch>=0.4.1
  • torchvision>=0.2.1
  • dominate>=2.3.1
  • visdom>=0.1.8.3
  • faiss
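
As a quick sanity check, the dependencies can be verified from Python (a minimal sketch; note that faiss is usually installed through the faiss-cpu or faiss-gpu package, depending on your hardware):

```python
# Check that all required packages are importable.
import torch
import torchvision
import dominate
import visdom
import faiss  # installed as faiss-cpu or faiss-gpu

print("torch:", torch.__version__)
print("torchvision:", torchvision.__version__)
print("faiss has IndexFlatL2:", hasattr(faiss, "IndexFlatL2"))
```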

Pre-trained Models

Download the pre-trained models and place them under the checkpoints folder. For example, when downloading the monet2photo checkpoints, place them under ./checkpoints/monet2photo/.
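
Since the code builds on CycleGAN, the checkpoint files are expected to follow its naming scheme; the layout below is an assumption based on CycleGAN conventions, not taken from this repository:

```
checkpoints/
└── monet2photo/
    └── latest_net_G_A.pth   (hypothetical file name, per CycleGAN conventions)
```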

Test

Run python test.py using the following arguments:

Argument     Possible values
--dataroot   Dataset root folder containing the testA directory
--name       monet2photo, landscape2photo, portrait2photo
--num_test   Number of test samples

For example, to reproduce the results of our model for the first 100 test samples of the landscape2photo setting, use:

python test.py --dataroot ./datasets/landscape2photo --name landscape2photo --num_test 100
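
Following the conventions of the underlying CycleGAN code, the translated images should be written under ./results/<name>/ as an HTML page of input/output pairs.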

Training

Note: for simplicity, the released training code does not include the periodic update of semantic masks computed from the generated images; in this code, the original painting masks are kept fixed.

To run the training code, download the zip archive containing RGB patches of real landscapes, FAISS indexes, and semantic masks from Monet and landscape paintings, and place it under the code root folder (i.e. ./data_for_patch_retrieval).
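
During training, the FAISS indexes serve as memory banks of real-image patches that generated patches are matched against via nearest-neighbour search. The snippet below is only an illustrative sketch of that retrieval step; the index type, dimensions, and variable names are assumptions, not the repository's actual code:

```python
import numpy as np
import faiss

# Toy memory bank: 10,000 flattened 16x16 RGB patches (16*16*3 = 768 dims).
d = 16 * 16 * 3
bank = np.random.rand(10000, d).astype("float32")

index = faiss.IndexFlatL2(d)  # exact L2 nearest-neighbour index
index.add(bank)               # fill the memory bank with real patches

# Query with 5 flattened patches taken from a generated image.
queries = np.random.rand(5, d).astype("float32")
distances, ids = index.search(queries, 1)  # 1 nearest real patch per query
print(ids.ravel(), distances.ravel())
```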

Run python train.py using the following arguments:

Argument                 Possible values
--dataroot               Dataset root folder containing the trainA and trainB directories
--name                   Name of the experiment; it decides where samples and models are stored
--no_flip                Since artistic masks are fixed, images are not randomly flipped during training
--patch_size_1           Height and width of the first-scale patches
--stride_1               Stride of the first-scale patches
--patch_size_2           Height and width of the second-scale patches
--stride_2               Stride of the second-scale patches
--patch_size_3           Height and width of the third-scale patches
--stride_3               Stride of the third-scale patches
--which_mem_bank         ./data_for_patch_retrieval
--artistic_masks_dir     masks_of_artistic_images_monet, masks_of_artistic_images_landscape
--preload_mem_patches    If specified, load all RGB patches in memory
--preload_indexes        If specified, load all FAISS indexes in memory
  • Required RAM for both RGB patches and FAISS indexes: ~40 GB.

  • Specify only --patch_size_1 and --stride_1 to run the single-scale version.

For example, to train the model on the landscape2photo setting, use:

python train.py --dataroot ./datasets/landscape2photo --name landscape2photo --no_dropout --display_id 0 --no_flip --niter_decay 100 --patch_size_1 16 --stride_1 6 --patch_size_2 8 --stride_2 5 --patch_size_3 4 --stride_3 4 --which_mem_bank ./data_for_patch_retrieval --artistic_masks_dir masks_of_artistic_images_landscape --preload_mem_patches --preload_indexes
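
For reference, each --patch_size_k/--stride_k pair above corresponds to extracting overlapping square patches at one scale. Below is a minimal sketch of such an extraction using torch's unfold; the tensor shapes and function name are illustrative assumptions, not the repository's actual code:

```python
import torch

def extract_patches(img, size, stride):
    """Extract overlapping size x size patches from a C x H x W image tensor."""
    c = img.shape[0]
    patches = img.unfold(1, size, stride).unfold(2, size, stride)
    # (C, nH, nW, size, size) -> (nH * nW, C, size, size)
    return patches.permute(1, 2, 0, 3, 4).reshape(-1, c, size, size)

img = torch.rand(3, 256, 256)  # toy image
for size, stride in [(16, 6), (8, 5), (4, 4)]:
    p = extract_patches(img, size, stride)
    print(f"{size}x{size} patches with stride {stride}: {p.shape[0]}")
```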

[Art2Real result figures]
