jessemelpolio / Non Stationary_texture_syn

License: MIT
Code used for texture synthesis using GAN


Non-stationary Texture Synthesis by Adversarial Expansion

This is the official code for the paper Non-stationary Texture Synthesis by Adversarial Expansion.

The code was mainly adapted by Zhen Zhu from the CycleGAN repository.

If you use this code for your research, please cite:

Non-stationary Texture Synthesis by Adversarial Expansion
Yang Zhou*, Zhen Zhu*, Xiang Bai, Dani Lischinski, Daniel Cohen-Or, Hui Huang
In SIGGRAPH 2018. (* equal contributions)

Requirements

This code has been tested under Ubuntu 14.04 and 16.04. The whole project works under the following environment:

  • Python 2.7
  • PyTorch 0.3.0 with CUDA correctly configured
  • CUDA 8.0
  • other required Python 2.7 packages

Preparations

Please run download_pretrained_models.sh first; it creates a new folder, Models, and downloads the VGG19 model pre-trained on ImageNet into it. The pre-trained VGG19 model is used to compute the style loss.
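
For intuition, a style loss of this kind compares Gram matrices of VGG19 feature maps. The following is a minimal NumPy sketch of the idea, not the repository's implementation (which uses PyTorch and the downloaded VGG19 weights):

```python
import numpy as np

def gram_matrix(features):
    # features: a (C, H, W) feature map from one VGG19 layer.
    # The Gram matrix captures channel-to-channel correlations,
    # which characterize texture statistics independent of layout.
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def style_loss(feat_a, feat_b):
    # Mean squared difference between the two Gram matrices.
    ga, gb = gram_matrix(feat_a), gram_matrix(feat_b)
    return float(np.mean((ga - gb) ** 2))
```

In practice this is computed over feature maps from several VGG19 layers and the per-layer losses are summed.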

Data

There are no restrictions on the format of the source texture images. The recommended structure of the data folder follows the sub-folders provided inside the datasets folder. Specifically, datasets/half is what we used to produce the results in the paper.

The recommended dataset structure is:

+--half
|  +--sunflower
|  |  +--train
|  |  |  +--sunflower.jpg
|  |  +--test
|  |     +--sunflower.jpg
|  +--brick
|  |  +--train
|  |  |  +--brick.jpg
|  |  +--test
|  |     +--brick.jpg
|  ...
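
The layout above can be created with a short script. This is just a convenience sketch: "sunflower" and "brick" are the example textures from the tree above, and each train/test folder is then expected to hold a single source texture image:

```python
import os

# Create the recommended datasets/half layout for two example textures.
# Place your source texture image (e.g. sunflower.jpg) inside both the
# train and test folder of its texture.
for texture in ("sunflower", "brick"):
    for split in ("train", "test"):
        os.makedirs(os.path.join("datasets", "half", texture, split),
                    exist_ok=True)
```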

Architecture of the repository

Inside the main folder, train.py trains a model as described in our paper. test.py tests with the original image (the result is 2x the size of the input). test_recurrent.py is used for extreme expansions. cnn-vis.py visualizes the internal layers of our generator; the residual-block visualizations shown in our paper were generated with it.
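
The idea behind extreme expansion can be sketched as repeatedly feeding the output back into the 2x-expanding generator. This is a hypothetical illustration, not the actual code of test_recurrent.py; `generate` stands in for the trained generator, and the tiling lambda below is only a dummy for demonstration:

```python
import numpy as np

def expand(generate, texture, times):
    # Repeatedly apply a generator that maps a texture to a 2x-larger
    # synthesis; after `times` steps each side is 2**times larger.
    for _ in range(times):
        texture = generate(texture)
    return texture

# Dummy stand-in for the trained generator: tiling doubles each side.
double = lambda img: np.tile(img, (2, 2))
```

For example, three recurrent applications of the dummy generator turn an 8x8 input into a 64x64 output.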

In folder data, the file custom_dataset_data_loader specifies the dataset modes: aligned, unaligned, single, and half_crop. Generally, we use single for testing and half_crop for training.

In folder models, two files are of particular importance: models.py and networks.py; please check them carefully before use. half_gan_style.py is the main model used in our paper. Some utilities are implemented in vgg.py.

All hyperparameters are defined in folder options. See this folder for the meaning of each hyperparameter.

Folder scripts contains the scripts used for training and testing. To train or test a model, use commands like sh scripts/train_half_style.sh. Look into these files to see how to specify the hyperparameters.

Folder util contains scripts to generate Perlin noise (perlin2d.py) and random tiles (random_tile.py), which are useful for replicating our paper's results. Some other useful scripts are also included.
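
For reference, classic 2D Perlin noise can be sketched as below. This is a generic, self-contained implementation of the standard algorithm (random lattice gradients, fade curve, bilinear blending), not the repository's perlin2d.py:

```python
import numpy as np

def fade(t):
    # Perlin's smoothstep: 6t^5 - 15t^4 + 10t^3
    return t * t * t * (t * (t * 6 - 15) + 10)

def perlin2d(shape, res, seed=0):
    # shape: output size (H, W); res: lattice cells (ry, rx).
    # H and W must be divisible by ry and rx respectively.
    rng = np.random.default_rng(seed)
    angles = rng.uniform(0, 2 * np.pi, (res[0] + 1, res[1] + 1))
    grads = np.stack([np.cos(angles), np.sin(angles)], axis=-1)
    d0, d1 = shape[0] // res[0], shape[1] // res[1]
    ys, xs = np.meshgrid(np.arange(shape[0]), np.arange(shape[1]),
                         indexing="ij")
    cy, cx = ys // d0, xs // d1            # lattice cell of each pixel
    fy, fx = (ys % d0) / d0, (xs % d1) / d1  # position inside cell, [0, 1)

    def corner_dot(oy, ox):
        # Dot product of the corner gradient with the offset to the pixel.
        g = grads[cy + oy, cx + ox]
        return g[..., 0] * (fx - ox) + g[..., 1] * (fy - oy)

    u, v = fade(fx), fade(fy)
    top = corner_dot(0, 0) * (1 - u) + corner_dot(0, 1) * u
    bot = corner_dot(1, 0) * (1 - u) + corner_dot(1, 1) * u
    return top * (1 - v) + bot * v
```

The output is deterministic for a fixed seed and vanishes at the lattice points, as expected for gradient noise.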

Train, test and visualize

To visualize the internal layers of the network, especially the residual blocks, use the script visualize_layers.sh; the layer visualizations shown in our paper were produced this way.

Cite

If you use this code for your research, please cite our paper:

@article{TexSyn18,
  title   = {Non-stationary Texture Synthesis by Adversarial Expansion},
  author  = {Yang Zhou and Zhen Zhu and Xiang Bai and Dani Lischinski and Daniel Cohen-Or and Hui Huang},
  journal = {ACM Transactions on Graphics (Proc. SIGGRAPH)},
  volume  = {37},
  number  = {4},
  pages   = {},
  year    = {2018},
}

Acknowledgements

The code is based on the CycleGAN project. We sincerely thank the authors for their great work.
