License: Apache-2.0
Code for the paper "Texture Networks: Feed-forward Synthesis of Textures and Stylized Images".

Texture Networks + Instance normalization: Feed-forward Synthesis of Textures and Stylized Images

In the paper Texture Networks: Feed-forward Synthesis of Textures and Stylized Images we describe a faster way to generate textures and stylize images. It requires learning a feed-forward generator with the loss function proposed by Gatys et al. Once the model is trained, a texture sample or a stylized image of any size can be generated instantly.
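
For intuition, the Gatys et al. loss matches the statistics of pretrained-CNN feature maps via their Gram matrices. Below is a minimal Torch sketch of the style term; it is illustrative only (the function names are mine, not the repository's actual loss modules):

require 'torch'

-- Gram matrix of a C x H x W feature map: channel co-activations
-- averaged over all spatial positions.
local function gram_matrix(feat)
  local C, H, W = feat:size(1), feat:size(2), feat:size(3)
  local flat = feat:contiguous():view(C, H * W)
  return torch.mm(flat, flat:t()):div(C * H * W)
end

-- Style term at one layer: squared distance between the Gram matrices
-- of the generated image's features and the style image's features.
local function style_loss(feat_generated, feat_style)
  local diff = gram_matrix(feat_generated) - gram_matrix(feat_style)
  return torch.norm(diff) ^ 2
end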

Improved Texture Networks: Maximizing Quality and Diversity in Feed-forward Stylization and Texture Synthesis presents a better architectural design for the generator network. By switching from batch normalization to instance normalization we ease the learning process, which results in much better quality.
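
As a rough illustration of the idea (the repository ships its own learnable InstanceNormalization module; this sketch omits the learnable scale and shift), instance normalization computes statistics per sample and per channel, rather than pooling over the whole batch as batch normalization does:

require 'torch'

-- Normalize each (sample, channel) slice of an N x C x H x W batch by
-- its own spatial mean and standard deviation.
local function instance_norm(x, eps)
  eps = eps or 1e-5
  local N, C = x:size(1), x:size(2)
  local flat = x:contiguous():view(N, C, x:size(3) * x:size(4))
  local mean = flat:mean(3):expandAs(flat)
  local std = flat:std(3):add(eps):expandAs(flat)
  return (flat - mean):cdiv(std):viewAs(x)
end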

This repository also implements the stylization part of Perceptual Losses for Real-Time Style Transfer and Super-Resolution.

You can find an online demo here (thanks to RiseML).

Prerequisites

Download the VGG-19 model:

cd data/pretrained && bash download_models.sh && cd ../..

Stylization

Training

Preparing image dataset

You can use an image dataset of any kind. For my experiments I used the ImageNet and MS COCO datasets. The folder structure should be the following:

dataset/train
dataset/train/dummy
dataset/val
dataset/val/dummy

The dummy folders should contain the images. The data loader is based on the one used in fb.resnet.torch.

Here is a quick example for MS COCO:

wget http://msvocds.blob.core.windows.net/coco2014/train2014.zip
wget http://msvocds.blob.core.windows.net/coco2014/val2014.zip
unzip train2014.zip
unzip val2014.zip
mkdir -p dataset/train
mkdir -p dataset/val
ln -s `pwd`/val2014 dataset/val/dummy
ln -s `pwd`/train2014 dataset/train/dummy

Training a network

Basic usage:

th train.lua -data <path to any image dataset> -style_image path/to/img.jpg

These parameters work for me:

th train.lua -data <path to any image dataset> -style_image path/to/img.jpg -style_size 600 -image_size 512 -model johnson -batch_size 4 -learning_rate 1e-2 -style_weight 10 -style_layers relu1_2,relu2_2,relu3_2,relu4_2 -content_layers relu4_2

Check out the issues tab; you will find some useful advice there.

To achieve the results from the paper you need to play with -image_size, -style_size, -style_layers, -content_layers, -style_weight, -tv_weight.

Do not hesitate to set -batch_size to 1, but remember: the larger the -batch_size, the larger the -learning_rate you can use.
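
For example, a single-image batch with a correspondingly smaller learning rate (these values are illustrative, not tuned):

th train.lua -data <path to any image dataset> -style_image path/to/img.jpg -batch_size 1 -learning_rate 5e-3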

Testing

th test.lua -input_image path/to/image.jpg -model_t7 data/checkpoints/model.t7

Play with -image_size here. Pass the -cpu flag to process on the CPU.

You can find a pretrained model here. It is not the model from the paper.

Generating textures

Coming soon.

Hardware

  • The code was tested with a 12GB NVIDIA Titan X GPU on Ubuntu 14.04.
  • You may decrease -batch_size and -image_size if the model does not fit in your GPU memory (see the example below).
  • The pretrained models do not need much memory for sampling.
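
A quick example of scaling the training command down for a smaller GPU (the values are illustrative):

th train.lua -data <path to any image dataset> -style_image path/to/img.jpg -image_size 256 -batch_size 1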

Credits

The code is based on Justin Johnson's great code for artistic style.

The work was supported by Yandex and Skoltech.
