rgeirhos / Stylized-ImageNet

License: MIT
Code to create Stylized-ImageNet, a stylized version of standard ImageNet (ICLR 2019 Oral)


Stylized-ImageNet

This repository contains information and code for creating Stylized-ImageNet, a stylized version of ImageNet that can be used to induce a shape bias in CNNs, as reported in our paper "ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness" by Robert Geirhos, Patricia Rubisch, Claudio Michaelis, Matthias Bethge, Felix A. Wichmann, and Wieland Brendel. We hope you find this repository a useful resource for your own research. Note that all code, data, and materials concerning the paper itself are available in a separate repository, rgeirhos:texture-vs-shape.

Please don't hesitate to contact me at [email protected] or to open an issue if you have any questions!

Example images

Here are a few examples of what different stylizations of the same ImageNet image can look like: as you can see, local textures are heavily distorted, while global object shapes remain (more or less) intact during stylization. This makes Stylized-ImageNet an effective dataset for nudging CNNs towards learning more about shapes and less about local textures.

Usage

  1. Get style images (paintings). Download train.zip from Kaggle's painter-by-numbers dataset and extract its contents (the paintings) into a new directory code/paintings_raw/ (about 38G).
  2. Get ImageNet images & set paths. If you already have the ImageNet images, set the IMAGENET_PATH variable in code/general.py accordingly. If not, obtain the ImageNet images from the ImageNet website, store them locally, then set the variable. Note that the ImageNet images need to be split into two subdirectories, train/ and val/ (for training and validation images, respectively). In any case, also set the STYLIZED_IMAGENET_PATH variable (also in code/general.py), which indicates where the final dataset will be stored. Make sure you have enough disk space: in our setting, Stylized-ImageNet needs 134G of disk space (a bit less than standard ImageNet's 181G).
  3. Go to code/ and execute create_stylized_imagenet.sh (assuming access to a GPU). The easiest way to do this is to use the docker image that we provide (see the Docker image section below). This creates Stylized-ImageNet in the directory you specified in step 2.
  4. Optionally, delete paintings_raw/, paintings_excluded/ and paintings_preprocessed/ which are no longer needed.
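
The steps above can be sketched in Python. This is a minimal sketch: the variable names IMAGENET_PATH and STYLIZED_IMAGENET_PATH come from code/general.py as described in step 2, but the helper function and its exact checks are illustrative assumptions, not part of the repository.

```python
import os

# Paths mirror the variables in code/general.py; the values here are
# machine-specific placeholders you would adapt to your own setup.
IMAGENET_PATH = "/data/imagenet"                    # must contain train/ and val/
STYLIZED_IMAGENET_PATH = "/data/stylized-imagenet"  # needs about 134G of free space

def check_imagenet_layout(root):
    """Step 2 requires ImageNet to be split into train/ and val/ subdirectories.

    Returns the names of any split directories that are missing, so an
    empty list means the layout is ready for create_stylized_imagenet.sh.
    """
    return [split for split in ("train", "val")
            if not os.path.isdir(os.path.join(root, split))]
```

For example, `check_imagenet_layout(IMAGENET_PATH)` returning `["val"]` would indicate that the validation split has not been extracted yet.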

Docker image

We provide a docker image at https://hub.docker.com/r/bethgelab/deeplearning/ so that you don't have to install all of the libraries yourself. The repo is tested with bethgelab/deeplearning:cuda9.0-cudnn7.

CNNs pre-trained on Stylized-ImageNet

We provide CNNs that are trained on Stylized-ImageNet at rgeirhos:texture-vs-shape.

Training details

When training a model, ImageNet images are typically normalized using the standard ImageNet mean and std parameters. Stylized-ImageNet can be used as a drop-in replacement for ImageNet during training; the results in our paper are based on identical normalization for both datasets. More specifically, we use the ImageNet mean and std (taken from the PyTorch ImageNet example training script) for both datasets when training.
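
Concretely, the mean and std from the PyTorch ImageNet example script are mean = [0.485, 0.456, 0.406] and std = [0.229, 0.224, 0.225] (RGB, for pixel values scaled to [0, 1]). A minimal dependency-free sketch of the per-channel normalization applied to both datasets:

```python
# Standard ImageNet per-channel statistics (RGB), as used in the PyTorch
# ImageNet example training script; the same values are applied to
# Stylized-ImageNet.
IMAGENET_MEAN = (0.485, 0.456, 0.406)
IMAGENET_STD = (0.229, 0.224, 0.225)

def normalize_pixel(rgb):
    """Channel-wise normalization of one RGB pixel with values in [0, 1]:
    (value - mean) / std for each channel."""
    return tuple((value - m) / s
                 for value, m, s in zip(rgb, IMAGENET_MEAN, IMAGENET_STD))
```

In torchvision this corresponds to `transforms.Normalize(IMAGENET_MEAN, IMAGENET_STD)` applied after `transforms.ToTensor()`.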

Stylize arbitrary datasets

This repository is tailored to creating a stylized version of ImageNet. Should you be interested in stylizing a different dataset, I recommend using this code: https://github.com/bethgelab/stylize-datasets which stylizes arbitrary image datasets.

Credit

The code itself heavily relies on the pytorch-AdaIN repository by Naoto Inoue (naoto0804), a PyTorch implementation of the AdaIN style transfer approach by X. Huang and S. Belongie, "Arbitrary Style Transfer in Real-time with Adaptive Instance Normalization" (ICCV 2017). In fact, the entire AdaIN implementation is taken from that repository; to enable anyone to create Stylized-ImageNet with as little additional effort as possible, we make everything (preprocessing, style transfer, etc.) available in one repository.
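
For reference, the core AdaIN operation from Huang and Belongie aligns the channel-wise mean and standard deviation of the content features with those of the style features: AdaIN(x, y) = σ(y) · (x − μ(x)) / σ(x) + μ(y). Below is a pure-Python sketch for a single flattened feature channel; the actual pytorch-AdaIN implementation operates on 4-D feature tensors on the GPU.

```python
from statistics import mean, pstdev

def adain_channel(content, style, eps=1e-5):
    """Adaptive Instance Normalization for one flattened feature channel:
    normalize the content values to zero mean and unit variance, then
    re-scale and re-shift them to match the style statistics."""
    mu_c, sigma_c = mean(content), pstdev(content) + eps  # eps avoids division by zero
    mu_s, sigma_s = mean(style), pstdev(style)
    return [sigma_s * (x - mu_c) / sigma_c + mu_s for x in content]
```

After this operation the output channel has (approximately) the style channel's mean and standard deviation while preserving the spatial structure of the content.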

If you find Stylized-ImageNet useful for your work, please consider citing it:

@inproceedings{
geirhos2018,
title={ImageNet-trained {CNN}s are biased towards texture; increasing shape bias improves accuracy and robustness.},
author={Robert Geirhos and Patricia Rubisch and Claudio Michaelis and Matthias Bethge and Felix A Wichmann and Wieland Brendel},
booktitle={International Conference on Learning Representations},
year={2019},
url={https://openreview.net/forum?id=Bygh9j09KX},
}