
irasin / Pytorch_Style_Swap

License: GPL-3.0
Unofficial PyTorch (1.0+) implementation of the paper [Fast Patch-based Style Transfer of Arbitrary Style](https://arxiv.org/abs/1612.04337).

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to Pytorch Style Swap

Pytorch AdaIN
Pytorch implementation from scratch of [Arbitrary Style Transfer in Real-time with Adaptive Instance Normalization [Huang+, ICCV2017]]
Stars: ✭ 85 (+51.79%)
Mutual labels:  styletransfer
stylizeapp
A flask website for style transfer
Stars: ✭ 34 (-39.29%)
Mutual labels:  styletransfer
awesome style transfer
A collection of style transfer papers from international computer vision conferences
Stars: ✭ 42 (-25%)
Mutual labels:  styletransfer
Meetup-Content
Entirety.ai Intuition to Implementation Meetup Content.
Stars: ✭ 33 (-41.07%)
Mutual labels:  styletransfer
SANET
"Arbitrary Style Transfer with Style-Attentional Networks" (CVPR 2019)
Stars: ✭ 21 (-62.5%)
Mutual labels:  styletransfer
color-aware-style-transfer
Reference code for the paper CAMS: Color-Aware Multi-Style Transfer.
Stars: ✭ 36 (-35.71%)
Mutual labels:  styletransfer
Image-Style-Transfer-Using-CNNs
Implementation of Image Style Transfer Using CNNs using Pytorch.
Stars: ✭ 16 (-71.43%)
Mutual labels:  styletransfer

Pytorch_Style_Swap

Unofficial PyTorch (1.0+) implementation of the paper Fast Patch-based Style Transfer of Arbitrary Style.

The original Torch implementation by the authors can be found here.

This repository provides a pre-trained model so that you can generate your own image from a content image and a style image. You can also download the training dataset, or prepare your own dataset, to train the model from scratch.

If you have any questions, please feel free to contact me. (English, Japanese, or Chinese is fine!)

Notice

I have proposed Structure-Emphasized Multimodal Style Transfer (SEMST); feel free to try it here.


Requirements

  • Python 3.7
  • PyTorch 1.0+
  • TorchVision
  • Pillow

An Anaconda environment is recommended!

  • A GPU environment (required for the style-swap computation)

Usage


test

  1. Clone this repository

    git clone https://github.com/irasin/Pytorch_Style_Swap
    cd Pytorch_Style_Swap
  2. Prepare your content image and style image. Some samples are provided in content and style, so you can try them easily. Note that some of them may be too large to transfer, because style_swap.py consumes a lot of GPU memory, so I am not sure that every provided image can be transferred.

  3. Generate the output image. A transferred output image, a content_output_pair image, and an NST_demo_like image will be generated.

    python test.py -c content_image_path -s style_image_path
    usage: test.py [-h] 
                   [--content CONTENT] 
                   [--style STYLE]
                   [--output_name OUTPUT_NAME] 
                   [--patch_size PATCH_SIZE]
                   [--gpu GPU] 
                   [--model_state_path MODEL_STATE_PATH]
    
    

    If output_name is not given, the output name will be the combination of the content image name and the style image name.
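
    For example, with one of the provided sample images (the paths and flag values below are only placeholders):

        python test.py -c content/sample.jpg -s style/sample.jpg --patch_size 3 --gpu 0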


train

  1. Download COCO (as the content dataset) and Wikiart (as the style dataset), unzip them, and rename the folders content and style respectively (recommended).

  2. Modify the arguments in train.py, such as the dataset directories, the number of epochs, and the learning rate, or add your own training code.

  3. Train the model on a GPU.

  4. python train.py
    usage: train.py [-h] 
                    [--batch_size BATCH_SIZE] 
                    [--epoch EPOCH]
                    [--patch_size PATCH_SIZE] 
                    [--gpu GPU]
                    [--learning_rate LEARNING_RATE] 
                    [--tv_weight TV_WEIGHT]
                    [--snapshot_interval SNAPSHOT_INTERVAL]
                    [--train_content_dir TRAIN_CONTENT_DIR]
                    [--train_style_dir TRAIN_STYLE_DIR]
                    [--test_content_dir TEST_CONTENT_DIR]
                    [--test_style_dir TEST_STYLE_DIR] 
                    [--save_dir SAVE_DIR]
    
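    For example, with the directory layout recommended above (the hyperparameter values here are only placeholders, not the script's defaults):

        python train.py --batch_size 4 --epoch 10 --gpu 0 \
                        --train_content_dir content --train_style_dir style \
                        --test_content_dir content --test_style_dir style \
                        --save_dir result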

Result

Some results with the provided content images and my cat (named Sora) are shown below.

[result images]

My Opinion

Style swap is implemented here as a series of convolutional operations. I am a beginner with PyTorch, so I am afraid my implementation technique is poor and that this is why the style swap consumes so much GPU memory. I would really appreciate any improvement to this implementation; feel free to send me a PR. Thanks.
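
For readers who want to see the idea more concretely, here is a minimal sketch of the patch-based style swap expressed as two convolutions. It is written from the paper's description, not taken from this repository; the function name, tensor shapes, and the final patch-averaging step are my own assumptions.

    import torch
    import torch.nn.functional as F

    def style_swap(content_feat, style_feat, patch_size=3, stride=1):
        # content_feat, style_feat: VGG feature maps of shape (1, C, H, W)
        # 1. Extract style patches to use as convolution filters: (n_patches, C, p, p)
        c = style_feat.shape[1]
        patches = F.unfold(style_feat, kernel_size=patch_size, stride=stride)
        n_patches = patches.shape[-1]
        patches = patches.permute(0, 2, 1).reshape(n_patches, c, patch_size, patch_size)

        # 2. Normalize each patch so the correlation below is a cosine similarity
        norms = patches.reshape(n_patches, -1).norm(dim=1).clamp(min=1e-8)
        normalized = patches / norms.view(-1, 1, 1, 1)

        # 3. Correlate every content location with every style patch (one filter per patch)
        similarity = F.conv2d(content_feat, normalized)  # (1, n_patches, H', W')

        # 4. Hard nearest-neighbour assignment: one-hot over the patch dimension
        one_hot = torch.zeros_like(similarity)
        one_hot.scatter_(1, similarity.argmax(dim=1, keepdim=True), 1.0)

        # 5. Paste the selected (un-normalized) style patches back with a transposed conv
        swapped = F.conv_transpose2d(one_hot, patches, stride=stride)

        # 6. Average overlapping patches so each output pixel is a mean of the patches covering it
        ones = torch.ones(1, c, patch_size, patch_size, device=content_feat.device)
        overlap = F.conv_transpose2d(torch.ones_like(similarity[:, :1]), ones, stride=stride)
        return swapped / overlap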

Also, as you may know, AdaIN and WCT are more powerful than style swap; check them out if you are interested.

Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].