RElbers / ada-conv-pytorch

License: MIT



AdaConv

Unofficial PyTorch implementation of the adaptive convolution architecture for image style transfer from "Adaptive Convolutions for Structure-Aware Style Transfer". I tried to stay as faithful as possible to what the paper describes of the model, but not every training detail is given in the paper, so some choices were my own. Where the paper was unclear, I followed what AdaIN does instead. Results are at the bottom of this page.

Direct link to the adaconv module.

Direct link to the kernel predictor module.
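The core idea of AdaConv can be sketched as follows: a kernel predictor maps a style encoding to per-sample convolution kernels, which are then applied to the content features. This is a minimal illustration under my own simplifications, not the repo's actual modules or API; all class names, dimensions, and the depthwise-only kernels are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyKernelPredictor(nn.Module):
    """Predicts one k x k depthwise kernel per channel from a pooled style vector.
    (Illustrative only; the paper's predictor also emits pointwise kernels and biases.)"""
    def __init__(self, style_dim: int, channels: int, kernel_size: int = 3):
        super().__init__()
        self.channels = channels
        self.kernel_size = kernel_size
        self.fc = nn.Linear(style_dim, channels * kernel_size * kernel_size)

    def forward(self, style: torch.Tensor) -> torch.Tensor:
        n, k = style.shape[0], self.kernel_size
        return self.fc(style).view(n, self.channels, 1, k, k)

def adaptive_depthwise_conv(content: torch.Tensor, kernels: torch.Tensor) -> torch.Tensor:
    """Apply a different depthwise kernel to each sample in the batch."""
    n, c, h, w = content.shape
    k = kernels.shape[-1]
    # Fold the batch into the channel dimension so a single F.conv2d call with
    # groups=n*c performs a per-sample, per-channel convolution.
    x = content.reshape(1, n * c, h, w)
    weight = kernels.reshape(n * c, 1, k, k)
    out = F.conv2d(x, weight, padding=k // 2, groups=n * c)
    return out.reshape(n, c, h, w)

predictor = ToyKernelPredictor(style_dim=64, channels=32)
style = torch.randn(4, 64)             # pooled style encodings
content = torch.randn(4, 32, 16, 16)   # content feature maps
out = adaptive_depthwise_conv(content, predictor(style))
print(out.shape)  # torch.Size([4, 32, 16, 16])
```

The batch-into-groups trick avoids a Python loop over samples, since `F.conv2d` shares one weight tensor across the batch by default.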

Usage

The parameters shown in the commands below are the defaults and can be omitted unless you want to use different values. Check the help option (-h or --help) for more information about all parameters. To train a new model:

python train.py --content ./data/MSCOCO/train2017 --style ./data/WikiArt/train

To resume training from a checkpoint (.ckpt files are saved in the log directory):

python train.py --checkpoint <path-to-ckpt-file>

To apply the model on a single style-content pair:

python stylize.py --content ./content.png --style ./style.png --output ./output.png --model ./model.ckpt

To apply the model on every style-content combination in a folder and create a table of outputs:

python test.py --content-dir ./test_images/content --style-dir ./test_images/style --output-dir ./test_images/output --model ./model.ckpt

Weights

Pretrained weights can be downloaded here. Move model.ckpt to the root directory of this project and run stylize.py or test.py. You can finetune the model further by loading it as a checkpoint and increasing the number of iterations. For example, to train for an additional 40k iterations (200k total minus the 160k already trained):

python train.py --checkpoint ./model.ckpt --iterations 200000

Data

The model is trained with the MS COCO train2017 dataset for content images and the WikiArt train dataset for style images. By default, content images should be placed in ./data/MSCOCO/train2017 and style images in ./data/WikiArt/train; you can change these directories by passing arguments when running the scripts. The test style and content images in the ./test_images folder are taken from the official AdaIN repository.
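As a quick sanity check before training, the default layout described above can be created (or verified) programmatically. This is just a convenience sketch; the directory names come from the defaults stated here, and the script itself is not part of the repo.

```python
from pathlib import Path

# Default dataset locations expected by train.py
# (overridable via --content and --style):
expected = [
    Path("data/MSCOCO/train2017"),  # content images (MS COCO train2017)
    Path("data/WikiArt/train"),     # style images (WikiArt train)
]

for d in expected:
    d.mkdir(parents=True, exist_ok=True)  # create if missing
    n_images = sum(1 for p in d.iterdir() if p.is_file())
    print(f"{d}: {n_images} files")
```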

Results

Judging from the results, I'm not convinced everything matches the original authors' setup, but without an official repository it is hard to compare implementations. Results after training for 160k iterations:

https://raw.githubusercontent.com/RElbers/ada-conv-pytorch/master/imgs/results_table_256.jpg

Comparison with reported results in the paper:

https://raw.githubusercontent.com/RElbers/ada-conv-pytorch/master/imgs/results_comparison.jpg

Architecture (from the original paper):

https://raw.githubusercontent.com/RElbers/ada-conv-pytorch/master/imgs/arch_01.png

https://raw.githubusercontent.com/RElbers/ada-conv-pytorch/master/imgs/arch_02.png
