
charliememory / EGSC-IT

Licence: other
Tensorflow implementation of ICLR2019 paper "Exemplar Guided Unsupervised Image-to-Image Translation with Semantic Consistency"

Programming Languages

  • python
  • shell

Projects that are alternatives of or similar to EGSC-IT

iPerceive
Applying Common-Sense Reasoning to Multi-Modal Dense Video Captioning and Video Question Answering | Python3 | PyTorch | CNNs | Causality | Reasoning | LSTMs | Transformers | Multi-Head Self Attention | Published in IEEE Winter Conference on Applications of Computer Vision (WACV) 2021
Stars: ✭ 52 (+79.31%)
Mutual labels:  multi-modal
awesome-gan
A collection of AWESOME things about GAN
Stars: ✭ 42 (+44.83%)
Mutual labels:  image-translation
UNITE
Unbalanced Feature Transport for Exemplar-based Image Translation [CVPR 2021] and Marginal Contrastive Correspondence for Guided Image Generation [CVPR 2022]
Stars: ✭ 183 (+531.03%)
Mutual labels:  image-translation
Valhalla
Open Source Routing Engine for OpenStreetMap
Stars: ✭ 1,794 (+6086.21%)
Mutual labels:  multi-modal
CoCosNet-v2
CoCosNet v2: Full-Resolution Correspondence Learning for Image Translation
Stars: ✭ 312 (+975.86%)
Mutual labels:  image-translation
CommonCoreOntologies
The Common Core Ontology Repository holds the current released version of the Common Core Ontology suite.
Stars: ✭ 109 (+275.86%)
Mutual labels:  semantic-consistency
nemar
[CVPR2020] Unsupervised Multi-Modal Image Registration via Geometry Preserving Image-to-Image Translation
Stars: ✭ 120 (+313.79%)
Mutual labels:  multi-modal
day2night
Image2Image Translation Research
Stars: ✭ 46 (+58.62%)
Mutual labels:  image-translation
cycleGAN-PyTorch
A clean and lucid implementation of cycleGAN using PyTorch
Stars: ✭ 107 (+268.97%)
Mutual labels:  image-translation
Dunit
(CVPR 2020) DUNIT: Detection-Based Unsupervised Image-to-Image Translation
Stars: ✭ 24 (-17.24%)
Mutual labels:  image-translation
Dalle Pytorch
Implementation / replication of DALL-E, OpenAI's Text to Image Transformer, in Pytorch
Stars: ✭ 3,661 (+12524.14%)
Mutual labels:  multi-modal
SMIT
Pytorch implementation of Stochastic Multi-Label Image-to-image Translation (SMIT), ICCV Workshops 2019.
Stars: ✭ 37 (+27.59%)
Mutual labels:  image-translation
chainer-pix2pix
Chainer implementation for Image-to-Image Translation Using Conditional Adversarial Networks
Stars: ✭ 40 (+37.93%)
Mutual labels:  image-translation
Multi-Modal-Transformer
The repository collects many various multi-modal transformer architectures, including image transformer, video transformer, image-language transformer, video-language transformer and related datasets. Additionally, it also collects many useful tutorials and tools in these related domains.
Stars: ✭ 61 (+110.34%)
Mutual labels:  multi-modal
MMTOD
Multi-modal Thermal Object Detector
Stars: ✭ 38 (+31.03%)
Mutual labels:  multi-modal
OASIS
Official implementation of the paper "You Only Need Adversarial Supervision for Semantic Image Synthesis" (ICLR 2021)
Stars: ✭ 232 (+700%)
Mutual labels:  multi-modal
skill-sample-nodejs-berry-bash
Demonstrates the use of interactive render template directives through multi modal screen design.
Stars: ✭ 22 (-24.14%)
Mutual labels:  multi-modal
Guided-I2I-Translation-Papers
Guided Image-to-Image Translation Papers
Stars: ✭ 117 (+303.45%)
Mutual labels:  image-translation
TRAR-VQA
[ICCV 2021] TRAR: Routing the Attention Spans in Transformers for Visual Question Answering -- Official Implementation
Stars: ✭ 49 (+68.97%)
Mutual labels:  multi-modal
Pix2Pix
Image to Image Translation using Conditional GANs (Pix2Pix) implemented using Tensorflow 2.0
Stars: ✭ 29 (+0%)
Mutual labels:  image-translation

Exemplar Guided Unsupervised Image-to-Image Translation with Semantic Consistency

Tensorflow implementation of the ICLR 2019 paper "Exemplar Guided Unsupervised Image-to-Image Translation with Semantic Consistency"


Network architecture


Information flow diagrams


Dependencies

  • python 3.6.9
  • tensorflow-gpu (1.14.0)
  • numpy (1.14.0)
  • Pillow (5.0.0)
  • scikit-image (0.13.0)
  • scipy (1.0.1)
  • matplotlib (2.0.0)
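
The versions above can be pinned in a requirements file. A sketch, assuming the packages are installed from PyPI under these names (the pins are copied from the dependency list above):

```
# requirements.txt (sketch; versions taken from the dependency list above)
tensorflow-gpu==1.14.0
numpy==1.14.0
Pillow==5.0.0
scikit-image==0.13.0
scipy==1.0.1
matplotlib==2.0.0
```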

Resources

  • Pretrained models: MNIST, MNIST_multi, GTA<->BDD, CelebA, VGG19
  • Training & Testing data in tf-record format: MNIST, MNIST_multi, GTA<->BDD, CelebA. Note: for the GTA<->BDD experiment, the data are prepared with RGB images at 512x1024 resolution and segmentation labels of 8 categories, and are provided for further research. In the paper, we use RGB images at 256x512 resolution without segmentation labels.
  • Segmentation model: refer to DeepLab-ResNet-TensorFlow

TF-record data preparation steps (Optional)

You can skip this data preparation procedure if you use the provided tf-record data files directly.

  1. cd datasets
  2. Run ./run_convert_mnist.sh to download and convert mnist and mnist_multi to tf-record format.
  3. Run ./run_convert_gta_bdd.sh to convert the images and segmentation labels to tf-record format. You need to download the data from the GTA5 website and the BDD website first. Note: this script reuses the GTA data already downloaded and processed by ./run_convert_gta_bdd.sh.
  4. Run ./run_convert_celeba.sh to convert the images to tf-record format. You can download the prepared data directly, or download and process the data from the CelebA website.
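
Behind a conversion script such as run_convert_gta_bdd.sh, the essential preparation step is pairing each RGB frame with its segmentation label before serializing the pairs into tf-record examples. A minimal sketch of that pairing logic, assuming a hypothetical directory layout in which images and labels share file stems (the actual scripts' layout may differ):

```python
from pathlib import Path

def pair_images_and_labels(image_dir, label_dir, exts=(".png", ".jpg")):
    """Match each RGB image to its segmentation label by shared file stem.

    Hypothetical layout: image_dir/00001.png <-> label_dir/00001.png.
    Returns a sorted list of (image_path, label_path) tuples, skipping
    images that have no matching label.
    """
    labels = {p.stem: p for p in Path(label_dir).iterdir() if p.suffix in exts}
    return [
        (img, labels[img.stem])
        for img in sorted(Path(image_dir).iterdir())
        if img.suffix in exts and img.stem in labels
    ]
```

Each resulting pair would then be serialized into one tf-record example by the conversion script.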

Training steps

  1. Replace the data, logs, and weights links with your own directories or links.
  2. Download VGG19 into the weights directory.
  3. Download the tf-record training data into data_parent_dir (default ./data).
  4. Set data_parent_dir and checkpoint_dir, and comment/uncomment the target experiment, in the run_train_feaMask.sh and run_train_EGSCIT.sh scripts.
  5. Run run_train_feaMask.sh to pretrain the feature mask network, then run run_train_EGSCIT.sh.
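
The steps above can be wrapped in a small launcher with a preflight check, so a missing directory is reported before the two training stages start. A sketch, assuming the default directory names mentioned above; the script names come from the steps, everything else here is hypothetical:

```python
import subprocess
from pathlib import Path

def preflight(data_parent_dir="./data", weights_dir="./weights"):
    """Return the expected input directories that are missing."""
    return [d for d in (data_parent_dir, weights_dir) if not Path(d).is_dir()]

def launch_training():
    """Run the two-stage training if the expected inputs are present."""
    missing = preflight()
    if missing:
        print("missing inputs:", ", ".join(missing))
        return False
    # Stage 1: pretrain the feature mask network; stage 2: train EGSC-IT.
    for script in ("./run_train_feaMask.sh", "./run_train_EGSCIT.sh"):
        subprocess.run(["sh", script], check=True)
    return True
```

With check=True, a non-zero exit from the stage-1 script aborts before stage 2 is launched, mirroring the "pretrain first, then train" order of step 5.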

Testing steps

  1. Replace the data, logs, and weights links with your own directories or links.
  2. (Optional) Download the pretrained models into checkpoint_dir (default ./logs).
  3. Download the tf-record testing data into data_parent_dir (default ./data).
  4. Set data_parent_dir and checkpoint_dir, and comment/uncomment the target experiment, in the run_test_EGSCIT.sh script.
  5. Run run_test_EGSCIT.sh.

Citation

@article{ma2018exemplar,
  title={Exemplar Guided Unsupervised Image-to-Image Translation with Semantic Consistency},
  author={Ma, Liqian and Jia, Xu and Georgoulis, Stamatios and Tuytelaars, Tinne and Van Gool, Luc},
  journal={ICLR},
  year={2019}
}

Related projects

Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].