KumapowerLIU / Rethinking Inpainting Medfe

Rethinking Image Inpainting via a Mutual Encoder Decoder with Feature Equalizations. ECCV 2020 Oral

Projects that are alternatives of or similar to Rethinking Inpainting Medfe

Facegan
TF implementation of our ECCV 2018 paper: Semi-supervised Adversarial Learning to Generate Photorealistic Face Images of New Identities from 3D Morphable Model
Stars: ✭ 176 (-12%)
Mutual labels:  generative-adversarial-network
Pytorch Generative Model Collections
Collection of generative models in Pytorch version.
Stars: ✭ 2,296 (+1048%)
Mutual labels:  generative-adversarial-network
Neuralnetworks.thought Experiments
Observations and notes to understand the workings of neural network models and other thought experiments using Tensorflow
Stars: ✭ 199 (-0.5%)
Mutual labels:  generative-adversarial-network
Opt Mmd
Learning kernels to maximize the power of MMD tests
Stars: ✭ 181 (-9.5%)
Mutual labels:  generative-adversarial-network
Kbgan
Code for "KBGAN: Adversarial Learning for Knowledge Graph Embeddings" https://arxiv.org/abs/1711.04071
Stars: ✭ 186 (-7%)
Mutual labels:  generative-adversarial-network
Freezed
Freeze the Discriminator: a Simple Baseline for Fine-Tuning GANs (CVPRW 2020)
Stars: ✭ 195 (-2.5%)
Mutual labels:  generative-adversarial-network
Tgan
Generative adversarial training for generating synthetic tabular data.
Stars: ✭ 173 (-13.5%)
Mutual labels:  generative-adversarial-network
Iseebetter
iSeeBetter: Spatio-Temporal Video Super Resolution using Recurrent-Generative Back-Projection Networks | Python3 | PyTorch | GANs | CNNs | ResNets | RNNs | Published in Springer Journal of Computational Visual Media, September 2020, Tsinghua University Press
Stars: ✭ 202 (+1%)
Mutual labels:  generative-adversarial-network
Dragan
A stable algorithm for GAN training
Stars: ✭ 189 (-5.5%)
Mutual labels:  generative-adversarial-network
Arbitrary Text To Image Papers
A collection of arbitrary text to image papers with code (constantly updating)
Stars: ✭ 196 (-2%)
Mutual labels:  generative-adversarial-network
Gan Weight Norm
Code for "On the Effects of Batch and Weight Normalization in Generative Adversarial Networks"
Stars: ✭ 182 (-9%)
Mutual labels:  generative-adversarial-network
Gan2shape
Code for GAN2Shape (ICLR2021 oral)
Stars: ✭ 183 (-8.5%)
Mutual labels:  generative-adversarial-network
Deep Learning With Python
Deep learning codes and projects using Python
Stars: ✭ 195 (-2.5%)
Mutual labels:  generative-adversarial-network
Gail Tf
Tensorflow implementation of generative adversarial imitation learning
Stars: ✭ 179 (-10.5%)
Mutual labels:  generative-adversarial-network
Csa Inpainting
Coherent Semantic Attention for image inpainting(ICCV 2019)
Stars: ✭ 202 (+1%)
Mutual labels:  generative-adversarial-network
Text To Image
Text to image synthesis using thought vectors
Stars: ✭ 2,052 (+926%)
Mutual labels:  generative-adversarial-network
Creative Adversarial Networks
(WIP) Implementation of Creative Adversarial Networks https://arxiv.org/pdf/1706.07068.pdf
Stars: ✭ 193 (-3.5%)
Mutual labels:  generative-adversarial-network
Triple Gan
See Triple-GAN-V2 in PyTorch: https://github.com/taufikxu/Triple-GAN
Stars: ✭ 203 (+1.5%)
Mutual labels:  generative-adversarial-network
Conditional Gan
Tensorflow implementation for Conditional Convolutional Adversarial Networks.
Stars: ✭ 202 (+1%)
Mutual labels:  generative-adversarial-network
Keras Acgan
Auxiliary Classifier Generative Adversarial Networks in Keras
Stars: ✭ 196 (-2%)
Mutual labels:  generative-adversarial-network

License: CC BY-NC-SA 4.0 · Python 3.6

Rethinking-Inpainting-MEDFE

[MEDFE demo image]

Paper | BibTeX

Rethinking Image Inpainting via a Mutual Encoder-Decoder with Feature Equalizations.

Hongyu Liu, Bin Jiang, Yibing Song, Wei Huang and Chao Yang.
In ECCV 2020 (Oral).

License

All rights reserved. Licensed under the CC BY-NC-SA 4.0 (Attribution-NonCommercial-ShareAlike 4.0 International)

The code is released for academic research use only. For commercial use, please contact [email protected].

Installation

Clone this repo.

git clone https://github.com/KumapowerLIU/Rethinking-Inpainting-MEDFE.git

Prerequisites

  • Python 3
  • PyTorch >= 1.0
  • Tensorboard
  • Torchvision
  • Pillow

Dataset Preparation

We use Places2, CelebA and Paris Street-View datasets. To train a model on the full dataset, download datasets from official websites.

Our model is trained on the irregular mask dataset provided by Liu et al. You can download the publicly available Irregular Mask Dataset from their website.

For the structure images of the datasets, we follow StructureFlow and utilize the RTV smoothing method. Run the generation function data/Matlab/generate_structure_images.m in MATLAB. For example, to generate smooth images for Places2, run:

generate_structure_images("path to Places2 dataset root", "path to output folder");
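If MATLAB is unavailable, you can get a rough feel for what a structure image looks like with a simple smoothing filter. The sketch below is a plain numpy separable box blur; it is only an illustration and is NOT the RTV method the authors use, so the resulting images will differ from the paper's structure images.

```python
import numpy as np

def box_blur(image: np.ndarray, radius: int = 3) -> np.ndarray:
    """Separable box blur as a crude stand-in for RTV smoothing.

    `image` is an H x W (grayscale) float array. This is only an
    illustration; the authors generate structure images with the RTV
    method in data/Matlab/generate_structure_images.m.
    """
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    # Blur each row, then each column (separable convolution).
    blurred = np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode="same"), 1, image)
    blurred = np.apply_along_axis(
        lambda col: np.convolve(col, kernel, mode="same"), 0, blurred)
    return blurred
```

For real experiments, use the MATLAB script above so your training data matches what the model was designed for.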

Training New Models

# To train on your dataset, for example:
python train.py --st_root=[the path of structure images] --de_root=[the path of ground truth images] --mask_root=[the path of mask images]

There are many options you can specify. Run python train.py --help or see the options/ folder.

For the current version, the batch size must be set to 1.

Training is logged for Tensorboard; the logs are stored at logs/[name].

Code Structure

  • train.py: the entry point for training.
  • models/networks.py: defines the architecture of all models.
  • options/: creates option lists with the argparse package; additional options are dynamically added in other files as well.
  • data/: processes the dataset before passing it to the network.
  • models/encoder.py: defines the encoder.
  • models/decoder.py: defines the decoder.
  • models/PCconv.py: defines the multi-scale partial convolution, the feature equalizations, and the two branches.
  • models/MEDFE.py: defines the loss, model, optimization, forward and backward passes, and other utilities.
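The "dynamically added" options mentioned above typically work by letting individual model or data classes register extra flags on a shared argparse parser. The sketch below illustrates that pattern with hypothetical class and hook names (`BaseOptions`, `modify_commandline_options`, `--lambda_structure`); only the `--st_root`, `--de_root`, `--mask_root` flags are taken from the training command above, the rest is not the repo's actual API.

```python
import argparse

class BaseOptions:
    """Shared flags used by train.py (names from the training command)."""
    def initialize(self, parser):
        parser.add_argument("--st_root", help="path of structure images")
        parser.add_argument("--de_root", help="path of ground truth images")
        parser.add_argument("--mask_root", help="path of mask images")
        parser.add_argument("--batchSize", type=int, default=1)
        return parser

class InpaintModelOptions:
    """Hypothetical model class that appends its own flags."""
    @staticmethod
    def modify_commandline_options(parser):
        parser.add_argument("--lambda_structure", type=float, default=1.0)
        return parser

parser = argparse.ArgumentParser()
parser = BaseOptions().initialize(parser)
parser = InpaintModelOptions.modify_commandline_options(parser)
opt = parser.parse_args(["--st_root", "structures/", "--de_root", "images/"])
```

This keeps model-specific flags next to the model code instead of piling everything into one options file.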

Pre-trained weights and test model

There are three folders with pre-trained weights, one for each of the three datasets; for CelebA we only use center masks. You can download the pre-trained models here. The test model and demo are coming soon. Adding random noise to the input may improve results; we will re-train our model and update the parameters.

About Feature equalizations

We believe feature equalization could be used in many tasks to replace traditional attention blocks (Non-local / CBAM). We did not try this due to lack of time; we hope others will try the method and share their results with us.
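For reference, here is a minimal numpy sketch of the kind of conventional channel-attention block the paragraph above suggests replacing: a simplified squeeze-and-excitation style gate, as used in CBAM's channel branch (with a single learned projection instead of the usual two-layer bottleneck). This is the baseline being compared against, NOT the paper's feature-equalization operation.

```python
import numpy as np

def channel_attention(features: np.ndarray, weight: np.ndarray) -> np.ndarray:
    """Simplified squeeze-and-excitation channel gating.

    `features` has shape (C, H, W); `weight` is a (C, C) learned projection.
    This is a conventional attention baseline, not the feature
    equalization proposed in the paper.
    """
    squeezed = features.mean(axis=(1, 2))        # global average pool -> (C,)
    logits = weight @ squeezed                    # learned channel reweighting
    gates = 1.0 / (1.0 + np.exp(-logits))         # sigmoid -> per-channel gate
    return features * gates[:, None, None]        # rescale each channel
```

Feature equalization, by contrast, aims to make channel and spatial responses consistent rather than simply amplifying some channels over others.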

Citation

If you use this code for your research, please cite our paper.

@inproceedings{Liu2019MEDFE,
  title={Rethinking Image Inpainting via a Mutual Encoder-Decoder with Feature Equalizations},
  author={Liu, Hongyu and Jiang, Bin and Song, Yibing and Huang, Wei and Yang, Chao},
  booktitle={Proceedings of the European Conference on Computer Vision},
  year={2020}
}