
nihalsid / deepfillv2-pylightning

Licence: other
A clean, minimal implementation of Free-Form Image Inpainting with Gated Convolutions in PyTorch Lightning, inspired by the PyTorch implementation by @avalonstrel.

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to deepfillv2-pylightning

Exemplar-GAN-Eye-Inpainting-Tensorflow
TensorFlow implementation of "Eye In-Painting with Exemplar Generative Adversarial Networks"
Stars: ✭ 98 (+653.85%)
Mutual labels:  image-inpainting
hydra-zen
Pythonic functions for creating and enhancing Hydra applications
Stars: ✭ 165 (+1169.23%)
Mutual labels:  pytorch-lightning
deepaudio-speaker
Neural network based speaker embedder
Stars: ✭ 19 (+46.15%)
Mutual labels:  pytorch-lightning
Image-Processing
A set of algorithms and other cool things that I learned while doing image processing with OpenCV using C++ and Python.
Stars: ✭ 29 (+123.08%)
Mutual labels:  image-inpainting
ImageInpainting.jl
Image inpainting algorithms in Julia
Stars: ✭ 24 (+84.62%)
Mutual labels:  image-inpainting
hififace
Unofficial PyTorch Implementation for HifiFace (https://arxiv.org/abs/2106.09965)
Stars: ✭ 227 (+1646.15%)
Mutual labels:  pytorch-lightning
Color-Image-Inpainting
Image inpainting based on OMP and KSVD algorithm
Stars: ✭ 66 (+407.69%)
Mutual labels:  image-inpainting
BrainMaGe
Brain extraction in presence of abnormalities, using single and multiple MRI modalities
Stars: ✭ 23 (+76.92%)
Mutual labels:  pytorch-lightning
GazeCorrection
Unsupervised High-Resolution Portrait Gaze Correction and Animation (TIP 2022)
Stars: ✭ 174 (+1238.46%)
Mutual labels:  image-inpainting
Depth-Guided-Inpainting
Code for ECCV 2020 "DVI: Depth Guided Video Inpainting for Autonomous Driving"
Stars: ✭ 50 (+284.62%)
Mutual labels:  image-inpainting
Edge Connect
EdgeConnect: Structure Guided Image Inpainting using Edge Prediction, ICCV 2019 https://arxiv.org/abs/1901.00212
Stars: ✭ 2,163 (+16538.46%)
Mutual labels:  image-inpainting
Generative inpainting
DeepFill v1/v2 with Contextual Attention and Gated Convolution, CVPR 2018, and ICCV 2019 Oral
Stars: ✭ 2,659 (+20353.85%)
Mutual labels:  image-inpainting
ConSSL
PyTorch Implementation of SOTA SSL methods
Stars: ✭ 61 (+369.23%)
Mutual labels:  pytorch-lightning
AOT-GAN-for-Inpainting
[TVCG] AOT-GAN for High-Resolution Image Inpainting (codebase for image inpainting)
Stars: ✭ 147 (+1030.77%)
Mutual labels:  image-inpainting
Transformer-QG-on-SQuAD
Implement Question Generator with SOTA pre-trained Language Models (RoBERTa, BERT, GPT, BART, T5, etc.)
Stars: ✭ 28 (+115.38%)
Mutual labels:  pytorch-lightning
Implicit-Internal-Video-Inpainting
[ICCV 2021]: IIVI: Internal Video Inpainting by Implicit Long-range Propagation
Stars: ✭ 190 (+1361.54%)
Mutual labels:  image-inpainting
Fast-AgingGAN
A deep learning model to age faces in the wild, currently runs at 60+ fps on GPUs
Stars: ✭ 133 (+923.08%)
Mutual labels:  pytorch-lightning
PanoDR
Code and models for "PanoDR: Spherical Panorama Diminished Reality for Indoor Scenes" presented at the OmniCV workshop of CVPR21.
Stars: ✭ 22 (+69.23%)
Mutual labels:  image-inpainting
skillful nowcasting
Implementation of DeepMind's Deep Generative Model of Radar (DGMR) https://arxiv.org/abs/2104.00954
Stars: ✭ 117 (+800%)
Mutual labels:  pytorch-lightning
VideoTransformer-pytorch
PyTorch implementation of a collection of scalable Video Transformer Benchmarks.
Stars: ✭ 159 (+1123.08%)
Mutual labels:  pytorch-lightning

DeepFillv2 PyTorch Lightning

A clean, minimal implementation of Free-Form Image Inpainting with Gated Convolutions in PyTorch Lightning, inspired by the PyTorch implementation by @avalonstrel.

The models lack support for guidance masks, but this can easily be added. Instead of contextual attention, this implementation uses self-attention (as in SAGAN). The dataset class does not support generating free-form masks.
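
The core building block named in the title is the gated convolution, which replaces hard binary mask updates with a soft, learnable gate computed per spatial location. A minimal PyTorch sketch of the idea (layer and variable names here are illustrative, not this repository's exact module):

    import torch
    import torch.nn as nn

    class GatedConv2d(nn.Module):
        # Gated convolution (Yu et al., 2019): a feature branch and a gate
        # branch share the input; the sigmoid gate learns to down-weight
        # invalid (hole) regions instead of relying on a hard mask.
        def __init__(self, in_channels, out_channels, kernel_size,
                     stride=1, padding=0, dilation=1):
            super().__init__()
            self.feature = nn.Conv2d(in_channels, out_channels, kernel_size,
                                     stride, padding, dilation)
            self.gate = nn.Conv2d(in_channels, out_channels, kernel_size,
                                  stride, padding, dilation)
            self.activation = nn.LeakyReLU(0.2)

        def forward(self, x):
            return self.activation(self.feature(x)) * torch.sigmoid(self.gate(x))

    # Example: RGB + mask (4 channels) in, 64 feature channels out.
    layer = GatedConv2d(4, 64, kernel_size=3, padding=1)
    out = layer(torch.randn(1, 4, 256, 256))  # -> (1, 64, 256, 256)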

Usage

usage: deepfillv2.py [-h] [--dataset DATASET] [--num_workers NUM_WORKERS]
                     [--image_size IMAGE_SIZE] [--bbox_shape BBOX_SHAPE]
                     [--bbox_randomness BBOX_RANDOMNESS]
                     [--bbox_margin BBOX_MARGIN] [--bbox_max_num BBOX_MAX_NUM]
                     [--vis_dataset VIS_DATASET] [--overfit]
                     [--max_epoch MAX_EPOCH] [--save_epoch SAVE_EPOCH]
                     [--lr LR] [--weight_decay WEIGHT_DECAY] [--l1_c_h L1_C_H]
                     [--l1_c_nh L1_C_NH] [--l1_r_h L1_R_H] [--l1_r_nh L1_R_NH]
                     [--gen_loss_alpha GEN_LOSS_ALPHA]
                     [--disc_loss_alpha DISC_LOSS_ALPHA]
                     [--batch_size BATCH_SIZE] [--input_nc INPUT_NC]
                     [--experiment EXPERIMENT]
                     [--visualization_set VISUALIZATION_SET]

Arguments:

  --dataset DATASET                         dataset name
  --num_workers NUM_WORKERS                 num workers
  --image_size IMAGE_SIZE                   input image size
  --bbox_shape BBOX_SHAPE                   random box size
  --bbox_randomness BBOX_RANDOMNESS         variation in box size
  --bbox_margin BBOX_MARGIN                 margin from boundaries for box
  --bbox_max_num BBOX_MAX_NUM               max num of boxes
  --vis_dataset VIS_DATASET                 images to be visualized after each epoch
  --overfit                                 overfit to a small subset (debugging)
  --max_epoch MAX_EPOCH                     number of epochs to train for
  --save_epoch SAVE_EPOCH                   save every nth epoch
  --lr LR                                   learning rate, default=0.001
  --weight_decay WEIGHT_DECAY               weight decay
  --l1_c_h L1_C_H                           reconstruction coarse weight for holes
  --l1_c_nh L1_C_NH                         reconstruction coarse weight for non-holes
  --l1_r_h L1_R_H                           reconstruction refinement weight for holes
  --l1_r_nh L1_R_NH                         reconstruction refinement weight for non-holes
  --gen_loss_alpha GEN_LOSS_ALPHA           generator loss weight
  --disc_loss_alpha DISC_LOSS_ALPHA         discriminator loss weight
  --batch_size BATCH_SIZE                   batch size
  --input_nc INPUT_NC                       number of input channels (image channels + mask)
  --experiment EXPERIMENT                   experiment directory
  --visualization_set VISUALIZATION_SET     validation samples to be visualized
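
The bbox_* flags above parameterize the random rectangular hole masks. A minimal sketch of how such a mask could be generated from these parameters (a hypothetical helper, not the repository's dataset code):

    import numpy as np

    def random_bbox_mask(image_size=256, bbox_shape=32, bbox_randomness=0.25,
                         bbox_margin=8, bbox_max_num=4):
        # Place up to bbox_max_num boxes with side lengths varying by
        # +/- bbox_randomness around bbox_shape, kept bbox_margin pixels
        # away from the image border. 1 marks the hole region.
        mask = np.zeros((image_size, image_size), dtype=np.float32)
        lo = int(bbox_shape * (1 - bbox_randomness))
        hi = int(bbox_shape * (1 + bbox_randomness))
        for _ in range(np.random.randint(1, bbox_max_num + 1)):
            h = np.random.randint(lo, hi + 1)
            w = np.random.randint(lo, hi + 1)
            top = np.random.randint(bbox_margin, image_size - bbox_margin - h)
            left = np.random.randint(bbox_margin, image_size - bbox_margin - w)
            mask[top:top + h, left:left + w] = 1.0
        return mask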

The defaults should give reasonable performance in most cases.
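
For example, an illustrative training invocation (matterport matches the sample folder mentioned in the Data section; the other values are arbitrary):

    python deepfillv2.py --dataset matterport --image_size 256 --batch_size 8 --max_epoch 100 --experiment deepfill_matterport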

Data

The datasets must be placed in the following structure:

├── dataset-name
    ├── images
        # all the RGB images
    ├── split
        ├── train.txt
        ├── val.txt
        ├── test.txt
        ├── vis_0.txt
        # image names without extension, one per line

For an example, see the matterport folder in the data directory.
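
The split files can be produced with a few lines of Python; a sketch assuming the layout above (the 80/10/10 ratios and the fixed seed are arbitrary choices):

    import random
    from pathlib import Path

    root = Path("dataset-name")
    names = sorted(p.stem for p in (root / "images").iterdir())  # names without extension
    random.seed(0)
    random.shuffle(names)

    n = len(names)
    splits = {
        "train": names[:int(0.8 * n)],
        "val": names[int(0.8 * n):int(0.9 * n)],
        "test": names[int(0.9 * n):],
    }
    (root / "split").mkdir(exist_ok=True)
    for name, items in splits.items():
        (root / "split" / f"{name}.txt").write_text("\n".join(items) + "\n")
    # vis_0.txt (images to visualize after each epoch) can be written the same way.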

Acknowledgements

Official repository for DeepFillv2 and the PyTorch implementation by @avalonstrel.
