Chainer implementation of DeepFill v1 and v2

DeepFillv1 Paper | Official Implementation

DeepFillv2 Paper

Requirements

numpy
opencv_python
chainer >= 6.0.0b
Pillow
PyYAML

Datasets

Prepare text files that list the paths to your images, one path per line, and specify them as IMAGE_FLIST in the config files (src/contextual_attention.yml and src/gated_convolution.yml).

IMAGE_FLIST: [
  'paths_for_training_image.txt', # for training
  'paths_for_validation_image.txt', # for validation
]
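A file list like this can be produced with a short script. The following is a minimal sketch, assuming your images live under hypothetical directories datasets/train and datasets/val; neither the script nor these paths are part of this repository:

# generate_flist.py -- hypothetical helper, not part of this repository
import glob
import os

def write_flist(image_dir, out_path):
    # Collect image paths under image_dir and write them one per line.
    paths = []
    for ext in ('*.jpg', '*.jpeg', '*.png'):
        paths.extend(glob.glob(os.path.join(image_dir, '**', ext), recursive=True))
    paths.sort()
    with open(out_path, 'w') as f:
        f.write('\n'.join(os.path.abspath(p) for p in paths))

write_flist('datasets/train', 'paths_for_training_image.txt')
write_flist('datasets/val', 'paths_for_validation_image.txt')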

To train DeepFillv2 with edge image input, save the edge images in advance and specify the text files that list their paths as EDGE_FLIST in the config file (src/gated_convolution.yml). The edge paths must be listed in the same order as the image paths.

EDGE_FLIST: [
  'paths_for_training_edge.txt', # for training
  'paths_for_validation_edge.txt', # for validation
]

If you leave EDGE_FLIST unspecified, training runs without edge input.

Edge image example:

[sample edge image]

Background pixels should have value 0 and edge pixels 255.
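If you do not already have edge images, any edge detector that outputs a 0/255 map will do. Below is a minimal sketch using OpenCV's Canny detector; the thresholds, file names, and naming scheme are assumptions, not something this repository prescribes:

# make_edges.py -- hypothetical helper; thresholds and paths are examples only
import cv2

def save_edge_image(image_path, edge_path, low=100, high=200):
    # Canny outputs a uint8 map where background is 0 and edges are 255.
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(gray, low, high)
    cv2.imwrite(edge_path, edges)

# Writing the edge list in the same loop keeps its order aligned with the image list.
with open('paths_for_training_image.txt') as f_img, \
        open('paths_for_training_edge.txt', 'w') as f_edge:
    for line in f_img:
        image_path = line.strip()
        edge_path = image_path + '.edge.png'  # hypothetical naming scheme
        save_edge_image(image_path, edge_path)
        f_edge.write(edge_path + '\n')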

Training

Only single GPU training is supported.

  • DeepFillv1

    • Modify contextual_attention.yml to set IMAGE_FLIST, MODEL_RESTORE, EVAL_FOLDER, and other parameters.
    • Run
    cd src
    python train_contextual_attention.py --snapshot path_to_snapshot.npz
    
  • DeepFillv2

    • Modify gated_convolution.yml to set IMAGE_FLIST, EDGE_FLIST, MODEL_RESTORE, EVAL_FOLDER, and other parameters.
    • Run
    cd src
    python train_gated_convolution.py --snapshot path_to_snapshot.npz
    

Validation

Run

python test.py --model [v1 or v2] --config_path [path to config] --snapshot [path to snapshot] --name [file name to save]
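For example, to validate a DeepFillv2 model from the src directory (the snapshot and output file names below are placeholders):

python test.py --model v2 --config_path gated_convolution.yml --snapshot path_to_snapshot.npz --name result.png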

Results on ImageNet

  • DeepFillv1 (top: original, middle: input, bottom: output) [contextual_attention result image]
  • DeepFillv2 with edge input (top: original, middle: input, bottom: output) [gated_convolution result image]

Citing

@article{yu2018generative,
  title={Generative Image Inpainting with Contextual Attention},
  author={Yu, Jiahui and Lin, Zhe and Yang, Jimei and Shen, Xiaohui and Lu, Xin and Huang, Thomas S},
  journal={arXiv preprint arXiv:1801.07892},
  year={2018}
}

@article{yu2018free,
  title={Free-Form Image Inpainting with Gated Convolution},
  author={Yu, Jiahui and Lin, Zhe and Yang, Jimei and Shen, Xiaohui and Lu, Xin and Huang, Thomas S},
  journal={arXiv preprint arXiv:1806.03589},
  year={2018}
}