
taey16 / Pix2pixbegan.pytorch

A PyTorch implementation of pix2pix + BEGAN (Boundary Equilibrium Generative Adversarial Networks)

Projects that are alternatives of or similar to Pix2pixbegan.pytorch

Pytorch Pix2pix
Pytorch implementation of pix2pix for various datasets.
Stars: ✭ 74 (-50%)
Mutual labels:  gan, pix2pix
P2pala
Page to PAGE Layout Analysis Tool
Stars: ✭ 147 (-0.68%)
Mutual labels:  gan, pix2pix
Ashpy
TensorFlow 2.0 library for distributed training, evaluation, model selection, and fast prototyping.
Stars: ✭ 82 (-44.59%)
Mutual labels:  gan, pix2pix
Pix2pixhd
Synthesizing and manipulating 2048x1024 images with conditional GANs
Stars: ✭ 5,553 (+3652.03%)
Mutual labels:  gan, pix2pix
Tensorflow Pix2pix
A lightweight pix2pix Tensorflow implementation.
Stars: ✭ 143 (-3.38%)
Mutual labels:  gan, pix2pix
Deepnude nowatermark withmodel
DeepNude source code,without watermark,with demo and model download link,one command to run offline,GAN/Pytorch/pix2pix/pic2pic
Stars: ✭ 950 (+541.89%)
Mutual labels:  gan, pix2pix
Tensorflow2.0 Examples
🙄 Difficult algorithm, Simple code.
Stars: ✭ 1,397 (+843.92%)
Mutual labels:  gan, pix2pix
Pytorch Cyclegan And Pix2pix
Image-to-Image Translation in PyTorch
Stars: ✭ 16,477 (+11033.11%)
Mutual labels:  gan, pix2pix
Cyclegan
Software that can generate photos from paintings, turn horses into zebras, perform style transfer, and more.
Stars: ✭ 10,933 (+7287.16%)
Mutual labels:  gan, pix2pix
Sketch To Art
🖼 Create artwork from your casual sketch with GAN and style transfer
Stars: ✭ 115 (-22.3%)
Mutual labels:  gan, pix2pix
Igan
Interactive Image Generation via Generative Adversarial Networks
Stars: ✭ 3,845 (+2497.97%)
Mutual labels:  gan, pix2pix
Focal Frequency Loss
Focal Frequency Loss for Generative Models
Stars: ✭ 141 (-4.73%)
Mutual labels:  gan, pix2pix
cgan-face-generator
Face generator from sketches using cGAN (pix2pix) model
Stars: ✭ 52 (-64.86%)
Mutual labels:  gan, pix2pix
Pix2pix
Image-to-image translation with conditional adversarial nets
Stars: ✭ 8,765 (+5822.3%)
Mutual labels:  gan, pix2pix
pix2pix-tensorflow
A minimal tensorflow implementation of pix2pix (Image-to-Image Translation with Conditional Adversarial Nets - https://phillipi.github.io/pix2pix/).
Stars: ✭ 22 (-85.14%)
Mutual labels:  gan, pix2pix
Neuralnetworkpostprocessing
Unity Post Processing with Convolution Neural Network
Stars: ✭ 81 (-45.27%)
Mutual labels:  gan, pix2pix
Swapnet
Virtual Clothing Try-on with Deep Learning. PyTorch reproduction of SwapNet by Raj et al. 2018. Now with Docker support!
Stars: ✭ 202 (+36.49%)
Mutual labels:  gan, pix2pix
Paddlegan
PaddlePaddle GAN library, including lots of interesting applications like First-Order motion transfer, wav2lip, picture repair, image editing, photo2cartoon, image style transfer, and so on.
Stars: ✭ 4,987 (+3269.59%)
Mutual labels:  gan, pix2pix
Deepnudecli
DeepNude Command Line Version With Watermark Removed
Stars: ✭ 112 (-24.32%)
Mutual labels:  gan, pix2pix
Starnet
StarNet
Stars: ✭ 141 (-4.73%)
Mutual labels:  gan, pix2pix

pix2pix + BEGAN

Install

Dataset

Train

  • pix2pixGAN
  • CUDA_VISIBLE_DEVICES=x python main_pix2pixgan.py --dataroot /path/to/facades/train --valDataroot /path/to/facades/val --exp /path/to/a/directory/for/checkpoints
  • pix2pixBEGAN
  • CUDA_VISIBLE_DEVICES=x python main_pix2pixBEGAN.py --dataroot /path/to/facades/train --valDataroot /path/to/facades/val --exp /path/to/a/directory/for/checkpoints
  • Most of the parameters are kept the same for a fair comparison.
  • Unlike the original pix2pix, we do not model D as a conditional discriminator: input samples are not given to D (only target samples are).
  • We use an image buffer (analogous to the replay buffer in DQN) when training D.
  • Try other datasets as needed; similar results should be obtained.
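The image buffer mentioned above can be sketched as follows. This is a minimal illustration, not code from this repo: the class and method names are hypothetical, and the 50-image capacity and 0.5 swap probability are common defaults, not values confirmed by the source.

```python
import random

class ImageBuffer:
    """Stores previously generated images and occasionally substitutes
    an old one for the current fake when training D, which damps
    oscillation (analogous to a DQN replay buffer)."""

    def __init__(self, capacity=50):
        self.capacity = capacity
        self.images = []

    def push_and_pop(self, image):
        # Until the buffer is full, store the new image and return it.
        if len(self.images) < self.capacity:
            self.images.append(image)
            return image
        # Afterwards, with probability 0.5 swap the new image
        # with a randomly chosen stored one and return the old image.
        if random.random() < 0.5:
            idx = random.randrange(self.capacity)
            old = self.images[idx]
            self.images[idx] = image
            return old
        # Otherwise pass the new image through unchanged.
        return image
```

D is then trained on the buffer's output instead of the raw current batch of fakes.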

Training Curve(pix2pixBEGAN)

  • L_D and L_G w/ BEGAN

[figure: training curves of L_D and L_G (BEGAN)]

  • We found that L_D and L_G stay consistently balanced (equilibrium parameter gamma=0.7) and converge, even though the networks D and G differ in model capacity and detailed layer specification.

  • M_global

[figure: M_global during training]

  • As the authors note, M_global is a good indicator for monitoring convergence.

  • Parsing the log: the training log is saved as train.log in the directory you specified.

  • L_D and L_G w/ GAN

[figure: training curves of L_D and L_G]
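The equilibrium bookkeeping behind these curves is compact enough to sketch. The update rules below follow the BEGAN formulation (gamma is the equilibrium parameter used above; lambda_k is the learning rate for the balance coefficient k); the function and variable names are ours, not this repo's.

```python
def began_update(loss_real, loss_fake, k, gamma=0.7, lambda_k=0.001):
    """One step of BEGAN's equilibrium control.

    loss_real: D's reconstruction loss on real target images, L(x)
    loss_fake: D's reconstruction loss on generated images, L(G(z))
    k:         current balance coefficient, kept in [0, 1]
    Returns (L_D, k_next, M_global).
    """
    # D minimizes L(x) - k * L(G(z)); G minimizes L(G(z)).
    loss_d = loss_real - k * loss_fake
    # k drifts toward the equilibrium gamma = L(G(z)) / L(x).
    k_next = min(max(k + lambda_k * (gamma * loss_real - loss_fake), 0.0), 1.0)
    # Convergence measure reported during training.
    m_global = loss_real + abs(gamma * loss_real - loss_fake)
    return loss_d, k_next, m_global
```

M_global decreasing over epochs is what the curve above tracks; it falls only when D reconstructs real images well and the process stays near equilibrium.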

Comparison

  • pix2pixGAN vs. pix2pixBEGAN
  • CUDA_VISIBLE_DEVICES=x python compare.py --netG_GAN /path/to/netG.pth --netG_BEGAN /path/to/netG.pth --exp /path/to/a/dir/for/saving --tstDataroot /path/to/facades/test/
[figures: failure cases; GAN vs. BEGAN comparison]
  • Check out more results (order: input, real-target, fake (pix2pixBEGAN), fake (pix2pixGAN)).
  • Interpolation in the input space.
  • CUDA_VISIBLE_DEVICES=x python interpolateInput.py --tstDataroot ~/path/to/your/facades/test/ --interval 14 --exp /path/to/resulting/dir --tstBatchSize 4 --netG /path/to/your/netG_epoch_xxx.pth
  • Upper rows: pix2pixGAN; lower rows: pix2pixBEGAN interpolation.
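Input-space interpolation as run by the command above amounts to blending two inputs linearly and passing each blend through the generator. A minimal sketch (a hypothetical helper, not the repo's interpolateInput.py; netG stands for any callable generator):

```python
def interpolate_inputs(netG, x_a, x_b, steps=14):
    """Linearly blend two inputs in pixel space and generate an
    output for each blend; returns steps + 1 generator outputs.

    Works on anything supporting scalar multiplication and addition,
    e.g. floats or torch tensors of identical shape.
    """
    outputs = []
    for i in range(steps + 1):
        alpha = i / steps
        # Convex combination of the two inputs.
        x = (1.0 - alpha) * x_a + alpha * x_b
        outputs.append(netG(x))
    return outputs
```

With `--interval 14` as above, this yields 15 frames from the first input to the second.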

Showing reconstruction from D and generation from G

  • (order: input, real-target, reconstructed-real, fake, reconstructed-fake)
[figure: reconstruction from D and generation from G]

Reference

misc.

  • We apologize for any inconvenience when cloning this project: the result images are large, so cloning may take a while. (Downloading the zip file is likely faster.)