EndlessSora / Focal Frequency Loss

Projects that are alternatives of or similar to Focal Frequency Loss

Cyclegan
Software that can generate photos from paintings, turn horses into zebras, perform style transfer, and more.
Stars: ✭ 10,933 (+7653.9%)
Mutual labels:  gan, generative-adversarial-network, image-generation, pix2pix, image-manipulation
Pytorch Cyclegan And Pix2pix
Image-to-Image Translation in PyTorch
Stars: ✭ 16,477 (+11585.82%)
Mutual labels:  gan, generative-adversarial-network, image-generation, pix2pix, image-manipulation
Pix2pix
Image-to-image translation with conditional adversarial nets
Stars: ✭ 8,765 (+6116.31%)
Mutual labels:  gan, generative-adversarial-network, image-generation, pix2pix, image-manipulation
Tsit
[ECCV 2020 Spotlight] A Simple and Versatile Framework for Image-to-Image Translation
Stars: ✭ 141 (+0%)
Mutual labels:  gan, generative-adversarial-network, image-generation, image-manipulation
Lggan
[CVPR 2020] Local Class-Specific and Global Image-Level Generative Adversarial Networks for Semantic-Guided Scene Generation
Stars: ✭ 97 (-31.21%)
Mutual labels:  gan, generative-adversarial-network, image-generation, image-manipulation
Anycost Gan
[CVPR 2021] Anycost GANs for Interactive Image Synthesis and Editing
Stars: ✭ 367 (+160.28%)
Mutual labels:  gan, generative-adversarial-network, image-generation, image-manipulation
Igan
Interactive Image Generation via Generative Adversarial Networks
Stars: ✭ 3,845 (+2626.95%)
Mutual labels:  gan, generative-adversarial-network, pix2pix, image-manipulation
Rectorch
rectorch is a PyTorch-based framework for state-of-the-art top-N recommendation
Stars: ✭ 121 (-14.18%)
Mutual labels:  generative-adversarial-network, autoencoder, variational-autoencoder
Gandissect
PyTorch-based tools for visualizing and understanding the neurons of a GAN. https://gandissect.csail.mit.edu/
Stars: ✭ 1,700 (+1105.67%)
Mutual labels:  gan, generative-adversarial-network, image-manipulation
Unetgan
Official Implementation of the paper "A U-Net Based Discriminator for Generative Adversarial Networks" (CVPR 2020)
Stars: ✭ 139 (-1.42%)
Mutual labels:  gan, generative-adversarial-network, image-generation
Repo 2017
Python codes in Machine Learning, NLP, Deep Learning and Reinforcement Learning with Keras and Theano
Stars: ✭ 1,123 (+696.45%)
Mutual labels:  generative-adversarial-network, autoencoder, variational-autoencoder
Deep Learning With Python
Example projects I completed to understand Deep Learning techniques with TensorFlow. Note that I no longer maintain this repository.
Stars: ✭ 134 (-4.96%)
Mutual labels:  gan, generative-adversarial-network, variational-autoencoder
Contrastive Unpaired Translation
Contrastive unpaired image-to-image translation; faster and lighter training than CycleGAN (ECCV 2020, in PyTorch)
Stars: ✭ 822 (+482.98%)
Mutual labels:  generative-adversarial-network, image-generation, image-manipulation
Mlds2018spring
Machine Learning and having it Deep and Structured (MLDS) in 2018 spring
Stars: ✭ 124 (-12.06%)
Mutual labels:  gan, generative-adversarial-network, image-generation
Ad examples
A collection of anomaly detection methods (iid/point-based, graph, and time series), including active learning for anomaly detection/discovery, Bayesian rule mining, and descriptions for diversity/explanation/interpretability. Analysis of incorporating label feedback with ensemble and tree-based detectors. Includes adversarial attacks with Graph Convolutional Networks.
Stars: ✭ 641 (+354.61%)
Mutual labels:  gan, generative-adversarial-network, autoencoder
Pix2pixhd
Synthesizing and manipulating 2048x1024 images with conditional GANs
Stars: ✭ 5,553 (+3838.3%)
Mutual labels:  gan, generative-adversarial-network, pix2pix
Image To Image Papers
🦓<->🦒 🌃<->🌆 A collection of image to image papers with code (constantly updating)
Stars: ✭ 949 (+573.05%)
Mutual labels:  gan, generative-adversarial-network, image-manipulation
Gesturegan
[ACM MM 2018 Oral] GestureGAN for Hand Gesture-to-Gesture Translation in the Wild
Stars: ✭ 136 (-3.55%)
Mutual labels:  generative-adversarial-network, image-generation, image-manipulation
Dcgan Tensorflow
A TensorFlow implementation of Deep Convolutional Generative Adversarial Networks trained on Fashion-MNIST, CIFAR-10, etc.
Stars: ✭ 70 (-50.35%)
Mutual labels:  gan, generative-adversarial-network, image-generation
Ganspace
Discovering Interpretable GAN Controls [NeurIPS 2020]
Stars: ✭ 1,224 (+768.09%)
Mutual labels:  gan, generative-adversarial-network, image-generation

Focal Frequency Loss for Generative Models

(Teaser image)

This repository will provide the official code for the following paper:

Focal Frequency Loss for Generative Models
Liming Jiang, Bo Dai, Wayne Wu and Chen Change Loy
arXiv preprint, 2020.
Paper

Abstract: Despite the remarkable success of generative models in creating photorealistic images with deep neural networks, gaps can still exist between the real and generated images, especially in the frequency domain. In this study, we find that narrowing the frequency domain gap can further improve image synthesis quality. To this end, we propose the focal frequency loss, a novel objective function that brings the optimization of generative models into the frequency domain. The proposed loss allows the model to dynamically focus on the frequency components that are hard to synthesize by down-weighting the easy frequencies. This objective is complementary to existing spatial losses, offering strong resistance to the loss of important frequency information caused by the inherent bias of neural networks. We demonstrate the versatility and effectiveness of the focal frequency loss in improving various baselines in both perceptual quality and quantitative performance.

Updates

  • [12/2020] The paper of Focal Frequency Loss is released on arXiv.

Code

The code will be made publicly available. Please stay tuned.
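Until the official code is released, the idea described in the abstract can be sketched as follows. This is a minimal, unofficial NumPy illustration based only on the paper's high-level description: compare real and generated images in the frequency domain, and weight each frequency's distance by how hard it is to synthesize. The function name, the normalization of the weight matrix, and the exponent `alpha` are assumptions here; the official implementation may differ (e.g., patch-based FFTs, per-channel handling, and how gradients flow through the weights).

```python
import numpy as np

def focal_frequency_loss(real, fake, alpha=1.0):
    """Unofficial sketch of a focal frequency loss for two grayscale images.

    real, fake: 2D arrays of the same shape, values in [0, 1].
    alpha: focusing exponent (assumed hyperparameter).
    """
    # Map both images into the frequency domain.
    freq_real = np.fft.fft2(real, norm="ortho")
    freq_fake = np.fft.fft2(fake, norm="ortho")

    # Squared distance between the two spectra at each frequency.
    dist = np.abs(freq_real - freq_fake) ** 2

    # Dynamic weights: hard-to-synthesize frequencies (large distance)
    # get larger weights; easy frequencies are down-weighted.
    weights = np.sqrt(dist) ** alpha
    weights = weights / (weights.max() + 1e-8)  # normalize to [0, 1]

    # Weighted average of the frequency distances.
    return float(np.mean(weights * dist))
```

For identical inputs the loss is zero; it grows as the generated image's spectrum diverges from the real one's, with the hardest frequencies contributing most.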

Citation

If you find this work useful for your research, please cite our paper:

@article{jiang2020focal,
  title={Focal Frequency Loss for Generative Models},
  author={Jiang, Liming and Dai, Bo and Wu, Wayne and Loy, Chen Change},
  journal={arXiv preprint},
  volume={arXiv:2012.12821},
  year={2020}
}

License

Copyright (c) 2020
