
mftnakrsu / DeepDream

License: MIT License
Generative deep learning: DeepDream

Programming Languages

Jupyter Notebook
Python

Projects that are alternatives of or similar to DeepDream

neural-dream
PyTorch implementation of DeepDream algorithm
Stars: ✭ 110 (+547.06%)
Mutual labels:  deepdream, inception
simsg
Semantic Image Manipulation using Scene Graphs (CVPR 2020)
Stars: ✭ 49 (+188.24%)
Mutual labels:  gans, generative-models
ACCV TinyGAN
BigGAN; Knowledge Distillation; Black-Box; Fast Training; 16x compression
Stars: ✭ 62 (+264.71%)
Mutual labels:  gans
private-data-generation
A toolbox for differentially private data generation
Stars: ✭ 80 (+370.59%)
Mutual labels:  generative-models
AvatarGAN
Generate Cartoon Images using Generative Adversarial Network
Stars: ✭ 24 (+41.18%)
Mutual labels:  gans
metrics
IS, FID score Pytorch and TF implementation, TF implementation is a wrapper of the official ones.
Stars: ✭ 91 (+435.29%)
Mutual labels:  inception
gans-collection.torch
Torch implementation of various types of GAN (e.g. DCGAN, ALI, Context-encoder, DiscoGAN, CycleGAN, EBGAN, LSGAN)
Stars: ✭ 53 (+211.76%)
Mutual labels:  gans
acl2020-interactive-entity-linking
No description or website provided.
Stars: ✭ 26 (+52.94%)
Mutual labels:  inception
VSGAN
VapourSynth Single Image Super-Resolution Generative Adversarial Network (GAN)
Stars: ✭ 124 (+629.41%)
Mutual labels:  gans
ladder-vae-pytorch
Ladder Variational Autoencoders (LVAE) in PyTorch
Stars: ✭ 59 (+247.06%)
Mutual labels:  generative-models
Deep-Learning
It contains the coursework and the practice I have done while learning Deep Learning.🚀 👨‍💻💥 🚩🌈
Stars: ✭ 21 (+23.53%)
Mutual labels:  gans
Splice
Official Pytorch Implementation for "Splicing ViT Features for Semantic Appearance Transfer" presenting "Splice" (CVPR 2022)
Stars: ✭ 126 (+641.18%)
Mutual labels:  generative-models
VapourSynth-Super-Resolution-Helper
Setup scripts for ESRGAN/MXNet image/video upscaling in VapourSynth
Stars: ✭ 63 (+270.59%)
Mutual labels:  upscale
PaintsTensorFlow
line drawing colorization using TensorFlow
Stars: ✭ 47 (+176.47%)
Mutual labels:  gans
pyanime4k
An easy way to use anime4k in python
Stars: ✭ 80 (+370.59%)
Mutual labels:  upscale
IncetOps
An open-source system based on Inception for auditing, executing, rolling back, and collecting statistics on SQL
Stars: ✭ 46 (+170.59%)
Mutual labels:  inception
Machine-Learning
The projects I do in Machine Learning with PyTorch, keras, Tensorflow, scikit learn and Python.
Stars: ✭ 54 (+217.65%)
Mutual labels:  gans
minimal glo
Minimal PyTorch implementation of Generative Latent Optimization from the paper "Optimizing the Latent Space of Generative Networks"
Stars: ✭ 112 (+558.82%)
Mutual labels:  generative-models
Deep-Object-Removal
Using cGANs to remove objects from a photo
Stars: ✭ 82 (+382.35%)
Mutual labels:  gans
i3d-tensorflow
Inflated 3D ConvNets for video understanding
Stars: ✭ 46 (+170.59%)
Mutual labels:  inception

DeepDream

DeepDream is an artistic image-modification technique that uses the representations learned by a convolutional neural network (CNN). It was first released by Google in the summer of 2015 as an implementation written in the Caffe framework.


It quickly became an internet sensation thanks to the trippy pictures it could generate. The DeepDream convnet was trained on ImageNet, where dog breeds and bird species are vastly overrepresented.

The DeepDream algorithm is almost identical to the convnet filter-visualization technique: it runs a convnet in reverse, performing gradient ascent on the input to the convnet in order to maximize the activation of a specific filter in an upper layer of the convnet.
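As a minimal sketch of that gradient-ascent idea (using a toy scalar "activation" in place of a real convnet filter, so it stays dependency-light):

```python
import numpy as np

def activation(x):
    # Toy stand-in for a filter activation: peaks at x = 3.0.
    return -(x - 3.0) ** 2

def activation_grad(x):
    # Analytic gradient of the toy activation above.
    return -2.0 * (x - 3.0)

x = np.float64(0.0)   # the "input image" (a single scalar here)
learning_rate = 0.1
for _ in range(100):
    x += learning_rate * activation_grad(x)  # ascend, not descend

# x has climbed close to the activation's maximizer (3.0)
```

In the real algorithm, `x` is the whole image tensor and the gradient comes from backpropagating through the convnet, but the update rule is the same.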

DeepDream uses this same idea, with a few simple differences:

  • With DeepDream, you try to maximize the activation of entire layers rather than that of a specific filter, thus mixing together visualizations of large numbers of features at once.
  • You start not from a blank, slightly noisy input, but from an existing image, so the resulting effects latch on to preexisting visual patterns, distorting elements of the image in a somewhat artistic fashion.
  • The input images are processed at different scales (called octaves), which improves the quality of the visualizations.
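The first point — a loss that mixes entire layers with per-layer weights — can be sketched with NumPy arrays standing in for layer activations (the layer names and weights here are illustrative, not the repo's exact values):

```python
import numpy as np

# Stand-ins for layer activation tensors (batch, height, width, channels).
activations = {
    "mixed5": np.ones((1, 8, 8, 4)),
    "mixed7": np.full((1, 4, 4, 8), 2.0),
}
layer_weights = {"mixed5": 1.0, "mixed7": 1.5}

def deepdream_loss(acts, weights):
    # Weighted sum of each layer's mean squared activation.
    return sum(w * np.mean(np.square(acts[name]))
               for name, w in weights.items())

loss = deepdream_loss(activations, layer_weights)
```

Gradient ascent on this scalar loss pushes the image toward exciting every feature in the chosen layers at once, which is what produces the characteristic mixed-together textures.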

In Keras, many such convnets are available: VGG16, VGG19, Xception, ResNet50, and so on. In this application, I used the InceptionV3 convnet.
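A feature extractor over a few chosen `mixed` layers of InceptionV3 might be set up along these lines (a sketch, not the repo's exact code; `weights=None` is used here only to skip the ImageNet download, whereas an actual DeepDream run needs `weights="imagenet"`):

```python
from tensorflow import keras

# Build InceptionV3 without its classification head.
# weights=None builds the architecture without downloading weights;
# a real DeepDream run should use weights="imagenet".
model = keras.applications.inception_v3.InceptionV3(
    weights=None, include_top=False)

layer_settings = {"mixed5": 1.0, "mixed7": 1.5, "mixed8": 1.5}

# Map each chosen layer name to its symbolic output tensor.
outputs = {name: model.get_layer(name).output for name in layer_settings}
feature_extractor = keras.Model(inputs=model.inputs, outputs=outputs)
```

Calling `feature_extractor` on a preprocessed image then returns the activations of all three layers in one forward pass.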


If you want to learn more about Inception, you should check out this paper.

The DeepDream process:

The method is to process the image at a list of successive scales (octaves) and run gradient ascent to maximize the loss at each scale. With each successive scale, the image is upscaled by 40%. To avoid losing image detail when upscaling from small to large, the details lost at each step are reinjected back into the image, using the larger original image as the reference.
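The schedule of octave sizes can be sketched with NumPy; the 1.4 scale factor mirrors the 40% upscaling described above, and `num_octaves = 3` is an illustrative choice:

```python
import numpy as np

base_shape = (450, 800)   # (height, width) of the original image
num_octaves = 3
octave_scale = 1.4        # each octave is 40% larger than the previous

# Work out the image shape for each octave, from smallest to largest.
shapes = []
for i in range(num_octaves):
    factor = octave_scale ** (num_octaves - 1 - i)
    shapes.append(tuple(int(round(d / factor)) for d in base_shape))

# shapes runs from the smallest octave up to the full-size image
```

The dream loop then runs gradient ascent at `shapes[0]`, upscales to `shapes[1]`, reinjects detail from a downscaled copy of the original, and repeats until the full size is reached.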

Successive scales of spatial processing (octaves) and detail reinjection upon upscaling.

Quick Start Examples

Installation

$ git clone https://github.com/mftnakrsu/DeepDream.git  
$ cd DeepDream  
$ pip install -r requirements.txt  

Usage

$ python main.py --image /path/to/image \
                 --step <number-of-steps> \
                 --iterations <number-of-iterations> \
                 --max_loss <max-loss>

If you want different variations, you can change the layers:

layer_settings = {
    "mixed7": 1.5,
    "mixed8": 1.5,
    "mixed5": 1.0,
}

You can check the model layers with this command:

print(model.summary())

Some Results:

References:

Inceptionism: Going Deeper into Neural Networks (Google AI Blog): https://ai.googleblog.com/2015/06/inceptionism-going-deeper-into-neural.html
Rethinking the Inception Architecture for Computer Vision: https://arxiv.org/abs/1512.00567
Deep Learning with Python, François Chollet: https://www.amazon.com/Deep-Learning-Python-Francois-Chollet/dp/1617294438