
afiaka87 / clip-guided-diffusion

License: MIT
A CLI tool / Python module for generating images from text using guided diffusion and CLIP from OpenAI.

Programming Languages

  • Python: 139,335 projects (#7 most used programming language)
  • Jupyter Notebook: 11,667 projects

Projects that are alternatives to or similar to clip-guided-diffusion

ru-dalle
Generate images from texts. In Russian
Stars: ✭ 1,606 (+517.69%)
Mutual labels:  openai, image-generation, text-to-image
universum-contracts
text-to-image generation gems / libraries incl. moonbirds, cyberpunks, coolcats, shiba inu doge, nouns & more
Stars: ✭ 17 (-93.46%)
Mutual labels:  image-generation, text-to-image
CLIP-Guided-Diffusion
Just playing with getting CLIP Guided Diffusion running locally, rather than having to use colab.
Stars: ✭ 328 (+26.15%)
Mutual labels:  text-to-image, openai-clip
Awesome-Text-to-Image
A Survey on Text-to-Image Generation/Synthesis.
Stars: ✭ 251 (-3.46%)
Mutual labels:  image-generation, text-to-image
feed forward vqgan clip
Feed forward VQGAN-CLIP model, where the goal is to eliminate the need for optimizing the latent space of VQGAN for each input prompt
Stars: ✭ 135 (-48.08%)
Mutual labels:  text-to-image, openai-clip
MVGL
TCyb 2018: Graph learning for multiview clustering
Stars: ✭ 26 (-90%)
Mutual labels:  multimodality, multimodal
automatic-manga-colorization
Use keras.js and cyclegan-keras to colorize manga automatically. All computation in browser. Demo is online:
Stars: ✭ 20 (-92.31%)
Mutual labels:  image-generation
MNIST-invert-color
Invert the color of MNIST images with PyTorch
Stars: ✭ 13 (-95%)
Mutual labels:  image-generation
Diverse-Structure-Inpainting
CVPR 2021: "Generating Diverse Structure for Image Inpainting With Hierarchical VQ-VAE"
Stars: ✭ 131 (-49.62%)
Mutual labels:  multimodal
TriangleGAN
TriangleGAN, ACM MM 2019.
Stars: ✭ 28 (-89.23%)
Mutual labels:  image-generation
Modality-Transferable-MER
Modality-Transferable-MER, multimodal emotion recognition model with zero-shot and few-shot abilities.
Stars: ✭ 36 (-86.15%)
Mutual labels:  multimodal
LAVT-pytorch
LAVT: Language-Aware Vision Transformer for Referring Image Segmentation
Stars: ✭ 16 (-93.85%)
Mutual labels:  multimodal
NER-Multimodal-pytorch
Pytorch Implementation of "Adaptive Co-attention Network for Named Entity Recognition in Tweets" (AAAI 2018)
Stars: ✭ 42 (-83.85%)
Mutual labels:  multimodal
Deep-Exemplar-based-Video-Colorization
The source code of CVPR 2019 paper "Deep Exemplar-based Video Colorization".
Stars: ✭ 180 (-30.77%)
Mutual labels:  image-generation
emmental
A deep learning framework for building multimodal multi-task learning systems.
Stars: ✭ 93 (-64.23%)
Mutual labels:  multimodality
mSRGAN-A-GAN-for-single-image-super-resolution-on-high-content-screening-microscopy-images.
Generative Adversarial Network for single image super-resolution in high content screening microscopy images
Stars: ✭ 52 (-80%)
Mutual labels:  image-generation
lecam-gan
Regularizing Generative Adversarial Networks under Limited Data (CVPR 2021)
Stars: ✭ 127 (-51.15%)
Mutual labels:  image-generation
learning-to-drive-in-5-minutes
Implementation of reinforcement learning approach to make a car learn to drive smoothly in minutes
Stars: ✭ 227 (-12.69%)
Mutual labels:  openai
swd
unsupervised video and image generation
Stars: ✭ 50 (-80.77%)
Mutual labels:  image-generation
FEDOT
Automated modeling and machine learning framework FEDOT
Stars: ✭ 312 (+20%)
Mutual labels:  multimodality

CLIP Guided Diffusion

From @crowsonkb.

Disclaimer: I'm redirecting efforts to pyglide and may be slow to address bugs here.

I also recommend looking at @crowsonkb's v-diffusion-pytorch.

See captions and more generations in the Gallery.

Install

git clone https://github.com/afiaka87/clip-guided-diffusion.git && cd clip-guided-diffusion
git clone https://github.com/crowsonkb/guided-diffusion.git
pip3 install -e guided-diffusion
python3 setup.py install
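
To verify the installation, you can print the CLI help; this only confirms that the cgd entry point is available on your PATH:

cgd --help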

Run

cgd -txt "Alien friend by Odilon Redo"

A gif of the full run will be saved to ./outputs/caption_{j}.gif by default.

Alien friend by Odilon Redo

  • ./outputs will contain all intermediate outputs
  • current.png will contain the current generation.
  • (optional) Provide --wandb_project <project_name> to enable logging intermediate outputs to Weights & Biases. Requires a free account; a URL to the run will be printed in the CLI. See the example below.
  • ~/.cache/clip-guided-diffusion/ will contain downloaded checkpoints from OpenAI/Katherine Crowson.
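
For example, a run that also logs intermediate outputs to W&B might look like the following (my_cgd_runs is just a placeholder project name):

cgd -txt "Alien friend by Odilon Redo" --wandb_project my_cgd_runs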

Usage - CLI

Text to image generation

--prompts / -txts --image_size / -size

cgd --image_size 256 --prompts "32K HUHD Mushroom"

32K HUHD Mushroom

Text to image generation (multiple prompts with weights)

  • Multiple prompts can be specified with the | character.
  • You may optionally specify a weight for each prompt using a : character.
  • e.g. cgd --prompts "Noun to visualize:1.0|style:0.1|location:0.1|something you don't want:-0.1"
  • Weights must not sum to 0.

cgd -txt "32K HUHD Mushroom|Green grass:-0.1"

CPU

  • Using a CPU will take a very long time compared to using a GPU.

cgd --device cpu --prompts "Some text to be generated"

CUDA GPU

cgd --prompts "There's no need to specify a device; it will be chosen automatically"

Iterations/Steps (Timestep Respacing)

--timestep_respacing or -respace (default: 1000)

  • Uses fewer timesteps over the same diffusion schedule. Sacrifices accuracy/alignment for quicker runtime.
  • options: 25, 50, 150, 250, 500, 1000, ddim25, ddim50, ddim150, ddim250, ddim500, ddim1000 (default: 1000)
  • Prefixing a value with ddim uses the DDIM sampler, e.g. ddim25 uses the 25-timestep DDIM schedule. This method may work better at shorter timestep_respacing values (see the example below).
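
For instance, a quicker, lower-fidelity run on the 50-step DDIM schedule could look like this (the prompt is only an illustration):

cgd -txt "32K HUHD Mushroom" --timestep_respacing ddim50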

Existing image

--init_image/-init

  • Blend an image with the diffusion for a number of steps.

--skip_timesteps/-skip

The number of timesteps to spend blending the image with the guided-diffusion samples. Must be less than --timestep_respacing and greater than 0. Good values using timestep_respacing of 1000 are 250 to 500.

  • -respace 1000 -skip 500
  • -respace 500 -skip 250
  • -respace 250 -skip 125
  • -respace 125 -skip 75

(optional) --init_scale/-is

To enable a VGG perceptual loss after the blending, you must specify an --init_scale value. 1000 seems to work well.

cgd --prompts "A mushroom in the style of Vincent Van Gogh" \
  --timestep_respacing 1000 \
  --init_image "images/32K_HUHD_Mushroom.png" \
  --init_scale 1000 \
  --skip_timesteps 350

Image size

  • options: 64, 128, 256, 512 pixels (square)
  • Note about 64x64: when using the 64x64 checkpoint, the cosine noise schedule is used. For unclear reasons, this schedule requires different values for --clip_guidance_scale and --tv_scale; I recommend starting around -cgs 5 -tvs 0.00001 and experimenting from there.
  • For all other checkpoints, clip_guidance_scale seems to work well around 1000-2000 and tv_scale at 0, 100, 150, or 200.

cgd --init_image=images/32K_HUHD_Mushroom.png \
    --skip_timesteps=500 \
    --image_size 64 \
    --prompts "8K HUHD Mushroom"

resized to 200 pixels for visibility

cgd --image_size 512 --prompts "8K HUHD Mushroom"

New: Non-square generations (experimental). Generate portrait or landscape images by specifying a number to offset the width and/or height.

  • The offset should be a multiple of 16 for image sizes 64x64 and 128x128.
  • The offset should be a multiple of 32 for image sizes 256x256 and 512x512.
  • May cause NaN/Inf errors.
  • A positive offset will require more memory.
  • A negative offset uses less memory and is faster.

my_caption="a photo of beautiful green hills and a sunset, taken with a blackberry in 2004"
cgd --prompts "$my_caption" \
    --image_size 128 \
    --width_offset 32

Full Usage - Python

# Initialize the diffusion generator
from pathlib import Path

from tqdm import tqdm

from cgd import clip_guided_diffusion

# Directory where intermediate/final images will be written
prefix_path = Path("store_images")
prefix_path.mkdir(exist_ok=True)

cgd_generator = clip_guided_diffusion(
    prompts=["an image of a fox in a forest"],
    image_prompts=["image_to_compare_with_clip.png"],
    batch_size=1,
    clip_guidance_scale=1500,
    sat_scale=0,
    tv_scale=150,
    init_scale=1000,
    range_scale=50,
    image_size=256,
    class_cond=False,
    randomize_class=False,  # only works with class-conditioned checkpoints
    cutout_power=1.0,
    num_cutouts=16,
    timestep_respacing="1000",
    seed=0,
    diffusion_steps=1000,  # don't change this
    skip_timesteps=400,
    init_image="image_to_blend_and_compare_with_vgg.png",
    clip_model_name="ViT-B/16",
    dropout=0.0,
    device="cuda",
    prefix_path=str(prefix_path),
    wandb_project=None,
    wandb_entity=None,
    progress=True,
)
list(enumerate(tqdm(cgd_generator)))  # iterating the generator runs the diffusion

Full Usage - CLI

usage: cgd [-h] [--prompts PROMPTS] [--image_prompts IMAGE_PROMPTS]
           [--image_size IMAGE_SIZE] [--init_image INIT_IMAGE]
           [--init_scale INIT_SCALE] [--skip_timesteps SKIP_TIMESTEPS]
           [--prefix PREFIX] [--checkpoints_dir CHECKPOINTS_DIR]
           [--batch_size BATCH_SIZE]
           [--clip_guidance_scale CLIP_GUIDANCE_SCALE] [--tv_scale TV_SCALE]
           [--range_scale RANGE_SCALE] [--sat_scale SAT_SCALE] [--seed SEED]
           [--save_frequency SAVE_FREQUENCY]
           [--diffusion_steps DIFFUSION_STEPS]
           [--timestep_respacing TIMESTEP_RESPACING]
           [--num_cutouts NUM_CUTOUTS] [--cutout_power CUTOUT_POWER]
           [--clip_model CLIP_MODEL] [--uncond]
           [--noise_schedule NOISE_SCHEDULE] [--dropout DROPOUT]
           [--device DEVICE] [--wandb_project WANDB_PROJECT]
           [--wandb_entity WANDB_ENTITY] [--height_offset HEIGHT_OFFSET]
           [--width_offset WIDTH_OFFSET] [--use_augs] [--use_magnitude]
           [--quiet]

optional arguments:
  -h, --help            show this help message and exit
  --prompts PROMPTS, -txts PROMPTS
                        the prompt/s to reward paired with weights. e.g. 'My
                        text:0.5|Other text:-0.5' (default: )
  --image_prompts IMAGE_PROMPTS, -imgs IMAGE_PROMPTS
                        the image prompt/s to reward paired with weights. e.g.
                        'img1.png:0.5,img2.png:-0.5' (default: )
  --image_size IMAGE_SIZE, -size IMAGE_SIZE
                        Diffusion image size. Must be one of [64, 128, 256,
                        512]. (default: 128)
  --init_image INIT_IMAGE, -init INIT_IMAGE
                        Blend an image with diffusion for n steps (default: )
  --init_scale INIT_SCALE, -is INIT_SCALE
                        (optional) Perceptual loss scale for init image.
                        (default: 0)
  --skip_timesteps SKIP_TIMESTEPS, -skip SKIP_TIMESTEPS
                        Number of timesteps to blend image for. CLIP guidance
                        occurs after this. (default: 0)
  --prefix PREFIX, -dir PREFIX
                        output directory (default: outputs)
  --checkpoints_dir CHECKPOINTS_DIR, -ckpts CHECKPOINTS_DIR
                        Path to the subdirectory containing checkpoints.
                        (default: ~/.cache/clip-guided-diffusion)
  --batch_size BATCH_SIZE, -bs BATCH_SIZE
                        the batch size (default: 1)
  --clip_guidance_scale CLIP_GUIDANCE_SCALE, -cgs CLIP_GUIDANCE_SCALE
                        Scale for CLIP spherical distance loss. Values will
                        need tinkering for different settings. (default: 1000)
  --tv_scale TV_SCALE, -tvs TV_SCALE
                        Controls the smoothness of the final output. (default:
                        150.0)
  --range_scale RANGE_SCALE, -rs RANGE_SCALE
                        Controls how far out of RGB range values may get.
                        (default: 50.0)
  --sat_scale SAT_SCALE, -sats SAT_SCALE
                        Controls how much saturation is allowed. Used for
                        ddim. From @nshepperd. (default: 0.0)
  --seed SEED, -seed SEED
                        Random number seed (default: 0)
  --save_frequency SAVE_FREQUENCY, -freq SAVE_FREQUENCY
                        Save frequency (default: 1)
  --diffusion_steps DIFFUSION_STEPS, -steps DIFFUSION_STEPS
                        Diffusion steps (default: 1000)
  --timestep_respacing TIMESTEP_RESPACING, -respace TIMESTEP_RESPACING
                        Timestep respacing (default: 1000)
  --num_cutouts NUM_CUTOUTS, -cutn NUM_CUTOUTS
                        Number of randomly cut patches to distort from
                        diffusion. (default: 16)
  --cutout_power CUTOUT_POWER, -cutpow CUTOUT_POWER
                        Cutout size power (default: 1.0)
  --clip_model CLIP_MODEL, -clip CLIP_MODEL
                        clip model name. Should be one of: ('ViT-B/16',
                        'ViT-B/32', 'RN50', 'RN101', 'RN50x4', 'RN50x16') or a
                        checkpoint filename ending in `.pt` (default:
                        ViT-B/32)
  --uncond, -uncond     Use finetuned unconditional checkpoints from OpenAI
                        (256px) and Katherine Crowson (512px) (default: False)
  --noise_schedule NOISE_SCHEDULE, -sched NOISE_SCHEDULE
                        Specify noise schedule. Either 'linear' or 'cosine'.
                        (default: linear)
  --dropout DROPOUT, -drop DROPOUT
                        Amount of dropout to apply. (default: 0.0)
  --device DEVICE, -dev DEVICE
                        Device to use. Either cpu or cuda. (default: )
  --wandb_project WANDB_PROJECT, -proj WANDB_PROJECT
                        Name W&B will use when saving results. e.g.
                        `--wandb_project "my_project"` (default: None)
  --wandb_entity WANDB_ENTITY, -ent WANDB_ENTITY
                        (optional) Name of W&B team/entity to log to.
                        (default: None)
  --height_offset HEIGHT_OFFSET, -ht HEIGHT_OFFSET
                        Height offset for image (default: 0)
  --width_offset WIDTH_OFFSET, -wd WIDTH_OFFSET
                        Width offset for image (default: 0)
  --use_augs, -augs     Uses augmentations from the `quick` clip guided
                        diffusion notebook (default: False)
  --use_magnitude, -mag
                        Uses magnitude of the gradient (default: False)
  --quiet, -q           Suppress output. (default: False)
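
As a combined example, the following invocation exercises several of the flags above; the prompt, weights, and numeric values are illustrative choices within the ranges discussed earlier, not recommendations from the author:

cgd --prompts "an oil painting of a lighthouse at dusk:1.0|blurry:-0.5" \
    --image_size 256 \
    --timestep_respacing ddim250 \
    --clip_model ViT-B/16 \
    --num_cutouts 32 \
    --clip_guidance_scale 1500 \
    --tv_scale 100 \
    --prefix lighthouse_outputs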

Development

git clone https://github.com/afiaka87/clip-guided-diffusion.git
cd clip-guided-diffusion
git clone https://github.com/afiaka87/guided-diffusion.git
python3 -m venv cgd_venv
source cgd_venv/bin/activate
pip install -r requirements.txt
pip install -e guided-diffusion

Run integration tests

  • Some tests require a GPU; you may ignore them if you don't have one.

python -m unittest discover