
bes-dev / pytorch_clip_guided_loss

License: Apache-2.0
A simple library that implements CLIP guided loss in PyTorch.

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives to or similar to pytorch_clip_guided_loss

SLE-GAN
Towards Faster and Stabilized GAN Training for High-fidelity Few-shot Image Synthesis
Stars: ✭ 53 (-20.9%)
Mutual labels:  gan, image-synthesis
CS231n
My solutions for Assignments of CS231n: Convolutional Neural Networks for Visual Recognition
Stars: ✭ 30 (-55.22%)
Mutual labels:  gan
CariMe-pytorch
Unpaired Caricature Generation with Multiple Exaggerations (TMM 2021)
Stars: ✭ 33 (-50.75%)
Mutual labels:  gan
infnet-spen
TensorFlow implementation [ICLR 18] "Learning Approximate Inference Networks for Structured Prediction"
Stars: ✭ 30 (-55.22%)
Mutual labels:  gan
srgan
Pytorch implementation of "Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network"
Stars: ✭ 39 (-41.79%)
Mutual labels:  gan
Simple-GAN-Base-on-Matlab
simple Generative Adversarial Networks base on matlab
Stars: ✭ 24 (-64.18%)
Mutual labels:  gan
Deep learning Coloring-Anime-image-and-satellite-image-house-damge-level-colorized
No description or website provided.
Stars: ✭ 16 (-76.12%)
Mutual labels:  gan
MoveSim
Codes for paper in KDD 2020 (AI for COVID-19): Learning to Simulate Human Mobility
Stars: ✭ 16 (-76.12%)
Mutual labels:  gan
automatic-manga-colorization
Use keras.js and cyclegan-keras to colorize manga automatically. All computation in browser. Demo is online:
Stars: ✭ 20 (-70.15%)
Mutual labels:  gan
metrics
IS, FID score Pytorch and TF implementation, TF implementation is a wrapper of the official ones.
Stars: ✭ 91 (+35.82%)
Mutual labels:  gan
StyleGANCpp
Unofficial implementation of StyleGAN's generator
Stars: ✭ 25 (-62.69%)
Mutual labels:  gan
GAN-Project-2018
GAN in Tensorflow to be run via Linux command line
Stars: ✭ 21 (-68.66%)
Mutual labels:  gan
mSRGAN-A-GAN-for-single-image-super-resolution-on-high-content-screening-microscopy-images.
Generative Adversarial Network for single image super-resolution in high content screening microscopy images
Stars: ✭ 52 (-22.39%)
Mutual labels:  gan
anime2clothing
Pytorch official implementation of Anime to Real Clothing: Cosplay Costume Generation via Image-to-Image Translation.
Stars: ✭ 65 (-2.99%)
Mutual labels:  gan
steam-stylegan2
Train a StyleGAN2 model on Colaboratory to generate Steam banners.
Stars: ✭ 30 (-55.22%)
Mutual labels:  gan
Computer-Vision
implemented some computer vision problems
Stars: ✭ 25 (-62.69%)
Mutual labels:  gan
TET-GAN
[AAAI 2019] TET-GAN: Text Effects Transfer via Stylization and Destylization
Stars: ✭ 74 (+10.45%)
Mutual labels:  gan
AdvSegLoss
Official Pytorch implementation of Adversarial Segmentation Loss for Sketch Colorization [ICIP 2021]
Stars: ✭ 24 (-64.18%)
Mutual labels:  gan
Course-Project---Speech-Driven-Facial-Animation
ECE 535 - Course Project, Deep Learning Framework
Stars: ✭ 63 (-5.97%)
Mutual labels:  gan
Deep-Exemplar-based-Video-Colorization
The source code of CVPR 2019 paper "Deep Exemplar-based Video Colorization".
Stars: ✭ 180 (+168.66%)
Mutual labels:  gan

pytorch_clip_guided_loss: PyTorch implementation of the CLIP guided loss for Text-To-Image, Image-To-Image, or Image-To-Text generation.

A simple library that implements CLIP guided loss in PyTorch.


Install package

pip install pytorch_clip_guided_loss

Install the latest version

pip install --upgrade git+https://github.com/bes-dev/pytorch_clip_guided_loss.git

Features

  • The library supports multiple prompts (images or texts) as targets for optimization.
  • The library automatically detects the language of the input text and translates it via Google Translate when needed.
  • The library supports the original CLIP model by OpenAI and the ruCLIP model by SberAI (see the sketch after this list).
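
A minimal sketch of how these features combine, built only on the get_clip_guided_loss and add_prompt calls shown in the usage example below. The clip_type value for the OpenAI backbone is not shown on this page (the example below only uses "ruclip"), so check the project documentation for the exact identifier.

import torch
from pytorch_clip_guided_loss import get_clip_guided_loss

# ruCLIP backbone as in the usage example below; the OpenAI backbone presumably
# uses a different clip_type value (assumption, not confirmed by this page).
loss_fn = get_clip_guided_loss(clip_type="ruclip", input_range=(-1, 1)).eval().requires_grad_(False)

# Several prompts (texts and/or images) can be registered as optimization targets.
loss_fn.add_prompt(text="a watercolor painting of a fox")  # English text prompt
loss_fn.add_prompt(text="акварельный рисунок лисы")        # non-English text is detected and translated automatically
loss_fn.add_prompt(image=torch.randn(1, 3, 224, 224))      # image prompt (random tensor as a stand-in)

The combined loss is then queried exactly as in the usage example: loss_fn.image_loss(image=...)["loss"].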

Usage

Simple code

import torch
from pytorch_clip_guided_loss import get_clip_guided_loss

loss_fn = get_clip_guided_loss(clip_type="ruclip", input_range=(-1, 1)).eval().requires_grad_(False)
# text prompt
loss_fn.add_prompt(text="text description of what we would like to generate")
# image prompt
loss_fn.add_prompt(image=torch.randn(1, 3, 224, 224))

# variable
var = torch.randn(1, 3, 224, 224).requires_grad_(True)
loss = loss_fn.image_loss(image=var)["loss"]
loss.backward()
print(var.grad)

VQGAN-CLIP

We provide our tiny implementation of the VQGAN-CLIP pipeline for image generation as an example of how to use the library. To start using our implementation of VQGAN-CLIP, please follow the documentation.
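
The documentation linked in the repository covers the actual pipeline. As a rough illustration of the underlying idea only, the sketch below runs a plain gradient-descent loop on raw pixels with the loss API from the usage example; the real VQGAN-CLIP pipeline optimizes VQGAN latent codes instead, and the optimizer settings and step count here are arbitrary.

import torch
from pytorch_clip_guided_loss import get_clip_guided_loss

loss_fn = get_clip_guided_loss(clip_type="ruclip", input_range=(-1, 1)).eval().requires_grad_(False)
loss_fn.add_prompt(text="a sunset over the sea")

# Optimize an image tensor directly; VQGAN-CLIP would optimize VQGAN latents
# and decode them to an image before computing the loss.
var = torch.randn(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([var], lr=0.05)  # arbitrary learning rate
for step in range(200):                       # arbitrary number of steps
    optimizer.zero_grad()
    loss = loss_fn.image_loss(image=var.clamp(-1, 1))["loss"]
    loss.backward()
    optimizer.step()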

Zero-shot Object Detection

We provide our tiny implementation of an object detector based on Selective Search region proposals and the CLIP guided loss. To start using our implementation of ClipRCNN, please follow the documentation.
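
As with VQGAN-CLIP, the documentation in the repository describes the actual detector. The sketch below only illustrates the scoring idea: crops of candidate regions are resized to the CLIP input resolution and ranked by the guided loss against a text prompt. The box list here is hypothetical (in ClipRCNN it would come from Selective Search), and the real ClipRCNN API may differ.

import torch
import torch.nn.functional as F
from pytorch_clip_guided_loss import get_clip_guided_loss

loss_fn = get_clip_guided_loss(clip_type="ruclip", input_range=(-1, 1)).eval().requires_grad_(False)
loss_fn.add_prompt(text="a cat")

image = torch.rand(1, 3, 480, 640) * 2 - 1          # stand-in for a real image scaled to [-1, 1]
boxes = [(10, 20, 200, 180), (300, 100, 150, 150)]  # hypothetical (x, y, w, h) region proposals

scores = []
with torch.no_grad():
    for x, y, w, h in boxes:
        crop = image[:, :, y:y + h, x:x + w]
        crop = F.interpolate(crop, size=(224, 224), mode="bilinear", align_corners=False)
        scores.append(loss_fn.image_loss(image=crop)["loss"].item())

# Lower loss means a better match between the region and the text prompt.
best_box = boxes[int(torch.tensor(scores).argmin())]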
