orpatashnik / StyleCLIP

Text-Guided Editing of Images (Using CLIP and StyleGAN)

This repo contains the code and some results of my experiments with StyleGAN and CLIP; let's call it StyleCLIP. Given a textual description, my goal was to edit a given image, or to generate one. The following diagram illustrates how it works:

In this example, I took an image of Ariana Grande, inverted it using e4e, and edited it using the text "A tanned woman" so that Ariana looks more tanned. To keep the result close to the original image, I also used a simple L2 loss between the optimized latent vector and the original one.
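That combined objective can be sketched as follows. This is my paraphrase of the idea, not the repo's actual code; the function name and the exact weighting are illustrative:

```python
import torch
import torch.nn.functional as F

def styleclip_edit_loss(image_emb, text_emb, w_opt, w_orig, l2_lambda):
    """The CLIP term pulls the generated image's embedding toward the text
    embedding; the L2 term keeps the optimized latent near the original."""
    # 1 - cosine similarity between CLIP image and text embeddings
    clip_loss = 1.0 - F.cosine_similarity(image_emb, text_emb, dim=-1).mean()
    # Penalize drifting away from the inverted latent of the input image
    l2_loss = ((w_opt - w_orig) ** 2).mean()
    return clip_loss + l2_lambda * l2_loss
```

A larger `l2_lambda` keeps the result closer to the original image, at the cost of a weaker edit.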

I tried to apply edits that cannot be achieved with common latent-space traversal, for example using celebs' names as the target direction (see below)! I hope you can be even more creative.

Try it, it is really fun. (I hope you enjoy it as much as I did!)

Editing Examples

Here are some examples, starting with a few manipulated images of myself :) The description I used to obtain each edited image is written above or below it.

And now a few celebs. The description I used to edit each image is written below it.

Setup

The code relies on the official implementation of CLIP and on Rosinality's PyTorch implementation of StyleGAN2. Some parts of the StyleGAN implementation were modified so that the whole pipeline is native PyTorch.

Requirements

  • Anaconda
  • Pretrained StyleGAN2 generator (can be downloaded from here)

In addition, run the following commands:

conda install --yes -c pytorch pytorch=1.7.1 torchvision cudatoolkit=<CUDA_VERSION>
pip install ftfy regex tqdm
pip install git+https://github.com/openai/CLIP.git

Usage

Given a textual description, one can either edit a given image or generate a random image that best fits the description. Both operations can be done through the main.py script, or the notebook.

Editing

To edit an image, set --mode=edit. Editing can be done both on a provided latent vector and on a random latent vector from StyleGAN's latent space. It is recommended to adjust --l2_lambda according to the desired edit; from my experience, different edits require different values of this parameter.
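Under the hood, edit mode boils down to gradient descent on the latent vector. The sketch below uses dummy stand-ins (a linear layer instead of StyleGAN2, a fixed target vector instead of CLIP's text encoder) just to show the structure of the loop; it is not the repo's implementation, and the hyperparameter values are illustrative:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Dummy stand-ins: in the real pipeline `generator` is StyleGAN2 and the
# score comes from CLIP's image/text similarity.
generator = torch.nn.Linear(512, 64)
for p in generator.parameters():
    p.requires_grad_(False)
text_target = torch.randn(64)

def total_loss(w_, w_orig, l2_lambda):
    img = generator(w_)
    clip_term = 1.0 - F.cosine_similarity(img, text_target, dim=-1)
    l2_term = ((w_ - w_orig) ** 2).sum()
    return clip_term + l2_lambda * l2_term

w_orig = torch.randn(512)                 # e.g. obtained by inverting with e4e
w = w_orig.clone().requires_grad_(True)   # the latent being optimized
opt = torch.optim.Adam([w], lr=0.01)
l2_lambda = 0.008                         # illustrative; tune per edit

for step in range(100):
    opt.zero_grad()
    loss = total_loss(w, w_orig, l2_lambda)
    loss.backward()
    opt.step()
```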

Generating Free-style Images

To generate a free-style image, set --mode=free_generation.
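Putting the two modes together, invocations look roughly like this. Only --mode and --l2_lambda are documented above; the --description flag name and the second prompt are my guesses for illustration, so check `python main.py --help` for the exact argument names:

```shell
# Edit a provided image toward a textual description,
# keeping the latent close to the original
python main.py --mode=edit --l2_lambda=0.008 --description "A tanned woman"

# Generate a free-style image that fits the description
python main.py --mode=free_generation --description "A person with purple hair"
```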
