
junyanz / Interactive Deep Colorization

Licence: MIT
Deep learning software for colorizing black and white images with a few clicks.

Programming Languages

python
139335 projects - #7 most used programming language
shell
77523 projects

Projects that are alternatives of or similar to Interactive Deep Colorization

Colorization
Automatic colorization using deep neural networks. "Colorful Image Colorization." In ECCV, 2016.
Stars: ✭ 2,791 (+13.23%)
Mutual labels:  caffe, deep-learning-algorithms, colorization, automatic-colorization
colnet
🖌️ Automatic Image Colorization with Simultaneous Classification – based on "Let there be Color!"
Stars: ✭ 37 (-98.5%)
Mutual labels:  colorization, automatic-colorization
Ck Caffe
Collective Knowledge workflow for Caffe to automate installation across diverse platforms and to collaboratively evaluate and optimize Caffe-based workloads across diverse hardware, software, and data sets (compilers, libraries, tools, models, inputs).
Stars: ✭ 192 (-92.21%)
Mutual labels:  caffe
Netron
Visualizer for neural network, deep learning, and machine learning models
Stars: ✭ 17,193 (+597.48%)
Mutual labels:  caffe
Up Down Captioner
Automatic image captioning model based on Caffe, using features from bottom-up attention.
Stars: ✭ 195 (-92.09%)
Mutual labels:  caffe
Pylustrator
Visualisations of data are at the core of every publication of scientific research results, and they have to be as clear as possible to facilitate the communication of research. Because data come in different formats and shapes, visualisations often have to be adapted to reflect the data as well as possible. Pylustrator is an interface for directly editing Python-generated Matplotlib figures to finalize them for publication: subplots can be resized and dragged with the mouse, text and annotations can be added, and the changes are saved back to the initial plot file as Python code.
Stars: ✭ 192 (-92.21%)
Mutual labels:  interactive
Survey
A Go library for building interactive and accessible prompts, with full support for Windows and POSIX terminals.
Stars: ✭ 2,843 (+15.33%)
Mutual labels:  interactive
Snn toolbox
Toolbox for converting analog to spiking neural networks (ANN to SNN), and running them in a spiking neuron simulator.
Stars: ✭ 187 (-92.41%)
Mutual labels:  caffe
Orn
Oriented Response Networks, in CVPR 2017
Stars: ✭ 207 (-91.6%)
Mutual labels:  caffe
Sharprompt
Interactive command line interface toolkit for C#
Stars: ✭ 197 (-92.01%)
Mutual labels:  interactive
Raspberrypi Facedetection Mtcnn Caffe With Motion
MTCNN with Motion Detection, on Raspberry Pi with Love
Stars: ✭ 204 (-91.72%)
Mutual labels:  caffe
Liteflownet2
A Lightweight Optical Flow CNN - Revisiting Data Fidelity and Regularization, TPAMI 2020
Stars: ✭ 195 (-92.09%)
Mutual labels:  caffe
Deepdetect
Deep learning API and server in C++14 with support for Caffe, Caffe2, PyTorch, TensorRT, Dlib, NCNN, TensorFlow, XGBoost, and TSNE
Stars: ✭ 2,306 (-6.45%)
Mutual labels:  caffe
Awesome Deep Learning And Machine Learning Questions
[Updated irregularly] A curated collection of valuable questions on deep learning, machine learning, reinforcement learning, and data science gathered from sites such as Zhihu, Quora, Reddit, and Stack Exchange
Stars: ✭ 203 (-91.76%)
Mutual labels:  deep-learning-algorithms
Pyzo
Python to the people
Stars: ✭ 192 (-92.21%)
Mutual labels:  interactive
Dsrg
Weakly-Supervised Semantic Segmentation Network with Deep Seeded Region Growing (CVPR 2018).
Stars: ✭ 206 (-91.64%)
Mutual labels:  caffe
Light Field Video
Light field video applications (e.g. video refocusing, focus tracking, changing aperture and view)
Stars: ✭ 190 (-92.29%)
Mutual labels:  caffe
Pixelnet
The repository contains source code and models to use PixelNet architecture used for various pixel-level tasks. More details can be accessed at <http://www.cs.cmu.edu/~aayushb/pixelNet/>.
Stars: ✭ 194 (-92.13%)
Mutual labels:  caffe
Auto Reid And Others
Auto-ReID and Other Person Re-Identification Projects
Stars: ✭ 198 (-91.97%)
Mutual labels:  caffe
Colorizing With Gans
Grayscale Image Colorization with Generative Adversarial Networks. https://arxiv.org/abs/1803.05400
Stars: ✭ 209 (-91.52%)
Mutual labels:  colorization

Interactive Deep Colorization

Project Page | Paper | Demo Video | SIGGRAPH Talk

04/10/2020 Update: @mabdelhack provided a Windows installation guide for the PyTorch model in Python 3.6. Check out the Windows branch for the guide.

10/3/2019 Update: Our technology is also now available in Adobe Photoshop Elements 2020. See this blog and video for more details.

9/3/2018 Update: The code now supports a backend PyTorch model (with PyTorch 0.5.0+). Please find the Local Hints Network training code in the colorization-pytorch repository.

Real-Time User-Guided Image Colorization with Learned Deep Priors.
Richard Zhang*, Jun-Yan Zhu*, Phillip Isola, Xinyang Geng, Angela S. Lin, Tianhe Yu, and Alexei A. Efros.
In ACM Transactions on Graphics (SIGGRAPH 2017).
(*indicates equal contribution)

We first describe the system's (0) Prerequisites and the steps for (1) Getting Started. We then describe the interactive colorization demo, (2) Interactive Colorization (Local Hints Network). There are two demos: (a) a "barebones" version in an IPython notebook and (b) the full GUI used in our paper. We then provide an example of the (3) Global Hints Network.

(0) Prerequisites

  • Linux or OSX
  • Caffe or PyTorch
  • CPU or NVIDIA GPU + CUDA cuDNN.
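
If you plan to use the PyTorch backend, a quick optional check (a minimal sketch, not part of the original instructions) confirms that your PyTorch build can see the GPU:

import torch                        # only needed for the PyTorch backend
print(torch.__version__)            # the README asks for PyTorch 0.5.0+
print(torch.cuda.is_available())    # True if an NVIDIA GPU + CUDA/cuDNN are usable; False means CPU only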

(1) Getting Started

  • Clone this repo:
git clone https://github.com/junyanz/interactive-deep-colorization ideepcolor
cd ideepcolor
  • Download the reference model
bash ./models/fetch_models.sh

(2) Interactive Colorization (Local Hints Network)

We provide a "barebones" demo in an IPython notebook, which does not require Qt. We also provide our full GUI demo.

(2a) Barebones Interactive Colorization Demo

If you need to convert the Notebook to an older version, use jupyter nbconvert --to notebook --nbformat 3 ./DemoInteractiveColorization.ipynb.
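
For reference, the notebook boils down to loading the network, loading an image, and running a forward pass with (optionally empty) user hints. The sketch below is only an outline of that flow; the class and method names (ColorizeImageCaffe, prep_net, load_image, net_forward, get_img_fullres) and the model paths are taken from the demo code and should be checked against your checkout before use.

# Rough outline of the barebones demo (names and paths are assumptions; see above).
import numpy as np
from data import colorize_image as CI

colorModel = CI.ColorizeImageCaffe(Xd=256)          # 256x256 network input; a PyTorch class also exists
colorModel.prep_net(0,                              # GPU id
                    prototxt_path='./models/reference_model/deploy_nodist.prototxt',
                    caffemodel_path='./models/reference_model/model.caffemodel')

colorModel.load_image('./test_imgs/mortar_pestle.jpg')

# User hints: ab values plus a mask marking where hints were given.
# All zeros = no hints, so the network returns its automatic colorization.
input_ab = np.zeros((2, 256, 256))
input_mask = np.zeros((1, 256, 256))

img_out = colorModel.net_forward(input_ab, input_mask)  # 256x256 result
img_fullres = colorModel.get_img_fullres()              # result upsampled to the original resolution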

(2b) Full Demo GUI

  • Install Qt4 and QDarkStyle. (See Installation)

  • Run the UI: python ideepcolor.py --gpu [GPU_ID] --backend [CAFFE OR PYTORCH]. The arguments are described below, and a complete example command follows the list of user interactions:

--win_size    [512] GUI window size
--gpu         [0] GPU number
--image_file  ['./test_imgs/mortar_pestle.jpg'] path to the image file
--backend     ['caffe'] either 'caffe' or 'pytorch'; 'caffe' is the official model from SIGGRAPH 2017, and 'pytorch' uses the same weights converted to PyTorch
  • User interactions

  • Adding points: Left-click somewhere on the input pad
  • Moving points: Left-click and hold on a point on the input pad, drag to desired location, and let go
  • Changing colors: For currently selected point, choose a recommended color (middle-left) or choose a color on the ab color gamut (top-left)
  • Removing points: Right-click on a point on the input pad
  • Changing patch size: Mouse wheel changes the patch size from 1x1 to 9x9
  • Load image: Click the load image button and choose desired image
  • Restart: Click on the restart button. All points on the pad will be removed.
  • Save result: Click on the save button. This saves the resulting colorization in the directory containing the image_file, along with the user-input ab values.
  • Quit: Click on the quit button.
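
For example, to launch the GUI on the bundled test image with the PyTorch backend on GPU 0 (flags as documented above):

python ideepcolor.py --gpu 0 --backend pytorch --image_file ./test_imgs/mortar_pestle.jpg --win_size 512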

(3) Global Hints Network

We include an example usage of our Global Hints Network, applied to global histogram transfer. We show its usage in an IPython notebook.
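
The global hint is a color distribution over the quantized ab color space of a reference image. The snippet below only illustrates how such a histogram can be computed with scikit-image and NumPy; it is not the repository's API, and the reference path, bin size, and gamut handling are assumptions.

# Illustration only: build a global ab-color histogram from a reference image.
import numpy as np
from skimage import io, color

ref = io.imread('./test_imgs/reference.jpg')   # hypothetical reference image (RGB)
lab = color.rgb2lab(ref)                       # L in [0, 100], a/b roughly in [-110, 110]
ab = lab[:, :, 1:].reshape(-1, 2)

bin_size = 10                                  # assumed grid spacing over ab space
edges = np.arange(-110, 120, bin_size)
hist, _, _ = np.histogram2d(ab[:, 0], ab[:, 1], bins=[edges, edges])
glob_dist = hist.flatten() / hist.sum()        # normalized distribution used as the global hint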

Installation

  • Install Caffe or PyTorch. The Caffe model is the official one; the PyTorch model is a reimplementation.

    • Install Caffe: see the Caffe installation and Ubuntu installation documents. Please compile Caffe with Python layer support (set WITH_PYTHON_LAYER=1 in Makefile.config) and build the Caffe Python library with make pycaffe.

    You also need to add pycaffe to your PYTHONPATH. Edit ~/.bashrc (for example with vi ~/.bashrc) and add:

    export PYTHONPATH=/path/to/caffe/python:$PYTHONPATH
    export LD_LIBRARY_PATH=/path/to/caffe/build/lib:$LD_LIBRARY_PATH
  • Install the scikit-image, scikit-learn, OpenCV, Qt4, and QDarkStyle packages:

# ./install/install_deps.sh
sudo pip install scikit-image
sudo pip install scikit-learn
sudo apt-get install python-opencv
sudo apt-get install python-qt4
sudo pip install qdarkstyle

For Conda users, run the following commands (this may work with the full Anaconda distribution but not with Miniconda):

# ./install/install_conda.sh
conda install -c anaconda protobuf  ## protobuf
conda install -c anaconda scikit-learn=0.19.1 ## scikit-learn
conda install -c anaconda scikit-image=0.13.0  ## scikit-image
conda install -c menpo opencv=2.4.11   ## opencv
conda install pyqt=4.11 ## qt4
conda install -c auto qdarkstyle  ## qdarkstyle
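
After installation, a quick import check (a minimal sketch; these are the standard import names of the packages above, and only the backend you actually installed needs to succeed) can confirm that the GUI's dependencies are in place:

# Optional sanity check that the GUI's dependencies import cleanly.
import cv2                 # opencv
import skimage             # scikit-image
import sklearn             # scikit-learn
import qdarkstyle          # dark Qt stylesheet
from PyQt4 import QtGui    # Qt4 bindings used by the GUI
# Backend: at least one of the two imports below should also work.
# import caffe             # official Caffe model
# import torch             # PyTorch reimplementation
print('dependencies look OK')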

Training

Please find a PyTorch reimplementation of the Local Hints Network training code in the colorization-pytorch repository.

Citation

If you use this code for your research, please cite our paper:

@article{zhang2017real,
  title={Real-Time User-Guided Image Colorization with Learned Deep Priors},
  author={Zhang, Richard and Zhu, Jun-Yan and Isola, Phillip and Geng, Xinyang and Lin, Angela S and Yu, Tianhe and Efros, Alexei A},
  journal={ACM Transactions on Graphics (TOG)},
  volume={36},
  number={4},
  year={2017},
  publisher={ACM}
}

Cat Paper Collection

One of the authors objects to the inclusion of this list, due to an allergy. Another author objects on the basis that cats are silly creatures and this is a serious, scientific paper. However, if you love cats, and love reading cool graphics, vision, and learning papers, please check out the Cat Paper Collection: [Github] [Webpage]
