
Joker316701882 / Deep Image Matting

This is a TensorFlow implementation of the paper "Deep Image Matting".

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives to or similar to Deep Image Matting

Androidtensorflowmnistexample
Android TensorFlow MachineLearning MNIST Example (Building Model with TensorFlow for Android)
Stars: ✭ 449 (-26.87%)
Mutual labels:  deeplearning
East
This is a pytorch re-implementation of EAST: An Efficient and Accurate Scene Text Detector.
Stars: ✭ 478 (-22.15%)
Mutual labels:  deeplearning
Bmw Yolov4 Training Automation
This repository allows you to get started with training a state-of-the-art Deep Learning model with little to no configuration needed! You provide your labeled dataset or label your dataset using our BMW-LabelTool-Lite and you can start the training right away and monitor it in many different ways like TensorBoard or a custom REST API and GUI. NoCode training with YOLOv4 and YOLOV3 has never been so easy.
Stars: ✭ 533 (-13.19%)
Mutual labels:  deeplearning
Alfred
alfred-py: a deep learning utility library for **humans**; more details about the usage of the lib at: https://zhuanlan.zhihu.com/p/341446046
Stars: ✭ 460 (-25.08%)
Mutual labels:  deeplearning
Introtodeeplearning
Lab Materials for MIT 6.S191: Introduction to Deep Learning
Stars: ✭ 4,955 (+707%)
Mutual labels:  deeplearning
Nlp Paper
NLP Paper
Stars: ✭ 484 (-21.17%)
Mutual labels:  deeplearning
Onepanel
The open and extensible integrated development environment (IDE) for computer vision with built-in modules for model building, automated labeling, data processing, model training, hyperparameter tuning and workflow orchestration.
Stars: ✭ 428 (-30.29%)
Mutual labels:  deeplearning
Reversi Alpha Zero
Reversi reinforcement learning by AlphaGo Zero methods.
Stars: ✭ 598 (-2.61%)
Mutual labels:  deeplearning
Treelstm.pytorch
Tree LSTM implementation in PyTorch
Stars: ✭ 476 (-22.48%)
Mutual labels:  deeplearning
Convcrf
This repository contains the reference implementation for our proposed Convolutional CRFs.
Stars: ✭ 514 (-16.29%)
Mutual labels:  deeplearning
Additive Margin Softmax
This is the implementation of paper <Additive Margin Softmax for Face Verification>
Stars: ✭ 464 (-24.43%)
Mutual labels:  deeplearning
Liteflownet
LiteFlowNet: A Lightweight Convolutional Neural Network for Optical Flow Estimation, CVPR 2018 (Spotlight paper, 6.6%)
Stars: ✭ 474 (-22.8%)
Mutual labels:  deeplearning
Learn Data Science For Free
This repository is a combination of different resources lying scattered all over the internet. The reason for making such a repository is to combine all the valuable resources in a sequential manner, so that it helps every beginner who is in search of a free and structured learning resource for Data Science. For constant updates follow me in …
Stars: ✭ 4,757 (+674.76%)
Mutual labels:  deeplearning
Pytorch tutorial
PyTorch Tutorial (1.7)
Stars: ✭ 450 (-26.71%)
Mutual labels:  deeplearning
Deeplearning
Introductory deep learning tutorials and selected articles (Deep Learning Tutorial)
Stars: ✭ 6,783 (+1004.72%)
Mutual labels:  deeplearning
Monk object detection
A one-stop repository for low-code easily-installable object detection pipelines.
Stars: ✭ 437 (-28.83%)
Mutual labels:  deeplearning
Monk v1
Monk is a low code Deep Learning tool and a unified wrapper for Computer Vision.
Stars: ✭ 480 (-21.82%)
Mutual labels:  deeplearning
Tr
Free Offline OCR: an offline Chinese text detection + recognition SDK
Stars: ✭ 598 (-2.61%)
Mutual labels:  deeplearning
Deberta
The implementation of DeBERTa
Stars: ✭ 541 (-11.89%)
Mutual labels:  deeplearning
Openvino Yolov3
YoloV3/tiny-YoloV3+RaspberryPi3/Ubuntu LaptopPC+NCS/NCS2+USB Camera+Python+OpenVINO
Stars: ✭ 500 (-18.57%)
Mutual labels:  deeplearning

Deep-Image-Matting

This is a TensorFlow implementation of the paper "Deep Image Matting".

Thanks to Davi Frossard, "vgg16_weights.npz" can be found on his blog: https://www.cs.toronto.edu/~frossard/post/vgg16/
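
For reference, a minimal sketch of loading that file into an already-built encoder graph. The npz key pattern ('conv1_1_W', 'conv1_1_b', ...) follows Davi Frossard's script; the variable naming (scope/layer/weights) is an assumption and may not match this repo's graph exactly.

    import numpy as np
    import tensorflow as tf

    def load_vgg16_weights(sess, npz_path='vgg16_weights.npz'):
        """Copy every VGG16 tensor whose name and shape match an encoder variable."""
        weights = np.load(npz_path)
        loaded = 0
        for var in tf.trainable_variables():
            parts = var.op.name.split('/')      # e.g. 'encoder/conv1_1/weights' (assumed naming)
            if len(parts) < 2:
                continue
            key = '%s_%s' % (parts[-2], 'W' if parts[-1] == 'weights' else 'b')
            if key in weights.files and tuple(var.get_shape().as_list()) == weights[key].shape:
                sess.run(var.assign(weights[key]))
                loaded += 1
        print('restored %d tensors from %s' % (loaded, npz_path))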

2017-8-25: The code can now be used for training, but the data is owned by a company, so I'll do my best to provide code and a model that can run inference. Fixed a memory-leak bug during training and changed one of the random crop sizes from 640 to 620 to avoid a boundary issue (this could also be avoided by preparing the training data more carefully). Besides that, the code can now save the model, restore a pre-trained model, and test on the alphamatting set at run time.
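
For context, here is a minimal sketch of the crop step referred to above: a square patch centered on a random pixel of the trimap's unknown region, clamped to the image bounds and resized to the network input size. The crop sizes follow the paper, with 640 swapped for 620 as described; the function and argument names are illustrative, not the repo's.

    import numpy as np
    import cv2

    def random_crop(rgb, trimap, alpha, sizes=(320, 480, 620), out_size=320):
        h, w = trimap.shape[:2]
        size = int(np.random.choice([s for s in sizes if s <= min(h, w)] or [min(h, w)]))
        ys, xs = np.where((trimap > 0) & (trimap < 255))   # unknown-region pixels
        idx = np.random.randint(len(ys))
        cy, cx = ys[idx], xs[idx]
        # clamp the window so the crop never leaves the image
        y0 = int(np.clip(cy - size // 2, 0, h - size))
        x0 = int(np.clip(cx - size // 2, 0, w - size))
        def crop(img, interp=cv2.INTER_LINEAR):
            return cv2.resize(img[y0:y0 + size, x0:x0 + size],
                              (out_size, out_size), interpolation=interp)
        return crop(rgb), crop(trimap, cv2.INTER_NEAREST), crop(alpha)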

2017-9-1: Validation code and a TensorBoard view of the 'alphamatting' dataset have been added. Some bugs in the compositional loss and the validation code are fixed. The missing 'fc6' layer is added now. The decoder structure is now exactly the same as in the paper, except that unpooling is replaced with a deconvolution layer, which makes the network more complex than before. The weight w_i between the two losses is still unclear, and I'm trying to find the best weighting. Currently, general boundaries are easy to predict, but details and complex foregrounds like the bike are still handled poorly.
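
For reference, a minimal sketch of the weighted loss as the paper defines it (alpha-prediction loss plus compositional loss, both restricted to the unknown region); tensor names and the [0, 1] trimap convention are assumptions, not this repo's exact code.

    import tensorflow as tf

    EPS2 = 1e-12  # epsilon^2 from the paper; keeps the sqrt differentiable at zero

    def matting_loss(pred_alpha, gt_alpha, fg, bg, rgb, trimap, w_l=0.5):
        unknown = tf.cast(tf.logical_and(trimap > 0.0, trimap < 1.0), tf.float32)
        n = tf.reduce_sum(unknown) + 1.0

        # alpha-prediction loss over the unknown region
        alpha_loss = tf.reduce_sum(
            tf.sqrt(tf.square(pred_alpha - gt_alpha) + EPS2) * unknown) / n

        # compositional loss: re-composite with the predicted alpha, compare to the input RGB
        comp = pred_alpha * fg + (1.0 - pred_alpha) * bg
        comp_loss = tf.reduce_sum(
            tf.sqrt(tf.square(comp - rgb) + EPS2) * unknown) / n

        return w_l * alpha_loss + (1.0 - w_l) * comp_loss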

2017-9-14: The latest version of the code has the following changes:

  1. Rearranged the preprocessing order so that there is no ground-truth shift: either composite bg, fg, and alpha first and then resize, or resize bg, fg, and alpha first and then composite. My suggestion is that composition should always happen after resizing; the resulting RGB images of the two orders differ slightly, even though it is hard to tell by eye. (See the first sketch after this list.)
  2. Replaced deconvolution with unpooling, because experiments showed that deconvolution struggles to learn detailed structures (like hair). Because of the unpooling implementation, batch_size also changed from 5 to 1 (the code is not clean yet, it just works). (See the second sketch after this list.)
  3. One more thing worth mentioning: when training on a single complex sample such as the bike, the network can overfit even with deconvolution (not unpooling), but deconvolution cannot converge on the whole dataset (maybe I simply didn't train long enough: lr = 1e-5 for 5 days of training without convergence). Discussion about whether deconvolution can replace unpooling is welcome!
  4. Added a hard mode to allow training on tough samples.
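
A minimal sketch of the "resize first, composite last" order recommended in item 1; because the composite is built from the already-resized fg, bg, and alpha, the input RGB can never shift relative to the ground-truth alpha. Names and sizes are illustrative.

    import cv2

    def make_training_sample(fg, bg, alpha, size=(320, 320)):
        fg = cv2.resize(fg, size).astype('float32')
        bg = cv2.resize(bg, size).astype('float32')
        a = cv2.resize(alpha, size).astype('float32') / 255.0
        if a.ndim == 2:
            a = a[:, :, None]                  # broadcast alpha over the RGB channels
        rgb = a * fg + (1.0 - a) * bg          # composite only after resizing
        return rgb, a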
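
And a minimal sketch of the unpooling from item 2 (not the repo's exact 'unpool' code): each encoder max-pool also returns its argmax indices, and the decoder scatters the pooled values back to those positions, which is what lets thin details such as hair survive. With batch_size = 1, the flat indices returned by tf.nn.max_pool_with_argmax can be used directly.

    import tensorflow as tf

    def pool_with_indices(x):
        return tf.nn.max_pool_with_argmax(x, ksize=[1, 2, 2, 1],
                                          strides=[1, 2, 2, 1], padding='SAME')

    def unpool(pooled, argmax, output_shape):
        """Place each pooled value back at its argmax position; everything else stays zero."""
        flat_size = output_shape[0] * output_shape[1] * output_shape[2] * output_shape[3]
        values = tf.reshape(pooled, [-1])
        positions = tf.reshape(argmax, [-1, 1])
        flat = tf.scatter_nd(positions, values,
                             tf.constant([flat_size], dtype=tf.int64))
        return tf.reshape(flat, output_shape)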

2018-2-19: I was working on other projects recently, so this repo went unmaintained for a while. In the issues, I noticed some great comments that may hint at why the previous work can't reach the authors' performance! Here are some ideas you can apply to improve this work:

  1. Prepare the training set using the authors' code. (I used to work with scipy.misc, which has too many strange automatic settings, and that hurts performance. If you want to use scipy.misc, make sure you understand the library very well; otherwise try PIL or OpenCV, which cause far less trouble.)
  2. Generate the trimap using both random dilation and random erosion! The previous code used random dilation only, which is a fatal mistake. (See the first sketch after this list.)
  3. At test time, use the original size (or resize to the closest size divisible by 32; see the second sketch after this list). I don't have a free GPU to keep working on this, so the suggestions above are not verified to be useful. If they help, let me know : )
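
A minimal sketch of the trimap generation from point 2, using both random dilation and random erosion of the ground-truth alpha; the kernel-size range is a guess, not a value taken from the paper or this repo.

    import numpy as np
    import cv2

    def random_trimap(alpha, low=1, high=20):
        k_d = np.random.randint(low, high)
        k_e = np.random.randint(low, high)
        dilated = cv2.dilate((alpha > 0).astype(np.uint8), np.ones((k_d, k_d), np.uint8))
        eroded = cv2.erode((alpha == 255).astype(np.uint8), np.ones((k_e, k_e), np.uint8))
        trimap = np.full(alpha.shape, 128, dtype=np.uint8)   # unknown band
        trimap[dilated == 0] = 0                             # certain background
        trimap[eroded == 1] = 255                            # certain foreground
        return trimap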
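
And a small sketch of point 3: keep the test image near its original resolution, but make both sides divisible by 32 because the encoder downsamples by a factor of 32 through its five pooling stages.

    import cv2

    def resize_to_multiple_of_32(img):
        h, w = img.shape[:2]
        new_h = max(32, int(round(h / 32.0)) * 32)
        new_w = max(32, int(round(w / 32.0)) * 32)
        return cv2.resize(img, (new_w, new_h))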

2018-4-24: Because I changed the implementation of 'unpool', the test code no longer works. I have no plan to modify this repo, but I will probably start a new repo for image matting with a brand-new algorithm in the near future.

My Chinese blog post about the implementation of this paper: http://blog.leanote.com/post/calebge/Deep-Image-Matting%E5%A4%8D%E7%8E%B0%E8%BF%87%E7%A8%8B%E6%80%BB%E7%BB%93

Usage

Simply run:
python test.py --alpha=<path_to_alpha> --rgb=<path_to_rgb>
For example:
python test.py --alpha=./test_data/alpha/1.png --rgb=./test_data/RGB/1.png

Pretrained Model

Because I accidentally deleted the pretrained model on Google Drive, and that was the only copy, there is no pretrained model anymore.

Important notes:

1. The pretrained model was trained on a private dataset that differs greatly from the authors' data, so it struggles on the authors' data. You can test the model by feeding it test_data.
2. 'fc6' is transformed into a convolution operation using the trick proposed in the FCN paper, and the Deep Image Matting paper follows the same approach. In this code, however, the convolutionalized 'fc6' is replaced by a plain convolution whose weights and biases are initialized randomly. (See the sketch after this list.)
3. Even when tested on our own data, this model still can't reach the performance reported in the paper.
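
For point 2, here is a minimal sketch of what reusing the original 'fc6' weights FCN-style would look like, instead of the randomly initialized convolution the code uses now: the 25088 x 4096 fully connected matrix reshapes into a 7 x 7 x 512 x 4096 kernel. The npz key names are assumed to match Davi Frossard's file, and the exact reshape/transpose depends on how the FC weights were flattened during conversion.

    import numpy as np
    import tensorflow as tf

    weights = np.load('vgg16_weights.npz')
    fc6_w = weights['fc6_W'].reshape(7, 7, 512, 4096)   # 7*7*512 = 25088 (assumed key name)
    fc6_b = weights['fc6_b']

    def conv6(pool5):
        """'fc6' applied as a convolution over the pool5 feature map."""
        w = tf.constant(fc6_w, dtype=tf.float32)
        b = tf.constant(fc6_b, dtype=tf.float32)
        return tf.nn.relu(tf.nn.conv2d(pool5, w, strides=[1, 1, 1, 1],
                                       padding='SAME') + b)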

Salience Object Detection

Here is my implementation of the CVPR 2017 paper "Deeply Supervised Salient Object Detection with Short Connections". The source code won't be published because I made some modifications to the network structure, but a trained model and inference code are available. It's only version 1 for now; try it if you are interested!
https://github.com/Joker316701882/Salience-Object-Detection