
dingzeyuli / Knn Matting

Licence: mpl-2.0
Source code for KNN Matting, CVPR 2012 / TPAMI 2013. MATLAB code ready to run; a simple and robust implementation in under 40 lines.

Programming Languages

matlab
3953 projects

Projects that are alternatives of or similar to Knn Matting

HRFormer
This is an official implementation of our NeurIPS 2021 paper "HRFormer: High-Resolution Transformer for Dense Prediction".
Stars: ✭ 357 (+174.62%)
Mutual labels:  vision, segmentation
dd-ml-segmentation-benchmark
DroneDeploy Machine Learning Segmentation Benchmark
Stars: ✭ 179 (+37.69%)
Mutual labels:  vision, segmentation
Apc Vision Toolbox
MIT-Princeton Vision Toolbox for the Amazon Picking Challenge 2016 - RGB-D ConvNet-based object segmentation and 6D object pose estimation.
Stars: ✭ 277 (+113.08%)
Mutual labels:  segmentation, vision
SemanticSegmentation-Libtorch
Libtorch Examples
Stars: ✭ 38 (-70.77%)
Mutual labels:  vision, segmentation
Caer
High-performance Vision library in Python. Scale your research, not boilerplate.
Stars: ✭ 452 (+247.69%)
Mutual labels:  segmentation, vision
Fusionseg
Video Object Segmentation
Stars: ✭ 116 (-10.77%)
Mutual labels:  segmentation
Fast 3d Pointcloud Segmentation
Fast 3D point cloud segmentation using supervoxels with geometry and color for 3D scene understanding
Stars: ✭ 122 (-6.15%)
Mutual labels:  segmentation
Nnunet
No description or website provided.
Stars: ✭ 2,231 (+1616.15%)
Mutual labels:  segmentation
Deep Learning Based Ecg Annotator
Annotation of ECG signals using deep learning and TensorFlow's Keras
Stars: ✭ 110 (-15.38%)
Mutual labels:  segmentation
Deeplabv3 mobilenetv2 pytorch
A PyTorch Implementation of MobileNetv2+DeepLabv3
Stars: ✭ 130 (+0%)
Mutual labels:  segmentation
Unet Family
Paper and implementation of UNet-related models.
Stars: ✭ 1,924 (+1380%)
Mutual labels:  segmentation
Multi object datasets
Multi-object image datasets with ground-truth segmentation masks and generative factors.
Stars: ✭ 121 (-6.92%)
Mutual labels:  segmentation
Dstl unet
Dstl Satellite Imagery Feature Detection
Stars: ✭ 117 (-10%)
Mutual labels:  segmentation
Syntok
Text tokenization and sentence segmentation (segtok v2)
Stars: ✭ 123 (-5.38%)
Mutual labels:  segmentation
Neighbor
Nearest neighbor search for Rails and Postgres
Stars: ✭ 114 (-12.31%)
Mutual labels:  nearest-neighbor-search
Sudachidict
A lexicon for Sudachi
Stars: ✭ 127 (-2.31%)
Mutual labels:  segmentation
Masktrack
Implementation of MaskTrack method which is the baseline of several state-of-the-art video object segmentation methods in Pytorch
Stars: ✭ 110 (-15.38%)
Mutual labels:  segmentation
Openvehiclevision
An open-source library for vehicle vision applications (written in MATLAB): lane marking detection, road segmentation
Stars: ✭ 120 (-7.69%)
Mutual labels:  segmentation
Cn24
Convolutional (Patch) Networks for Semantic Segmentation
Stars: ✭ 125 (-3.85%)
Mutual labels:  segmentation
Nucleisegmentation
cGAN-based Multi Organ Nuclei Segmentation
Stars: ✭ 120 (-7.69%)
Mutual labels:  segmentation

KNN Matting

Qifeng Chen, Dingzeyu Li, Chi-Keung Tang
The Hong Kong University of Science and Technology
CVPR 2012 / TPAMI 2013


Installation Steps

Linux and Mac

run "bash install.sh" to download all the required libraries and data. It would take several minutes to tens of minutes, depending on the network connection.

Windows or Manual Installation

  • Download the VLFeat library and extract it into the project directory.
  • Download the training dataset from AlphaMatting.com.
  • Extract the corresponding files into ${KNN_MATTING_DIR}/vlfeat/ and ${KNN_MATTING_DIR}/data/; for details, see ${KNN_MATTING_DIR}/src/run_demo.m. The expected layout is sketched below.
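
A rough sketch of the resulting directory layout, based on the paths referenced above (the exact contents depend on which datasets you download):

    ${KNN_MATTING_DIR}/
        vlfeat/           extracted VLFeat library
        data/             training images from alphamatting.com
        src/
            run_demo.m    demo entry point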

Optional Data

  • SVBRDF data from Jason Lawrence's Inverse Shade Trees database.

Running the Demo

We have been running our code since MATLAB R2011b, and the latest version has been tested on MATLAB R2015a. Please let us know if you run into problems.
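
Assuming the directory layout sketched above, launching the demo from the MATLAB prompt looks roughly like this (run_demo.m is the entry point referenced in the installation notes):

    % From the repository root in MATLAB:
    cd src
    run_demo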

The input method (see the sketch below):
  • Left-click on each layer (press Space to separate layers).
  • Press Enter to terminate.
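
For orientation only, here is a minimal sketch of this interaction pattern; it is not the shipped code. It relies on MATLAB's ginput, which returns ASCII key codes for key presses, so Space shows up as 32 and Enter ends the call; `im` is the input image from the parameter list below.

    % Illustrative scribble collection; the real demo differs in details.
    figure; image(im); axis image; hold on;
    [xs, ys, buttons] = ginput();           % click points; Space = 32; Enter ends
    layer_id = 1 + cumsum(buttons == 32);   % bump the layer index at each Space
    keep = buttons ~= 32;                   % drop the separator "points"
    scribbles = [xs(keep), ys(keep), layer_id(keep)];
    plot(xs(keep), ys(keep), 'r.');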

Parameters to change are at the beginning of the code (see the sketch after this list):
  • lambda: see Eq. (12) in the paper.
  • level: the degree of spatial coherence, normally between 0.5 and 3.
  • factor: the degree of hue, normally between 0.5 and 3.
  • im: an image or BRDF data.
  • scrib: the scribble input.
  • l: the input window size is (l*2+1)^2.
  • nn: the number of neighbors. It can be a vector of two elements; for example, [10;2] means 10 neighbors with the default (level) spatial coherence and 2 neighbors with weak spatial coherence.
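
For orientation, the following is a minimal, illustrative sketch of the kind of computation these parameters control; it is not the shipped implementation (see src/run_demo.m for that). It assumes VLFeat's vl_kdtreebuild/vl_kdtreequery for the nearest-neighbor search, `im` as a double RGB image in [0,1], and `scrib` as a map with 1 on foreground scribbles, 0 on background scribbles, and NaN elsewhere.

    % Illustrative KNN-matting sketch; parameter names follow the list above.
    [h, w, ~] = size(im);
    n = h * w;
    [x, y] = meshgrid(1:w, 1:h);
    % Per-pixel feature: color plus spatial coordinates scaled by `level`.
    feat = single([reshape(im, n, 3)'; level * [x(:)'; y(:)'] / sqrt(h * w)]);

    % k nearest neighbors in feature space via VLFeat's kd-tree.
    forest = vl_kdtreebuild(feat);
    [idx, dist] = vl_kdtreequery(forest, feat, feat, 'NumNeighbors', nn(1));

    % Sparse affinity with kernel 1 - ||X_i - X_j|| / C, clamped to be nonnegative.
    rows = repmat(1:n, nn(1), 1);
    vals = max(1 - sqrt(double(dist)) / sqrt(size(feat, 1)), 0);
    A = sparse(rows(:), double(idx(:)), vals(:), n, n);
    A = max(A, A');                              % symmetrize

    % Graph Laplacian plus scribble constraints (cf. Eq. (12) in the paper).
    D = spdiags(full(sum(A, 2)), 0, n, n);
    L = D - A;
    known = ~isnan(scrib(:));
    M = spdiags(double(known), 0, n, n);         % marks scribbled pixels
    v = zeros(n, 1);
    v(known) = scrib(known);
    alpha = (L + lambda * M) \ (lambda * v);     % closed-form alpha estimate
    alpha = reshape(min(max(alpha, 0), 1), h, w);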

Note: Scribble inputs in RGB space usually perform better than in HSV space.

More Information

For more information and the detailed paper, please visit our project site.

Disclaimer

The code is free for academic/research purposes. Use it at your own risk; we are not responsible for any loss resulting from this code. Feel free to submit pull requests for bug fixes.

Contact

[email protected] and [email protected]
