aharley / Segaware

Segmentation-Aware Convolutional Networks Using Local Attention Masks

Projects that are alternatives to or similar to Segaware

Hypertools Paper Notebooks
Supporting notebooks and data from hypertools paper
Stars: ✭ 145 (-0.68%)
Mutual labels:  jupyter-notebook
Numbapro Examples
Examples of NumbaPro in use.
Stars: ✭ 145 (-0.68%)
Mutual labels:  jupyter-notebook
Face generator
DCGAN face generator 🧑.
Stars: ✭ 146 (+0%)
Mutual labels:  jupyter-notebook
Deep Deep
Adaptive crawler which uses Reinforcement Learning methods
Stars: ✭ 145 (-0.68%)
Mutual labels:  jupyter-notebook
Python Machine Learning Book
The "Python Machine Learning (1st edition)" book code repository and info resource
Stars: ✭ 11,428 (+7727.4%)
Mutual labels:  jupyter-notebook
Ds production
Stars: ✭ 146 (+0%)
Mutual labels:  jupyter-notebook
Ppdai risk evaluation
"Magic Mirror Cup" risk-control algorithm competition: a PPDAI (Paipaidai) credit-risk model that comes close to the winning score
Stars: ✭ 144 (-1.37%)
Mutual labels:  jupyter-notebook
Applied Dl 2018
Tel-Aviv Deep Learning Boot-camp: 12 Applied Deep Learning Labs
Stars: ✭ 146 (+0%)
Mutual labels:  jupyter-notebook
Deep Learning With Tensorflow Book
An open-source introductory deep learning book with hands-on case studies, based on the TensorFlow 2.0 framework.
Stars: ✭ 12,105 (+8191.1%)
Mutual labels:  jupyter-notebook
Formation Deep Learning
Supports de formation Deep Learning (diapos et exercices pratiques)
Stars: ✭ 146 (+0%)
Mutual labels:  jupyter-notebook
Alta
The Art of Literary Text Analysis
Stars: ✭ 145 (-0.68%)
Mutual labels:  jupyter-notebook
Digital video introduction
A hands-on introduction to video technology: image, video, codec (av1, vp9, h265) and more (ffmpeg encoding).
Stars: ✭ 12,184 (+8245.21%)
Mutual labels:  jupyter-notebook
Bertem
Implementation of the ACL 2019 paper "Matching the Blanks: Distributional Similarity for Relation Learning"
Stars: ✭ 146 (+0%)
Mutual labels:  jupyter-notebook
Sqlcell
SQLCell is a magic function for the Jupyter Notebook that executes raw, parallel, parameterized SQL queries with the ability to accept Python values as parameters and assign output data to Python variables while concurrently running Python code. And *much* more.
Stars: ✭ 145 (-0.68%)
Mutual labels:  jupyter-notebook
Siamese Networks
Few Shot Learning by Siamese Networks, using Keras.
Stars: ✭ 146 (+0%)
Mutual labels:  jupyter-notebook
Citeomatic
A citation recommendation system that allows users to find relevant citations for their paper drafts. The tool is backed by Semantic Scholar's OpenCorpus dataset.
Stars: ✭ 145 (-0.68%)
Mutual labels:  jupyter-notebook
100daysofmlcode
My journey to learn and grow in the domain of Machine Learning and Artificial Intelligence by performing the #100DaysofMLCode Challenge.
Stars: ✭ 146 (+0%)
Mutual labels:  jupyter-notebook
Deeplearningbookcode Volume1
Python/Jupyter notebooks for Volume 1 of "Deep Learning - From Basics to Practice" by Andrew Glassner
Stars: ✭ 146 (+0%)
Mutual labels:  jupyter-notebook
Fantasy Basketball
Scraping statistics, predicting NBA player performance with neural networks and boosting algorithms, and optimising lineups for Draft Kings with genetic algorithm. Capstone Project for Machine Learning Engineer Nanodegree by Udacity.
Stars: ✭ 146 (+0%)
Mutual labels:  jupyter-notebook
Chinesetextclassifier
A short-text classifier for Chinese product reviews, suitable for sentiment analysis
Stars: ✭ 146 (+0%)
Mutual labels:  jupyter-notebook

Segmentation-Aware Convolutional Networks Using Local Attention Masks

[Project Page] [Paper]

Segmentation-aware convolution filters are invariant to backgrounds. We achieve this in three steps: (i) compute segmentation cues for each pixel (i.e., “embeddings”), (ii) create a foreground mask for each patch, and (iii) combine the masks with convolution, so that the filters only process the local foreground in each image patch.

Installation

For prerequisites, refer to DeepLabV2. Our setup follows theirs almost exactly.

Once you have the prerequisites, simply run make all -j4 from within caffe/ to compile the code with 4 cores.

Learning embeddings with dedicated loss

  • Use Convolution layers to create dense embeddings.
  • Use Im2dist to compute dense distance comparisons in an embedding map.
  • Use Im2parity to compute dense label comparisons in a label map.
  • Use DistLoss (with parameters alpha and beta) to set up a contrastive side loss on the distances.

See scripts/segaware/config/embs for a full example.

Setting up a segmentation-aware convolution layer

  • Use Im2col on the input, to arrange pixel/feature patches into columns.
  • Use Im2dist on the embeddings, to get their distances into columns.
  • Use Exp on the distances, with scale: -1, to get them into [0,1].
  • Tile the exponentiated distances, with a factor equal to the depth (i.e., channels) of the original convolution features.
  • Use Eltwise to multiply the Tile result with the Im2col result.
  • Use Convolution with bottom_is_im2col: true to matrix-multiply the convolution weights with the Eltwise output.

See scripts/segaware/config/vgg for an example in which every convolution layer in the VGG16 architecture is made segmentation-aware.

Using a segmentation-aware CRF

  • Use the NormConvMeanfield layer. As input, give it two copies of the unary potentials (produced by a Split layer), some embeddings, and a meshgrid-like input (produced by a DummyData layer with data_filler { type: "xy" }).

See scripts/segaware/config/res for an example in which a segmentation-aware CRF is added to a resnet architecture.

Replicating the segmentation results presented in our paper

  • Download pretrained model weights here, and put that file into scripts/segaware/model/res/.
  • From scripts, run ./test_res.sh. This will produce .mat files in scripts/segaware/features/res/voc_test/mycrf/.
  • From scripts, run ./gen_preds.sh. This will produce colorized .png results in scripts/segaware/results/res/voc_test/mycrf/none/results/VOC2012/Segmentation/comp6_test_cls. An example input-output pair is shown below.
  • If you zip these results and submit them to the official PASCAL VOC test server, you will get 79.83900% IOU.

If you run these steps on the validation set instead, you can then run ./eval.sh to evaluate your results. If you change the model, you may want to run ./edit_env.sh to update the evaluation instructions.

Citation

@inproceedings{harley_segaware,
  title = {Segmentation-Aware Convolutional Networks Using Local Attention Masks},
  author = {Adam W. Harley and Konstantinos G. Derpanis and Iasonas Kokkinos},
  booktitle = {IEEE International Conference on Computer Vision (ICCV)},
  year = {2017},
}

Help

Feel free to open issues here! Also, I'm pretty good with email: [email protected]

Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].