
shekkizh / FCN.tensorflow

License: MIT
Tensorflow implementation of Fully Convolutional Networks for Semantic Segmentation (http://fcn.berkeleyvision.org)

Projects that are alternatives to or similar to FCN.tensorflow

Multiclass Semantic Segmentation Camvid
Tensorflow 2 implementation of complete pipeline for multiclass image semantic segmentation using UNet, SegNet and FCN32 architectures on Cambridge-driving Labeled Video Database (CamVid) dataset.
Stars: ✭ 67 (-94.55%)
Mutual labels:  jupyter-notebook, segmentation, fcn
Unet
unet for image segmentation
Stars: ✭ 3,751 (+204.96%)
Mutual labels:  jupyter-notebook, segmentation
Tianchi Medical Lungtumordetect
Tianchi Medical AI Competition [Season 1]: intelligent diagnosis of pulmonary nodules. UNet/VGG/Inception/ResNet/DenseNet
Stars: ✭ 314 (-74.47%)
Mutual labels:  jupyter-notebook, segmentation
Kittiseg
A Kitti Road Segmentation model implemented in tensorflow.
Stars: ✭ 873 (-29.02%)
Mutual labels:  segmentation, fcn
TensorFlow-Advanced-Segmentation-Models
A Python Library for High-Level Semantic Segmentation Models based on TensorFlow and Keras with pretrained backbones.
Stars: ✭ 64 (-94.8%)
Mutual labels:  segmentation, fcn
Sipmask
SipMask: Spatial Information Preservation for Fast Image and Video Instance Segmentation (ECCV2020)
Stars: ✭ 255 (-79.27%)
Mutual labels:  jupyter-notebook, segmentation
Fsgan
FSGAN - Official PyTorch Implementation
Stars: ✭ 420 (-65.85%)
Mutual labels:  jupyter-notebook, segmentation
Cellpose
a generalist algorithm for cellular segmentation
Stars: ✭ 244 (-80.16%)
Mutual labels:  jupyter-notebook, segmentation
Seg Mentor
TFslim based semantic segmentation models, modular & extensible boutique design
Stars: ✭ 43 (-96.5%)
Mutual labels:  segmentation, fcn
Pytorch connectomics
PyTorch Connectomics: segmentation toolbox for EM connectomics
Stars: ✭ 46 (-96.26%)
Mutual labels:  jupyter-notebook, segmentation
Avgn
A generative network for animal vocalizations. For dimensionality reduction, sequencing, clustering, corpus-building, and generating novel 'stimulus spaces'. All with notebook examples using freely available datasets.
Stars: ✭ 50 (-95.93%)
Mutual labels:  jupyter-notebook, segmentation
FCN-Segmentation-TensorFlow
FCN for Semantic Image Segmentation achieving 68.5 mIoU on PASCAL VOC
Stars: ✭ 34 (-97.24%)
Mutual labels:  segmentation, fcn
Brain-Tumor-Segmentation-using-Topological-Loss
A Tensorflow Implementation of Brain Tumor Segmentation using Topological Loss
Stars: ✭ 28 (-97.72%)
Mutual labels:  segmentation, fcn
Cascaded Fcn
Source code for the MICCAI 2016 Paper "Automatic Liver and Lesion Segmentation in CT Using Cascaded Fully Convolutional Neural Networks and 3D Conditional Random Fields"
Stars: ✭ 296 (-75.93%)
Mutual labels:  jupyter-notebook, segmentation
Pointrend Pytorch
A PyTorch implementation of PointRend: Image Segmentation as Rendering
Stars: ✭ 249 (-79.76%)
Mutual labels:  jupyter-notebook, segmentation
Fbrs interactive segmentation
[CVPR2020] f-BRS: Rethinking Backpropagating Refinement for Interactive Segmentation https://arxiv.org/abs/2001.10331
Stars: ✭ 366 (-70.24%)
Mutual labels:  jupyter-notebook, segmentation
Tfwss
Weakly Supervised Segmentation with Tensorflow. Implements instance segmentation as described in Simple Does It: Weakly Supervised Instance and Semantic Segmentation, by Khoreva et al. (CVPR 2017).
Stars: ✭ 212 (-82.76%)
Mutual labels:  jupyter-notebook, segmentation
Kaggle airbus ship detection
Kaggle airbus ship detection challenge 21st solution
Stars: ✭ 238 (-80.65%)
Mutual labels:  jupyter-notebook, segmentation
Deeplabv3 Plus
Tensorflow 2.3.0 implementation of DeepLabV3-Plus
Stars: ✭ 32 (-97.4%)
Mutual labels:  jupyter-notebook, segmentation
Relaynet pytorch
Pytorch Implementation of retinal OCT Layer Segmentation (with trained models)
Stars: ✭ 63 (-94.88%)
Mutual labels:  jupyter-notebook, segmentation

FCN.tensorflow

Tensorflow implementation of Fully Convolutional Networks for Semantic Segmentation (FCNs).

The implementation is largely based on the reference code provided by the authors of the paper (http://fcn.berkeleyvision.org). The model was applied to the Scene Parsing Challenge dataset provided by MIT (http://sceneparsing.csail.mit.edu/).

  1. Prerequisites
  2. Results
  3. Observations
  4. Useful links

Prerequisites

  • The results were obtained after training for ~6-7 hrs on a 12 GB Titan X.
  • The code was originally written and tested with TensorFlow 0.11 and Python 2.7. The tf.summary calls have been updated to work with TensorFlow 0.12. To work with older versions of TensorFlow, use the tf.0.11_compatible branch.
  • Some of the problems encountered with TensorFlow 1.0 and on Windows are discussed in Issue #9.
  • To train the model, simply execute python FCN.py
  • To visualize results for a random batch of images, use the flag --mode=visualize
  • The debug flag can be set during training to log information on activations, gradients, variables, etc. (a sketch of how these options might be wired up follows this list).
  • The IPython notebook in the logs folder can be used to view the results in color, as shown below.
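
A minimal sketch of how the command-line options above could be wired together with TF 1.x-style flags. The --mode and --debug names follow the usage described above, but the definitions and the function bodies here are illustrative assumptions, not the repository's exact code.

    # Hypothetical sketch of the command-line interface described above
    # (assumes TensorFlow 1.x, where tf.app.flags and tf.app.run exist).
    import tensorflow as tf

    FLAGS = tf.app.flags.FLAGS
    tf.app.flags.DEFINE_string("mode", "train", "train / visualize")
    tf.app.flags.DEFINE_boolean("debug", False,
                                "log activations, gradients and variables during training")

    def main(argv=None):
        if FLAGS.mode == "train":
            # build the FCN graph and run the training loop,
            # attaching extra summaries when FLAGS.debug is True
            print("training (debug=%s)" % FLAGS.debug)
        elif FLAGS.mode == "visualize":
            # run inference on a random validation batch and save colored predictions
            print("visualizing a random batch")

    if __name__ == "__main__":
        tf.app.run()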

Results

Results were obtained by training the model with a batch size of 2 on images resized to 256x256. Note that although training is done at this image size, nothing prevents the model from working on arbitrarily sized images. No post-processing was done on the predicted images. Training was done for 9 epochs; the short training time explains why certain concepts seem to be semantically understood by the model while others are not. The results below are from randomly chosen images from the validation dataset.
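
As a brief illustration of why the trained model is not tied to 256x256 inputs, here is a minimal, hypothetical graph fragment: because every layer is convolutional, leaving the spatial dimensions of the input placeholder unspecified lets the same weights run on any image size. The layer shapes and the 151-class output are assumptions for illustration, not the repository's exact graph.

    # Minimal sketch (TF 1.x style) of a fully convolutional graph that accepts
    # arbitrarily sized images; filter counts and class count are illustrative assumptions.
    import tensorflow as tf

    image = tf.placeholder(tf.float32, shape=[None, None, None, 3], name="input_image")
    conv = tf.layers.conv2d(image, filters=64, kernel_size=3, padding="same",
                            activation=tf.nn.relu)
    logits = tf.layers.conv2d(conv, filters=151, kernel_size=1, padding="same")
    # A 256x256 input yields 256x256 logits; a 384x512 input yields 384x512 logits,
    # because no layer has a fixed spatial size.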

The network design is largely the same as in the reference Caffe implementation of the paper. The weights of the newly added layers were initialized with small values, and learning was done using the Adam optimizer (learning rate = 1e-4).
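
For concreteness, a hedged sketch of the training setup described above: the 1e-4 learning rate and the small-value initialization come from the text, while the shapes, the 0.02 stddev, and the per-pixel cross-entropy loss are assumptions for illustration.

    # Hedged sketch of the optimizer setup (TF 1.x style); values marked as
    # assumptions are not taken from the repository.
    import tensorflow as tf

    # A newly added layer initialized with small values (stddev is an assumption);
    # shown standalone here, it would feed the logits in a real graph.
    new_w = tf.get_variable("new_layer_w", shape=[7, 7, 512, 4096],
                            initializer=tf.truncated_normal_initializer(stddev=0.02))

    # Per-pixel logits and labels (shapes illustrative).
    logits = tf.placeholder(tf.float32, shape=[None, 256, 256, 151])
    labels = tf.placeholder(tf.int32, shape=[None, 256, 256])
    loss = tf.reduce_mean(
        tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=labels))

    # Adam with the learning rate quoted above; gradients are computed explicitly
    # so per-variable summaries can be attached later (see Observations).
    optimizer = tf.train.AdamOptimizer(learning_rate=1e-4)
    grads_and_vars = optimizer.compute_gradients(loss)
    train_op = optimizer.apply_gradients(grads_and_vars)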

Observations

  • The small batch size was necessary to fit the training model in memory, but it also explains the slow learning.
  • Concepts with many training examples seem to be correctly identified and segmented; in the example above you can see that cars and persons are identified better. I believe this can be improved by training for more epochs.
  • Resizing the images also causes a loss of information; you can notice this in the fact that smaller objects are segmented with less accuracy.

Now for the gradients,

  • If you watch the gradients closely, you will notice that the initial training happens almost entirely in the newly added layers; only after these layers are reasonably trained do the VGG layers see some gradient flow. This is understandable, since changes to the new layers affect the loss objective much more in the beginning. (A sketch of how to log these per-variable gradients follows this list.)
  • The earlier layers of the network are initialized with VGG weights and so conceptually require less tuning, unless the training data is extremely varied, which in this case it is not.
  • The first layer of the convolutional model captures low-level information, and since this is entirely dataset dependent, you can see the gradients adjusting the first-layer weights to accustom the model to the dataset.
  • The other conv layers from VGG have very small gradients flowing through them, as the concepts they capture are good enough for our end objective, segmentation.
  • This is the core reason transfer learning works so well; it seemed worth pointing this out while here.
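
A minimal, assumption-laden way to observe this behaviour is to attach a summary per variable to the gradients and compare the VGG layers with the new layers in TensorBoard. The helper name and usage below are illustrative, not the repository's code; it expects the grads_and_vars pairs from an optimizer.compute_gradients call like the one sketched in the Results section.

    # Hypothetical helper: log per-variable gradient histograms and norms so the
    # "new layers train first, VGG layers barely move" behaviour is visible in TensorBoard.
    import tensorflow as tf

    def add_gradient_summaries(grads_and_vars):
        for grad, var in grads_and_vars:
            if grad is not None:
                tf.summary.histogram(var.op.name + "/gradient", grad)
                tf.summary.scalar(var.op.name + "/gradient_norm", tf.global_norm([grad]))

    # Usage:
    # add_gradient_summaries(grads_and_vars)
    # summary_op = tf.summary.merge_all()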

Useful Links

  • Video of the presentation given by the authors on the paper - link