aws-samples / Mask Rcnn Tensorflow

License: Apache-2.0
Fork of Tensorpack to make breaking performance improvements to the Mask RCNN example. Training is approximately 2x faster than the original implementation on AWS.

Projects that are alternatives of or similar to Mask Rcnn Tensorflow

Sdtm mapper
AI SDTM mapping (R for ML, Python, TensorFlow for DL)
Stars: ✭ 27 (-3.57%)
Mutual labels:  jupyter-notebook
Sgdoptim.jl
A Julia package for Gradient Descent and Stochastic Gradient Descent
Stars: ✭ 27 (-3.57%)
Mutual labels:  jupyter-notebook
Sid
Official implementation for ICCV19 "Shadow Removal via Shadow Image Decomposition"
Stars: ✭ 28 (+0%)
Mutual labels:  jupyter-notebook
Mambo
Stars: ✭ 27 (-3.57%)
Mutual labels:  jupyter-notebook
Odsc east 2016
Stars: ✭ 27 (-3.57%)
Mutual labels:  jupyter-notebook
Anatomyofmatplotlib
Anatomy of Matplotlib -- tutorial developed for the SciPy conference
Stars: ✭ 943 (+3267.86%)
Mutual labels:  jupyter-notebook
Stat406
STAT406 @ UBC - "Elements of Statistical Learning"
Stars: ✭ 27 (-3.57%)
Mutual labels:  jupyter-notebook
Idb Idb Invest Coronavirus Impact Dashboard
Follow the impact of COVID-19 outbreak in Latin America in real time
Stars: ✭ 28 (+0%)
Mutual labels:  jupyter-notebook
Taxi
TAXI: a Taxonomy Induction Method based on Lexico-Syntactic Patterns, Substrings and Focused Crawling
Stars: ✭ 27 (-3.57%)
Mutual labels:  jupyter-notebook
Alfabattle2 1stproblem
Alfabattle 2.0 1st task Top-6 solution: 8-folds lgbm blend
Stars: ✭ 27 (-3.57%)
Mutual labels:  jupyter-notebook
Ucsandiegox Dse200x Python For Data Science
UC San Diego MicroMasters Program
Stars: ✭ 27 (-3.57%)
Mutual labels:  jupyter-notebook
Oxford Deepnlp 2017
🚀 🎉 ✨ Oxford Deep NLP 2017 Course Materials and Practicals, Solutions
Stars: ✭ 27 (-3.57%)
Mutual labels:  jupyter-notebook
Textclassifier
TensorFlow implementation
Stars: ✭ 944 (+3271.43%)
Mutual labels:  jupyter-notebook
Machine Learning Data Science Reuse
Gathers machine learning and data science techniques for problem solving.
Stars: ✭ 27 (-3.57%)
Mutual labels:  jupyter-notebook
Pacmap
PaCMAP: Large-scale Dimension Reduction Technique Preserving Both Global and Local Structure
Stars: ✭ 28 (+0%)
Mutual labels:  jupyter-notebook
World Models Sonic Pytorch
Attempt at reinforcement learning with curiosity for Sonic the Hedgehog games. Number 149 on OpenAI retro contest leaderboard, but more work needed
Stars: ✭ 27 (-3.57%)
Mutual labels:  jupyter-notebook
Sklearn ensae course
Materials for a course on scikit-learn at ENSAE
Stars: ✭ 27 (-3.57%)
Mutual labels:  jupyter-notebook
Data driven science python demos
IPython notebooks with demo code intended as a companion to the book "Data-Driven Science and Engineering: Machine Learning, Dynamical Systems, and Control" by J. Nathan Kutz and Steven L. Brunton
Stars: ✭ 27 (-3.57%)
Mutual labels:  jupyter-notebook
Advanced Gradient Obfuscating
Take further steps in the arms race of adversarial examples with only preprocessing.
Stars: ✭ 28 (+0%)
Mutual labels:  jupyter-notebook
Kispython
Python programming course
Stars: ✭ 27 (-3.57%)
Mutual labels:  jupyter-notebook

Mask RCNN

Performance-focused implementation of Mask R-CNN based on the Tensorpack implementation. The original paper: Mask R-CNN

Overview

This implementation of Mask RCNN is focused on increasing training throughput without sacrificing any accuracy. We do this by training with a batch size > 1 per GPU using FP16 and two custom TF ops.
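
The batching and FP16 details live in the custom ops and training code (see CODEBASE.md), but as a rough illustration of the mixed-precision part: TF 1.14 exposes an automatic mixed-precision graph rewrite that wraps an optimizer. The sketch below shows only that public API and is not the exact mechanism this repo uses, which also depends on the two custom TF ops.

    # Minimal sketch (not this repo's exact mechanism): TF 1.14 can rewrite the
    # graph so eligible ops run in FP16 with dynamic loss scaling by wrapping
    # the optimizer. The per-GPU batch size > 1 is handled by the input pipeline.
    import tensorflow as tf

    optimizer = tf.train.MomentumOptimizer(learning_rate=0.01, momentum=0.9)
    optimizer = tf.train.experimental.enable_mixed_precision_graph_rewrite(
        optimizer, loss_scale='dynamic')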

Status

Training on N GPUs (V100s in our experiments) with a per-GPU batch size of M is referred to as NxM training.

Training converges to target accuracy for configurations from 8x1 up to 32x4. Training throughput is substantially improved over the original Tensorpack code.
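
As a concrete reading of the NxM notation, the hypothetical helper below (not part of the codebase) expands a configuration string into GPU count, per-GPU batch size, and global batch size:

    # Hypothetical helper illustrating the NxM naming convention (not part of the repo).
    def parse_config(name):
        gpus, per_gpu = (int(v) for v in name.split('x'))
        return gpus, per_gpu, gpus * per_gpu  # GPUs, images/GPU, global batch size

    for cfg in ('8x1', '8x4', '32x4'):
        gpus, per_gpu, global_batch = parse_config(cfg)
        print(f'{cfg}: {gpus} GPUs x {per_gpu} images/GPU = global batch {global_batch}')
    # 8x1: 8 GPUs x 1 images/GPU = global batch 8
    # 8x4: 8 GPUs x 4 images/GPU = global batch 32
    # 32x4: 32 GPUs x 4 images/GPU = global batch 128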

A pre-built Docker image is available on Docker Hub as awssamples/mask-rcnn-tensorflow:latest. It is automatically built on each commit to master.

Notes

  • Running this codebase requires a custom TF binary, available under GitHub releases
    • The custom_op.patch contains the git diff from our custom TF
    • There are also pre-built TF wheels; the stable version is based on TF 1.14.
  • We give some details on the codebase and optimizations in CODEBASE.md

To launch training

  • Data preprocessing
    • We use COCO 2017; the data can be downloaded from COCO data.
    • The pre-trained resnet backbone can be downloaded from ImageNet-R50-AlignPadding.npz
    • The data folder needs to have the following directory structure (a sanity-check sketch for this layout follows the list):
    data/
      annotations/
        instances_train2017.json
        instances_val2017.json
      pretrained-models/
        ImageNet-R50-AlignPadding.npz
      train2017/
        # image files that are mentioned in the corresponding json
      val2017/
        # image files that are mentioned in corresponding json
    
    • If you want to use COCO 2014, please see here
    • If you want to use EKS or SageMaker, you need to create your own S3 bucket containing the data in the same directory structure, and change the S3 bucket name in the following files:
    • If you want to use EKS, you also need to create an FSx filesystem
      • You don't need to link your S3 bucket if you have followed the previous steps
      • You need to change the FSx filesystem id in the pv-fsx file.
  • Using a container is highly recommended for training
    • If you want to build your own image, please refer to our Dockerfile. Note that you need to rebuild TensorFlow when building the Docker image, which is time-consuming.
    • Alternatively, you can use our pre-built docker image: fewu/mask-rcnn-tensorflow:master-latest.
  • To run on AWS
    • To train with docker on EC2 (best performance), refer to Docker
    • To train with Amazon EKS, refer to EKS
    • To train with Amazon SageMaker, refer to SageMaker
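
To catch data-layout mistakes early, a quick sanity check like the hypothetical sketch below (assuming the data/ root shown above) can be run before launching training:

    # Quick sanity check for the expected data/ layout described above.
    # Hypothetical helper; adjust the root path to where your data actually lives.
    import os

    def check_data_dir(root='data'):
        expected = [
            'annotations/instances_train2017.json',
            'annotations/instances_val2017.json',
            'pretrained-models/ImageNet-R50-AlignPadding.npz',
            'train2017',
            'val2017',
        ]
        missing = [p for p in expected if not os.path.exists(os.path.join(root, p))]
        if missing:
            raise FileNotFoundError(f'Missing under {root}: {missing}')
        print(f'{root} layout looks complete.')

    check_data_dir()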

Training results

The results below were obtained on p3dn.24xlarge instances using EKS. 12-epoch training:

Num_GPUs x Images_Per_GPU | Training time | Box mAP | Mask mAP
8x4                       | 5.09h         | 37.47%  | 34.45%
16x4                      | 3.11h         | 37.41%  | 34.47%
32x4                      | 1.94h         | 37.20%  | 34.25%

24-epoch training:

Num_GPUs x Images_Per_GPU | Training time | Box mAP | Mask mAP
8x4                       | 9.78h         | 38.25%  | 35.08%
16x4                      | 5.60h         | 38.44%  | 35.18%
32x4                      | 3.33h         | 38.33%  | 35.12%
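
For a rough sense of scaling, the short calculation below (not from the repo) derives speedup and scaling efficiency from the 24-epoch training times reported above:

    # Scaling efficiency relative to the 8x4 run, using the 24-epoch times above.
    baseline_gpus, baseline_hours = 8, 9.78
    for gpus, hours in ((16, 5.60), (32, 3.33)):
        speedup = baseline_hours / hours
        efficiency = speedup / (gpus / baseline_gpus)
        print(f'{gpus} GPUs: {speedup:.2f}x speedup, {efficiency:.0%} scaling efficiency')
    # 16 GPUs: 1.75x speedup, 87% scaling efficiency
    # 32 GPUs: 2.94x speedup, 73% scaling efficiency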

Example output

Tensorpack fork point

Forked from the excellent Tensorpack repo at commit a9dce5b220dca34b15122a9329ba9ff055e8edc6
