catalyst-team / Segmentation

License: Apache-2.0
Catalyst.Segmentation

Programming Languages

python

Projects that are alternatives to or similar to Segmentation

Pytorch Toolbelt
PyTorch extensions for fast R&D prototyping and Kaggle farming
Stars: ✭ 942 (+3388.89%)
Mutual labels:  pipeline, segmentation, image-segmentation, augmentation, image-processing
Albumentations
Fast image augmentation library and an easy-to-use wrapper around other libraries. Documentation: https://albumentations.ai/docs/ Paper about the library: https://www.mdpi.com/2078-2489/11/2/125
Stars: ✭ 9,353 (+34540.74%)
Mutual labels:  segmentation, image-segmentation, augmentation, image-processing
Caer
High-performance Vision library in Python. Scale your research, not boilerplate.
Stars: ✭ 452 (+1574.07%)
Mutual labels:  segmentation, image-segmentation, augmentation, image-processing
Open Solution Salt Identification
Open solution to the TGS Salt Identification Challenge
Stars: ✭ 124 (+359.26%)
Mutual labels:  pipeline, image-segmentation, image-processing
Catalyst
Accelerated deep learning R&D
Stars: ✭ 2,804 (+10285.19%)
Mutual labels:  image-segmentation, image-processing, reproducibility
Segmentation models.pytorch
Segmentation models with pretrained backbones. PyTorch.
Stars: ✭ 4,584 (+16877.78%)
Mutual labels:  segmentation, image-segmentation, image-processing
Multiclass Semantic Segmentation Camvid
Tensorflow 2 implementation of complete pipeline for multiclass image semantic segmentation using UNet, SegNet and FCN32 architectures on Cambridge-driving Labeled Video Database (CamVid) dataset.
Stars: ✭ 67 (+148.15%)
Mutual labels:  segmentation, image-segmentation, image-processing
Steppy
Lightweight Python library for fast and reproducible experimentation 🔬
Stars: ✭ 119 (+340.74%)
Mutual labels:  pipeline, image-processing, reproducibility
classification
Catalyst.Classification
Stars: ✭ 35 (+29.63%)
Mutual labels:  pipeline, reproducibility, augmentation
Targets
Function-oriented Make-like declarative workflows for R
Stars: ✭ 293 (+985.19%)
Mutual labels:  pipeline, reproducibility
Neural Pipeline
Neural networks training pipeline based on PyTorch
Stars: ✭ 315 (+1066.67%)
Mutual labels:  pipeline, image-segmentation
Instaboost
Code for ICCV2019 paper "InstaBoost: Boosting Instance Segmentation Via Probability Map Guided Copy-Pasting"
Stars: ✭ 368 (+1262.96%)
Mutual labels:  segmentation, augmentation
Segmentation models
Segmentation models with pretrained backbones. Keras and TensorFlow Keras.
Stars: ✭ 3,575 (+13140.74%)
Mutual labels:  segmentation, image-segmentation
Medpy
Medical image processing in Python
Stars: ✭ 321 (+1088.89%)
Mutual labels:  image-segmentation, image-processing
Slicer
Multi-platform, free open source software for visualization and image computing.
Stars: ✭ 263 (+874.07%)
Mutual labels:  segmentation, image-processing
Cvpr2021 Papers With Code
A collection of CVPR 2021 papers and open source projects
Stars: ✭ 7,138 (+26337.04%)
Mutual labels:  image-segmentation, image-processing
HyperDenseNet pytorch
Pytorch version of the HyperDenseNet deep neural network for multi-modal image segmentation
Stars: ✭ 58 (+114.81%)
Mutual labels:  segmentation, image-segmentation
Open Solution Home Credit
Open solution to the Home Credit Default Risk challenge 🏡
Stars: ✭ 397 (+1370.37%)
Mutual labels:  pipeline, reproducibility
Ttach
Image Test Time Augmentation with PyTorch!
Stars: ✭ 455 (+1585.19%)
Mutual labels:  segmentation, augmentation
Concise Ipython Notebooks For Deep Learning
Ipython Notebooks for solving problems like classification, segmentation, generation using latest Deep learning algorithms on different publicly available text and image data-sets.
Stars: ✭ 23 (-14.81%)
Mutual labels:  image-segmentation, image-processing

Catalyst logo

Accelerated DL & RL

Build Status CodeFactor PyPI version Docs PyPI Status

Twitter Telegram Slack Github contributors

PyTorch framework for Deep Learning research and development. It was developed with a focus on reproducibility, fast experimentation, and code/idea reuse, so you can research and develop something new rather than write yet another regular train loop.
Break the cycle - use Catalyst!

Project manifest. Part of PyTorch Ecosystem. Part of Catalyst Ecosystem:

  • Alchemy - Experiments logging & visualization
  • Catalyst - Accelerated Deep Learning Research and Development
  • Reaction - Convenient Deep Learning models serving

Catalyst at AI Landscape.


Catalyst.Segmentation Build Status Github contributors

You will learn how to build an image segmentation pipeline with transfer learning using the Catalyst framework.

Goals

  1. Install requirements
  2. Prepare data
  3. Run: raw data → production-ready model
  4. Get results
  5. Customize your own pipeline

1. Install requirements

Using local environment:

pip install -r requirements/requirements.txt
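
If you want to keep the install isolated, a minimal sketch that first creates a virtual environment (assuming Python 3 with the built-in venv module) is:

# optional: create and activate an isolated environment before installing
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements/requirements.txt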

Using docker:

This builds a catalyst-segmentation Docker image with the necessary libraries:

make docker-build
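
Afterwards you can check that the image exists; passing a repository name to docker images filters the local image list (the image name follows the make target above):

docker images catalyst-segmentation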

2. Get Dataset

Try on open datasets

You can use one of the open datasets:

export DATASET="isbi"

rm -rf data/
mkdir -p data

if [[ "$DATASET" == "isbi" ]]; then
    # binary segmentation
    # http://brainiac2.mit.edu/isbi_challenge/
    download-gdrive 1uyPb9WI0t2qMKIqOjFKMv1EtfQ5FAVEI isbi_cleared_191107.tar.gz
    tar -xf isbi_cleared_191107.tar.gz &>/dev/null
    mv isbi_cleared_191107 ./data/origin
elif [[ "$DATASET" == "voc2012" ]]; then
    # semantic segmentation
    # http://host.robots.ox.ac.uk/pascal/VOC/voc2012/
    wget http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCtrainval_11-May-2012.tar
    tar -xf VOCtrainval_11-May-2012.tar &>/dev/null
    mkdir -p ./data/origin/images/; mv VOCdevkit/VOC2012/JPEGImages/* $_
    mkdir -p ./data/origin/raw_masks; mv VOCdevkit/VOC2012/SegmentationClass/* $_
fi
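
Whichever dataset you pick, a quick sanity check (not part of the pipeline itself) is to list the target folder; for voc2012 it should contain the images/ and raw_masks/ subfolders:

ls data/origin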

Use your own dataset

Prepare your dataset

Data structure

Make sure that the final data folder has the required structure:

/path/to/your_dataset/
        images/
            image_1
            image_2
            ...
            image_N
        raw_masks/
            mask_1
            mask_2
            ...
            mask_N
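
As a quick sanity check (a minimal sketch that assumes one mask per image), compare the number of images and masks:

# both counts should match if every image has a mask
ls /path/to/your_dataset/images | wc -l
ls /path/to/your_dataset/raw_masks | wc -l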

Data location

  • The easiest way is to move your data:

    mv /path/to/your_dataset/* /catalyst.segmentation/data/origin
    

    This way you can run the pipeline with default settings.

  • If you prefer to leave the data in /path/to/your_dataset/:

    • In local environment:

      • Link directory
        ln -s /path/to/your_dataset $(pwd)/data/origin
        
      • Or just set the path to your dataset with DATADIR=/path/to/your_dataset when you start the pipeline.
    • Using docker

      You need to set:

         -v /path/to/your_dataset:/data \  # instead of the default $(pwd)/data/origin:/data
      

      in the script below to start the pipeline.

3. Segmentation pipeline

Fast&Furious: raw data → production-ready model

The pipeline will automatically guide you from raw data to a production-ready model.

We will initialize a U-Net model with a pre-trained ResNet-18 encoder. During the pipeline the model will be trained sequentially in two stages.

Binary segmentation pipeline

Run in local environment:

CUDA_VISIBLE_DEVICES=0 \
CUDNN_BENCHMARK="True" \
CUDNN_DETERMINISTIC="True" \
WORKDIR=./logs \
DATADIR=./data/origin \
IMAGE_SIZE=256 \
CONFIG_TEMPLATE=./configs/templates/binary.yml \
NUM_WORKERS=4 \
BATCH_SIZE=256 \
bash ./bin/catalyst-binary-segmentation-pipeline.sh
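
If BATCH_SIZE=256 does not fit into your GPU memory, lower BATCH_SIZE (and, if needed, IMAGE_SIZE) and keep the other variables unchanged; for example, replace the corresponding lines in the command above with the following (the values are illustrative, not tuned):

IMAGE_SIZE=128 \
BATCH_SIZE=64 \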

Run in docker:

export LOGDIR=$(pwd)/logs
docker run -it --rm --shm-size 8G --runtime=nvidia \
   -v $(pwd):/workspace/ \
   -v $LOGDIR:/logdir/ \
   -v $(pwd)/data/origin:/data \
   -e "CUDA_VISIBLE_DEVICES=0" \
   -e "USE_WANDB=1" \
   -e "LOGDIR=/logdir" \
   -e "CUDNN_BENCHMARK='True'" \
   -e "CUDNN_DETERMINISTIC='True'" \
   -e "WORKDIR=/logdir" \
   -e "DATADIR=/data" \
   -e "IMAGE_SIZE=256" \
   -e "CONFIG_TEMPLATE=./configs/templates/binary.yml" \
   -e "NUM_WORKERS=4" \
   -e "BATCH_SIZE=256" \
   catalyst-segmentation ./bin/catalyst-binary-segmentation-pipeline.sh

Semantic segmentation pipeline

Run in local environment:

CUDA_VISIBLE_DEVICES=0 \
CUDNN_BENCHMARK="True" \
CUDNN_DETERMINISTIC="True" \
WORKDIR=./logs \
DATADIR=./data/origin \
IMAGE_SIZE=256 \
CONFIG_TEMPLATE=./configs/templates/semantic.yml \
NUM_WORKERS=4 \
BATCH_SIZE=256 \
bash ./bin/catalyst-semantic-segmentation-pipeline.sh

Run in docker:

export LOGDIR=$(pwd)/logs
docker run -it --rm --shm-size 8G --runtime=nvidia \
   -v $(pwd):/workspace/ \
   -v $LOGDIR:/logdir/ \
   -v $(pwd)/data/origin:/data \
   -e "CUDA_VISIBLE_DEVICES=0" \
   -e "USE_WANDB=1" \
   -e "LOGDIR=/logdir" \
   -e "CUDNN_BENCHMARK='True'" \
   -e "CUDNN_DETERMINISTIC='True'" \
   -e "WORKDIR=/logdir" \
   -e "DATADIR=/data" \
   -e "IMAGE_SIZE=256" \
   -e "CONFIG_TEMPLATE=./configs/templates/semantic.yml" \
   -e "NUM_WORKERS=4" \
   -e "BATCH_SIZE=256" \
   catalyst-segmentation ./bin/catalyst-semantic-segmentation-pipeline.sh

The pipeline is now running and you don't have to do anything else; it only remains to wait for the best model!

Visualizations

You can use a W&B account for visualization right after running pip install wandb:

wandb: (1) Create a W&B account
wandb: (2) Use an existing W&B account
wandb: (3) Don't visualize my results
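
If you pick an existing account, you can also authenticate once up front with the standard wandb CLI (it will ask for your API key):

wandb login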

TensorBoard can also be used for visualization:

tensorboard --logdir=/catalyst.segmentation/logs
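
By default TensorBoard serves the dashboard at http://localhost:6006.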

4. Results

All results of all experiments can be found locally in WORKDIR, by default catalyst.segmentation/logs. The results of an experiment, for instance catalyst.segmentation/logs/logdir-191107-094627-2f31d790, contain:

checkpoints

  • The directory contains all checkpoints: best, last, and also the checkpoints of all stages.
  • best.pth and last.pth can also be found in the corresponding experiment in your W&B account.

configs

  • The directory contains the experiment's configs for reproducibility.

logs

  • The directory contains all logs of the experiment.
  • Metrics and logs can also be displayed in the corresponding experiment in your W&B account.

code

  • The directory contains the code the experiment was run with. This is necessary for complete reproducibility.

5. Customize your own pipeline

For your future experiments the framework provides powerful configs that allow you to optimize the configuration of the whole segmentation pipeline in a controlled and reproducible way.

Configure your experiments

  • Common settings for training stages and model parameters can be found in catalyst.segmentation/configs/_common.yml.

    • model_params: detailed configuration of models, including:
      • model, for instance ResnetUnet
      • detailed architecture description
      • whether to use a pretrained model
    • stages: you can configure training or inference in several stages with different hyperparameters. In our example:
      • optimizer params
      • first learn the head(s), then train the whole network
  • The CONFIG_TEMPLATE with the other experiment hyperparameters, such as data_params, is here: catalyst.segmentation/configs/templates/binary.yml. The config allows you to define (see the example after this list):

    • data_params: path, batch size, number of workers, and so on
    • callbacks_params: callbacks are used to execute code during training, for example to compute metrics or save checkpoints. Catalyst provides a wide variety of helpful callbacks, and you can also use custom ones.
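
For example, a minimal way to customize the pipeline is to copy a template, edit it, and point the pipeline at your copy; the file name my_binary.yml is illustrative, and the other environment variables are set as in section 3:

cp ./configs/templates/binary.yml ./configs/templates/my_binary.yml
# edit my_binary.yml: e.g. batch_size under data_params, or the callbacks_params list
CONFIG_TEMPLATE=./configs/templates/my_binary.yml \
bash ./bin/catalyst-binary-segmentation-pipeline.sh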

You can find many more options for configuring experiments in the Catalyst documentation.
