
U-Net: Semantic segmentation with PyTorch

(Figure: input and output for a random image in the test dataset)

Customized implementation of the U-Net in PyTorch for Kaggle's Carvana Image Masking Challenge from high definition images.

Quick start

Without Docker

  1. Install CUDA

  2. Install PyTorch

  3. Install the dependencies:

pip install -r requirements.txt

  4. Download the data and run training:

bash scripts/download_data.sh
python train.py --amp

With Docker

  1. Install Docker 19.03 or later:

curl https://get.docker.com | sh && sudo systemctl --now enable docker

  2. Install the NVIDIA container toolkit:

distribution=$(. /etc/os-release;echo $ID$VERSION_ID) \
   && curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add - \
   && curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update
sudo apt-get install -y nvidia-docker2
sudo systemctl restart docker

  3. Download and run the image:

sudo docker run --rm --shm-size=8g --ulimit memlock=-1 --gpus all -it milesial/unet

  4. Download the data and run training:

bash scripts/download_data.sh
python train.py --amp

Description

This model was trained from scratch with 5k images and scored a Dice coefficient of 0.988423 on over 100k test images.
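
For reference, the Dice coefficient measures the overlap between a predicted mask A and a ground-truth mask B as 2|A ∩ B| / (|A| + |B|). A minimal sketch of how it can be computed for a batch of binary masks (an illustration only, not necessarily the exact implementation used in this repository):

import torch

def dice_coeff(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    # pred and target: binary masks of shape (N, H, W)
    pred = pred.float().flatten(1)
    target = target.float().flatten(1)
    inter = (pred * target).sum(dim=1)
    total = pred.sum(dim=1) + target.sum(dim=1)
    return ((2 * inter + eps) / (total + eps)).mean()  # eps avoids division by zero on empty masks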

It can easily be used for multiclass segmentation, portrait segmentation, medical segmentation, and more.
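
For example, switching to multiclass segmentation mostly amounts to instantiating the network with more output channels. A sketch, assuming the UNet class takes n_channels and n_classes arguments (check unet/unet_model.py for the actual signature):

from unet import UNet

# 3 input channels (RGB), 5 output classes for a hypothetical 5-class problem
net = UNet(n_channels=3, n_classes=5)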

Usage

Note: Use Python 3.6 or newer.

Docker

A Docker image containing the code and its dependencies is available on Docker Hub. You can download it and jump into the container with (Docker >= 19.03):

docker run -it --rm --shm-size=8g --ulimit memlock=-1 --gpus all milesial/unet

Training

> python train.py -h
usage: train.py [-h] [--epochs E] [--batch-size B] [--learning-rate LR]
                [--load LOAD] [--scale SCALE] [--validation VAL] [--amp]

Train the UNet on images and target masks

optional arguments:
  -h, --help            show this help message and exit
  --epochs E, -e E      Number of epochs
  --batch-size B, -b B  Batch size
  --learning-rate LR, -l LR
                        Learning rate
  --load LOAD, -f LOAD  Load model from a .pth file
  --scale SCALE, -s SCALE
                        Downscaling factor of the images
  --validation VAL, -v VAL
                        Percent of the data that is used as validation (0-100)
  --amp                 Use mixed precision

By default, the scale is 0.5, so if you wish to obtain better results (but use more memory), set it to 1.

Automatic mixed precision is also available with the --amp flag. Mixed precision allows the model to use less memory and to be faster on recent GPUs by using FP16 arithmetic. Enabling AMP is recommended.
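
To illustrate what --amp does under the hood, here is a minimal sketch of a mixed-precision training step with torch.cuda.amp; the actual loop in train.py differs in its details:

import torch

def train_epoch_amp(net, loader, optimizer, criterion, device='cuda'):
    # One epoch of AMP training; a sketch, not the repository's train.py.
    scaler = torch.cuda.amp.GradScaler()
    for images, masks in loader:
        images, masks = images.to(device), masks.to(device)
        optimizer.zero_grad()
        with torch.cuda.amp.autocast():        # run the forward pass in FP16 where safe
            loss = criterion(net(images), masks)
        scaler.scale(loss).backward()          # scale the loss to avoid FP16 gradient underflow
        scaler.step(optimizer)                 # unscales gradients, then steps the optimizer
        scaler.update()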

Prediction

After training your model and saving it to MODEL.pth, you can easily test the output masks on your images via the CLI.

To predict a single image and save it:

python predict.py -i image.jpg -o output.jpg

To predict multiple images and show them without saving them:

python predict.py -i image1.jpg image2.jpg --viz --no-save

> python predict.py -h
usage: predict.py [-h] [--model FILE] --input INPUT [INPUT ...] 
                  [--output INPUT [INPUT ...]] [--viz] [--no-save]
                  [--mask-threshold MASK_THRESHOLD] [--scale SCALE]

Predict masks from input images

optional arguments:
  -h, --help            show this help message and exit
  --model FILE, -m FILE
                        Specify the file in which the model is stored
  --input INPUT [INPUT ...], -i INPUT [INPUT ...]
                        Filenames of input images
  --output INPUT [INPUT ...], -o INPUT [INPUT ...]
                        Filenames of output images
  --viz, -v             Visualize the images as they are processed
  --no-save, -n         Do not save the output masks
  --mask-threshold MASK_THRESHOLD, -t MASK_THRESHOLD
                        Minimum probability value to consider a mask pixel white
  --scale SCALE, -s SCALE
                        Scale factor for the input images

You can specify which model file to use with --model MODEL.pth.
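
If you prefer calling the model from Python rather than the CLI, the steps boil down to the following sketch. It assumes a 3-channel, single-class model, a plain state-dict checkpoint, and a sigmoid output; predict.py handles preprocessing, rescaling, and multiclass outputs more carefully:

import torch
from unet import UNet

# Dummy tensor standing in for a preprocessed RGB image, shape (1, 3, H, W), values in [0, 1]
x = torch.rand(1, 3, 320, 480)

net = UNet(n_channels=3, n_classes=1)
net.load_state_dict(torch.load('MODEL.pth', map_location='cpu'))
net.eval()

with torch.no_grad():
    probs = torch.sigmoid(net(x))   # per-pixel foreground probability
mask = (probs > 0.5).squeeze(0)     # equivalent of --mask-threshold, default 0.5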

Weights & Biases

The training progress can be visualized in real-time using Weights & Biases. Loss curves, validation curves, weights and gradient histograms, as well as predicted masks are logged to the platform.

When launching a training, a link will be printed in the console. Click on it to go to your dashboard. If you have an existing W&B account, you can link it by setting the WANDB_API_KEY environment variable.
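
For reference, the logging boils down to calls like the following sketch of the wandb API (the metric names here are made up; train.py defines its own):

import wandb

run = wandb.init(project='U-Net')   # prints the dashboard link to the console
for step in range(3):
    wandb.log({'train loss': 1.0 / (step + 1)})   # dummy values for illustration
run.finish()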

Pretrained model

A pretrained model is available for the Carvana dataset. It can also be loaded from torch.hub:

net = torch.hub.load('milesial/Pytorch-UNet', 'unet_carvana', pretrained=True)

The training was done with a 50% scale and bilinear upsampling.
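
A quick sanity check once the hub model is loaded; note that the number of output channels depends on the published weights:

import torch

net = torch.hub.load('milesial/Pytorch-UNet', 'unet_carvana', pretrained=True)
net.eval()
with torch.no_grad():
    out = net(torch.rand(1, 3, 320, 480))   # dummy RGB batch
print(out.shape)                            # (1, n_classes, 320, 480)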

Data

The Carvana data is available on the Kaggle website.

You can also download it using the helper script:

bash scripts/download_data.sh

The input images and target masks should be in the data/imgs and data/masks folders respectively (note that the imgs and masks folders should not contain any sub-folders or any other files, due to the greedy data loader). For Carvana, images are RGB and masks are black and white.

You can use your own dataset as long as you make sure it is loaded properly in utils/data_loading.py.
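
A minimal sketch of the shape a custom dataset might take, assuming the training loop consumes dicts with 'image' and 'mask' tensors like the loader in utils/data_loading.py (check that file for the exact contract):

import numpy as np
import torch
from PIL import Image
from torch.utils.data import Dataset

class MyDataset(Dataset):
    def __init__(self, img_paths, mask_paths):
        self.img_paths, self.mask_paths = img_paths, mask_paths

    def __len__(self):
        return len(self.img_paths)

    def __getitem__(self, i):
        img = np.asarray(Image.open(self.img_paths[i]).convert('RGB'))
        mask = np.asarray(Image.open(self.mask_paths[i]).convert('L'))
        return {
            'image': torch.from_numpy(img.copy()).permute(2, 0, 1).float() / 255.0,
            'mask': torch.from_numpy(mask.copy()).long(),
        }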


Original paper by Olaf Ronneberger, Philipp Fischer, Thomas Brox:

U-Net: Convolutional Networks for Biomedical Image Segmentation

(Figure: the U-Net network architecture)
