SimonVandenhende / Multi-Task-Learning-PyTorch

License: other
PyTorch implementation of multi-task learning architectures, incl. MTI-Net (ECCV2020).


Multi-Task Learning

This repo aims to implement several multi-task learning models and training strategies in PyTorch. The codebase complements the following works:

Multi-Task Learning for Dense Prediction Tasks: A Survey

Simon Vandenhende, Stamatios Georgoulis, Wouter Van Gansbeke, Marc Proesmans, Dengxin Dai and Luc Van Gool.

MTI-Net: Multi-Scale Task Interaction Networks for Multi-Task Learning

Simon Vandenhende, Stamatios Georgoulis and Luc Van Gool.

An up-to-date list of works on multi-task learning can be found here.

Installation

The code runs with recent PyTorch versions, e.g. 1.4. Assuming Anaconda, the most important packages can be installed as follows:

conda install pytorch torchvision cudatoolkit=10.2 -c pytorch
conda install imageio scikit-image        # Image operations
conda install -c conda-forge opencv       # OpenCV
conda install pyyaml easydict             # Configurations
conda install termcolor                   # Colorful print statements

We refer to the requirements.txt file for an overview of the package versions in our own environment.

Usage

Setup

The following files need to be adapted in order to run the code on your own machine:

  • Change the file paths to the datasets in utils/mypath.py, e.g. /path/to/pascal/.
  • Specify the output directory in configs/your_env.yml. All results will be stored under this directory.
  • The seism repository is needed to perform the edge evaluation. See the README in ./evaluation/seism/.
  • If you want to use the HRNet backbones, please download the pre-trained weights here. The provided config files use an HRNet-18 backbone. Download the hrnet_w18_small_model_v2.pth and save it to the directory ./models/pretrained_models/.

The datasets will be downloaded automatically to the specified paths when running the code for the first time.
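
In practice, the path configuration in utils/mypath.py boils down to a lookup from dataset name to root directory. The sketch below is illustrative only; the class and method names and the example paths are assumptions, not necessarily the repository's exact API:

```python
# Illustrative sketch of a path registry like utils/mypath.py.
# Class/method names and paths are assumptions; adapt the paths
# to where the datasets live on your own machine.
class MyPath:
    _db_roots = {
        'PASCAL': '/path/to/pascal/',
        'NYUD': '/path/to/nyud/',
    }

    @staticmethod
    def db_root_dir(database):
        """Return the root directory of a supported dataset."""
        if database not in MyPath._db_roots:
            raise NotImplementedError(f'No root directory set for dataset: {database}')
        return MyPath._db_roots[database]
```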

Training

The configuration files to train the model can be found in the configs/ directory. The model can be trained by running the following command:

python main.py --config_env configs/env.yml --config_exp configs/$DATASET/$MODEL.yml

Evaluation

We evaluate the best model at the end of training. The evaluation criterion is based on Equation 10 from our survey paper and requires pre-training a set of single-task networks beforehand. To speed up training, you can evaluate the model only during the final 10 epochs by adding the following line to your config file:

eval_final_10_epochs_only: True
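
Equation 10 of the survey measures multi-task performance as the average per-task relative gain (or drop) of the multi-task model over its single-task counterparts. A minimal sketch of that computation, assuming plain dicts of task scores (the function and variable names here are ours, not the repository's):

```python
def multi_task_performance(mtl_scores, stl_scores, lower_is_better):
    """Average per-task relative gain (in %) of a multi-task model over
    single-task baselines, in the spirit of Eq. 10 of the survey.
    `lower_is_better[task]` is True for metrics such as depth RMSE."""
    total = 0.0
    for task, stl in stl_scores.items():
        mtl = mtl_scores[task]
        sign = -1.0 if lower_is_better[task] else 1.0
        total += sign * (mtl - stl) / stl  # relative gain for this task
    return 100.0 * total / len(stl_scores)
```

For example, a 10% relative gain on segmentation that is exactly offset by a 10% relative loss on depth yields a multi-task performance of 0%.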

Support

The following datasets and tasks are supported.

Dataset   Sem. Seg.   Depth   Normals   Edge   Saliency   Human Parts
PASCAL    Y           N       Y         Y      Y          Y
NYUD      Y           Y       Aux       Aux    N          N

The following models are supported.

Backbone       HRNet   ResNet
Single-Task    Y       Y
Multi-Task     Y       Y
Cross-Stitch           Y
NDDR-CNN               Y
MTAN                   Y
PAD-Net        Y
MTI-Net        Y

References

This code repository is heavily based on the ASTMT repository. In particular, the evaluation and dataloaders were taken from there.

Citation

If you find this repo useful for your research, please consider citing the following works:

@article{vandenhende2021multi,
  author={S. Vandenhende and S. Georgoulis and W. Van Gansbeke and M. Proesmans and D. Dai and L. Van Gool},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  title={Multi-Task Learning for Dense Prediction Tasks: A Survey},
  year={2021},
  doi={10.1109/TPAMI.2021.3054719}}

@inproceedings{vandenhende2020mti,
  title={MTI-Net: Multi-Scale Task Interaction Networks for Multi-Task Learning},
  author={Vandenhende, Simon and Georgoulis, Stamatios and Van Gool, Luc},
  booktitle={European Conference on Computer Vision (ECCV)},
  year={2020}
}

@InProceedings{MRK19,
  Author    = {Kevis-Kokitsi Maninis and Ilija Radosavovic and Iasonas Kokkinos},
  Title     = {Attentive Single-Tasking of Multiple Tasks},
  Booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  Year      = {2019}
}

@article{pont2015supervised,
  title={Supervised evaluation of image segmentation and object proposal techniques},
  author={Pont-Tuset, Jordi and Marques, Ferran},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2015}
}

Updates

For more information see issue #1.

The initial code used the NYUDv2 dataloader from ASTMT. That implementation differed from the one we used for the experiments in the survey, so we have re-written the NYUDv2 dataloader to be consistent with our survey results. To avoid any issues, it is best to remove your old copy of the NYUDv2 dataset; the Python script will then automatically download the correct version the next time you use NYUDv2.

The depth task is evaluated in a pixel-wise fashion to be consistent with the survey. This is different from ASTMT, which averages the results across the images.
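
The difference between the two conventions can be made concrete with a small sketch (our own illustrative code, taking flat per-image lists of depth errors; images with different pixel counts make the two conventions disagree):

```python
import math

def rmse_pixelwise(preds, gts):
    """Pool squared errors over all pixels of all images, then take a
    single RMSE (pixel-wise convention, as used in the survey)."""
    sq_sum, n = 0.0, 0
    for pred, gt in zip(preds, gts):
        for p, g in zip(pred, gt):
            sq_sum += (p - g) ** 2
            n += 1
    return math.sqrt(sq_sum / n)

def rmse_per_image(preds, gts):
    """Compute one RMSE per image, then average over images
    (per-image convention, as in ASTMT)."""
    per_img = []
    for pred, gt in zip(preds, gts):
        sq = [(p - g) ** 2 for p, g in zip(pred, gt)]
        per_img.append(math.sqrt(sum(sq) / len(sq)))
    return sum(per_img) / len(per_img)
```

When all images have the same number of valid pixels the two agree; otherwise the per-image average weights small images more heavily than the pixel-wise pooling does.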

License

This software is released under a Creative Commons license that allows personal and research use only. For a commercial license, please contact the authors. You can view a license summary here.

Acknowledgements

The authors acknowledge support by Toyota via the TRACE project and MACCHINA (KULeuven, C14/18/065).
