
imatge-upc / Rsis

License: MIT
Recurrent Neural Networks for Semantic Instance Segmentation

Projects that are alternatives of or similar to Rsis

Jupyter notebooks
Collection of jupyter notebooks
Stars: ✭ 127 (-0.78%)
Mutual labels:  jupyter-notebook
Doodlenet
A doodle classifier (CNN) trained on all 345 categories from the Quickdraw dataset.
Stars: ✭ 128 (+0%)
Mutual labels:  jupyter-notebook
Ipyexperiments
jupyter/ipython experiment containers for GPU and general RAM re-use
Stars: ✭ 128 (+0%)
Mutual labels:  jupyter-notebook
Graf
Official code release for "GRAF: Generative Radiance Fields for 3D-Aware Image Synthesis"
Stars: ✭ 127 (-0.78%)
Mutual labels:  jupyter-notebook
Robust Detection Benchmark
Code, data and benchmark from the paper "Benchmarking Robustness in Object Detection: Autonomous Driving when Winter is Coming" (NeurIPS 2019 ML4AD)
Stars: ✭ 128 (+0%)
Mutual labels:  jupyter-notebook
Insightface Just Works
Insightface face detection and recognition model that just works out of the box.
Stars: ✭ 127 (-0.78%)
Mutual labels:  jupyter-notebook
Colour Demosaicing
CFA (Colour Filter Array) Demosaicing Algorithms for Python
Stars: ✭ 127 (-0.78%)
Mutual labels:  jupyter-notebook
Ccm Site
NYU PSYCH-GA 3405.002 / DS-GS 3001.006 : Computational cognitive modeling
Stars: ✭ 127 (-0.78%)
Mutual labels:  jupyter-notebook
Chinese Chatbot
A Chinese chatbot trained on 100,000 dialogue pairs using an attention mechanism; it generates a meaningful reply to most general questions. The trained model has been uploaded and can be run directly.
Stars: ✭ 124 (-3.12%)
Mutual labels:  jupyter-notebook
Cuxfilter
GPU accelerated cross filtering with cuDF.
Stars: ✭ 128 (+0%)
Mutual labels:  jupyter-notebook
Blog
Public repo for HF blog posts
Stars: ✭ 126 (-1.56%)
Mutual labels:  jupyter-notebook
Focal Loss Pytorch
The RetinaNet focal loss function implemented in PyTorch, fully commented in Chinese. It can be used in one-stage object detection tasks to improve detection performance, or in classification tasks to mitigate data imbalance.
Stars: ✭ 126 (-1.56%)
Mutual labels:  jupyter-notebook
Multimodal Speech Emotion
TensorFlow implementation of "Multimodal Speech Emotion Recognition using Audio and Text," IEEE SLT-18
Stars: ✭ 128 (+0%)
Mutual labels:  jupyter-notebook
Celegansneuroml
NeuroML-based C. elegans model, contained in a neuroConstruct project, as well as c302
Stars: ✭ 127 (-0.78%)
Mutual labels:  jupyter-notebook
Thepythonmegacourse
Stars: ✭ 128 (+0%)
Mutual labels:  jupyter-notebook
Data science
daily curated links in DS, DL, NLP, ML
Stars: ✭ 127 (-0.78%)
Mutual labels:  jupyter-notebook
Rasa Ptbr Boilerplate
A template for building an FAQ chatbot using Rasa, Rocket.chat and Elasticsearch
Stars: ✭ 128 (+0%)
Mutual labels:  jupyter-notebook
Byu econ applied machine learning
The course work for the applied machine learning course I am teaching at BYU
Stars: ✭ 128 (+0%)
Mutual labels:  jupyter-notebook
Slicerjupyter
Extension for 3D Slicer that allows the application to be used from a Jupyter notebook
Stars: ✭ 127 (-0.78%)
Mutual labels:  jupyter-notebook
Ncnet
PyTorch code for Neighbourhood Consensus Networks
Stars: ✭ 128 (+0%)
Mutual labels:  jupyter-notebook

Recurrent Neural Networks for Semantic Instance Segmentation

See the paper on arXiv.

Installation

  • Clone the repo:
git clone https://github.com/imatge-upc/rsis.git
  • Install the requirements: pip install -r requirements.txt
  • Install PyTorch 0.2 (choose the whl file according to your setup):
pip install http://download.pytorch.org/whl/cu80/torch-0.2.0.post3-cp27-cp27mu-manylinux1_x86_64.whl  
pip install torchvision
  • Compile COCO Python API and add it to your PYTHONPATH:
cd src/coco/PythonAPI
make
# Run from the root directory of this project
export PYTHONPATH=$PYTHONPATH:./src/coco/PythonAPI
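
To sanity-check the installation, both of the following should succeed when run from the root of the project (so the relative PYTHONPATH entry resolves):

python -c "import torch; print(torch.__version__)"    # expect 0.2.0.post3
python -c "from pycocotools.coco import COCO"         # fails if the COCO API is not on PYTHONPATH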

Data

Pascal VOC 2012

  • Download Pascal VOC 2012:
wget http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCtrainval_11-May-2012.tar
tar -xvf VOCtrainval_11-May-2012.tar
# Berkeley augmented Pascal VOC
wget http://www.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/semantic_contours/benchmark.tgz # 1.3 GB
tar zxvf benchmark.tgz
  • Create a merged dataset out of the two sets of images and annotations:
python src/dataloader/pascalplus_gen.py --voc_dir /path/to/pascal --contours_dir /path/to/additional/dataset --vocplus_dir /path/to/merged
  • Precompute instance and semantic segmentation masks & ground truth files in COCO format:
python src/dataloader/pascal_precompute.py --split train --pascal_dir /path/to/merged

You must run this three times for the different splits (train, val and test).
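
A minimal loop that covers all three splits, using the same script and flags as above:

for split in train val test; do
    python src/dataloader/pascal_precompute.py --split $split --pascal_dir /path/to/merged
done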

Point args.pascal_dir to /path/to/merged.

CVPPP

Download the training CVPPP dataset from their website. In our case we just worked with the A1 dataset. Extract the A1 package and point args.leaves_dir to this folder. To obtain the test set for evaluation you will have to contact the organizers.
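
If the data directory is passed on the command line, pointing the loader at A1 might look like this (the --leaves_dir flag name is inferred from args.leaves_dir; verify it against the argument parser):

python train.py -model_name cvppp_model --leaves_dir /path/to/A1   # flag name inferred from args.leaves_dir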

Cityscapes

Download the Cityscapes dataset from their website. Extract the images and the labels into the same directory and point args.cityscapes_dir to it.
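
A sketch of the layout expected after extraction, assuming the standard Cityscapes packages (leftImg8bit_trainvaltest.zip and gtFine_trainvaltest.zip):

cityscapes/
├── leftImg8bit/
│   ├── train/
│   ├── val/
│   └── test/
└── gtFine/
    ├── train/
    ├── val/
    └── test/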

Training

  • Train the model with python train.py -model_name model_name. Checkpoints and logs will be saved under ../models/model_name.
  • Other arguments can be passed as well. For convenience, scripts to train with typical parameters are provided under scripts/.
  • Visdom can be enabled to monitor training losses and outputs:
    • First run the visdom server with python -m visdom.server.
    • Run training with the --visdom flag. Navigate to localhost:8097 to visualize training curves.
  • Plot loss curves at any time with python plot_curves.py -model_name model_name.
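
Putting these together, a minimal monitored run (the model name rsis_pascal is only an example) could be:

python -m visdom.server &                            # start the visdom server in the background
python train.py -model_name rsis_pascal --visdom     # training curves appear at localhost:8097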

Evaluation

We provide bash scripts to display results and evaluate models for the three datasets. You can find them under the scripts folder.

In the case of Cityscapes, the evaluation bash script will generate the results in the appropriate format for the official evaluation code.

For CVPPP, the evaluation bash script will generate the results in the appropriate format for the evaluation scripts provided with the dataset.
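
As an illustration, running one of them could look like this (eval_cityscapes.sh is a hypothetical name; check the scripts folder for the actual file names):

bash scripts/eval_cityscapes.sh   # hypothetical script name; output follows the official Cityscapes format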

Pretrained models

Download weights for models trained with Pascal VOC 2012, CVPPP or Cityscapes.

Extract and place the obtained folder under the models directory. You can then run the evaluation scripts with the downloaded model by setting args.model_name to the name of that folder.
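
For example, assuming the downloaded archive is named rsis_pascal.tar.gz (the name is illustrative):

tar xvf rsis_pascal.tar.gz       # archive and folder names are illustrative
mv rsis_pascal ../models/        # training saves checkpoints under ../models/, so evaluation looks there
# run the evaluation scripts with args.model_name set to rsis_pascal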

Contact

For questions and suggestions use the issues section or send an e-mail to [email protected]

Additional notes to GPI users

Helpful commands to train on the GPI cluster and get visualizations in your browser:

  • Start the visdom server with srun --tunnel $UID:$UID python -m visdom.server -port $UID.
  • Check which node the server was launched on (e.g. c3).
  • Run training with:
srun --gres=gpu:1,gmem:12G --mem=10G python train.py --visdom -port $UID -server http://c3

Notice that the port and the server must match the ones used when launching the visdom server.

  • Run echo $UID to find out which port you are using.
  • ssh tunnel (run this in local machine): ssh -L 8889:localhost:YOUR_UID -p2222 [email protected].
  • Navigate to localhost:8889 in your browser locally.