
zju3dv / Pvnet

License: apache-2.0
Code for "PVNet: Pixel-wise Voting Network for 6DoF Pose Estimation" CVPR 2019 oral

Projects that are alternatives to or similar to Pvnet

Yolov3 Complete Pruning
Provides multiple pruning variants of YOLOv3 and YOLOv3-Tiny to suit different needs.
Stars: ✭ 596 (-2.45%)
Mutual labels:  jupyter-notebook
Machine learning tutorials
Code, exercises and tutorials of my personal blog ! 📝
Stars: ✭ 601 (-1.64%)
Mutual labels:  jupyter-notebook
K Nearest Neighbors With Dynamic Time Warping
Python implementation of KNN and DTW classification algorithm
Stars: ✭ 604 (-1.15%)
Mutual labels:  jupyter-notebook
Takehomedatachallenges
My solution to the book "A Collection of Data Science Take-home Challenges"
Stars: ✭ 596 (-2.45%)
Mutual labels:  jupyter-notebook
Neuraltalk2
Efficient Image Captioning code in Torch, runs on GPU
Stars: ✭ 5,263 (+761.37%)
Mutual labels:  jupyter-notebook
Fastai dev
fast.ai early development experiments
Stars: ✭ 604 (-1.15%)
Mutual labels:  jupyter-notebook
Python Deepdive
Python Deep Dive Course - Accompanying Materials
Stars: ✭ 590 (-3.44%)
Mutual labels:  jupyter-notebook
Ubuntu Ranking Dataset Creator
A script that creates train, valid and test datasets for the ranking task from Ubuntu corpus dialogs.
Stars: ✭ 609 (-0.33%)
Mutual labels:  jupyter-notebook
Deep learning cookbook
Deep Learning Cookbook
Stars: ✭ 601 (-1.64%)
Mutual labels:  jupyter-notebook
Challenges
PyBites Code Challenges
Stars: ✭ 604 (-1.15%)
Mutual labels:  jupyter-notebook
Deeplearning Nlp
Introduction to Deep Learning for Natural Language Processing
Stars: ✭ 598 (-2.13%)
Mutual labels:  jupyter-notebook
Courses
fast.ai Courses
Stars: ✭ 5,253 (+759.74%)
Mutual labels:  jupyter-notebook
Cs231n spring 2017 assignment
My implementations of cs231n 2017
Stars: ✭ 603 (-1.31%)
Mutual labels:  jupyter-notebook
Basic model scratch
Implementation of some classic Machine Learning model from scratch and benchmarking against popular ML library
Stars: ✭ 595 (-2.62%)
Mutual labels:  jupyter-notebook
Stock Analysis Engine
Backtest 1000s of minute-by-minute trading algorithms for training AI with automated pricing data from IEX, Tradier and FinViz. Datasets and trading performance automatically published to S3 for building AI training datasets for teaching DNNs how to trade. Runs on Kubernetes and docker-compose. >150 million trading history rows generated from +5000 algorithms. Heads up: Yahoo's Finance API was disabled on 2019-01-03 https://developer.yahoo.com/yql/
Stars: ✭ 605 (-0.98%)
Mutual labels:  jupyter-notebook
Single Cell Tutorial
Single cell current best practices tutorial case study for the paper: Luecken and Theis, "Current best practices in single-cell RNA-seq analysis: a tutorial"
Stars: ✭ 594 (-2.78%)
Mutual labels:  jupyter-notebook
Sqlitebiter
A CLI tool to convert CSV / Excel / HTML / JSON / Jupyter Notebook / LDJSON / LTSV / Markdown / SQLite / SSV / TSV / Google-Sheets to a SQLite database file.
Stars: ✭ 601 (-1.64%)
Mutual labels:  jupyter-notebook
Instagram 3d Photo
A Chrome extension that adds a 3d photo effect to Instagram pages.
Stars: ✭ 611 (+0%)
Mutual labels:  jupyter-notebook
Info8010 Deep Learning
Lectures for INFO8010 - Deep Learning, ULiège
Stars: ✭ 608 (-0.49%)
Mutual labels:  jupyter-notebook
Tutorial
Stars: ✭ 602 (-1.47%)
Mutual labels:  jupyter-notebook

Good news! We have released a clean version of PVNet: clean-pvnet, which includes

  1. How to train PVNet on a custom dataset.
  2. How to use PVNet with a detector.
  3. Training and testing on the T-LESS dataset, where we detect multiple instances in an image.

PVNet: Pixel-wise Voting Network for 6DoF Pose Estimation

Introduction

PVNet: Pixel-wise Voting Network for 6DoF Pose Estimation
Sida Peng, Yuan Liu, Qixing Huang, Xiaowei Zhou, Hujun Bao
CVPR 2019 oral
Project Page

Any questions or discussions are welcome!

Truncation LINEMOD Dataset

Check TRUNCATION_LINEMOD.md for information about the Truncation LINEMOD dataset.

Installation

One way is to set up the environment with Docker: see How to install pvnet with docker.

Thanks to Joe Dinius for providing the Docker implementation.

Another way is to use the following commands.

  1. Set up a Python 3.6.7 environment
pip install -r requirements.txt

We need to compile several files; the compilation is known to work with PyTorch v0.4.1/v1.1 and gcc 5.4.0.

If you use an RTX GPU, you must use CUDA 10 and a PyTorch v1.1 build compiled against CUDA 10.
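
Before compiling, a quick sanity check can save a failed build. The following snippet is not part of the repository; it only assumes PyTorch is already installed:

import torch

print("PyTorch:", torch.__version__)            # expect 0.4.1 or 1.1
print("CUDA build:", torch.version.cuda)        # expect 10.0 for RTX GPUs
print("CUDA available:", torch.cuda.is_available())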

  2. Compile the RANSAC voting layer
ROOT=/path/to/pvnet
cd $ROOT/lib/ransac_voting_gpu_layer
python setup.py build_ext --inplace
  3. Compile some extension utils
cd $ROOT/lib/utils/extend_utils

Revise cuda_include and dart in build_extend_utils_cffi.py to match the CUDA installation on your machine.
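
If you are unsure where CUDA is installed, the following illustrative snippet (not part of the repository; it assumes nvcc is on your PATH) locates a candidate include path:

import os, shutil

nvcc = shutil.which("nvcc")                      # e.g. /usr/local/cuda-10.0/bin/nvcc
if nvcc:
    cuda_root = os.path.dirname(os.path.dirname(nvcc))
    print("cuda_include candidate:", os.path.join(cuda_root, "include"))
else:
    print("nvcc not on PATH; look for /usr/local/cuda*/include manually")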

sudo apt-get install libgoogle-glog-dev=0.3.4-0.1
sudo apt-get install libsuitesparse-dev=1:4.4.6-1
sudo apt-get install libatlas-base-dev=3.10.2-9
python build_extend_utils_cffi.py

If you cannot install libsuitesparse-dev=1:4.4.6-1, please install libsuitesparse, run build_ceres.sh and move ceres/ceres-solver/build/lib/libceres.so* to lib/utils/extend_utils/lib.

Add the lib folder under extend_utils to the LD_LIBRARY_PATH:

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/path/to/pvnet/lib/utils/extend_utils/lib

Dataset Configuration

Prepare the dataset

Download the LINEMOD dataset, which can be found here.

Download the LINEMOD_ORIG dataset, which can be found here.

Download the OCCLUSION_LINEMOD dataset, which can be found here.

Create the soft links

mkdir $ROOT/data
ln -s path/to/LINEMOD $ROOT/data/LINEMOD
ln -s path/to/LINEMOD_ORIG $ROOT/data/LINEMOD_ORIG
ln -s path/to/OCCLUSION_LINEMOD $ROOT/data/OCCLUSION_LINEMOD

Compute FPS keypoints

python lib/utils/data_utils.py
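
For context, this script selects keypoints on the object model with farthest point sampling (FPS). The following is a minimal NumPy sketch of the FPS idea; the function name is ours, and the actual implementation in data_utils.py may differ in details such as the choice of the seed point:

import numpy as np

def farthest_point_sampling(points, k, seed=0):
    # points: (N, 3) surface points of the object model
    rng = np.random.default_rng(seed)
    selected = [int(rng.integers(len(points)))]  # arbitrary seed point
    dists = np.linalg.norm(points - points[selected[0]], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(dists))              # farthest from the chosen set
        selected.append(nxt)
        dists = np.minimum(dists, np.linalg.norm(points - points[nxt], axis=1))
    return np.array(selected)

Greedy FPS spreads the keypoints over the object surface, which, as argued in the paper, makes the per-pixel voting better conditioned than using, e.g., bounding-box corners far from the object.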

Synthesize images for each object

See pvnet-rendering for information about the image synthesis.

Demo

Download the pretrained model of cat from here and put it at $ROOT/data/model/cat_demo/199.pth.

Run the demo

python tools/demo.py

If set up correctly, the output will look like the image below.

[image: cat demo result]

Visualization of the voting procedure

We added a Jupyter notebook, visualization.ipynb, that walks through the keypoint detection pipeline of PVNet, aiming to make it easier for readers to understand our paper. Thanks to Kudlur, M. for the suggestion.
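
For readers who prefer code, here is a minimal NumPy sketch of the pixel-wise voting idea for a single keypoint: every pixel in the object mask predicts a unit direction toward the keypoint, pairs of pixels generate keypoint hypotheses by line intersection, and each hypothesis is scored by how many pixel directions agree with it. This is an illustration only, not the repository's CUDA implementation, and all names are ours:

import numpy as np

def ransac_vote_keypoint(pixels, dirs, n_hyp=128, inlier_thresh=0.99, seed=0):
    # pixels: (N, 2) pixel coordinates inside the object mask
    # dirs:   (N, 2) unit vectors, each pointing from its pixel toward the keypoint
    rng = np.random.default_rng(seed)
    best_h, best_score = None, -1
    for _ in range(n_hyp):
        i, j = rng.choice(len(pixels), size=2, replace=False)
        A = np.stack([dirs[i], -dirs[j]], axis=1)   # intersect the two voting lines
        if abs(np.linalg.det(A)) < 1e-6:            # near-parallel directions, skip
            continue
        t = np.linalg.solve(A, pixels[j] - pixels[i])
        h = pixels[i] + t[0] * dirs[i]              # hypothesized keypoint location
        to_h = h - pixels
        to_h /= np.linalg.norm(to_h, axis=1, keepdims=True) + 1e-8
        score = int(((to_h * dirs).sum(axis=1) > inlier_thresh).sum())
        if score > best_score:
            best_h, best_score = h, score
    return best_h, best_score

In the full pipeline, the voted 2D keypoints and their 3D counterparts on the object model are passed to a PnP solver to recover the 6DoF pose.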

Training and testing

Training on the LINEMOD

Before training, remember to add the lib folder under extend_utils to the LD_LIBRARY_PATH

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/path/to/pvnet/lib/utils/extend_utils/lib

Training

python tools/train_linemod.py --cfg_file configs/linemod_train.json --linemod_cls cat

Testing

We provide pretrained models for each object, which can be found here.

Download the pretrained model and move it to $ROOT/data/model/{cls}_linemod_train/199.pth. For instance:

mkdir -p $ROOT/data/model/ape_linemod_train
mv ape_199.pth $ROOT/data/model/ape_linemod_train/199.pth

Testing

python tools/train_linemod.py --cfg_file configs/linemod_train.json --linemod_cls cat --test_model

Citation

If you find this code useful for your research, please use the following BibTeX entry.

@inproceedings{peng2019pvnet,
  title={PVNet: Pixel-wise Voting Network for 6DoF Pose Estimation},
  author={Peng, Sida and Liu, Yuan and Huang, Qixing and Zhou, Xiaowei and Bao, Hujun},
  booktitle={CVPR},
  year={2019}
}

Acknowledgement

This work is affiliated with the ZJU-SenseTime Joint Lab of 3D Vision, and its intellectual property belongs to SenseTime Group Ltd.

Copyright (c) ZJU-SenseTime Joint Lab of 3D Vision. All Rights Reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.