THU-DA-6D-Pose-Group / self6dpp

License: Apache-2.0
Self6D++: Occlusion-Aware Self-Supervised Monocular 6D Object Pose Estimation. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) 2021.

Programming Languages

Python, C, C++, Shell, CMake, GLSL, CUDA

Projects that are alternatives of or similar to self6dpp

libai
LiBai(李白): A Toolbox for Large-Scale Distributed Parallel Training
Stars: ✭ 284 (+531.11%)
Mutual labels:  self-supervised-learning
IAST-ECCV2020
IAST: Instance Adaptive Self-training for Unsupervised Domain Adaptation (ECCV 2020) https://teacher.bupt.edu.cn/zhuchuang/en/index.htm
Stars: ✭ 84 (+86.67%)
Mutual labels:  self-training
GDR-Net
GDR-Net: Geometry-Guided Direct Regression Network for Monocular 6D Object Pose Estimation. (CVPR 2021)
Stars: ✭ 167 (+271.11%)
Mutual labels:  6d-pose-estimation
DPOD
DPOD: 6D Pose Estimator
Stars: ✭ 75 (+66.67%)
Mutual labels:  6d-pose-estimation
Self-Supervised-GANs
TensorFlow implementation of the paper "Self-Supervised Generative Adversarial Networks"
Stars: ✭ 34 (-24.44%)
Mutual labels:  self-supervised-learning
sesemi
supervised and semi-supervised image classification with self-supervision (Keras)
Stars: ✭ 43 (-4.44%)
Mutual labels:  self-supervised-learning
Revisiting-Contrastive-SSL
Revisiting Contrastive Methods for Unsupervised Learning of Visual Representations. [NeurIPS 2021]
Stars: ✭ 81 (+80%)
Mutual labels:  self-supervised-learning
latent-pose-reenactment
The authors' implementation of the "Neural Head Reenactment with Latent Pose Descriptors" (CVPR 2020) paper.
Stars: ✭ 132 (+193.33%)
Mutual labels:  self-supervised-learning
exponential-moving-average-normalization
PyTorch implementation of EMAN for self-supervised and semi-supervised learning: https://arxiv.org/abs/2101.08482
Stars: ✭ 76 (+68.89%)
Mutual labels:  self-supervised-learning
CLMR
Official PyTorch implementation of Contrastive Learning of Musical Representations
Stars: ✭ 216 (+380%)
Mutual labels:  self-supervised-learning
S2-BNN
S2-BNN: Bridging the Gap Between Self-Supervised Real and 1-bit Neural Networks via Guided Distribution Calibration (CVPR 2021)
Stars: ✭ 53 (+17.78%)
Mutual labels:  self-supervised-learning
ccgl
CCGL: Contrastive Cascade Graph Learning (TKDE 2022).
Stars: ✭ 20 (-55.56%)
Mutual labels:  self-supervised-learning
ViCC
[WACV'22] Code repository for the paper "Self-supervised Video Representation Learning with Cross-Stream Prototypical Contrasting", https://arxiv.org/abs/2106.10137.
Stars: ✭ 33 (-26.67%)
Mutual labels:  self-supervised-learning
video-pace
code for our ECCV-2020 paper: Self-supervised Video Representation Learning by Pace Prediction
Stars: ✭ 95 (+111.11%)
Mutual labels:  self-supervised-learning
lossyless
Generic image compressor for machine learning. Pytorch code for our paper "Lossy compression for lossless prediction".
Stars: ✭ 81 (+80%)
Mutual labels:  self-supervised-learning
SelfGNN
A PyTorch implementation of "SelfGNN: Self-supervised Graph Neural Networks without explicit negative sampling" paper, which appeared in The International Workshop on Self-Supervised Learning for the Web (SSL'21) @ the Web Conference 2021 (WWW'21).
Stars: ✭ 24 (-46.67%)
Mutual labels:  self-supervised-learning
barlowtwins
Implementation of Barlow Twins paper
Stars: ✭ 84 (+86.67%)
Mutual labels:  self-supervised-learning
awesome-graph-self-supervised-learning
Awesome Graph Self-Supervised Learning
Stars: ✭ 805 (+1688.89%)
Mutual labels:  self-supervised-learning
SimSiam
Exploring Simple Siamese Representation Learning
Stars: ✭ 52 (+15.56%)
Mutual labels:  self-supervised-learning
DisCont
Code for the paper "DisCont: Self-Supervised Visual Attribute Disentanglement using Context Vectors".
Stars: ✭ 13 (-71.11%)
Mutual labels:  self-supervised-learning

Self6D++

This repo provides the PyTorch implementation of the work:

Gu Wang†, Fabian Manhardt†, Xingyu Liu, Xiangyang Ji, Federico Tombari. Occlusion-Aware Self-Supervised Monocular 6D Object Pose Estimation. IEEE Transactions on Pattern Analysis and Machine Intelligence. [Paper][arXiv][bibtex]

Requirements

  • Ubuntu 16.04/18.04, CUDA 10.2, Python >= 3.6, PyTorch >= 1.7.1, torchvision
  • Install detectron2 from source
  • sh scripts/install_deps.sh
  • Compile the C++ extensions for farthest point sampling (FPS), optical flow, chamfer distance, and the EGL renderer:
    sh scripts/compile_all.sh
    
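For intuition, the FPS extension accelerates the classic greedy farthest-point-sampling algorithm. Below is a minimal pure-NumPy sketch of that algorithm (our own illustration, not the repo's compiled code): each new point is the one farthest from everything already selected.

```python
import numpy as np

def farthest_point_sampling(points, k, seed=0):
    """Greedily pick k indices so each new pick maximizes its distance
    to the closest already-chosen point. points: (N, D) array."""
    rng = np.random.default_rng(seed)
    n = points.shape[0]
    chosen = [int(rng.integers(n))]  # random start point
    # distance of every point to the nearest chosen point so far
    dists = np.linalg.norm(points - points[chosen[0]], axis=1)
    for _ in range(k - 1):
        idx = int(np.argmax(dists))  # farthest from current selection
        chosen.append(idx)
        dists = np.minimum(dists, np.linalg.norm(points - points[idx], axis=1))
    return np.array(chosen)
```

The compiled extension does the same thing much faster on large model point clouds; this sketch only shows the logic.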

Datasets

Download the 6D pose datasets (LINEMOD, Occluded LINEMOD, YCB-Video) from the BOP website and VOC 2012 for background images.

The datasets folder should be structured as follows:

# recommend using soft links (ln -sf)
datasets/
├── BOP_DATASETS/        # https://bop.felk.cvut.cz/datasets/
│   ├── lm
│   ├── lmo
│   └── ycbv
├── lm_renders_blender/  # Blender-rendered images
└── VOCdevkit/

Train and test

  • Our method consists of two stages:

    • Stage I: train the detector, pose estimator, and refiner using PBR synthetic data
    • Stage II: self-supervised training for the pose estimator
  • In general, for each part, the training and test commands follow the template: <train/test_script.sh> <config_path> <gpu_ids> (other args)

    • <config_path> can be found at the directory configs/.
    • <gpu_ids> can be 0 or 1 for single-GPU training, or 0,1 for multi-GPU training. We use single-GPU training for all experiments.
  • The trained models can be found at Pan.Baidu, pw: g1x6, or Onedrive.

  • Some other resources (including test_bboxes and image_sets) can be found at Pan.Baidu, pw: 8nWC, or Onedrive.
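The generic template above can be sketched as a tiny command builder (our own illustration; the config filename in the usage note is a hypothetical placeholder — pick a real file from configs/):

```python
def build_cmd(script, config_path, gpu_ids, *extra):
    """Assemble '<train/test_script.sh> <config_path> <gpu_ids> (other args)'."""
    return " ".join([script, config_path, gpu_ids, *extra])
```

For example, `build_cmd("core/gdrn_modeling/train_gdrn.sh", "configs/example_config.py", "0")` yields the single-GPU training command string for that (hypothetical) config.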

Stage I: train the detector, pose estimator, and refiner using PBR synthetic data

Train and test Yolov4:

det/yolov4/train_yolov4.sh <config_path> <gpu_ids> (other args)

det/yolov4/test_yolov4.sh <config_path> <gpu_ids> (other args)

Train and test GDR-Net:

core/gdrn_modeling/train_gdrn.sh <config_path> <gpu_ids> (other args)

core/gdrn_modeling/test_gdrn.sh <config_path> <gpu_ids> (other args)

Train and test Refiner (DeepIM):

core/deepim/train_deepim.sh <config_path> <gpu_ids> (other args)

core/deepim/test_deepim.sh <config_path> <gpu_ids> (other args)

Stage II: self-supervised training for the pose estimator

Train and test Self6D++:

core/self6dpp/train_self6dpp.sh <config_path> <gpu_ids> (other args)

core/self6dpp/test_self6dpp.sh <config_path> <gpu_ids> (other args)
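Putting the two stages together, the full training pipeline is the ordered sequence of the scripts above. The runner below is a sketch of that ordering (config paths are placeholders to be filled from configs/; `dry_run` only assembles the commands):

```python
import subprocess

# Stage I: supervised training on PBR synthetic data.
STAGE1 = [
    "det/yolov4/train_yolov4.sh",
    "core/gdrn_modeling/train_gdrn.sh",
    "core/deepim/train_deepim.sh",
]
# Stage II: self-supervised training of the pose estimator.
STAGE2 = ["core/self6dpp/train_self6dpp.sh"]

def run_pipeline(config_for, gpu_ids="0", dry_run=True):
    """config_for maps each script to its config path. Returns the
    command lists; executes them in order unless dry_run is set."""
    cmds = [[s, config_for[s], gpu_ids] for s in STAGE1 + STAGE2]
    if not dry_run:
        for c in cmds:
            subprocess.run(["sh", *c], check=True)
    return cmds
```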

Citation

If you find this useful in your research, please consider citing:

@article{Wang_2021_self6dpp,
  title     = {Occlusion-Aware Self-Supervised Monocular {6D} Object Pose Estimation},
  author    = {Wang, Gu and Manhardt, Fabian and Liu, Xingyu and Ji, Xiangyang and Tombari, Federico},
  journal   = {IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)},
  year      = {2021},
  doi       = {10.1109/TPAMI.2021.3136301}
}