
graspnet / graspnet-baseline

License: other
Baseline model for "GraspNet-1Billion: A Large-Scale Benchmark for General Object Grasping" (CVPR 2020)

Programming Languages

Python
139335 projects - #7 most used programming language
CUDA
1817 projects
C++
36643 projects - #6 most used programming language
C
50402 projects - #5 most used programming language
Shell
77523 projects

Projects that are alternatives of or similar to graspnet-baseline

Extrinsic lidar camera calibration
This is a package for extrinsic calibration between a 3D LiDAR and a camera, described in the paper "Improvements to Target-Based 3D LiDAR to Camera Calibration". This package is used for Cassie Blue's 3D LiDAR semantic mapping and automation.
Stars: ✭ 149 (+2.05%)
Mutual labels:  robotics, point-cloud
robotic-grasping
Antipodal Robotic Grasping using GR-ConvNet. IROS 2020.
Stars: ✭ 131 (-10.27%)
Mutual labels:  robotics, grasping
grasp multiObject
Robotic grasp dataset for multi-object multi-grasp evaluation with RGB-D data. This dataset is annotated using the same protocol as the Cornell Dataset and can be used as a multi-object extension of the Cornell Dataset.
Stars: ✭ 59 (-59.59%)
Mutual labels:  robotics, grasping
Pointcnn
PointCNN: Convolution On X-Transformed Points (NeurIPS 2018)
Stars: ✭ 1,120 (+667.12%)
Mutual labels:  robotics, point-cloud
Votenet
Deep Hough Voting for 3D Object Detection in Point Clouds
Stars: ✭ 1,183 (+710.27%)
Mutual labels:  robotics, point-cloud
Frustum Pointnets
Frustum PointNets for 3D Object Detection from RGB-D Data
Stars: ✭ 1,154 (+690.41%)
Mutual labels:  robotics, point-cloud
Awesome Robotics
A curated list of awesome links and software libraries that are useful for robots.
Stars: ✭ 478 (+227.4%)
Mutual labels:  robotics, point-cloud
Awesome Robotic Tooling
Tooling for professional robotic development in C++ and Python with a touch of ROS, autonomous driving and aerospace.
Stars: ✭ 1,876 (+1184.93%)
Mutual labels:  robotics, point-cloud
Cupoch
Robotics with GPU computing
Stars: ✭ 225 (+54.11%)
Mutual labels:  robotics, point-cloud
CurveNet
Official implementation of "Walk in the Cloud: Learning Curves for Point Clouds Shape Analysis", ICCV 2021
Stars: ✭ 94 (-35.62%)
Mutual labels:  point-cloud
academy
🎓 Video tutorials, slide decks and other training materials for developers learning about the FIWARE ecosystem.
Stars: ✭ 12 (-91.78%)
Mutual labels:  robotics
lepcc
Point Cloud Compression used in i3s Scene Layer Format
Stars: ✭ 22 (-84.93%)
Mutual labels:  point-cloud
multi-contact-grasping
This project implements a simulated grasp-and-lift process in V-REP using the Barrett Hand, with an interface through a Python remote API.
Stars: ✭ 52 (-64.38%)
Mutual labels:  grasping
neonavigation
A 2-D/3-DOF seamless global/local mobile robot motion planner package for ROS
Stars: ✭ 199 (+36.3%)
Mutual labels:  robotics
2019-UGRP-DPoom
2019 DGIST DPoom project under UGRP : SBC and RGB-D camera based full autonomous driving system for mobile robot with indoor SLAM
Stars: ✭ 35 (-76.03%)
Mutual labels:  robotics
SLAM AND PATH PLANNING ALGORITHMS
This repository contains the solutions to all the exercises of the MOOC on SLAM and path-planning algorithms given by Professor Claus Brenner at Leibniz University. This repository also contains my personal notes, most of them in PDF format, and many vector graphics created by myself to illustrate the theoretical concepts. Hope you enjoy it! :)
Stars: ✭ 107 (-26.71%)
Mutual labels:  robotics
FloBaRoID
Framework for dynamical system identification of floating-base rigid body tree structures
Stars: ✭ 53 (-63.7%)
Mutual labels:  robotics
awesome-multimodal-ml
Reading list for research topics in multimodal machine learning
Stars: ✭ 3,125 (+2040.41%)
Mutual labels:  robotics
RoboticsAcademy
Learn Robotics with JdeRobot
Stars: ✭ 160 (+9.59%)
Mutual labels:  robotics
PnC
Planning and Control Algorithms for Robotics
Stars: ✭ 22 (-84.93%)
Mutual labels:  robotics

GraspNet Baseline

Baseline model for "GraspNet-1Billion: A Large-Scale Benchmark for General Object Grasping" (CVPR 2020).

[paper] [dataset] [API] [doc]


Figure: top 50 grasps detected by our baseline model.

Requirements

  • Python 3
  • PyTorch 1.6
  • Open3D >= 0.8
  • TensorBoard 2.3
  • NumPy
  • SciPy
  • Pillow
  • tqdm
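
If you prefer an isolated environment, one possible setup is sketched below. This is only a hedged example, assuming conda is available and a CUDA-enabled PyTorch 1.6 build; the pip install from requirements.txt in the next section covers the same Python packages.

# Hypothetical environment setup; adjust the PyTorch build to your CUDA toolkit
conda create -n graspnet python=3.7 -y
conda activate graspnet
pip install torch==1.6.0 "open3d>=0.8" tensorboard==2.3.0 numpy scipy pillow tqdm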

Installation

Get the code.

git clone https://github.com/graspnet/graspnet-baseline.git
cd graspnet-baseline

Install packages via Pip.

pip install -r requirements.txt

Compile and install pointnet2 operators (code adapted from votenet).

cd pointnet2
python setup.py install

Compile and install knn operator (code adapted from pytorch_knn_cuda).

cd knn
python setup.py install

Install graspnetAPI for evaluation.

git clone https://github.com/graspnet/graspnetAPI.git
cd graspnetAPI
pip install .
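
After installation, a quick sanity check that the core dependencies resolve can look like the following (a minimal sketch; the graspnetAPI import name is assumed from the package installed above):

# Verify that the main Python dependencies are importable
python -c "import torch; import open3d; import graspnetAPI; print(torch.__version__, open3d.__version__)"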

Tolerance Label Generation

Tolerance labels are not included in the original dataset and need to be generated separately. Make sure you have downloaded the original dataset from GraspNet. The generation code is in dataset/generate_tolerance_label.py. You can generate the tolerance labels by running the script below (--dataset_root and --num_workers should be set according to your environment):

cd dataset
sh command_generate_tolerance_label.sh
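
For reference, the wrapper script presumably boils down to a direct call like the following; the paths are placeholders, and only the script name and the --dataset_root/--num_workers flags come from the text above.

# Hypothetical direct invocation, run from the dataset/ directory
python generate_tolerance_label.py --dataset_root /path/to/graspnet --num_workers 16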

Or you can download the tolerance labels from Google Drive/Baidu Pan and run:

mv tolerance.tar dataset/
cd dataset
tar -xvf tolerance.tar

Training and Testing

Training examples are shown in command_train.sh; --dataset_root, --camera and --log_dir should be set according to your environment. You can use TensorBoard to visualize the training process.
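
A sketch of what a training run might look like, assuming train.py is the entry point wrapped by command_train.sh; the dataset path, camera name and log directory are placeholders.

# Hypothetical training invocation; see command_train.sh for the exact flags used
python train.py --dataset_root /path/to/graspnet --camera realsense --log_dir logs/log_rs
# Monitor training curves with TensorBoard
tensorboard --logdir logs/log_rs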

Testing examples are shown in command_test.sh, which covers both inference and result evaluation; --dataset_root, --camera, --checkpoint_path and --dump_dir should be set according to your environment. Set --collision_thresh to -1 for fast inference.
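
Similarly, a hypothetical test run, assuming test.py is the entry point wrapped by command_test.sh; paths are placeholders and only the listed flags are documented above.

# Hypothetical inference + evaluation invocation; --collision_thresh -1 for fast inference
python test.py --dataset_root /path/to/graspnet --camera realsense \
    --checkpoint_path checkpoint-rs.tar --dump_dir logs/dump_rs --collision_thresh -1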

The pretrained weights can be downloaded from:

checkpoint-rs.tar and checkpoint-kn.tar are trained on RealSense data and Kinect data, respectively.
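
To confirm a download is intact, the checkpoint can be loaded directly. This assumes the .tar files are ordinary PyTorch checkpoints, as is common in votenet-derived code; do not untar them.

# Hypothetical sanity check on a downloaded checkpoint (format assumed)
python -c "import torch; ckpt = torch.load('checkpoint-rs.tar', map_location='cpu'); print(list(ckpt.keys()))"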

Demo

A demo program is provided for grasp detection and visualization using RGB-D images. Refer to command_demo.sh to run the program; --checkpoint_path should be set according to your settings (make sure you have downloaded the pretrained weights). The output should be similar to the example image shown in the repository.
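
A minimal sketch of running the demo, assuming demo.py is the script wrapped by command_demo.sh and that --checkpoint_path is the only flag that must be supplied:

# Hypothetical demo invocation with the RealSense-trained weights
python demo.py --checkpoint_path checkpoint-rs.tar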

To try your own data, modify get_and_process_data() in demo.py. Refer to doc/example_data/ for data preparation. RGB-D images and camera intrinsics are required for inference; factor_depth is the scale that converts raw depth values into meters (depth_in_meters = raw_depth / factor_depth). You can also add a workspace mask to obtain denser output.

Results

Results "In repo" report the model performance with single-view collision detection as post-processing. In evaluation we set --collision_thresh to 0.01.

Evaluation results on RealSense camera:

         |  Seen                |  Similar             |  Novel
         |  AP    AP0.8  AP0.4  |  AP    AP0.8  AP0.4  |  AP    AP0.8  AP0.4
In paper | 27.56  33.43  16.95  | 26.11  34.18  14.23  | 10.55  11.25   3.98
In repo  | 47.47  55.90  41.33  | 42.27  51.01  35.40  | 16.61  20.84   8.30

Evaluation results on Kinect camera:

         |  Seen                |  Similar             |  Novel
         |  AP    AP0.8  AP0.4  |  AP    AP0.8  AP0.4  |  AP    AP0.8  AP0.4
In paper | 29.88  36.19  19.31  | 27.84  33.19  16.62  | 11.51  12.92   3.56
In repo  | 42.02  49.91  35.34  | 37.35  44.82  30.40  | 12.17  15.17   5.51

Citation

Please cite our paper in your publications if it helps your research:

@inproceedings{fang2020graspnet,
  title={GraspNet-1Billion: A Large-Scale Benchmark for General Object Grasping},
  author={Fang, Hao-Shu and Wang, Chenxi and Gou, Minghao and Lu, Cewu},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  pages={11444--11453},
  year={2020}
}

License

All data, labels, code and models belong to the GraspNet team (MVIG, SJTU), are freely available for non-commercial use, and may be redistributed under these conditions. For commercial queries, please send an email to fhaoshu at gmail_dot_com and cc lucewu at sjtu.edu.cn.
