
cmitash / Physim Dataset Generator

License: BSD-2-Clause
Generates a physically realistic synthetic dataset of cluttered scenes using 3D CAD models to train CNN-based object detectors.

Programming Languages

python

Projects that are alternatives to or similar to Physim Dataset Generator

Ssd Knowledge Distillation
A PyTorch Implementation of Knowledge Distillation on SSD
Stars: ✭ 51 (-13.56%)
Mutual labels:  object-detection
Mask rcnn ros
The ROS Package of Mask R-CNN for Object Detection and Segmentation
Stars: ✭ 53 (-10.17%)
Mutual labels:  object-detection
Logo Detection Yolov2
Stars: ✭ 57 (-3.39%)
Mutual labels:  object-detection
Keras Retinanet For Teknofest 2019
Using RetinaNet for object detection from drone images in Teknofest istanbul 2019 Artificial Intelligence Competition
Stars: ✭ 50 (-15.25%)
Mutual labels:  object-detection
Math object detection
An image recognition/object detection model that detects handwritten digits and simple math operators. The output of the predicted objects (numbers & math operators) is then evaluated and solved.
Stars: ✭ 52 (-11.86%)
Mutual labels:  object-detection
Mish
Official Repository for "Mish: A Self Regularized Non-Monotonic Neural Activation Function" [BMVC 2020]
Stars: ✭ 1,072 (+1716.95%)
Mutual labels:  object-detection
Inceptionvisiondemo
🎥 iOS 11 demo application for dominant object detection.
Stars: ✭ 48 (-18.64%)
Mutual labels:  object-detection
Darknet ros
YOLO ROS: Real-Time Object Detection for ROS
Stars: ✭ 1,101 (+1766.1%)
Mutual labels:  object-detection
Hsd
Hierarchical Shot Detector (ICCV2019)
Stars: ✭ 53 (-10.17%)
Mutual labels:  object-detection
Rrpn
Code for 'RRPN: Radar Region Proposal Network for Object Detection in Autonomous Vehicles' (ICIP 2019)
Stars: ✭ 57 (-3.39%)
Mutual labels:  object-detection
Ssd
High quality, fast, modular reference implementation of SSD in PyTorch
Stars: ✭ 1,060 (+1696.61%)
Mutual labels:  object-detection
Trafficvision
The MIVisionX toolkit is a comprehensive set of computer vision and machine intelligence libraries, utilities, and applications bundled into a single toolkit.
Stars: ✭ 52 (-11.86%)
Mutual labels:  object-detection
Yolov4 Pytorch
A PyTorch repository of YOLOv4, attentive YOLOv4, and MobileNet YOLOv4 with PASCAL VOC and COCO
Stars: ✭ 1,070 (+1713.56%)
Mutual labels:  object-detection
Jacinto Ai Devkit
Training & Quantization of embedded-friendly Deep Learning / Machine Learning / Computer Vision models
Stars: ✭ 49 (-16.95%)
Mutual labels:  object-detection
Maskrcnn Modanet
A Mask R-CNN Keras implementation with Modanet annotations on the Paperdoll dataset
Stars: ✭ 59 (+0%)
Mutual labels:  object-detection
Pytorch Ssd
MobileNetV1, MobileNetV2, VGG based SSD/SSD-lite implementation in Pytorch 1.0 / Pytorch 0.4. Out-of-box support for retraining on Open Images dataset. ONNX and Caffe2 support. Experiment Ideas like CoordConv.
Stars: ✭ 1,054 (+1686.44%)
Mutual labels:  object-detection
Wheat
Wheat Detection challenge on Kaggle
Stars: ✭ 54 (-8.47%)
Mutual labels:  object-detection
Mrlabeler
An efficient, high-speed annotation tool for object detection
Stars: ✭ 60 (+1.69%)
Mutual labels:  object-detection
Ssd keras
Port of Single Shot MultiBox Detector to Keras
Stars: ✭ 1,101 (+1766.1%)
Mutual labels:  object-detection
Mmdetection object detection demo
How to train an object detection model with mmdetection
Stars: ✭ 55 (-6.78%)
Mutual labels:  object-detection

PHYSIM-DATASET-GENERATOR

This repository implements a software tool for synthesizing images of physically realistic cluttered scenes using 3D CAD models as described in our paper:

A Self-supervised Learning System for Object Detection using Physics Simulation and Multi-view Pose Estimation (pdf)(website)

By Chaitanya Mitash, Kostas Bekris, Abdeslam Boularias (Rutgers University).

To appear at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, Canada, 2017.

Citing

To cite the work:

@article{physim,
  title={A Self-supervised Learning System for Object Detection using Physics Simulation and Multi-view Pose Estimation},
  author={Mitash, Chaitanya and Bekris, Kostas and Boularias, Abdeslam},
  journal={arXiv:1703.03347},
  year={2017}
}

Setup

  1. Download and extract Blender
  2. In ~/.bashrc, add the line export BLENDER_PATH=/path/to/blender/blender
  3. Get the get-pip.py file from the pip documentation
  4. Install the PyYAML package into the Python interpreter bundled with Blender using the commands below (an optional sanity-check sketch follows).
$ /path/to/blender/2.xx/python/bin/python3.5m /path/to/get-pip/get-pip.py
$ /path/to/blender/2.xx/python/bin/python3.5m /path/to/blender/2.xx/python/bin/pip install pyyaml
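
The following is a minimal, optional sanity-check sketch, assuming a reasonably recent Blender that supports the --background and --python-expr command-line flags; it verifies that BLENDER_PATH is set and that the Python bundled with Blender can import PyYAML.

# Optional sanity check: confirm BLENDER_PATH is set and that Blender's
# bundled Python can import yaml after the pip install step above.
import os
import subprocess

blender = os.environ.get("BLENDER_PATH")
assert blender, "BLENDER_PATH is not set; add the export line to ~/.bashrc"

# A non-zero exit code here means the pyyaml install did not take effect.
subprocess.check_call([blender, "--background", "--python-expr", "import yaml"])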

Demo

  1. In ~/.bashrc, add the line export PHYSIM_GENDATA=/path/to/repo.
  2. Rename config.yml.table or config.yml.shelf to config.yml and modify the simulation parameters if required.
  3. Run python generate_pictures.py
  4. The generated data can be found in the folder rendered_images. The available environments are table and shelf (a minimal Python sketch of this workflow follows below).
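
For reference, here is a minimal sketch of the same demo workflow driven from Python. The script name, environment variable, and output folder are taken from this README; copying config.yml.shelf is just one of the two example environments, so adjust as needed.

# Minimal demo sketch: select the shelf environment, run the generator,
# and list the rendered output. Assumes the repository layout described above.
import os
import shutil
import subprocess

repo = os.environ["PHYSIM_GENDATA"]

# Step 2: pick one of the example environments (table or shelf).
shutil.copy(os.path.join(repo, "config.yml.shelf"), os.path.join(repo, "config.yml"))

# Step 3: run the generator.
subprocess.check_call(["python", "generate_pictures.py"], cwd=repo)

# Step 4: the generated scenes should now be under rendered_images/.
print(sorted(os.listdir(os.path.join(repo, "rendered_images"))))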

Output

  1. Images of scenes.
  2. Labeled bounding-box files for each scene in the format <label, tl_x, tl_y, br_x, br_y>; if the pixel-label mode is selected, a pixel-wise labeled image is generated for each scene instead, where each pixel value is the ground-truth class (a parsing sketch follows this list).
  3. Debug images indicating the bounding-boxes over the objects.
  4. .blend files to debug the simulation parameters.
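
As an illustration, a minimal sketch for reading one of the bounding-box label files is shown below. The <label, tl_x, tl_y, br_x, br_y> field order comes from this README; the exact file name and the whitespace/comma delimiting are assumptions, so adapt the parsing to the files you actually get.

# Parse one bounding-box label file into a list of (label, tl_x, tl_y, br_x, br_y) tuples.
def read_boxes(path):
    boxes = []
    with open(path) as f:
        for line in f:
            parts = line.replace(",", " ").split()
            if len(parts) != 5:
                continue  # skip blank or malformed lines
            label, tl_x, tl_y, br_x, br_y = parts
            boxes.append((label, int(float(tl_x)), int(float(tl_y)),
                          int(float(br_x)), int(float(br_y))))
    return boxes

# Example (file name is hypothetical): boxes = read_boxes("rendered_images/scene_0001.txt")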

Parameters

The example config files contain the simulation parameters (a loading sketch follows the listing below).

camera:
  num_poses: <number of views to render from>
  camera_poses: [[pos_x, pos_y, pos_z, quat_w, quat_x, quat_y, quat_z], ...]
  camera_intrinsics: [[f_x, 0.0, c_x],[0.0, f_y, c_y],[0.0, 0.0, 1.0]]

rest_surface:
  type: shelf
  surface_pose: [pos_x, pos_y, pos_z, quat_w, quat_x, quat_y, quat_z]

Models: [model_1, model_2, ...]

params:
  num_images: <number of training images>
  label_type: box
  minimum_objects_in_scene: <minimum objects per scene>
  maximum_objects_in_scene: <maximum objects per scene>
  range_x: [<min_x>, <max_x>]
  range_y: [<min_y>, <max_y>]
  range_z: [<min_z>, <max_z>]
  num_simulation_steps: <number of simulation steps to run>
  light_position_range_x: [<min_x>, <max_x>]
  light_position_range_y: [<min_y>, <max_y>]
  light_position_range_z: [<min_z>, <max_z>]
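
As a quick check, the sketch below loads config.yml with PyYAML and reads back a few of the parameters. The key names mirror the listing above; anything beyond that is illustrative only.

# Load the simulation configuration and print a short summary.
import yaml

with open("config.yml") as f:
    cfg = yaml.safe_load(f)

num_images = cfg["params"]["num_images"]
label_type = cfg["params"]["label_type"]        # "box" or the pixel-label mode
camera_poses = cfg["camera"]["camera_poses"]    # one [x, y, z, qw, qx, qy, qz] entry per view
models = cfg["Models"]                          # list of CAD model names to drop into the scene

print(num_images, label_type, len(camera_poses), models)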