RobotLocomotion / Pytorch Dense Correspondence

License: other
Code for "Dense Object Nets: Learning Dense Visual Object Descriptors By and For Robotic Manipulation"

Programming Languages

python

Projects that are alternatives to or similar to Pytorch Dense Correspondence

Visual Pushing Grasping
Train robotic agents to learn to plan pushing and grasping actions for manipulation with deep reinforcement learning.
Stars: ✭ 516 (+15.96%)
Mutual labels:  artificial-intelligence, robotics, manipulation, 3d, vision
Arc Robot Vision
MIT-Princeton Vision Toolbox for Robotic Pick-and-Place at the Amazon Robotics Challenge 2017 - Robotic Grasping and One-shot Recognition of Novel Objects with Deep Learning.
Stars: ✭ 224 (-49.66%)
Mutual labels:  artificial-intelligence, manipulation, 3d, vision
Ravens
Train robotic agents to learn pick and place with deep learning for vision-based manipulation in PyBullet. Transporter Nets, CoRL 2020.
Stars: ✭ 133 (-70.11%)
Mutual labels:  artificial-intelligence, robotics, manipulation, vision
Pose Interpreter Networks
Real-Time Object Pose Estimation with Pose Interpreter Networks (IROS 2018)
Stars: ✭ 104 (-76.63%)
Mutual labels:  artificial-intelligence, robotics, 3d
Home Platform
HoME: a Household Multimodal Environment is a platform for artificial agents to learn from vision, audio, semantics, physics, and interaction with objects and other agents, all within a realistic context.
Stars: ✭ 370 (-16.85%)
Mutual labels:  artificial-intelligence, 3d, vision
Iros20 6d Pose Tracking
[IROS 2020] se(3)-TrackNet: Data-driven 6D Pose Tracking by Calibrating Image Residuals in Synthetic Domains
Stars: ✭ 113 (-74.61%)
Mutual labels:  robotics, manipulation, 3d
Apc Vision Toolbox
MIT-Princeton Vision Toolbox for the Amazon Picking Challenge 2016 - RGB-D ConvNet-based object segmentation and 6D object pose estimation.
Stars: ✭ 277 (-37.75%)
Mutual labels:  artificial-intelligence, 3d, vision
Tsdf Fusion Python
Python code to fuse multiple RGB-D images into a TSDF voxel volume.
Stars: ✭ 464 (+4.27%)
Mutual labels:  artificial-intelligence, 3d, vision
3dmatch Toolbox
3DMatch - a 3D ConvNet-based local geometric descriptor for aligning 3D meshes and point clouds.
Stars: ✭ 571 (+28.31%)
Mutual labels:  artificial-intelligence, 3d, vision
Tsdf Fusion
Fuse multiple depth frames into a TSDF voxel volume.
Stars: ✭ 426 (-4.27%)
Mutual labels:  artificial-intelligence, 3d, vision
Cocoaai
🤖 The Cocoa Artificial Intelligence Lab
Stars: ✭ 134 (-69.89%)
Mutual labels:  artificial-intelligence, vision
Toycarirl
Implementation of an Inverse Reinforcement Learning algorithm on a toy car in a 2D world (Apprenticeship Learning via Inverse Reinforcement Learning, Abbeel & Ng, 2004)
Stars: ✭ 128 (-71.24%)
Mutual labels:  artificial-intelligence, robotics
calvin
CALVIN - A benchmark for Language-Conditioned Policy Learning for Long-Horizon Robot Manipulation Tasks
Stars: ✭ 105 (-76.4%)
Mutual labels:  vision, manipulation
aerial autonomy
Easily extendable package for interacting with and defining state machines for autonomous aerial systems
Stars: ✭ 22 (-95.06%)
Mutual labels:  robotics, manipulation
Awesome Robotic Tooling
Tooling for professional robotic development in C++ and Python with a touch of ROS, autonomous driving and aerospace.
Stars: ✭ 1,876 (+321.57%)
Mutual labels:  artificial-intelligence, robotics
sparse-scene-flow
This repo contains C++ code for a sparse scene flow method.
Stars: ✭ 23 (-94.83%)
Mutual labels:  robotics, vision
Dreamer
Dream to Control: Learning Behaviors by Latent Imagination
Stars: ✭ 269 (-39.55%)
Mutual labels:  artificial-intelligence, robotics
Pyswip
PySwip is a Python - SWI-Prolog bridge that enables querying SWI-Prolog from your Python programs. It features an (incomplete) SWI-Prolog foreign language interface, a utility class that makes querying Prolog easy, and a Pythonic interface.
Stars: ✭ 276 (-37.98%)
Mutual labels:  artificial-intelligence, robotics
Ai Deadlines
⏰ AI conference deadline countdowns
Stars: ✭ 3,852 (+765.62%)
Mutual labels:  artificial-intelligence, robotics
Grip
Program for rapidly developing computer vision applications
Stars: ✭ 314 (-29.44%)
Mutual labels:  robotics, vision

Updates

  • September 4, 2018: Tutorial and data now available! We have a tutorial available here, which walks step-by-step through getting this repo running.
  • June 26, 2019: We have updated the repo to pytorch 1.1 and CUDA 10. For the code used for the experiments in the paper, see here. A quick environment check is sketched below.
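
As a quick sanity check that your environment matches these versions, something like the following (a minimal sketch, assuming a working pytorch install) reports the PyTorch and CUDA builds:

import torch

print(torch.__version__)          # the updated repo targets pytorch 1.1
print(torch.version.cuda)         # built against CUDA 10
print(torch.cuda.is_available())  # True if a compatible GPU is visible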

Dense Correspondence Learning in PyTorch

In this project we learn Dense Object Nets, i.e., dense descriptor networks for previously unseen, potentially deformable objects, and potentially classes of objects.
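
As a rough illustration of the idea (not the repo's actual classes; DenseDescriptorNet and descriptor_dim are hypothetical names), a dense descriptor network can be sketched in PyTorch as a fully-convolutional backbone followed by a 1x1 projection and upsampling, so that every pixel receives a D-dimensional descriptor:

import torch
import torch.nn as nn
import torchvision.models as models

class DenseDescriptorNet(nn.Module):
    def __init__(self, descriptor_dim=3):
        super().__init__()
        resnet = models.resnet34(pretrained=False)
        # Drop the final pooling/fc layers so the output stays spatial
        # (a H/32 x W/32 feature map with 512 channels).
        self.backbone = nn.Sequential(*list(resnet.children())[:-2])
        # 1x1 conv projects 512 backbone channels down to descriptor_dim.
        self.head = nn.Conv2d(512, descriptor_dim, kernel_size=1)

    def forward(self, img):  # img: (N, 3, H, W)
        feat = self.head(self.backbone(img))
        # Upsample back to input resolution: one descriptor per pixel.
        return nn.functional.interpolate(
            feat, size=img.shape[-2:], mode="bilinear", align_corners=False)

net = DenseDescriptorNet(descriptor_dim=3)
descriptors = net(torch.randn(1, 3, 480, 640))  # shape (1, 3, 480, 640)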

We also demonstrate using Dense Object Nets for robotic manipulation tasks.

Dense Object Nets: Learning Dense Visual Descriptors by and for Robotic Manipulation

This is the reference implementation for our paper:

PDF | Video

Pete Florence*, Lucas Manuelli*, Russ Tedrake

Abstract: What is the right object representation for manipulation? We would like robots to visually perceive scenes and learn an understanding of the objects in them that (i) is task-agnostic and can be used as a building block for a variety of manipulation tasks, (ii) is generally applicable to both rigid and non-rigid objects, (iii) takes advantage of the strong priors provided by 3D vision, and (iv) is entirely learned from self-supervision. This is hard to achieve with previous methods: much recent work in grasping does not extend to grasping specific objects or other tasks, whereas task-specific learning may require many trials to generalize well across object configurations or other tasks. In this paper we present Dense Object Nets, which build on recent developments in self-supervised dense descriptor learning, as a consistent object representation for visual understanding and manipulation. We demonstrate they can be trained quickly (approximately 20 minutes) for a wide variety of previously unseen and potentially non-rigid objects. We additionally present novel contributions to enable multi-object descriptor learning, and show that by modifying our training procedure, we can either acquire descriptors which generalize across classes of objects, or descriptors that are distinct for each object instance. Finally, we demonstrate the novel application of learned dense descriptors to robotic manipulation. We demonstrate grasping of specific points on an object across potentially deformed object configurations, and demonstrate using class general descriptors to transfer specific grasps across objects in a class.
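
Concretely, the self-supervised signal is a pixelwise contrastive loss over pixel correspondences: matched pixels are pulled together in descriptor space, while non-matched pixels are pushed at least a margin apart. Below is a minimal sketch under assumed tensor shapes; the function name, arguments, and margin value are illustrative, not the repo's API:

import torch
import torch.nn.functional as F

def pixelwise_contrastive_loss(desc_a, desc_b, matches_a, matches_b,
                               non_matches_a, non_matches_b, margin=0.5):
    # desc_a, desc_b: (D, H*W) flattened descriptor images for two views.
    # matches_* / non_matches_*: 1-D long tensors of flattened pixel indices.
    # Matches: pull corresponding descriptors together (squared distance).
    d_match = (desc_a[:, matches_a] - desc_b[:, matches_b]).norm(dim=0)
    match_loss = (d_match ** 2).mean()
    # Non-matches: hinge loss pushes descriptors at least `margin` apart.
    d_non = (desc_a[:, non_matches_a] - desc_b[:, non_matches_b]).norm(dim=0)
    non_match_loss = (F.relu(margin - d_non) ** 2).mean()
    return match_loss + non_match_loss

# Toy usage with random descriptors and indices (illustrative only).
D, HW = 3, 480 * 640
desc_a, desc_b = torch.randn(D, HW), torch.randn(D, HW)
idx = lambda n: torch.randint(0, HW, (n,))
loss = pixelwise_contrastive_loss(desc_a, desc_b,
                                  idx(100), idx(100), idx(100), idx(100))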

Citing

If you find this code useful in your work, please consider citing:

@article{florencemanuelli2018dense,
  title={Dense Object Nets: Learning Dense Visual Object Descriptors By and For Robotic Manipulation},
  author={Florence, Peter and Manuelli, Lucas and Tedrake, Russ},
  journal={Conference on Robot Learning},
  year={2018}
}

Tutorial

Code Setup

Dataset

Training and Evaluation

Miscellaneous

Git management

To prevent the repo from growing in size, we recommend always doing "restart and clear outputs" before committing any Jupyter notebooks. If you'd like to save what your notebook looks like, you can always "download as .html", which is a great way to snapshot the state of that notebook and share it.
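
If you prefer to automate this rather than clicking through the Jupyter menu, the equivalent of "restart and clear outputs" can be scripted with nbformat (a sketch, assuming nbformat is installed; the notebook path is hypothetical):

import nbformat

def clear_notebook_outputs(path):
    # Strip outputs and execution counts so the committed .ipynb stays small.
    nb = nbformat.read(path, as_version=4)
    for cell in nb.cells:
        if cell.cell_type == "code":
            cell.outputs = []
            cell.execution_count = None
    nbformat.write(nb, path)

clear_notebook_outputs("dense_correspondence_demo.ipynb")  # hypothetical path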
