otaheri / GrabNet

License: other
GrabNet: A generative model for realistic 3D hands grasping unseen objects (ECCV 2020)

Programming Languages

python
139335 projects - #7 most used programming language
shell
77523 projects

Projects that are alternatives of or similar to GrabNet

AI Learning Hub
AI Learning Hub for Machine Learning, Deep Learning, Computer Vision and Statistics
Stars: ✭ 53 (-63.7%)
Mutual labels:  generative-model
Diffusion-Models-Seminar
No description or website provided.
Stars: ✭ 75 (-48.63%)
Mutual labels:  generative-model
style-vae
Implementation of VAE and Style-GAN Architecture Achieving State of the Art Reconstruction
Stars: ✭ 25 (-82.88%)
Mutual labels:  generative-model
GDPP
Generator loss to reduce mode collapse and improve the quality of generated samples.
Stars: ✭ 32 (-78.08%)
Mutual labels:  generative-model
Lr-LiVAE
Tensorflow implementation of Disentangling Latent Space for VAE by Label Relevant/Irrelevant Dimensions (CVPR 2019)
Stars: ✭ 29 (-80.14%)
Mutual labels:  generative-model
favorite-research-papers
Listing my favorite research papers 📝 from different fields as I read them.
Stars: ✭ 12 (-91.78%)
Mutual labels:  generative-model
pytorch-GAN
My pytorch implementation for GAN
Stars: ✭ 12 (-91.78%)
Mutual labels:  generative-model
pytorch-CycleGAN
Pytorch implementation of CycleGAN.
Stars: ✭ 39 (-73.29%)
Mutual labels:  generative-model
adaptive-f-divergence
A tensorflow implementation of the NIPS 2018 paper "Variational Inference with Tail-adaptive f-Divergence"
Stars: ✭ 20 (-86.3%)
Mutual labels:  generative-model
drl grasping
Deep Reinforcement Learning for Robotic Grasping from Octrees
Stars: ✭ 160 (+9.59%)
Mutual labels:  grasping
vqvae-2
PyTorch implementation of VQ-VAE-2 from "Generating Diverse High-Fidelity Images with VQ-VAE-2"
Stars: ✭ 65 (-55.48%)
Mutual labels:  generative-model
GraphCNN-GAN
Graph-convolutional GAN for point cloud generation. Code from ICLR 2019 paper Learning Localized Generative Models for 3D Point Clouds via Graph Convolution
Stars: ✭ 50 (-65.75%)
Mutual labels:  generative-model
ShapeFormer
Official repository for the ShapeFormer Project
Stars: ✭ 97 (-33.56%)
Mutual labels:  generative-model
graspit
The GraspIt! simulator
Stars: ✭ 142 (-2.74%)
Mutual labels:  grasping
Generalization-Causality
Reading notes on various studies of domain generalization, domain adaptation, causality, robustness, prompting, optimization, and generative models.
Stars: ✭ 482 (+230.14%)
Mutual labels:  generative-model
simplegan
Tensorflow-based framework to ease training of generative models
Stars: ✭ 19 (-86.99%)
Mutual labels:  generative-model
Awesome-Vision-Transformer-Collection
Variants of Vision Transformer and its downstream tasks
Stars: ✭ 124 (-15.07%)
Mutual labels:  generative-model
py-msa-kdenlive
Python script to load a Kdenlive (OSS NLE video editor) project file, and conform the edit on video or numpy arrays.
Stars: ✭ 25 (-82.88%)
Mutual labels:  generative-model
MidiTok
A convenient MIDI / symbolic music tokenizer for Deep Learning networks, with multiple strategies 🎶
Stars: ✭ 180 (+23.29%)
Mutual labels:  generative-model
causal-semantic-generative-model
Codes for Causal Semantic Generative model (CSG), the model proposed in "Learning Causal Semantic Representation for Out-of-Distribution Prediction" (NeurIPS-21)
Stars: ✭ 51 (-65.07%)
Mutual labels:  generative-model

GrabNet

Generating realistic hand meshes grasping unseen 3D objects (ECCV 2020)

[Report] [Run in Google Colab]

GRAB teaser image. [Paper Page] [Paper]

GrabNet is a generative model for 3D hand grasps. Given a 3D object mesh, GrabNet predicts several plausible hand grasps for it. GrabNet consists of two successive models, CoarseNet (a cVAE) and RefineNet. It is trained on a subset (right hand and objects only) of the GRAB dataset. For more details, please refer to the Paper or the project website.
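The following toy sketch illustrates this coarse-to-fine sampling idea. It is self-contained and runnable, but the layer sizes, module names, and 61-dimensional hand parameterization are illustrative placeholders, not GrabNet's actual architecture or API.

    # Toy illustration of GrabNet's two-stage design: a cVAE decoder
    # (standing in for CoarseNet) maps a latent sample plus the object's BPS
    # encoding to hand parameters, and a second network (standing in for
    # RefineNet) nudges them toward a better grasp. All sizes are assumed.
    import torch
    import torch.nn as nn

    BPS_DIM, LATENT_DIM, POSE_DIM = 1024, 16, 61     # hypothetical dimensions

    coarse_decoder = nn.Sequential(                  # CoarseNet stand-in
        nn.Linear(LATENT_DIM + BPS_DIM, 256), nn.ReLU(), nn.Linear(256, POSE_DIM))
    refine_net = nn.Sequential(                      # RefineNet stand-in
        nn.Linear(POSE_DIM + BPS_DIM, 256), nn.ReLU(), nn.Linear(256, POSE_DIM))

    @torch.no_grad()
    def sample_grasps(obj_bps, n_samples=5):
        """obj_bps: (BPS_DIM,) distances encoding one centered object."""
        cond = obj_bps.expand(n_samples, -1)         # repeat condition per sample
        z = torch.randn(n_samples, LATENT_DIM)       # sample the cVAE prior
        coarse = coarse_decoder(torch.cat([z, cond], dim=-1))
        refined = coarse + refine_net(torch.cat([coarse, cond], dim=-1))
        return refined                               # MANO-style hand parameters

    print(sample_grasps(torch.rand(BPS_DIM)).shape)  # torch.Size([5, 61])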

Below you can see some generated results from GrabNet:

Generated grasps for four test objects: binoculars, mug, camera, and toothpaste.

Check out the YouTube videos below for more details.

[Long Video] [Short Video]

Table of Contents

  • Description
  • Requirements
  • Installation
  • Getting started
  • Examples
  • Citation
  • License
  • Acknowledgments
  • Contact

Description

This implementation:

  • Can run GrabNet on arbitrary objects provided by users (including computing their BPS representation on the fly; see the sketch after this list).
  • Provides a quick and easy demo on Google Colab to generate grasps for any given object.
  • Can run GrabNet on the test objects of our dataset (with pre-computed object centering and BPS representation).
  • Can retrain GrabNet, allowing users to change details in the training configuration.
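
For the first point above, the sketch below shows the general basis point set (BPS) idea used to encode objects: a fixed set of basis points is sampled once, and an object point cloud is encoded as the distance from each basis point to its nearest object point. The basis size and ball radius are illustrative assumptions, not the values shipped in this repository's bps.npz.

    # Minimal BPS encoding sketch; parameters are illustrative assumptions.
    import numpy as np
    from scipy.spatial import cKDTree

    def make_basis(n_points=1024, radius=0.15, seed=0):
        """Sample fixed basis points uniformly inside a ball (done once, reused)."""
        rng = np.random.default_rng(seed)
        directions = rng.normal(size=(n_points, 3))
        directions /= np.linalg.norm(directions, axis=1, keepdims=True)
        radii = radius * rng.uniform(size=(n_points, 1)) ** (1.0 / 3.0)
        return directions * radii

    def bps_encode(obj_points, basis):
        """Distance from each basis point to its nearest point on the object."""
        distances, _ = cKDTree(obj_points).query(basis)
        return distances                              # shape: (n_points,)

    basis = make_basis()
    toy_cloud = np.random.rand(2000, 3) * 0.1 - 0.05  # toy centered point cloud
    print(bps_encode(toy_cloud, basis).shape)         # (1024,)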

Requirements

This package's dependencies are listed in requirements.txt and are installed in the steps below.

Installation

To install the dependencies, follow these steps:

  • Clone this repository:
    git clone https://github.com/otaheri/GrabNet
  • Install the dependencies with the following command:
    pip install -r requirements.txt
    

Getting started

For a quick demo of GrabNet, you can try it on Google Colab here.

In order to use the GrabNet model, please follow the steps below:

CoarseNet and RefineNet models

  • Download the GrabNet models from the GRAB website, and move the model files to the models folder as described below.
    GrabNet
        ├── grabnet
        │    ├── models
        │    │     ├── coarsenet.pt
        │    │     └── refinenet.pt
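
Once the files are in place, you can quickly check that they load. This assumes the downloads are ordinary PyTorch checkpoint files, which may not hold for every release:

    # Sanity check that the downloaded checkpoints are readable.
    import torch

    for path in ("grabnet/models/coarsenet.pt", "grabnet/models/refinenet.pt"):
        checkpoint = torch.load(path, map_location="cpu")
        print(path, "->", type(checkpoint).__name__)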

Mano models

  • Download the MANO models following the steps on the MANO repo (skip this part if you already did this for the GRAB dataset).

GrabNet data (only required for retraining the model or testing on the test objects)

  • Download the GrabNet dataset (ZIP files) from this website. Please do NOT unzip the files yet.

  • Put all the downloaded ZIP files for GrabNet in a folder.

  • Clone this repository and install the requirements:

    git clone https://github.com/otaheri/GrabNet
  • Run the following command to extract the ZIP files.

    python grabnet/data/unzip_data.py   --data-path $PATH_TO_FOLDER_WITH_ZIP_FILES \
                                        --extract-path $PATH_TO_EXTRACT_DATASET_TO
  • The extracted data should be in the following structure.

    GRAB
    ├── data
    │    ├── bps.npz
    │    ├── obj_info.npy
    │    ├── sbj_info.npy
    │    └── [split_name] from (test, train, val)
    │          ├── frame_names.npz
    │          ├── grabnet_[split_name].npz
    │          └── data
    │                ├── s1
    │                ├── ...
    │                └── s10
    └── tools
         ├── object_meshes
         └── subject_meshes
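
To confirm the extraction succeeded, you can inspect the top-level files. The snippet below assumes bps.npz is a standard NumPy archive and obj_info.npy holds a pickled dictionary; that is a guess based on the file extensions, not something this README documents.

    # Optional sanity check on the extracted layout; file formats are assumed.
    import numpy as np

    data_root = "GRAB/data"                    # adjust to your extract path
    bps = np.load(f"{data_root}/bps.npz")
    print("bps arrays:", bps.files)
    obj_info = np.load(f"{data_root}/obj_info.npy", allow_pickle=True).item()
    print("number of objects:", len(obj_info))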

Examples

After installing the GrabNet package and its dependencies, and downloading the data and the MANO models, you should be able to run the following examples:

  • Generate several grasps for new unseen objects (a sketch for centering your own mesh follows this list)

    python grabnet/tests/grab_new_objects.py --obj-path $NEW_OBJECT_PATH \
                                             --rhm-path $MANO_MODEL_FOLDER
  • Generate grasps for test data and compare to ground truth (GT)

    python grabnet/tests/test.py     --rhm-path $MANO_MODEL_FOLDER \
                                     --data-path $PATH_TO_GRABNET_DATA
  • Train GrabNet with new configurations

    To retrain GrabNet with a new configuration, please use the following command.

    python train.py  --work-dir $SAVING_PATH \
                     --rhm-path $MANO_MODEL_FOLDER \
                     --data-path $PATH_TO_GRABNET_DATA
  • Get the GrabNet evaluation errors on the dataset

    python eval.py     --rhm-path $MANO_MODEL_FOLDER \
                       --data-path $PATH_TO_GRABNET_DATA
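
As referenced in the first example above, if you bring your own object mesh, centering it at the origin first is a reasonable precaution, since the provided test objects come with pre-computed centering. The sketch below uses trimesh; the centering convention (centroid at the origin) is an assumption, not the repository's documented preprocessing.

    # Hypothetical preprocessing for a new object mesh before running
    # grab_new_objects.py; the centering convention is an assumption.
    import trimesh

    mesh = trimesh.load("my_object.obj", force="mesh")  # your own object file
    mesh.apply_translation(-mesh.centroid)    # move the centroid to the origin
    mesh.export("my_object_centered.obj")     # pass this path as $NEW_OBJECT_PATH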

Citation

@inproceedings{GRAB:2020,
  title = {{GRAB}: A Dataset of Whole-Body Human Grasping of Objects},
  author = {Taheri, Omid and Ghorbani, Nima and Black, Michael J. and Tzionas, Dimitrios},
  booktitle = {European Conference on Computer Vision (ECCV)},
  year = {2020},
  url = {https://grab.is.tue.mpg.de}
}

License

Software Copyright License for non-commercial scientific research purposes. Please read carefully the terms and conditions in the LICENSE file and any accompanying documentation before you download and/or use the GRAB data, model and software, (the "Data & Software"), including 3D meshes (body and objects), images, videos, textures, software, scripts, and animations. By downloading and/or using the Data & Software (including downloading, cloning, installing, and any other use of the corresponding github repository), you acknowledge that you have read these terms and conditions, understand them, and agree to be bound by them. If you do not agree with these terms and conditions, you must not download and/or use the Data & Software. Any infringement of the terms of this agreement will automatically terminate your rights under this License.

Acknowledgments

Special thanks to Mason Landry for his invaluable help with this project.

We thank:

  • Senya Polikovsky, Markus Hoschle (MH) and Mason Landry (ML) for the MoCap facility.
  • ML, Felipe Mattioni, David Hieber, and Alex Valis for MoCap cleaning.
  • ML and Tsvetelina Alexiadis for trial coordination, and MH and Felix Grimminger for 3D printing.
  • ML and Valerie Callaghan for voice recordings, Joachim Tesch for renderings.
  • Jonathan Williams for the website design, and Benjamin Pellkofer for the IT and web support.
  • Sergey Prokudin for early access to BPS code.
  • Sai Kumar Dwivedi and Nikos Athanasiou for proofreading.

Contact

The code of this repository was implemented by Omid Taheri and Nima Ghorbani.

For questions, please contact [email protected].

For commercial licensing (and all related questions for business applications), please contact [email protected].
