
saic-vul / point_based_clothing

License: MIT
Official PyTorch code for the paper: "Point-Based Modeling of Human Clothing" (ICCV 2021)

Programming Languages

Jupyter Notebook
Python

Projects that are alternatives to or similar to point_based_clothing

LiDAR fog sim
LiDAR fog simulation
Stars: ✭ 101 (+77.19%)
Mutual labels:  point-cloud
CVPR-2020-point-cloud-analysis
CVPR 2020 papers focusing on point cloud analysis
Stars: ✭ 48 (-15.79%)
Mutual labels:  point-cloud
pcc geo cnn v2
Improved Deep Point Cloud Geometry Compression
Stars: ✭ 55 (-3.51%)
Mutual labels:  point-cloud
Open3D-PointNet2-Semantic3D
Semantic3D segmentation with Open3D and PointNet++
Stars: ✭ 422 (+640.35%)
Mutual labels:  point-cloud
Python-for-Remote-Sensing
Python code for remote sensing applications will be uploaded here; I will try to teach everything I learn during my projects.
Stars: ✭ 20 (-64.91%)
Mutual labels:  point-cloud
fastDesp-corrProp
Fast Descriptors and Correspondence Propagation for Robust Global Point Cloud Registration
Stars: ✭ 16 (-71.93%)
Mutual labels:  point-cloud
mix3d
Mix3D: Out-of-Context Data Augmentation for 3D Scenes (3DV 2021 Oral)
Stars: ✭ 183 (+221.05%)
Mutual labels:  point-cloud
efficient online learning
Efficient Online Transfer Learning for 3D Object Detection in Autonomous Driving
Stars: ✭ 20 (-64.91%)
Mutual labels:  point-cloud
FLAT
[ICCV2021 Oral] Fooling LiDAR by Attacking GPS Trajectory
Stars: ✭ 52 (-8.77%)
Mutual labels:  point-cloud
persee-depth-image-server
Stream openni2 depth images over the network
Stars: ✭ 21 (-63.16%)
Mutual labels:  point-cloud
New-View-Synthesis
Collecting papers about new view synthesis
Stars: ✭ 437 (+666.67%)
Mutual labels:  neural-rendering
DeepI2P
DeepI2P: Image-to-Point Cloud Registration via Deep Classification. CVPR 2021
Stars: ✭ 130 (+128.07%)
Mutual labels:  point-cloud
softpool
SoftPoolNet: Shape Descriptor for Point Cloud Completion and Classification - ECCV 2020 oral
Stars: ✭ 62 (+8.77%)
Mutual labels:  point-cloud
ECCV-2020-point-cloud-analysis
ECCV 2020 papers focusing on point cloud analysis
Stars: ✭ 22 (-61.4%)
Mutual labels:  point-cloud
PyKinect2-PyQtGraph-PointClouds
Creating real-time dynamic Point Clouds using PyQtGraph, Kinect 2 and the python library PyKinect2.
Stars: ✭ 42 (-26.32%)
Mutual labels:  point-cloud
YOHO
[ACM MM 2022] You Only Hypothesize Once: Point Cloud Registration with Rotation-equivariant Descriptors
Stars: ✭ 76 (+33.33%)
Mutual labels:  point-cloud
zed-ros2-wrapper
ROS 2 wrapper beta for the ZED SDK
Stars: ✭ 61 (+7.02%)
Mutual labels:  point-cloud
AwesomeMLForDigitalMedia
A curated list of awesome machine learning resources in the context of digital media and (interactive) computer graphics.
Stars: ✭ 17 (-70.18%)
Mutual labels:  neural-rendering
ACGPN
"Towards Photo-Realistic Virtual Try-On by Adaptively Generating↔Preserving Image Content",CVPR 2020. (Modified from original with fixes for inference)
Stars: ✭ 48 (-15.79%)
Mutual labels:  virtual-try-on
pyRANSAC-3D
A Python tool for fitting primitive 3D shapes to point clouds using the RANSAC algorithm
Stars: ✭ 253 (+343.86%)
Mutual labels:  point-cloud

Point-Based Modeling of Human Clothing

Paper | Project page | Video

This is the official PyTorch code repository for the paper "Point-Based Modeling of Human Clothing" (accepted to ICCV 2021).

Setup

Build docker

  • Prerequisites: your NVIDIA driver should support CUDA 10.2; Windows and macOS are not supported.
  • Clone repo:
    • git clone https://github.com/izakharkin/point_based_clothing.git
    • cd point_based_clothing
    • git submodule init && git submodule update
  • Docker setup:
    • Download 10_nvidia.json and place it in the docker/ folder;
    • Create the docker image: build it on your own by running the two build commands (a hedged sketch follows below).
  • Inside the docker container, activate the environment: source activate pbc
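
The two build commands are not spelled out here; as a rough sketch (the image tag, Dockerfile location, published port, and mount path below are our assumptions, not the repo's documented values), building and starting the container could look like:

docker build -t point_based_clothing -f docker/Dockerfile .
docker run --gpus all -it -p 8087:8087 -v $(pwd):/workspace point_based_clothing

Publishing port 8087 is only needed if you plan to reach the Jupyter server described below from the host.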

Download data

  • Download the SMPL neutral model from the SMPLify project page:
    • Register, go to the Downloads section, download SMPLIFY_CODE_V2.ZIP, and unpack it;
    • Move smplify_public/code/models/basicModel_neutral_lbs_10_207_0_v1.0.0.pkl to data/smpl_models/SMPL_NEUTRAL.pkl.
  • Download the model checkpoints (~570 Mb) from Google Drive and place them in the checkpoints/ folder;
  • Download the sample data we provide to check the appearance fitting (~480 Mb) from Google Drive, unpack it, and place the psp/ folder in the samples/ folder.
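
Once everything is downloaded, a quick existence check can catch a misplaced file early. This is a minimal sketch; the paths simply mirror the steps above:

from pathlib import Path

# expected locations, taken from the download steps above
expected = [
    "data/smpl_models/SMPL_NEUTRAL.pkl",  # SMPL neutral model
    "checkpoints",                        # model checkpoints
    "samples/psp",                        # sample data for appearance fitting
]
for p in expected:
    print(f"{p}: {'OK' if Path(p).exists() else 'MISSING'}")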

Custom data

To run our pipeline on custom data (images or videos):

  • run our fork of Graphonomy to obtain clothing segmentation masks in our format;
  • run e.g. SMPLify or any other suitable method to obtain the SMPL parameters (3D body pose and shape ground truth).

We recommend running these methods on the internet_images/ test dataset first, to make sure that your outputs exactly match the format of internet_images/segmentations/cloth and internet_images/smpl/results.
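
An easy way to see what "our format" looks like is to peek at the reference dataset itself. A minimal sketch (it assumes internet_images/ sits in the working directory and only prints a few file names to eyeball the naming convention):

import os

# the reference folders named in the paragraph above
for d in ["internet_images/segmentations/cloth", "internet_images/smpl/results"]:
    print(d, "->", sorted(os.listdir(d))[:3])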

Run

We provide scripts for geometry fitting and inference, as well as for appearance fitting and inference.

Geometry (outfit code)

Fitting

To fit an outfit style code to a single image, one can run:

python fit_outfit_code.py --config_name=outfit_code/psp

The learned outfit codes are saved to out/outfit_code/outfit_codes_<dset_name>.pkl by default. The visualization of the process is in out/outfit_code/vis_<dset_name>/:

  • Coarse fitting stage: four outfit codes are initialized randomly and optimized simultaneously.

(animation: outfit_code_fitting_coarse)

  • Fine fitting stage: the mean of the found outfit codes is optimized further to possibly improve the reconstruction.

(animation: outfit_code_fitting_fine)

Note: the visibility_thr hyperparameter in fit_outfit_code.py may affect the quality of the resulting point cloud (e.g. make it more sparse). Feel free to tune it if the result looks imperfect.

(animation: vis_thr_360)
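
Once fitting has finished, the saved codes can be inspected. A minimal sketch, assuming <dset_name> is psp as in the command above; the exact structure of the pickled object is not documented here, so we only print what we find:

import pickle

# default output path from above, with <dset_name> = psp
with open("out/outfit_code/outfit_codes_psp.pkl", "rb") as f:
    outfit_codes = pickle.load(f)

print(type(outfit_codes))
if hasattr(outfit_codes, "keys"):  # if it is dict-like, list a few entries
    print(list(outfit_codes.keys())[:5])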

Inference

(animation: outfit_code_inference)

To further infer the fitted outfit style on the training subjects or on new ones, please see infer_outfit_code.ipynb. To run a Jupyter notebook server from the docker container, run this inside it:

jupyter notebook --ip=0.0.0.0 --port=8087 --no-browser 

Appearance (neural descriptors)

Fitting

To fit the clothing appearance to a sequence of frames, one can run:

python fit_appearance.py --config_name=appearance/psp_male-3-casual

The learned neural descriptors ntex0_<epoch>.pth and neural rendering network weights model0_<epoch>.pth are saved to out/appearance/<dset_name>/<subject_id>/<experiment_dir>/checkpoints/ by default. The visualization of the process is in out/appearance/<dset_name>/<subject_id>/<experiment_dir>/visuals/.
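
To sanity-check a finished run, one can peek at the newest checkpoint. A minimal sketch, assuming the default output layout above and that the .pth files are ordinary torch-serialized objects:

import glob
import torch

# find checkpoints under the default out/ tree, newest name last
paths = sorted(glob.glob("out/appearance/**/checkpoints/*.pth", recursive=True))
print(f"{len(paths)} checkpoint(s) found")
if paths:
    state = torch.load(paths[-1], map_location="cpu")  # inspect on CPU
    print(type(state))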

Inference

(animation: appearance_inference)

To further infer the fitted clothing point cloud and its appearance on the training subjects or on new ones, please see infer_appearance.ipynb. To run a Jupyter notebook server from the docker container, run this inside it:

jupyter notebook --ip=0.0.0.0 --port=8087 --no-browser 

Citation

If you find our work helpful, please do not hesitate to cite us:

@InProceedings{Zakharkin_2021_ICCV,
    author    = {Zakharkin, Ilya and Mazur, Kirill and Grigorev, Artur and Lempitsky, Victor},
    title     = {Point-Based Modeling of Human Clothing},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {14718-14727}
}

Non-commercial use only.

Related projects

We also thank the authors of the Cloth3D and PeopleSnapshot datasets.
