minar09 / ACGPN

Licence: other
"Towards Photo-Realistic Virtual Try-On by Adaptively Generating↔Preserving Image Content",CVPR 2020. (Modified from original with fixes for inference)

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to ACGPN

SCT
SCT: Set Constrained Temporal Transformer for Set Supervised Action Segmentation (CVPR2020) https://arxiv.org/abs/2003.14266
Stars: ✭ 35 (-27.08%)
Mutual labels:  cvpr
face-recognition
Technology trends and related model resources for face recognition
Stars: ✭ 38 (-20.83%)
Mutual labels:  cvpr
CVPR2021-Papers-with-Code-Demo
A collection of the latest CVPR results, including papers, code, and demo videos. Recommendations welcome!
Stars: ✭ 752 (+1466.67%)
Mutual labels:  cvpr
bakeware
Compile Elixir applications into single, easily distributed executable binaries. Spawnfest 2020 project winner 🏆
Stars: ✭ 106 (+120.83%)
Mutual labels:  2020
LUVLi
[CVPR 2020] Re-hosting of the LUVLi Face Alignment codebase. Please download the codebase from the original MERL website by agreeing to all terms and conditions. By using this code, you agree to MERL's research-only licensing terms.
Stars: ✭ 24 (-50%)
Mutual labels:  cvpr
single-positive-multi-label
Multi-Label Learning from Single Positive Labels - CVPR 2021
Stars: ✭ 63 (+31.25%)
Mutual labels:  cvpr
AIPaperCompleteDownload
Complete download for papers in various top conferences
Stars: ✭ 64 (+33.33%)
Mutual labels:  cvpr
LED2-Net
CVPR 2021 Oral paper "LED2-Net: Monocular 360˚ Layout Estimation via Differentiable Depth Rendering" official PyTorch implementation.
Stars: ✭ 79 (+64.58%)
Mutual labels:  cvpr
Guided-I2I-Translation-Papers
Guided Image-to-Image Translation Papers
Stars: ✭ 117 (+143.75%)
Mutual labels:  cvpr
TailCalibX
Pytorch implementation of Feature Generation for Long-Tail Classification by Rahul Vigneswaran, Marc T Law, Vineeth N Balasubramaniam and Makarand Tapaswi
Stars: ✭ 32 (-33.33%)
Mutual labels:  cvpr
awesome-video-sum
A curated list of resources on video summarization, a computer-vision task tackled with machine learning and deep learning
Stars: ✭ 29 (-39.58%)
Mutual labels:  cvpr
FastAP-metric-learning
Code for CVPR 2019 paper "Deep Metric Learning to Rank"
Stars: ✭ 93 (+93.75%)
Mutual labels:  cvpr
CVPR-2020-point-cloud-analysis
CVPR 2020 papers focusing on point cloud analysis
Stars: ✭ 48 (+0%)
Mutual labels:  cvpr
BCNet
Deep Occlusion-Aware Instance Segmentation with Overlapping BiLayers [CVPR 2021]
Stars: ✭ 434 (+804.17%)
Mutual labels:  cvpr
SKNet-PyTorch
Nearly Perfect & Easily Understandable PyTorch Implementation of SKNet
Stars: ✭ 62 (+29.17%)
Mutual labels:  cvpr
awesome-visual-localization-papers
The relocalization task aims to estimate the 6-DoF pose of a novel (unseen) frame in the coordinate system given by the prior model of the world.
Stars: ✭ 60 (+25%)
Mutual labels:  cvpr
Java-Solutions-TCS-Xplore-Proctored-Assessment
Java Solution to the TCS Xplore Proctored Assessment 2020
Stars: ✭ 139 (+189.58%)
Mutual labels:  2020
Modaily-Aware-Audio-Visual-Video-Parsing
Code for CVPR 2021 paper Exploring Heterogeneous Clues for Weakly-Supervised Audio-Visual Video Parsing
Stars: ✭ 19 (-60.42%)
Mutual labels:  cvpr
MetaBIN
[CVPR2021] Meta Batch-Instance Normalization for Generalizable Person Re-Identification
Stars: ✭ 58 (+20.83%)
Mutual labels:  cvpr
cvpr-buzz
🐝 Explore Trending Papers at CVPR
Stars: ✭ 37 (-22.92%)
Mutual labels:  cvpr

Disclaimer

This is a slightly modified version of the DeepFashion_Try_On (ACGPN) repository, adapted for inference and visualization. Please refer to the original repository for details.

Towards Photo-Realistic Virtual Try-On by Adaptively Generating↔Preserving Image Content, CVPR'20.

Official code for the CVPR 2020 paper "Towards Photo-Realistic Virtual Try-On by Adaptively Generating↔Preserving Image Content". We rearrange the VITON dataset for easy access.

[Dataset Partition Label] [Sample Try-on Video] [Checkpoints]

[Dataset_Test] [Dataset_Train]

[Paper]

Inference

  1. Download the test dataset and unzip it.
  2. Download the checkpoints and unzip them.
  3. Run python test.py (a setup sketch follows below).
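
A minimal sketch of these three steps in Python (the archive names are placeholders, not the official file names; adjust them to whatever the links above download):

    # Unzip the test data and checkpoints next to test.py, then run inference.
    # "test_dataset.zip" and "checkpoints.zip" are hypothetical names.
    import subprocess
    import zipfile

    for archive in ["test_dataset.zip", "checkpoints.zip"]:
        with zipfile.ZipFile(archive) as zf:
            zf.extractall(".")  # steps 1-2: extract into the repo root

    subprocess.run(["python", "test.py"], check=True)  # step 3: run inference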

Dataset Partition

We present a criterion to measure the difficulty of try-on for a given reference image.

[Figure: the specific key points we choose to evaluate the try-on difficulty]

We use the pose map to calculate the difficulty level of try-on. The key motivation is that the more complex the occlusions and layout in the clothing area, the harder the try-on will be. The formula is given below:

[Figure: the formula for computing the try-on difficulty of a reference image]

where t is a certain key point, Mp' is the set of key points we take into consideration, and N is the size of the set.
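
The formula itself appears only as an image in the original README and is not reproduced above; the definitions describe an average of a per-key-point term over Mp'. A minimal sketch of that structure, with a hypothetical per-point term f:

    # Hypothetical structure of the difficulty score: average a per-key-point
    # term f(t) over the set Mp'. The actual term is defined in the (missing)
    # formula image; this sketch only mirrors the shape described in the text.
    def difficulty(keypoints, f):
        """keypoints: the set Mp' of pose key points; f: per-point term."""
        n = len(keypoints)                       # N, the size of the set
        return sum(f(t) for t in keypoints) / n  # average over t in Mp'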

Segmentation Label

0 -> Background
1 -> Hair
4 -> Upclothes
5 -> Left-shoe 
6 -> Right-shoe
7 -> Noise
8 -> Pants
9 -> Left_leg
10 -> Right_leg
11 -> Left_arm
12 -> Face
13 -> Right_arm
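
The table above as a Python dict, convenient for masking or colorizing parsing maps (names copied verbatim; ids 2 and 3 are unused in the table):

    # Segmentation label ids -> class names, as listed above.
    SEG_LABELS = {
        0: "Background", 1: "Hair", 4: "Upclothes",
        5: "Left-shoe", 6: "Right-shoe", 7: "Noise",
        8: "Pants", 9: "Left_leg", 10: "Right_leg",
        11: "Left_arm", 12: "Face", 13: "Right_arm",
    }

    # Example: select the clothing region from a per-pixel parsing array,
    # e.g. clothes_mask = (parsing == 4) for a NumPy parsing map.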

[Figure: sample images from different difficulty levels]

Sample Try-on Results

[Figure: sample try-on results]

Training Details

For better inference performance, models G and G2 should be trained for 200 epochs, while models G1 and U-Net should be trained for 20 epochs.
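
The same schedule written as a config dict (the key names here are hypothetical; the training scripts define their own options):

    # Epoch budget per module, per the note above.
    TRAIN_EPOCHS = {
        "G": 200,
        "G2": 200,
        "G1": 20,
        "U-Net": 20,
    }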

License

The use of this software is RESTRICTED to non-commercial research and educational purposes.

Citation

If you use our code or models in your research, please cite:

@InProceedings{Yang_2020_CVPR,
  author = {Yang, Han and Zhang, Ruimao and Guo, Xiaobao and Liu, Wei and Zuo, Wangmeng and Luo, Ping},
  title = {Towards Photo-Realistic Virtual Try-On by Adaptively Generating-Preserving Image Content},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month = {June},
  year = {2020}
}

Dataset

VITON Dataset

This dataset is presented in VITON and contains 19,000 image pairs, each consisting of a front-view woman image and a top clothing image. After removing invalid image pairs, 16,253 pairs remain, which are further split into a training set of 14,221 pairs and a testing set of 2,032 pairs.
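
A quick arithmetic check of the quoted split sizes:

    # Sanity check: train + test should equal the number of valid pairs.
    assert 14221 + 2032 == 16253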
