
yanx27 / EverybodyDanceNow_reproduce_pytorch

License: MIT
Everybody Dance Now reproduced in PyTorch


Projects that are alternatives to or similar to EverybodyDanceNow_reproduce_pytorch

Everybody-dance-now
Implementation of paper everybody dance now for Deep learning course project
Stars: ✭ 22 (-96.2%)
Mutual labels:  everybody-dance-now
openpose-pytorch
🔥 OpenPose api wrapper in PyTorch.
Stars: ✭ 52 (-91.02%)
Mutual labels:  openpose
motion trace colab
Automatic MMD motion tracing on Colab
Stars: ✭ 32 (-94.47%)
Mutual labels:  openpose
Openpose
OpenPose: Real-time multi-person keypoint detection library for body, face, hands, and foot estimation
Stars: ✭ 22,892 (+3853.71%)
Mutual labels:  openpose
JoJoPoseEstimation
Using OpenCV and OpenPose to recognize reference poses.
Stars: ✭ 15 (-97.41%)
Mutual labels:  openpose
motion trace bulk
Batch script for automated MMD motion tracing
Stars: ✭ 36 (-93.78%)
Mutual labels:  openpose
ClothingTransfer-NCNN
CT-Net, OpenPose, LIP_JPPNet, and DensePose running with ncnn ⚡ clothing transfer / virtual try-on (ClothingTransfer/Virtual-Try-On) ⚡
Stars: ✭ 166 (-71.33%)
Mutual labels:  openpose
openpose-docker
A docker build file for CMU openpose with Python API support
Stars: ✭ 68 (-88.26%)
Mutual labels:  openpose
FastPose
pytorch realtime multi person keypoint estimation
Stars: ✭ 36 (-93.78%)
Mutual labels:  openpose
OpenPoseDotNet
OpenPose wrapper written in C++ and C# for Windows
Stars: ✭ 55 (-90.5%)
Mutual labels:  openpose
posture recognition
Posture recognition based on common camera
Stars: ✭ 91 (-84.28%)
Mutual labels:  openpose
Openpose-based-GUI-for-Realtime-Pose-Estimate-and-Action-Recognition
GUI based on the python api of openpose in windows using cuda10 and cudnn7. Support body , hand, face keypoints estimation and data saving. Realtime gesture recognition is realized through two-layer neural network based on the skeleton collected from the gui.
Stars: ✭ 69 (-88.08%)
Mutual labels:  openpose
ESP32-CAM-MJPEG-Stream-Decoder-and-Control-Library
The library is MJPEG stream decoder based on libcurl and OpenCV, and written in C/C++.
Stars: ✭ 40 (-93.09%)
Mutual labels:  openpose
Neural-Re-Rendering-of-Humans-from-a-Single-Image
Pytorch implementation of the paper, Neural re-rendering of humans from a single image.
Stars: ✭ 77 (-86.7%)
Mutual labels:  pix2pixhd

EverybodyDanceNow reproduced in PyTorch

Written by Peihuan Wu, Jinghong Lin, Yutao Liao, Wei Qing, and Yan Xu, including the normalization and face-enhancement parts.

We train and evaluate on Ubuntu 16.04; if you are not on Linux, set nThreads=0 in EverybodyDanceNow_reproduce_pytorch/src/config/train_opt.py.
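For non-Linux setups, the relevant line in ./src/config/train_opt.py looks roughly like this (a sketch of the config excerpt; nThreads is the DataLoader worker count, and 0 disables worker subprocesses):

```python
# ./src/config/train_opt.py (excerpt)
# On Windows/macOS, multi-process data loading can fail,
# so run the DataLoader in the main process instead:
nThreads = 0  # number of DataLoader worker processes; 0 = no worker subprocesses
```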

Reference:

nyoki-mtl pytorch-EverybodyDanceNow

Lotayou everybody_dance_now_pytorch

Pre-trained models and source video

  • Download vgg19-dcbb9e9d.pth.crdownload here and put it in ./src/pix2pixHD/models/

  • Download pose_model.pth here and put it in ./src/PoseEstimation/network/weight/

  • The source video can be downloaded from here

  • Download the pre-trained vgg_16 for face enhancement here and put it in ./face_enhancer/

Full process

Pose2vid network

Make source pictures

  • Put the source video mv.mp4 in ./data/source/ and run make_source.py; the label images and the head coordinates will be saved in ./data/source/test_label_ori/ and ./data/source/pose_souce.npy (used in step 6). If you want to capture video from a camera instead, you can run ./src/utils/save_img.py directly.
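The head coordinates stored in the .npy file come from the detected pose keypoints. As an illustration only (not the repo's actual code), a head position can be estimated by averaging the facial keypoints in the OpenPose/COCO 18-keypoint layout, where index 0 is the nose, 14/15 the eyes, and 16/17 the ears; the helper name and the confidence threshold are assumptions:

```python
# Hypothetical helper: estimate a head centre from OpenPose COCO-18 keypoints.
# Each keypoint is an (x, y, confidence) triple; low-confidence points are ignored.
HEAD_IDS = [0, 14, 15, 16, 17]  # nose, right eye, left eye, right ear, left ear

def head_center(keypoints, conf_thresh=0.1):
    pts = [keypoints[i] for i in HEAD_IDS if keypoints[i][2] >= conf_thresh]
    if not pts:
        return None  # no reliable facial keypoint detected in this frame
    xs = [p[0] for p in pts]
    ys = [p[1] for p in pts]
    return (sum(xs) / len(pts), sum(ys) / len(pts))
```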

Make target pictures

  • Rename your own target video to mv.mp4, put it in ./data/target/, and run make_target.py; pose.npy, which contains the face coordinates (used in step 6), will be saved in ./data/target/.

Train and use pose2vid network

  • Run train_pose2vid.py and check the loss and the full training progress in ./checkpoints/

  • If you interrupt training and want to resume from the last checkpoint, set load_pretrain = './checkpoints/target/' in ./src/config/train_opt.py

  • Run normalization.py to rescale the label images; you can use two sample images from ./data/target/train/train_label/ and ./data/source/test_label_ori/ to normalize between the two skeleton sizes

  • Run transfer.py and get results in ./results
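The skeleton-size normalization above can be sketched as a linear remap of the source pose's extent onto the target's. This is a hypothetical 1-D simplification for illustration; the repo's normalization.py operates on the rendered label images:

```python
def rescale_pose(coords, src_min, src_max, tgt_min, tgt_max):
    """Hypothetical sketch: linearly remap source-skeleton coordinates so the
    pose spans the same extent (e.g. ankle-to-head) as the target skeleton."""
    scale = (tgt_max - tgt_min) / (src_max - src_min)
    return [tgt_min + (c - src_min) * scale for c in coords]

# e.g. a source pose spanning y = 0..100 remapped onto a target spanning y = 10..60:
# rescale_pose([0, 50, 100], 0, 100, 10, 60) → [10.0, 35.0, 60.0]
```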

Face enhancement network

Train and use face enhancement network

  • Run cd ./face_enhancer.
  • Run prepare.py and check the results in the data directory at the root of the repo (data/face/test_sync and data/face/test_real).
  • Run main.py to train the face enhancer, then run enhance.py to obtain the results.
    This is a comparison of the original (left), the generated image before face enhancement (middle), and after enhancement (right). FaceGAN learns the residual error between the real and the generated face.
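The residual idea can be sketched in a few lines. This is a hypothetical 1-D simplification, not the repo's network code; the function names and the [0, 1] pixel range are assumptions:

```python
def residual_target(real, generated):
    # During training, the enhancer regresses toward the real-minus-generated gap.
    return [r - g for r, g in zip(real, generated)]

def enhance(generated, residual, lo=0.0, hi=1.0):
    # At inference, the predicted residual is added back onto the generated
    # face crop and clamped to the valid pixel range.
    return [min(hi, max(lo, g + r)) for g, r in zip(generated, residual)]
```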

Performance of face enhancement

Gain results

  • cd back to the root directory and run make_gif.py to create a GIF from the resulting images.
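A minimal sketch of assembling result frames into a GIF with Pillow (the repo's make_gif.py may use a different library; the *.png frame pattern and 25 fps are assumptions):

```python
from pathlib import Path

from PIL import Image


def frames_to_gif(frame_dir, out_path, fps=25):
    """Stitch all PNG frames in frame_dir (sorted by name) into one GIF."""
    frames = [Image.open(p) for p in sorted(Path(frame_dir).glob("*.png"))]
    if not frames:
        raise ValueError("no PNG frames found in %s" % frame_dir)
    frames[0].save(
        out_path,
        save_all=True,              # write an animated, multi-frame GIF
        append_images=frames[1:],   # remaining frames follow the first
        duration=int(1000 / fps),   # per-frame duration in milliseconds
        loop=0,                     # loop forever
    )
```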

Result

TODO

  • Pose estimation
    • Pose
    • Face
    • Hand
  • pix2pixHD
  • FaceGAN
  • Temporal smoothing

Environments

Ubuntu 16.04
Python 3.6.5
PyTorch 0.4.1
OpenCV 3.4.4
