VTuber_Unity

Use Unity 3D character and Python deep learning algorithms to stream as a VTuber!

This is part of the OpenVTuberProject, which provides many toolkits for becoming a VTuber.

YouTube Playlist (Chinese, covers videos 1-4)


Credits

First of all, I'd like to give credit to the following projects from which I borrow code:

  • head-pose-estimation (LICENSE)
  • face-alignment (LICENSE)
  • GazeTracking (LICENSE)

And the virtual character unity-chan © UTJ/UCL.

Installation

Hardware

  • OS: Ubuntu 16.04 (18.04 may also work), Windows 10 64-bit, or macOS
  • (Optional but recommended) An NVIDIA GPU (tested with CUDA 9.0, 10.0 and 10.1, but may also work with other versions)

Software

  • Python 3.x (installation via Anaconda is recommended; mandatory for Windows users)

    • (Optional) It is recommended to use conda environments. Run conda create -n vtuber python=3.6. Activate it by conda activate vtuber.
  • Python libraries

    • Ubuntu:
      • Install the requirements by pip install -r requirements_(cpu or gpu).txt
      • If you have CUDA 10.1, run pip install onnxruntime-gpu for faster inference using the ONNX model.
    • Windows:
      • CPU:
        • pip install -r requirements_cpu.txt
        • if dlib cannot be properly installed, follow here.
      • GPU:
        • Install pytorch using conda. Example: conda install pytorch==1.2.0 torchvision==0.4.0 cudatoolkit=10.0 -c pytorch
        • Install other dependencies by pip install -r requirements_gpu.txt.
        • If you have CUDA 10, run pip install onnxruntime-gpu for faster inference using the ONNX model.
  • Optional

Example usage

Here we assume that you have installed the requirements and activated the virtual environment you are using.

0. Model download

You need to download the models here, extract and put into face_alignment/ckpts.

If you don't use onnxruntime, you can omit this step as the script will automatically download them for you.

1. Face detection test

Run python demo.py --debug. (add --cpu if you have CPU only)

You should see the following:


Left: CPU model. Right: GPU model run on a GTX1080Ti.

2. Synchronize with the virtual character

  1. Download and launch the binaries here depending on your OS to launch the unity window featuring the virtual character (unity-chan here). Important: Ensure that only one window is opened at a time!
  2. After the virtual character shows up, run python demo.py --connect to synchronize your face features with the virtual character. (Add --debug to see your face and --cpu if you have CPU only, as in step 1.)
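Under the hood, --connect has to stream the extracted features (head pose, blink, mouth values) to the Unity window, typically over a local socket. Below is a minimal sketch of that idea; the port number and the space-separated message layout are invented for illustration and are not the project's actual protocol:

```python
import socket

def send_features(roll, pitch, yaw, mouth_ratio,
                  host="127.0.0.1", port=5066):
    """Send one frame of face features to a listening Unity process.

    The address, port, and message format here are assumptions for
    illustration only, not VTuber_unity's real protocol.
    """
    # One whitespace-separated line per frame, terminated by newline.
    msg = f"{roll:.3f} {pitch:.3f} {yaw:.3f} {mouth_ratio:.3f}\n"
    with socket.create_connection((host, port), timeout=1.0) as sock:
        sock.sendall(msg.encode("ascii"))
```

On the Unity side, a listener would parse each line and apply the values to the character's bones each frame.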

You should see the following:


Left: CPU model. Right: GPU model run on a GTX1080Ti.

Enjoy your VTuber life!

Functionalities details

In this section, I describe the implemented functionalities and a little about the technology behind them.

Head pose estimation

Using head-pose-estimation and face-alignment, deep learning methods perform face detection and facial landmark detection: a face bounding box and 68 facial landmarks are detected, then a PnP algorithm recovers the head pose (the rotation of the face) from the landmarks. Finally, Kalman filters are applied to the pose to smooth it.

The character's head pose is synchronized.

As for the visualization, the white bounding box is the detected face, on top of which 68 green face landmarks are plotted. The head pose is represented by the green frustum and the axes in front of the nose.

Gaze estimation

Using GazeTracking, the eyes are first extracted using the landmarks enclosing them. The eye images are then converted to grayscale, and a pixel intensity threshold is applied to detect the iris (the dark part of the eye). Finally, the iris center is computed as the centroid of the dark area.
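The thresholding idea fits in a few lines. Note that the fixed threshold below is an illustrative simplification; in practice the threshold has to be tuned (or calibrated) per lighting condition:

```python
import numpy as np

def iris_center(eye_gray, threshold=60):
    """Locate the iris in a grayscale eye crop by intensity thresholding.

    Pixels darker than `threshold` are treated as iris; the center is
    the centroid of that dark region. Returns (x, y) in crop
    coordinates, or None if no pixel is dark enough.
    """
    mask = eye_gray < threshold
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())
```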

The character's gaze is not synchronized, since I haven't found a way to move unity-chan's eyes.

As for the visualization, the red crosses indicate the iris.

Miscellaneous

  1. Estimate eye aspect ratio: the eye aspect ratio can be used to detect blinking, but since this estimate is not accurate enough, I currently use automatic blinking instead.

  2. Estimate mouth aspect ratio: this ratio drives the opening of the character's mouth.

  3. The mouth distance is used to detect smiles and synchronize them with the character.
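A common formulation of the eye (or mouth) aspect ratio computes it from six contour landmarks, ordered p1..p6 around the opening, as (|p2-p6| + |p3-p5|) / (2·|p1-p4|). This is a sketch of that formula, not the project's exact code:

```python
import numpy as np

def aspect_ratio(pts):
    """Aspect ratio of an eye or mouth from 6 contour landmarks.

    pts: sequence of six (x, y) points p1..p6, with p1/p4 the horizontal
    corners and p2/p6, p3/p5 the vertical pairs. The ratio drops toward
    zero as the eye closes (or the mouth shuts).
    """
    p1, p2, p3, p4, p5, p6 = np.asarray(pts, dtype=float)
    vertical = np.linalg.norm(p2 - p6) + np.linalg.norm(p3 - p5)
    horizontal = np.linalg.norm(p1 - p4)
    return vertical / (2.0 * horizontal)
```

Thresholding this value over a few consecutive frames is the usual way to turn it into a blink (or mouth-open) event.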

Unity Project

If you want to customize the virtual character, you can find the Unity project in the release.

License

MIT License
