Realtime Facial Animation with On-the-fly Correctives

Hao Li, Jihun Yu, Yuting Ye, Chris Bregler

ACM Transactions on Graphics, Proceedings of the 40th ACM SIGGRAPH Conference and Exhibition (SIGGRAPH 2013), 07/2013

[paper]   [video]   [slides]   [fast forward]   [bibtex]

We introduce a real-time and calibration-free facial performance capture framework based on a sensor with video and depth input. In this framework, we develop an adaptive PCA model using shape correctives that adjust on-the-fly to the actor's expressions through incremental PCA-based learning. Since the fitting of the adaptive model progressively improves during the performance, we do not require an extra capture or training session to build this model. As a result, the system is highly deployable and easy to use: it can faithfully track any individual, starting from just a single face scan of the subject in a neutral pose. Like many real-time methods, we use a linear subspace to cope with incomplete input data and fast motion. To boost the training of our tracking model with reliable samples, we use a well-trained 2D facial feature tracker on the input video and an efficient mesh deformation algorithm that snaps the result of the previous step to high-frequency details in visible depth-map regions. We show that the combination of dense depth maps and texture features around the eyes and lips is essential for capturing natural dialogues and nuanced actor-specific emotions. We demonstrate that using an adaptive PCA model not only improves the fitting accuracy for tracking but also increases the expressiveness of the retargeted character.
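To illustrate the on-the-fly corrective idea, the sketch below shows a generic incremental SVD-based PCA update of a corrective subspace anchored at the neutral scan: each reliably tracked frame can refine the basis without a separate training session. This is a minimal illustration under our own assumptions, not the authors' implementation; the class and parameter names (e.g. `AdaptiveCorrectiveBasis`, `max_components`) are hypothetical.

```python
import numpy as np

class AdaptiveCorrectiveBasis:
    """Incrementally learned PCA-style corrective subspace, anchored at the
    neutral scan (illustrative sketch, not the paper's exact formulation)."""

    def __init__(self, neutral, max_components=20):
        self.neutral = neutral.astype(np.float64).ravel()  # 3V-vector of the neutral scan
        self.basis = np.zeros((self.neutral.size, 0))      # orthonormal corrective directions
        self.singvals = np.zeros(0)                        # scale of each direction
        self.max_components = max_components

    def fit(self, shape):
        """Least-squares coefficients of a tracked shape in the corrective subspace."""
        return self.basis.T @ (shape.ravel() - self.neutral)

    def reconstruct(self, coeffs):
        """Rebuild a shape from corrective coefficients."""
        return self.neutral + self.basis @ coeffs

    def update(self, shape):
        """Fold one reliably tracked shape into the basis via a thin incremental SVD."""
        x = shape.ravel() - self.neutral
        # Stack the scaled old basis with the new sample and re-factorize.
        aug = np.hstack([self.basis * self.singvals, x[:, None]])
        u, s, _ = np.linalg.svd(aug, full_matrices=False)
        keep = min(self.max_components, int((s > 1e-8).sum()))
        self.basis, self.singvals = u[:, :keep], s[:keep]

# Hypothetical usage: fit each frame with the current subspace, then refine it.
# model = AdaptiveCorrectiveBasis(neutral_scan, max_components=20)
# for frame_shape in tracked_frames:
#     coeffs = model.fit(frame_shape)
#     model.update(frame_shape)
```

In this simplified form the basis is learned on deviations from the fixed neutral anchor; a full incremental PCA would also track a running mean, and the paper additionally combines the corrective subspace with 2D feature and depth constraints during fitting.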

PAPER VIDEO

TALK SLIDES

SIGGRAPH 2013 FAST FORWARD