User-Invariant Facial Animation With Convolutional Neural Network
Neural Information Processing (ICONIP 2018), Part I (2018)
Abstract
In this paper, we propose a robust, real-time, user-invariant, performance-based face animation system that uses a single ordinary RGB camera and a convolutional neural network (CNN), where facial expression coefficients drive the avatar. Existing shape regression algorithms usually estimate facial expressions in two steps: first, estimate the 3D positions of facial landmarks; second, compute the head pose and expression coefficients. The proposed method instead regresses the facial expression coefficients directly with a CNN. This single-shot regressor for expression coefficients is faster than the state-of-the-art single-webcam face animation systems. Moreover, our method avoids user-specific 3D blendshapes and is therefore user-invariant. Three CNN architectures with different input sizes are designed and combined with Smoothed L1 and Gaussian loss functions to regress the expression coefficients. Experiments validate the proposed method.
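The abstract mentions a Smoothed L1 loss for regressing the expression coefficients. A minimal sketch of that loss in its standard form is shown below; the threshold of 1.0 and the mean reduction over coefficients are assumptions, since the paper's exact formulation is not given here.

```python
def smooth_l1(pred, target, beta=1.0):
    """Standard Smoothed L1 (Huber-style) loss, averaged over coefficients.

    Quadratic for small residuals (|d| < beta), linear for large ones,
    which makes the regression less sensitive to outlier coefficients.
    Note: beta=1.0 and mean reduction are assumptions, not taken from the paper.
    """
    losses = []
    for p, t in zip(pred, target):
        d = abs(p - t)
        if d < beta:
            losses.append(0.5 * d * d / beta)  # quadratic region
        else:
            losses.append(d - 0.5 * beta)      # linear region
    return sum(losses) / len(losses)
```

For example, a residual of 0.5 falls in the quadratic region (loss 0.125), while a residual of 2.0 falls in the linear region (loss 1.5), so large errors grow only linearly.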
Keywords
Facial animation, CNN, Face tracking, Expression regression