3D facial expression retargeting framework based on an identity-independent expression feature vector

MULTIMEDIA TOOLS AND APPLICATIONS (2023)

Abstract
One important aspect of multimedia application scenarios is the ability to control the facial expressions of virtual characters. A popular solution is to retarget the expressions of actors to virtual characters. Traditional 3D facial expression retargeting algorithms are mostly based on the Blendshape model. However, excessive reliance on the Blendshape model introduces several limitations: the quality of the base expressions strongly influences the retargeting results, constructing the base expressions requires large amounts of 3D face data, and the model must be calibrated for each user. We propose a 3D facial expression retargeting framework based on an identity-independent expression feature vector (hereafter referred to as the expression vector). This expression vector, which is related only to facial expressions, is first extracted from face images; the corresponding expressions are then transferred to the target (which can be any 3D face model) by V2ENet, a generative adversarial network (GAN)-structured model. Our framework requires only the expression vector and a neutral 3D face model to achieve natural and vivid expression retargeting, and it does not rely on the Blendshape model. When using an expression vector obtained from a cognitive perspective, our method can also perform 3D expression retargeting at the cognitive level. A series of experiments demonstrates that our method not only simplifies the expression retargeting process but also produces better results than the deformation transfer algorithm. The proposed framework is suitable for a wide range of applications and also achieves good expression retargeting for cartoon-style face models.
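The abstract describes a two-stage pipeline: an encoder extracts an identity-independent expression vector from a face image, and a GAN generator (V2ENet) deforms any neutral 3D face mesh according to that vector. The PyTorch sketch below is a minimal illustration of that data flow only; the class names, layer sizes, vertex count, and the displacement-based generator are assumptions for illustration, not the paper's actual architecture, and the discriminator and training procedure are omitted.

```python
# Hypothetical sketch of the pipeline the abstract describes.
# All names, shapes, and layers are illustrative assumptions.
import torch
import torch.nn as nn

class ExpressionEncoder(nn.Module):
    """Maps a face image to an identity-independent expression vector."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, dim),
        )

    def forward(self, image):            # image: (B, 3, H, W)
        return self.net(image)           # expression vector: (B, dim)

class V2ENetGenerator(nn.Module):
    """GAN-style generator: deforms a neutral mesh per the expression vector."""
    def __init__(self, n_vertices, dim=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(dim + n_vertices * 3, 512), nn.ReLU(),
            nn.Linear(512, n_vertices * 3),       # per-vertex displacements
        )

    def forward(self, expr_vec, neutral_verts):   # neutral_verts: (B, V, 3)
        b, v, _ = neutral_verts.shape
        x = torch.cat([expr_vec, neutral_verts.reshape(b, -1)], dim=1)
        return neutral_verts + self.mlp(x).reshape(b, v, 3)

# Retargeting: any neutral 3D face (human or cartoon) can be the target.
encoder = ExpressionEncoder()
generator = V2ENetGenerator(n_vertices=5023)      # e.g. a FLAME-sized mesh
image = torch.randn(1, 3, 128, 128)               # actor's face image
neutral = torch.randn(1, 5023, 3)                 # target's neutral mesh
expr = encoder(image)                             # identity-independent vector
retargeted = generator(expr, neutral)             # expressive target mesh
```

Because the expression vector carries no identity information under this design, the same vector can drive any neutral mesh, which is what lets the framework skip per-user Blendshape calibration.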
Keywords
Virtual characters, 3D face model, Expression retargeting, Deep learning