FSRT: Facial Scene Representation Transformer for Face Reenactment from Factorized Appearance, Head-pose, and Facial Expression Features
CVPR 2024
Abstract
The task of face reenactment is to transfer the head motion and facial
expressions from a driving video to the appearance of a source image, which may
be of a different person (cross-reenactment). Most existing methods are
CNN-based and estimate optical flow from the source image to the current
driving frame, which is then inpainted and refined to produce the output
animation. We propose a transformer-based encoder for computing a set-latent
representation of the source image(s). We then predict the output color of a
query pixel using a transformer-based decoder, which is conditioned with
keypoints and a facial expression vector extracted from the driving frame.
Latent representations of the source person are learned in a self-supervised
manner and factorize appearance, head pose, and facial expressions.
Thus, they are perfectly suited for cross-reenactment. In contrast to most
related work, our method naturally extends to multiple source images and can
thus adapt to person-specific facial dynamics. We also propose data
augmentation and regularization schemes that are necessary to prevent
overfitting and support generalizability of the learned representations. We
evaluated our approach in a randomized user study. The results indicate
superior performance compared to the state-of-the-art in terms of motion
transfer quality and temporal consistency.
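
The abstract describes a decoder that predicts the color of each query pixel by cross-attending into a set-latent representation of the source image(s), conditioned on driving keypoints and an expression vector. The following is a minimal sketch of that decoding step, not the authors' implementation; all module names, dimensions, and argument choices (e.g. `QueryPixelDecoder`, `latent_dim`, `expr_dim`) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class QueryPixelDecoder(nn.Module):
    """Sketch of a transformer decoder that maps query pixel coordinates,
    conditioned on driving keypoints and an expression vector, to RGB colors
    by attending into the set-latent source representation."""

    def __init__(self, latent_dim=256, num_keypoints=10, expr_dim=64,
                 num_layers=4, num_heads=8):
        super().__init__()
        # Query embedding: pixel coordinate (x, y) + flattened keypoints + expression vector.
        cond_dim = 2 + num_keypoints * 2 + expr_dim
        self.query_embed = nn.Linear(cond_dim, latent_dim)
        layer = nn.TransformerDecoderLayer(d_model=latent_dim, nhead=num_heads,
                                           batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=num_layers)
        self.to_rgb = nn.Linear(latent_dim, 3)  # predicted output color

    def forward(self, set_latents, pixel_xy, keypoints, expression):
        # set_latents: (B, N, latent_dim) set-latent representation from the encoder
        # pixel_xy:    (B, P, 2) normalized query pixel coordinates
        # keypoints:   (B, K, 2) keypoints extracted from the driving frame
        # expression:  (B, expr_dim) facial expression vector of the driving frame
        B, P, _ = pixel_xy.shape
        cond = torch.cat([keypoints.flatten(1), expression], dim=-1)  # (B, K*2 + expr_dim)
        cond = cond.unsqueeze(1).expand(B, P, -1)                     # broadcast to each query pixel
        queries = self.query_embed(torch.cat([pixel_xy, cond], dim=-1))
        decoded = self.decoder(tgt=queries, memory=set_latents)       # cross-attend into set latents
        return torch.sigmoid(self.to_rgb(decoded))                    # (B, P, 3) colors in [0, 1]
```

In this reading, extending to multiple source images only enlarges the set of latents (`N`) that the decoder attends over, which matches the paper's claim that the method naturally handles several source images.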