Physically Plausible Animation of Human Upper Body from a Single Image
2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2023
Abstract
We present a new method for generating controllable, dynamically responsive, and photorealistic human animations. Given an image of a person, our system allows the user to generate Physically plausible Upper Body Animation (PUBA) using interaction in the image space, such as dragging their hand to various locations. We formulate a reinforcement learning problem to train a dynamic model that predicts the person’s next 2D state (i.e., keypoints on the image) conditioned on a 3D action (i.e., joint torque), and a policy that outputs optimal actions to control the person to achieve desired goals. The dynamic model leverages the expressiveness of 3D simulation and the visual realism of 2D videos. PUBA generates 2D keypoint sequences that achieve task goals while being responsive to forceful perturbation. The sequences of keypoints are then translated by a pose-to-image generator to produce the final photorealistic video.
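The control loop the abstract describes can be sketched as follows. This is a minimal, hypothetical stand-in, not the paper's implementation: the keypoint count, joint count, and linear models are all assumptions; in the paper both the dynamics model and the policy are learned via reinforcement learning.

```python
import numpy as np

# Hypothetical sketch of the PUBA loop (all names and shapes are assumptions):
# a dynamics model predicts the next 2D keypoint state from the current state
# and a 3D joint-torque action; a policy picks the torque that drives the
# keypoints toward a user-specified goal (e.g., a dragged hand position).

N_KEYPOINTS = 14           # assumed number of 2D keypoints
N_JOINTS = 10              # assumed number of actuated 3D joints
STATE_DIM = N_KEYPOINTS * 2
ACTION_DIM = N_JOINTS * 3

rng = np.random.default_rng(0)
# Random linear maps stand in for the learned networks in this sketch.
W_s = rng.normal(scale=0.01, size=(STATE_DIM, STATE_DIM))
W_a = rng.normal(scale=0.01, size=(STATE_DIM, ACTION_DIM))
W_p = rng.normal(scale=0.01, size=(ACTION_DIM, 2 * STATE_DIM))

def dynamics(state, action):
    """Predict the next 2D keypoint state given a 3D torque action."""
    return state + W_s @ state + W_a @ action

def policy(state, goal):
    """Output a torque action conditioned on the current state and goal keypoints."""
    return W_p @ np.concatenate([state, goal])

def rollout(state, goal, steps=30):
    """Roll the dynamics forward under the policy, yielding a keypoint sequence."""
    traj = [state]
    for _ in range(steps):
        state = dynamics(state, policy(state, goal))
        traj.append(state)
    return np.stack(traj)  # (steps+1, STATE_DIM): fed to the pose-to-image generator

traj = rollout(rng.normal(size=STATE_DIM), rng.normal(size=STATE_DIM))
print(traj.shape)  # (31, 28)
```

The resulting keypoint sequence is what the pose-to-image generator would translate into the final photorealistic video; forceful perturbations would enter as extra terms in the dynamics step.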
Keywords
Algorithms: machine learning architectures, formulations, and algorithms (including transfer); computational photography, image and video synthesis; virtual/augmented reality