AgentAvatar: Disentangling Planning, Driving and Rendering for Photorealistic Avatar Agents
CoRR (2023)
Abstract
In this study, our goal is to create interactive avatar agents that can
autonomously plan and animate nuanced facial movements realistically, from both
visual and behavioral perspectives. Given high-level inputs about the
environment and agent profile, our framework harnesses LLMs to produce a series
of detailed text descriptions of the avatar agents' facial motions. These
descriptions are then processed by our task-agnostic driving engine into motion
token sequences, which are subsequently converted into continuous motion
embeddings that are further consumed by our standalone neural-based renderer to
generate the final photorealistic avatar animations. These streamlined
processes allow our framework to adapt to a variety of non-verbal avatar
interactions, both monadic and dyadic. Our extensive study, which includes
experiments on both newly compiled and existing datasets featuring two types of
agents -- one capable of monadic interaction with the environment, and the
other designed for dyadic conversation -- validates the effectiveness and
versatility of our approach. To our knowledge, this is a significant step toward
combining LLMs and neural rendering for generalized non-verbal prediction and
photorealistic rendering of avatar agents.
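The abstract describes a three-stage pipeline: an LLM planner turns high-level inputs into text descriptions of facial motion, a task-agnostic driving engine maps those descriptions to motion token sequences and continuous embeddings, and a standalone neural renderer produces the final animation. The sketch below illustrates that data flow only; every function name, signature, and data shape is a hypothetical stand-in, not the paper's actual API or models.

```python
# Illustrative sketch of the planning -> driving -> rendering pipeline.
# All names and representations here are assumptions for exposition;
# the real system uses an LLM, a learned driving engine, and a neural renderer.
from dataclasses import dataclass
from typing import List


@dataclass
class MotionEmbedding:
    values: List[float]  # continuous motion features for one frame


def plan_facial_motion(environment: str, profile: str) -> str:
    """Stand-in for the LLM planner: high-level inputs -> detailed text description."""
    return f"{profile} reacts to '{environment}' with a slow smile and a nod."


def drive(description: str, vocab_size: int = 256) -> List[int]:
    """Stand-in for the task-agnostic driving engine: text -> motion token sequence."""
    return [hash(word) % vocab_size for word in description.split()]


def embed(tokens: List[int]) -> List[MotionEmbedding]:
    """Toy lookup converting discrete motion tokens into continuous embeddings."""
    return [MotionEmbedding([t / 255.0, (t * 7 % 255) / 255.0]) for t in tokens]


def render(embeddings: List[MotionEmbedding]) -> List[str]:
    """Stand-in for the neural renderer: one 'frame' string per embedding."""
    return [f"frame({e.values[0]:.3f}, {e.values[1]:.3f})" for e in embeddings]


# End-to-end: high-level inputs in, a sequence of rendered frames out.
description = plan_facial_motion("a surprise birthday party", "a cheerful avatar")
frames = render(embed(drive(description)))
```

Because the driving engine is task-agnostic, the same token-to-embedding-to-renderer path serves both monadic (agent-environment) and dyadic (conversational) interactions; only the planner's inputs change.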