User Motion Accentuation in Social Pointing Scenario

VRW (2023)

Abstract
Few existing methods produce full-body user motion in virtual environments from only the tracking data of a consumer-level head-mounted display. This preliminary project generates full-body motions from the user's head and hand positions through data-based motion accentuation. The method is evaluated in a simple collaborative scenario: a Pointer, represented by an avatar, points at targets while an Observer interprets the Pointer's movements. The Pointer's motion is modified by our motion accentuation algorithm, SocialMoves. The Pointer's motion is compared across three conditions: SocialMoves, a system built around Final IK, and a ground-truth capture. Our method achieved the same level of user experience as the ground-truth method.
Keywords
Human-centered computing; Animation; Virtual Characters; Virtual Reality