Automated acquisition of structured, semantic models of manipulation activities from human VR demonstration

2021 IEEE International Conference on Robotics and Automation (ICRA 2021)

Abstract
In this paper we present a system capable of collecting and annotating human-performed, robot-understandable everyday activities in virtual environments. Human movements are mapped into the simulated world using off-the-shelf virtual reality devices with full-body and eye-tracking capabilities. All interactions in the virtual world are physically simulated, so movements and their effects correspond closely to the real world. During activity execution, a subsymbolic data logger records the environment and the human gaze on a per-frame basis, enabling offline scene reproduction and replays. Coupled with the physics engine, online monitors (symbolic data loggers) parse (using various grammars) and record events, actions, and their effects in the simulated world.
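The two-layer logging architecture described above (a per-frame subsymbolic recorder alongside symbolic event monitors) can be illustrated with a minimal sketch. This is not the paper's implementation; all class and event names here are hypothetical, and the symbolic monitor is reduced to a single contact-based grasp detector for brevity.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    """One subsymbolic log entry: per-frame world state plus gaze (hypothetical schema)."""
    timestamp: float
    poses: dict        # entity name -> (x, y, z) position
    gaze_target: str   # entity currently fixated by the eye tracker

class SubsymbolicLogger:
    """Records the full scene every simulation tick, enabling offline replay."""
    def __init__(self):
        self.frames = []

    def log(self, timestamp, poses, gaze_target):
        self.frames.append(Frame(timestamp, dict(poses), gaze_target))

class ContactMonitor:
    """Toy symbolic monitor: turns hand-object contact changes into grasp events."""
    def __init__(self):
        self.in_contact = set()
        self.events = []   # (timestamp, event type, object)

    def update(self, timestamp, contacts):
        for obj in contacts - self.in_contact:      # contact began this frame
            self.events.append((timestamp, "GraspStart", obj))
        for obj in self.in_contact - contacts:      # contact ended this frame
            self.events.append((timestamp, "GraspEnd", obj))
        self.in_contact = set(contacts)

# Per-frame loop: both loggers are fed from the same simulated world state.
sub, mon = SubsymbolicLogger(), ContactMonitor()
sub.log(0.0, {"Mug": (0.0, 0.0, 1.0)}, "Mug")
mon.update(0.0, set())
mon.update(0.1, {"Mug"})   # hand touches the mug -> GraspStart
mon.update(0.5, set())     # hand releases it     -> GraspEnd
```

In this sketch the subsymbolic stream keeps every frame for replay, while the symbolic stream stores only sparse, interpretable events, mirroring the division of labor the abstract describes.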
Keywords
manipulation activities,human VR demonstration,virtual environments,human movements,simulated world,off-the-shelf virtual reality devices,eye tracking capabilities,virtual world,activity execution,subsymbolic data logger,human gaze,offline scene reproduction,replays,physics engine,symbolic data loggers,automated acquisition,structured models,semantic models