Self-supervised robot self-modeling using a single egocentric camera

Research Square (2023)

Abstract
The ability of robots to model their own dynamics is key to autonomous planning and learning, as well as to autonomous damage detection and recovery. Traditionally, dynamic models are pre-programmed or learned from external observations and IMU data. Here, we demonstrate for the first time how a task-agnostic dynamic self-model can be learned using only a single first-person-view camera in a self-supervised manner, without any prior knowledge of robot morphology, kinematics, or task. We trained an egocentric visual self-model using random motor babbling on a 12-DoF robot. We then show how the robot can leverage its visual self-model to achieve various locomotion tasks, such as moving forward, moving backward, and turning, all without any additional physical training. The accuracy of the egocentric model exceeds that of a model trained using an IMU. We also show how the robot can automatically detect and recover from damage. We suggest that self-supervised egocentric visual self-modeling could allow complex systems to continuously model themselves without additional sensors or prior knowledge.
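The abstract describes a learning loop in which random motor babbling generates (frame, action, next frame) tuples from the robot's own camera, and a visual dynamics model is fit to them without external labels. Below is a minimal, hypothetical PyTorch sketch of one way such a self-supervised egocentric self-model could be formulated; the module names, network sizes, and the latent-prediction objective are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class VisualSelfModel(nn.Module):
    """Given the current egocentric frame and a 12-DoF motor command,
    predict a latent embedding of the next frame. Supervision comes
    entirely from the robot's own camera stream (self-supervised).
    Architecture and dimensions are illustrative assumptions."""
    def __init__(self, action_dim=12, latent_dim=64):
        super().__init__()
        # Encode the first-person-view RGB frame into a compact latent.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, latent_dim),
        )
        # Predict the next latent from the current latent and the action.
        self.dynamics = nn.Sequential(
            nn.Linear(latent_dim + action_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )

    def forward(self, frame, action):
        z = self.encoder(frame)
        return self.dynamics(torch.cat([z, action], dim=-1))

def train_step(model, optimizer, frame_t, action_t, frame_t1):
    """One update on a motor-babbling tuple (frame, action, next frame).
    The target embedding is computed from the camera itself, so no
    external sensor or label is needed. In practice, a stop-gradient
    target network or a reconstruction term would be added to prevent
    the latent representation from collapsing to a constant."""
    pred = model(frame_t, action_t)
    with torch.no_grad():
        target = model.encoder(frame_t1)
    loss = nn.functional.mse_loss(pred, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Once trained, such a model could score candidate motor commands by the motion they are predicted to produce, which is one way the locomotion tasks described above (moving forward, backward, turning) could be planned without further physical trials.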
Keywords
self-supervised self-modeling