Multimodal Imitation Using Self-Learned Sensorimotor Representations

2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)

Cited by 7
Abstract
Although many tasks intrinsically involve multiple modalities, data from only a single modality are often used to help complex robots acquire new skills. We present a method that equips robots with multimodal learning skills, achieving on-the-fly multimodal imitation across multiple concurrent task spaces, including vision, touch, and proprioception, using only self-learned multimodal sensorimotor relations, without the need to solve inverse kinematics problems or formulate explicit analytical models. We evaluate the proposed method on a humanoid iCub robot that learns to interact with a piano keyboard and imitates a human demonstration. Since no assumptions are made about the kinematic structure of the robot, the method can also be applied to other robotic platforms.
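The abstract gives no implementation details, but the general idea it describes (learn a sensorimotor mapping from the robot's own exploration, then imitate by reducing the error across several concurrent sensory task spaces rather than solving inverse kinematics analytically) can be illustrated with a minimal sketch. Everything below is hypothetical and not taken from the paper: `simulated_sensors` stands in for the robot's real vision/touch/proprioception readout, the sensorimotor relation is approximated with plain linear least squares, and imitation is done by numerical gradient descent on the multimodal error.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulated_sensors(q):
    """Hypothetical stand-in for real vision/touch/proprioception readings."""
    vision = np.tanh(q[:2].sum()) * np.ones(2)   # e.g. 2-D fingertip position in the image
    touch = np.array([max(0.0, q[2])])           # e.g. 1-D contact-pressure signal
    proprio = q.copy()                           # joint angles
    return np.concatenate([vision, touch, proprio])

# Self-exploration ("motor babbling"): sample random motor commands and
# record the multimodal sensory outcomes they produce.
Q = rng.uniform(-1, 1, size=(500, 3))
S = np.array([simulated_sensors(q) for q in Q])

# Self-learn the sensorimotor relation (toy model: linear least squares).
W, *_ = np.linalg.lstsq(np.c_[Q, np.ones(len(Q))], S, rcond=None)
predict = lambda q: np.append(q, 1.0) @ W

# Imitation: match a demonstrated multimodal target without inverse kinematics,
# by descending the prediction error jointly over all task spaces.
target = simulated_sensors(np.array([0.3, -0.2, 0.5]))  # "human demonstration"
q = np.zeros(3)
for _ in range(200):
    err = predict(q) - target
    grad = np.array([                            # finite-difference gradient
        (predict(q + 1e-4 * np.eye(3)[i]) - predict(q)) @ err / 1e-4
        for i in range(3)
    ])
    q -= 0.05 * grad

print("residual multimodal error:", np.linalg.norm(predict(q) - target))
```

Because the motor command is adjusted only through the learned forward mapping, the same loop applies unchanged to a robot with a different kinematic structure, which is the property the abstract highlights.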
Keywords
complex robot skill acquisition, multimodal learning skills, multimodal imitation, multiple concurrent task spaces, proprioception, self-learned multimodal sensorimotor representation, humanoid iCub robot, piano keyboard, human demonstration