Eye-hand Coordination Develops from Active Multimodal Compression

2023 IEEE International Conference on Development and Learning (ICDL)

Abstract
During their first months of life, infants learn to coordinate their perceptions and actions across different modalities. For example, eye-hand coordination relies on combining visual and proprioceptive sensory inputs for controlling eye and hand movements. What drives the development and calibration of such coordination? Here, we put forward a multimodal hierarchical extension of the Active Efficient Coding framework to learn a simple form of eye-hand coordination. By learning to actively compress visual and proprioceptive inputs into a combined multimodal representation, our embodied infant model learns to make eye movements to track an object held in its hand. We find that the abstract multimodal representation improves the tracking accuracy, but only if it emerges after the establishment of the single-modality systems. This suggests the existence of a "less-is-more" effect for the development of coordinated multimodal sensorimotor behaviors.
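To make the staged-learning idea in the abstract concrete, below is a minimal sketch, not the authors' implementation: two single-modality autoencoders (visual and proprioceptive) are trained first, and only afterwards does a higher-level autoencoder learn a combined compressed code from their latents. The network sizes, the toy data generator, and the two-stage schedule are illustrative assumptions.

```python
# Minimal sketch of staged multimodal compression (NOT the authors' code).
# Layer sizes, toy data, and the training schedule are assumptions.
import torch
import torch.nn as nn

class UnimodalAE(nn.Module):
    """Autoencoder for one modality (visual or proprioceptive)."""
    def __init__(self, dim_in: int, dim_z: int):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim_in, dim_z), nn.Tanh())
        self.dec = nn.Linear(dim_z, dim_in)

    def forward(self, x):
        z = self.enc(x)
        return z, self.dec(z)

class MultimodalAE(nn.Module):
    """Higher level: compresses the concatenated unimodal codes."""
    def __init__(self, dim_z: int, dim_m: int):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(2 * dim_z, dim_m), nn.Tanh())
        self.dec = nn.Linear(dim_m, 2 * dim_z)

    def forward(self, z_joint):
        m = self.enc(z_joint)
        return m, self.dec(m)

def toy_batch(n=256, dim=8):
    """Hand position drives correlated visual and proprioceptive inputs."""
    hand = torch.randn(n, dim)
    vis = hand + 0.1 * torch.randn(n, dim)    # retinal view of the hand
    prop = hand + 0.1 * torch.randn(n, dim)   # arm joint-angle signal
    return vis, prop

vis_ae, prop_ae = UnimodalAE(8, 4), UnimodalAE(8, 4)
multi_ae = MultimodalAE(4, 4)
mse = nn.MSELoss()

# Stage 1: establish the single-modality representations first
# (per the paper's finding, the multimodal level should only
# emerge after this stage).
opt1 = torch.optim.Adam([*vis_ae.parameters(), *prop_ae.parameters()], lr=1e-2)
for _ in range(500):
    vis, prop = toy_batch()
    (_, vis_rec), (_, prop_rec) = vis_ae(vis), prop_ae(prop)
    loss = mse(vis_rec, vis) + mse(prop_rec, prop)
    opt1.zero_grad(); loss.backward(); opt1.step()

# Stage 2: learn the combined multimodal code on top of the
# now-established (frozen) unimodal encoders.
opt2 = torch.optim.Adam(multi_ae.parameters(), lr=1e-2)
for _ in range(500):
    vis, prop = toy_batch()
    with torch.no_grad():                     # unimodal systems are fixed
        z = torch.cat([vis_ae.enc(vis), prop_ae.enc(prop)], dim=-1)
    _, z_rec = multi_ae(z)
    loss = mse(z_rec, z)
    opt2.zero_grad(); loss.backward(); opt2.step()

print("multimodal reconstruction loss:", loss.item())
```

In the paper's full model, these representations additionally drive learned eye movements within the Active Efficient Coding framework; the sketch above illustrates only the compression hierarchy and the "single modalities first" schedule.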
Keywords
Eye-hand coordination, Multimodality, Active perception, Sensorimotor development, Less-is-more