Using Conceptual Spaces To Fuse Knowledge From Heterogeneous Robot Platforms

MULTISENSOR, MULTISOURCE INFORMATION FUSION: ARCHITECTURES, ALGORITHMS, AND APPLICATIONS 2010 (2010)

Abstract
As robots become more common, it becomes increasingly useful for many applications to deploy them in teams that sense the world in a distributed manner. In such situations, the robots or a central control center must communicate and fuse information received from multiple sources. A key challenge for this problem is perceptual heterogeneity, where the sensors, perceptual representations, and training instances used by the robots differ dramatically. In this paper, we use Gärdenfors' conceptual spaces, a geometric representation with strong roots in cognitive science and psychology, to represent the appearance of objects, and we show how the problem of heterogeneity can be intuitively explored by examining situations where multiple robots differ in their conceptual spaces at different levels. To bridge low-level sensory differences, we abstract raw sensory data into properties (such as color or texture categories), represented as Gaussian Mixture Models, and demonstrate that this facilitates both individual learning and the fusion of concepts between robots. Concepts (e.g. objects) are represented as fuzzy mixtures of these properties. We then treat the case where the conceptual spaces of two robots differ and they share only a subset of these properties. Here, we use joint interaction and statistical metrics to determine which properties are shared. Finally, we show how conceptual spaces can handle such missing properties when fusing concepts received from different robots. We demonstrate the fusion of information in real-robot experiments with a MobileRobots AmigoBot and a Pioneer 2DX with significantly different cameras and (on one robot) a SICK lidar.
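The abstract's pipeline can be sketched in miniature: model each property as a Gaussian over a feature value, use a statistical distance to decide whether two robots' properties correspond, and fuse concepts over the shared subset. The sketch below is illustrative only; the paper uses full Gaussian Mixture Models and its own fusion rules, while here each property is collapsed to a single 1-D Gaussian, the Bhattacharyya distance stands in for the paper's statistical metrics, and the averaging-and-renormalizing fusion rule is a hypothetical simplification.

```python
import math

def bhattacharyya(mu1, var1, mu2, var2):
    """Bhattacharyya distance between two 1-D Gaussians -- one possible
    statistical metric for deciding whether two robots' independently
    learned properties (e.g. color categories) correspond."""
    return (0.25 * (mu1 - mu2) ** 2 / (var1 + var2)
            + 0.5 * math.log((var1 + var2) / (2.0 * math.sqrt(var1 * var2))))

def fuse_concepts(c1, c2):
    """Fuse two concepts, each a fuzzy mixture {property: weight}, over
    the subset of properties both robots share, averaging the weights
    and renormalizing (a hypothetical fusion rule, not the paper's)."""
    shared = set(c1) & set(c2)
    fused = {p: 0.5 * (c1[p] + c2[p]) for p in shared}
    total = sum(fused.values())
    return {p: w / total for p, w in fused.items()}

# Two robots describe the same object with partially overlapping properties:
robot_a = {"red": 0.6, "smooth": 0.4}   # camera + texture sensing
robot_b = {"red": 0.5, "tall": 0.5}     # camera + lidar-derived shape
fused = fuse_concepts(robot_a, robot_b)  # only "red" survives fusion
```

Identical property Gaussians give a Bhattacharyya distance of zero, so a small threshold on this distance is one way to declare two properties "shared" before fusing.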
Keywords
Conceptual Spaces, Cognitive Sensor Fusion, Heterogeneous Robot Teams