Graph-Based Visual Semantic Perception For Humanoid Robots

2017 IEEE-RAS 17th International Conference on Humanoid Robotics (Humanoids), 2017

Abstract
Semantic understanding of unstructured environments plays an essential role in the autonomous planning and execution of whole-body humanoid locomotion and manipulation tasks. We introduce a new graph-based, data-driven method for the semantic representation of unknown environments based on visual sensor data streams. The proposed method extends our previous work, in which loco-manipulation scene affordances are detected in a fully unsupervised manner. We build a geometric primitive-based model of the perceived scene and assign interaction possibilities, i.e. affordances, to the individual primitives. The major contribution of this paper is the enrichment of the extracted scene representation with semantic object information through spatio-temporal fusion of primitives during perception. To this end, we combine the primitive-based scene representation with object detection methods to identify higher-level semantic structures in the scene. Qualitative and quantitative evaluations in various experiments, both in simulation and on the humanoid robot ARMAR-III, demonstrate the effectiveness of the approach.
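To make the pipeline described in the abstract concrete (fit geometric primitives, attach affordances, fuse object-detector labels onto primitives), the following Python sketch models a minimal primitive-based scene graph. It is illustrative only and not the paper's implementation; all names (Primitive, SceneGraph, fuse_detection) and the example parameters are hypothetical.

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Primitive:
    """Hypothetical node: one fitted geometric primitive plus its affordances."""
    kind: str                                          # e.g. "plane", "cylinder"
    params: tuple                                      # fitted geometry parameters
    affordances: set[str] = field(default_factory=set) # e.g. {"support", "grasp"}
    object_label: str | None = None                    # semantic label fused in later

class SceneGraph:
    """Toy primitive graph; edges model spatial adjacency between primitives."""
    def __init__(self) -> None:
        self.nodes: list[Primitive] = []
        self.edges: set[tuple[int, int]] = set()  # undirected adjacency

    def add(self, p: Primitive) -> int:
        self.nodes.append(p)
        return len(self.nodes) - 1

    def connect(self, i: int, j: int) -> None:
        self.edges.add((min(i, j), max(i, j)))

    def fuse_detection(self, node_ids: list[int], label: str) -> None:
        # Simplified stand-in for the paper's fusion step: propagate an
        # object detector's label onto the primitives it covers.
        for i in node_ids:
            self.nodes[i].object_label = label

# Usage: a table plane supporting a graspable cylinder, later labeled "cup".
g = SceneGraph()
table = g.add(Primitive("plane", (0.0, 0.0, 1.0, 0.75), {"support"}))
cup = g.add(Primitive("cylinder", (0.04, 0.10), {"grasp"}))
g.connect(table, cup)
g.fuse_detection([cup], "cup")  # e.g. from a detection covering the cylinder
```

In this simplified view, spatio-temporal fusion would merge primitives re-observed across frames before labels are assigned; the sketch omits that tracking step and shows only the structure being enriched.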
Keywords
geometric primitive-based model, extracted scene representation, semantic object information, primitive-based scene representation, object detection methods, humanoid robot ARMAR-III, visual semantic perception, humanoid robots, unstructured environments, autonomous planning, whole-body humanoid locomotion, manipulation tasks, data-driven method, semantic representation, visual sensor data streams, loco-manipulation scene