3D visual SLAM for an assistive robot in indoor environments using RGB-D cameras

Computer Science & Education (2014)

Citations 17 | Views 13
Abstract
With a growing global aging population, assistive robots are becoming increasingly important. This paper presents an integrated hardware and software architecture for assistive robots. The modular and reusable software framework incorporates perception and navigation capabilities. The paper also presents a system for three-dimensional (3D) vision-based simultaneous localization and mapping (SLAM) using a Red-Green-Blue and Depth (RGB-D) camera, and illustrates its application on an assistive robot. ORB features and depth information are extracted for ego-motion estimation. The Random Sample Consensus (RANSAC) algorithm is adopted for outlier removal, while a combination of RGB-D data and the iterative closest point (ICP) algorithm is used for alignment. Pose-graph optimization is performed with g2o. Finally, a 3D volumetric map is generated for subsequent navigation.
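The front end described above can be illustrated with a short sketch. The following Python snippet is not the authors' code: the camera intrinsics, the millimetre depth scale, and the function name estimate_ego_motion are assumptions for illustration. It shows ORB feature extraction on two RGB-D frames, back-projection of matched keypoints using the depth image, and RANSAC-based outlier removal via OpenCV's solvePnPRansac to obtain a frame-to-frame ego-motion estimate.

```python
# Minimal sketch of ORB + depth ego-motion estimation with RANSAC outlier
# rejection, using OpenCV. Intrinsics and depth scale are assumed values.
import cv2
import numpy as np

fx, fy, cx, cy = 525.0, 525.0, 319.5, 239.5          # assumed Kinect-style intrinsics
K = np.array([[fx, 0, cx], [0, fy, cy], [0, 0, 1]], dtype=np.float64)

def estimate_ego_motion(rgb_prev, depth_prev, rgb_curr):
    """Estimate the relative camera pose between two RGB-D frames."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(rgb_prev, None)
    kp2, des2 = orb.detectAndCompute(rgb_curr, None)

    # Brute-force Hamming matching of binary ORB descriptors
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)

    pts3d, pts2d = [], []
    for m in matches:
        u, v = kp1[m.queryIdx].pt
        z = depth_prev[int(v), int(u)] / 1000.0       # assumes depth stored in millimetres
        if z <= 0:
            continue                                  # skip pixels with invalid depth
        # Back-project the previous-frame keypoint to 3D using its depth value
        pts3d.append([(u - cx) * z / fx, (v - cy) * z / fy, z])
        pts2d.append(kp2[m.trainIdx].pt)

    if len(pts3d) < 6:
        return False, None, None, None                # too few valid correspondences

    # RANSAC-based PnP rejects outlier matches and yields the motion estimate
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        np.array(pts3d, dtype=np.float64),
        np.array(pts2d, dtype=np.float64),
        K, None, reprojectionError=3.0)
    R, _ = cv2.Rodrigues(rvec)                        # rotation matrix from rotation vector
    return ok, R, tvec, inliers
```

In the pipeline described in the abstract, such an inlier set would seed the RGB-D/ICP alignment, and the accumulated frame-to-frame transforms would form the vertices and edges of the g2o pose graph before the 3D volumetric map is built.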
Keywords
slam (robots), cameras, feature extraction, graph theory, image colour analysis, mobile robots, motion estimation, path planning, robot vision, service robots, 3d visual slam, 3d volumetric map, icp, orb features, ransac, rgb-d cameras, assistive robot, depth information, ego-motion estimation, integrated hardware-software architecture, iterative closest point, navigation capability, perception capability, pose-graph optimization, random sample consensus algorithm, red-green-blue-depth camera, simultaneous localization and mapping, software framework, indoor environments, rgbd cameras