RGB-D-Based Human Detection and Segmentation for Mobile Robot Navigation in Industrial Environments

VISAPP: Proceedings of the 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Vol. 4: VISAPP (2021)

Abstract
Automated guided vehicles (AGVs) are nowadays a common option for the efficient, automated in-house transportation of various cargo and materials. With the additional adoption of unmanned aerial vehicles (UAVs) in the delivery and intralogistics sector, this flow of materials is expected to extend into the third dimension within the next decade. To ensure collision-free movement for such vehicles, optical, ultrasonic, or capacitive distance sensors are commonly employed. While these systems allow collision-free navigation, they cannot distinguish humans from static objects and therefore require the robot to move at a human-safe speed at all times. To overcome these limitations and enable environment-sensitive collision avoidance for UAVs and AGVs, we provide a solution for the depth-camera-based real-time semantic segmentation of workers in industrial environments. The semantic segmentation is based on an adapted version of the deep convolutional neural network (CNN) architecture FuseNet. After explaining the underlying methodology, we present an automated approach for generating weakly annotated training data and evaluate the performance of the trained model against other well-known approaches.
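The FuseNet architecture referenced above combines the two modalities with separate RGB and depth encoder branches, where depth feature maps are added element-wise into the RGB stream after each convolutional stage. A minimal NumPy sketch of that fusion step (illustrative only; the function name and feature-map shapes are assumptions, not taken from the paper):

```python
import numpy as np

def fusenet_fusion_stage(rgb_feat, depth_feat):
    """FuseNet-style fusion step (sketch): depth-branch activations
    are added element-wise into the RGB stream, while the depth
    branch itself continues unchanged to the next stage."""
    assert rgb_feat.shape == depth_feat.shape
    fused_rgb = rgb_feat + depth_feat   # input to the next RGB encoder stage
    return fused_rgb, depth_feat        # depth branch passes through unmodified

# Illustrative feature maps with assumed shape (channels, height, width)
rgb = np.ones((64, 120, 160))
depth = np.full((64, 120, 160), 0.5)
fused, _ = fusenet_fusion_stage(rgb, depth)
```

In the full network this fusion is repeated at every encoder stage, so depth cues are injected at multiple scales before a single decoder upsamples the fused features to a per-pixel segmentation.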
Keywords
Image Segmentation, Object Recognition, Neural Networks, Deep Learning, Robotics, Autonomous Mobile Robots, Flexible Automation, Warehouse Automation