Scene Understanding Networks for Autonomous Driving based on Around View Monitoring System

2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2018

Cited by 22 | Viewed 18
Abstract
Modern driver assistance systems rely on a wide range of sensors (RADAR, LIDAR, ultrasound, and cameras) for scene understanding and prediction. These sensors are typically used for detecting traffic participants and the scene elements required for navigation. In this paper we argue that relying on camera-based systems, specifically the Around View Monitoring (AVM) system, has great potential to achieve these goals in both parking and driving modes at decreased cost. The contributions of this paper are as follows: we present a new end-to-end solution that delimits the safe drivable area in each frame by identifying the closest obstacle in each direction from the driving vehicle; we use this approach to calculate the distance to the nearest obstacles; and we incorporate it into a unified end-to-end architecture capable of joint object detection, curb detection, and safe drivable area detection. Furthermore, we describe a family of networks offering both a high-accuracy solution and a low-complexity solution. We also augment the base architecture with 3D object detection.
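The core idea of delimiting the drivable area by the closest obstacle in each direction can be illustrated with a minimal sketch: given a per-pixel obstacle probability map (as a network head might produce), pick the bottom-most obstacle pixel in each image column and convert its row to a metric distance under a flat-ground pinhole model. All names and parameters here (threshold, horizon row, camera height, focal length) are illustrative assumptions, not values from the paper.

```python
import numpy as np

def closest_obstacle_rows(obstacle_prob, threshold=0.5):
    """For each image column, return the row index of the bottom-most
    pixel whose obstacle probability exceeds `threshold` (the nearest
    obstacle in that viewing direction), or -1 if the column is free.

    obstacle_prob: (H, W) array of values in [0, 1].
    """
    h, w = obstacle_prob.shape
    mask = obstacle_prob >= threshold
    rows = np.full(w, -1, dtype=int)
    for c in range(w):
        hits = np.nonzero(mask[:, c])[0]
        if hits.size:
            # larger row index = lower in the image = closer to the car
            rows[c] = hits.max()
    return rows

def row_to_distance(rows, horizon_row=100, cam_height=1.5, focal_px=700.0):
    """Flat-ground pinhole model: Z = f * h / (v - v_horizon).
    Rows at or above the horizon map to infinite distance."""
    rows = np.asarray(rows, dtype=float)
    dv = rows - horizon_row
    return np.where(dv > 0, focal_px * cam_height / np.maximum(dv, 1e-6), np.inf)
```

The contour formed by joining these per-column points delimits the safe drivable area; the paper's end-to-end network predicts this boundary directly rather than thresholding a probability map as done here.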
Keywords
scene understanding networks, autonomous driving, RADAR, LIDAR, ultrasound, cameras, traffic participants, scene elements, camera-based systems, driving modes, closest obstacle, driving vehicle, joint object detection, curb detection, safe drivable area detection, 3D object detection, driving mode, parking mode, AVM system, driver assistance systems, around view monitoring system