Deep Object Tracking on Dynamic Occupancy Grid Maps Using RNNs

2018 21st International Conference on Intelligent Transportation Systems (ITSC)(2018)

A comprehensive representation and understanding of the driving environment is crucial for improving the safety and reliability of autonomous vehicles. In this paper, we present a new approach to establishing an environment model that contains a segmentation between static and dynamic background as well as parametrically modeled objects with shape, position and orientation. Multiple laser scanners are fused into a dynamic occupancy grid map, resulting in a 360° perception of the environment. A single-stage deep convolutional neural network is combined with a recurrent neural network, which takes a time series of the occupancy grid map as input and tracks cell states and their corresponding object hypotheses. The labels for training are generated in an unsupervised manner by an automatic label generation algorithm. The proposed methods are evaluated in real-world experiments in complex inner-city scenarios using the aforementioned 360° laser perception. The results show better object detection accuracy compared with our previous approach, as well as an AUC score of 0.946 for the dynamic and static segmentation. Furthermore, we obtain improved detection of occluded objects and a more consistent size estimation due to the use of time series as input and the memory of previous states introduced by the recurrent neural network.
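To make the core idea concrete, the following is a minimal NumPy sketch of the kind of per-cell recurrent update the abstract describes: a convolutional feature is extracted from each occupancy grid frame, and a recurrent hidden state per grid cell accumulates evidence over the time series. This is an illustrative toy, not the authors' network; the kernel, the scalar weights `w_x`/`w_h`, and the single-channel sigmoid output are all hypothetical simplifications.

```python
import numpy as np

def conv3x3(grid, kernel):
    """Same-size 3x3 convolution via zero padding and sliding windows."""
    padded = np.pad(grid, 1)
    windows = np.lib.stride_tricks.sliding_window_view(padded, (3, 3))
    return np.einsum("ijkl,kl->ij", windows, kernel)

def track_sequence(seq, kernel, w_x=1.0, w_h=0.8):
    """Recurrent per-cell state update over a time series of occupancy grids.

    h_t = tanh(w_x * conv(x_t) + w_h * h_{t-1})   # hidden state per grid cell
    The hidden state carries memory of previous frames, which is what lets a
    tracker of this kind keep hypotheses alive through short occlusions.
    Returns per-cell scores in (0, 1) after the last frame (e.g. a pseudo
    probability that a cell belongs to a dynamic object).
    """
    h = np.zeros_like(seq[0], dtype=float)
    for frame in seq:
        h = np.tanh(w_x * conv3x3(frame, kernel) + w_h * h)
    return 1.0 / (1.0 + np.exp(-h))  # sigmoid squashing to (0, 1)

# Toy usage: 5 frames of an 8x8 occupancy grid with a random feature kernel.
rng = np.random.default_rng(0)
seq = rng.random((5, 8, 8))
kernel = 0.1 * rng.standard_normal((3, 3))
scores = track_sequence(seq, kernel)
```

In the paper, this role is played by a learned single-stage CNN feeding a recurrent network over the full 360° grid map, with additional heads regressing object shape, position and orientation; the sketch only shows why feeding a time series rather than single frames gives the model memory across occlusions.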
orientation parameters, multiple laser scanners, dynamic occupancy grid map, single-stage deep convolutional neural network, recurrent neural network, time series, corresponding object hypotheses, automatic label generation algorithm, object detection accuracy, dynamic segmentation, static segmentation, occluded objects, deep object tracking, comprehensive representation, driving environment, environment model, static background, dynamic background, 360° laser perception