Egocentric Vision-based Future Vehicle Localization for Intelligent Driving Assistance Systems

2019 International Conference on Robotics and Automation (ICRA)

Abstract
Predicting the future location of vehicles is essential for safety-critical applications such as advanced driver assistance systems (ADAS) and autonomous driving. This paper introduces a novel approach to simultaneously predict both the location and scale of target vehicles in the first-person (egocentric) view of an ego-vehicle. We present a multi-stream recurrent neural network (RNN) encoder-decoder model that captures object location and scale in one stream and pixel-level observations in another for future vehicle localization. We show that incorporating dense optical flow improves prediction results significantly, since it captures information about motion as well as appearance change. We also find that explicitly modeling the future motion of the ego-vehicle improves prediction accuracy, which could be especially beneficial for intelligent and automated vehicles with motion planning capability. To evaluate the performance of our approach, we present a new dataset of first-person videos collected from a variety of scenarios at road intersections, which are particularly challenging moments for prediction because vehicle trajectories are diverse and dynamic.
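The multi-stream design described above can be sketched as follows. This is a minimal, hedged illustration (not the authors' implementation): one RNN stream encodes past bounding boxes (location and scale), a second encodes pixel-level motion features such as pooled dense optical flow, and a decoder unrolls the fused state into future box offsets. All layer sizes, names, and the residual-offset decoding scheme are illustrative assumptions.

```python
import numpy as np

def rnn_step(x, h, Wx, Wh, b):
    """One vanilla RNN step: h' = tanh(Wx @ x + Wh @ h + b)."""
    return np.tanh(Wx @ x + Wh @ h + b)

class TwoStreamEncoderDecoder:
    """Toy two-stream RNN encoder-decoder (illustrative, not the paper's code):
    one stream over past bounding boxes, one over flattened optical-flow
    features; the fused state seeds a decoder that emits residual box offsets
    for each future time step."""

    def __init__(self, box_dim=4, flow_dim=8, hidden=16, horizon=5, seed=0):
        rng = np.random.default_rng(seed)

        def enc(in_dim):  # (Wx, Wh, b) for one encoder stream
            return (0.1 * rng.standard_normal((hidden, in_dim)),
                    0.1 * rng.standard_normal((hidden, hidden)),
                    np.zeros(hidden))

        self.box_params = enc(box_dim)    # location + scale stream
        self.flow_params = enc(flow_dim)  # pixel-level (optical-flow) stream
        self.fuse_W = 0.1 * rng.standard_normal((hidden, 2 * hidden))
        self.dec_Wh = 0.1 * rng.standard_normal((hidden, hidden))
        self.out_W = 0.1 * rng.standard_normal((4, hidden))  # box-offset head
        self.horizon = horizon
        self.hidden = hidden

    def predict(self, boxes, flows):
        """boxes: (T, 4) past [cx, cy, w, h]; flows: (T, flow_dim) features.
        Returns (horizon, 4) future boxes as cumulative offsets from the
        last observed box."""
        h_box = np.zeros(self.hidden)
        h_flow = np.zeros(self.hidden)
        for xb, xf in zip(boxes, flows):          # encode both streams
            h_box = rnn_step(xb, h_box, *self.box_params)
            h_flow = rnn_step(xf, h_flow, *self.flow_params)
        h = np.tanh(self.fuse_W @ np.concatenate([h_box, h_flow]))
        preds, box = [], boxes[-1].copy()
        for _ in range(self.horizon):             # decode future steps
            h = np.tanh(self.dec_Wh @ h)
            box = box + self.out_W @ h            # residual offset per step
            preds.append(box)
        return np.stack(preds)
```

In the paper's full model, the ego-vehicle's planned future motion is also fed to the decoder; here that third input is omitted to keep the sketch compact.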
Keywords
egocentric vision-based future vehicle localization, intelligent driving assistance systems, safety-critical applications, autonomous driving, target vehicles, first-person view, ego-vehicle, multi-stream recurrent neural network encoder-decoder model, object location, pixel-level observations, future motion, prediction accuracy, intelligent vehicles, automated vehicles, motion planning capability, vehicle trajectories, dense optical flow