Policy-Based Reinforcement Learning for Training Autonomous Driving Agents in Urban Areas With Affordance Learning

IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS (2022)

Abstract
Learning to drive in urban areas is an open challenge for autonomous vehicles (AVs), as complex decision making is required in multi-task coordination environments. In this paper, we propose a hybrid framework with a new perception model involving affordance learning to simplify the surrounding urban scenes for training an AV agent, along with a planned trajectory and the associated driving measurements. Our proposed solution encompasses two main aspects. First, a supervised learning network is used to map the input sensory data into affordance predictions. The predicted affordances provide a low-dimensional representation of the scenes surrounding the AV in the form of key perception indicators, e.g., a true or false state with respect to a traffic light signal. Second, a deep deterministic policy gradient model that maps the perception information into a series of actions is devised. We evaluate the proposed solution using the CARLA driving simulator, training in one urban town and assessing performance in a new, unseen town under different weather conditions. The quantitative and qualitative results indicate that our proposed solution generalizes well to different traffic situations and environmental conditions. Our proposed solution also outperforms baseline methods in a comparative study covering AV driving tasks with different levels of difficulty. In addition, the model trained with simulated scenes yields promising prediction results when tested on video streams recorded in real-world highway and suburban environments with varying traffic and weather conditions.
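To make the two-stage design concrete, the sketch below shows one possible way to structure the pipeline the abstract describes: a supervised network that predicts a low-dimensional affordance vector from a camera image, and a DDPG-style deterministic actor that maps affordances plus driving measurements to continuous controls. This is an illustrative sketch only, not the authors' implementation; the framework (PyTorch), network sizes, the specific affordance set, the measurement inputs, and the action dimensions are all assumptions.

```python
# Illustrative sketch (not the authors' code): affordance predictor + DDPG actor.
# All dimensions and layer choices are assumptions for illustration.
import torch
import torch.nn as nn

class AffordancePredictor(nn.Module):
    """Supervised network mapping a camera image to low-dimensional affordances,
    e.g. a traffic-light flag and lane-related distances (assumed affordance set)."""
    def __init__(self, num_affordances: int = 6):
        super().__init__()
        self.backbone = nn.Sequential(                 # small CNN encoder (assumed)
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, num_affordances)     # affordance outputs

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return self.head(self.backbone(image))         # low-dimensional affordance vector

class DDPGActor(nn.Module):
    """Deterministic policy mapping affordances + driving measurements
    (e.g. speed, planned-trajectory features) to continuous control actions."""
    def __init__(self, affordance_dim: int = 6, measurement_dim: int = 4, action_dim: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(affordance_dim + measurement_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, action_dim), nn.Tanh(),     # e.g. steering and throttle in [-1, 1]
        )

    def forward(self, affordances: torch.Tensor, measurements: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([affordances, measurements], dim=-1))

# Example rollout step: image -> affordances -> action
predictor, actor = AffordancePredictor(), DDPGActor()
image = torch.randn(1, 3, 96, 96)                      # placeholder camera frame
measurements = torch.randn(1, 4)                       # placeholder speed/trajectory features
action = actor(predictor(image), measurements)
```

In such a setup, the affordance predictor would be trained with supervised labels from the simulator, while the actor (together with a critic, omitted here) would be optimized with the standard DDPG objective; the hand-off between the two stages is the affordance vector itself.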
Keywords
Affordances, Training, Autonomous vehicles, Urban areas, Task analysis, Predictive models, Planning, Autonomous driving, affordance predictions, deep deterministic policy gradient