Learning from Visual Demonstrations through Differentiable Nonlinear MPC for Personalized Autonomous Driving
arXiv (2024)
Abstract
Human-like autonomous driving controllers have the potential to enhance
passenger perception of autonomous vehicles. This paper proposes DriViDOC: a
model for Driving from Vision through Differentiable Optimal Control, and its
application to learn personalized autonomous driving controllers from human
demonstrations. DriViDOC combines the automatic inference of relevant features
from camera frames with the properties of nonlinear model predictive control
(NMPC), such as constraint satisfaction. Our approach leverages the
differentiability of parametric NMPC, allowing for end-to-end learning of the
driving model from images to control. The model is trained on an offline
dataset comprising various driving styles collected on a motion-base driving
simulator. During online testing, the model demonstrates successful imitation
of different driving styles, and the interpreted NMPC parameters provide
insights into the achievement of specific driving behaviors. Our experimental
results show that DriViDOC outperforms other methods involving NMPC and neural
networks, exhibiting an average improvement of 20%.
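The core idea of learning through a differentiable parametric MPC layer can be illustrated with a toy example. The sketch below is not the paper's NMPC formulation; it assumes a scalar single-integrator system, a quadratic tracking cost with a learnable weight `q`, and an "MPC solver" realized as a fixed number of smooth gradient-descent steps on the control sequence, so the map from parameter to control is differentiable end-to-end (checked here by finite differences rather than autodiff).

```python
import numpy as np

def mpc_controls(q, x0=0.0, ref=1.0, r=0.1, H=5, iters=200, lr=0.05):
    """Toy 'differentiable MPC': unrolled gradient descent on the controls.

    Dynamics: x_{t+1} = x_t + u_t (scalar single integrator, hypothetical).
    Cost:     sum_t q*(x_t - ref)^2 + r*u_t^2, with q a learnable parameter.
    Because the solver is a fixed number of smooth steps, q -> u is smooth.
    """
    u = np.zeros(H)
    for _ in range(iters):
        # roll out the states under the current control sequence
        x = np.zeros(H + 1)
        x[0] = x0
        for t in range(H):
            x[t + 1] = x[t] + u[t]
        # analytic cost gradient: u_t influences states x_{t+1..H}
        g = np.zeros(H)
        for t in range(H):
            g[t] = 2 * r * u[t] + sum(2 * q * (x[s] - ref)
                                      for s in range(t + 1, H + 1))
        u -= lr * g
    return u

def imitation_loss(q, u_demo=0.4):
    # squared error between the first planned control and a demonstration
    return (mpc_controls(q)[0] - u_demo) ** 2

# finite-difference gradient of the imitation loss w.r.t. the MPC parameter,
# demonstrating that learning signal flows through the solver
eps, q0 = 1e-4, 1.0
grad = (imitation_loss(q0 + eps) - imitation_loss(q0 - eps)) / (2 * eps)
```

In DriViDOC the analogous gradient is propagated further back, through the NMPC parameters into a convolutional network operating on camera frames, which is what enables end-to-end imitation of the demonstrated driving styles.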