Explainable Automation: Personalized and Adaptive UIs to Foster Trust and Understanding of Driving Automation Systems

Philipp Wintersberger, Hannah Nicklas, Thomas Martlbauer, Stephan Hammer, Andreas Riener

AutomotiveUI '20: 12th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Virtual Event, DC, USA, September 2020

Abstract
Recent research indicates that transparent information on the behavior of automated vehicles positively affects trust, but how such feedback should be composed, and whether user trust influences the amount of desired feedback, remains relatively unexplored. Consequently, we conducted an interview study with N=56 participants, who were shown different videos of an automated vehicle from the ego-perspective. Subjects rated their trust in the vehicle in these situations and could freely select objects in the driving environment that should be included in augmented reality feedback systems, so that they would be able to trust the vehicle and understand its actions. The results show an inverse correlation between situational trust and participants' desire for feedback, and further reveal reasons why certain objects should be included in feedback systems. The study also highlights the need for more adaptive in-vehicle interfaces for trust calibration and outlines necessary steps for automatically generating such feedback in the future.