Human As Automation Failsafe: Concept, Implications, Guidelines and Innovations

Christopher Miller, Jay Shively, Summer Brandt, Helen Wauck, Vasanth Sarathy, Richard Freedman

Proceedings of the Human Factors and Ergonomics Society Annual Meeting (2022)

Abstract
Humans are frequently left to “backstop” automated systems, and Human Factors specialists have argued against this for decades with, at best, partial success. What if we took a different tack… and designed to support it? The participants were involved in a recent effort to review and document cases across multiple domains where operators acted as a “failsafe” for automation, intervening in unanticipated situations to maximize success and minimize damage. We defined a “Human As Failsafe” (HAF) incident and then investigated conditions and practices making HAF success more or less likely. Analyzing these historical incidents, we suggested remediation approaches. The project also examined the legal concept of culpability (i.e., when intervention should have happened but didn’t) and proposed a state-machine-based analytic simulation to identify when HAF interventions are plausible. The panel objective will be to briefly present these concepts, but more generally to discuss designing for inevitable HAF events.