Trustworthy Autonomous System Development

ACM Transactions on Embedded Computing Systems (2023)

Abstract
Autonomous systems emerge from the need to progressively replace human operators with autonomous agents in a wide variety of application areas. We offer an analysis of the state of the art in developing autonomous systems, focusing on design and validation and showing that the multi-faceted challenges involved go well beyond the limits of weak AI. We argue that traditional model-based techniques are defeated by the complexity of the problem, while solutions based on end-to-end machine learning fail to provide the necessary trustworthiness. We advocate a hybrid design approach, which combines the two, adopting the best of each, and seeks tradeoffs between trustworthiness and performance. We claim that traditional risk analysis and mitigation techniques fail to scale, and we discuss the trend of moving away from correctness at design time and toward reliance on runtime assurance techniques. We argue that simulation and testing remain the only realistic approach for global validation and show how current methods can be adapted to autonomous systems. We conclude by discussing the factors that will play a decisive role in the acceptance of autonomous systems and by highlighting the urgent need for new theoretical foundations.
Keywords
Autonomous system, trustworthy system development, dependability, machine learning, critical systems engineering, validation, simulation and testing, requirement and scenario specification