Toward Mission-Critical AI: Interpretable, Actionable, and Resilient AI

CyCon (2023)

Abstract
Artificial intelligence (AI) is widely used in science and practice. However, its use in mission-critical contexts is limited by the lack of appropriate methods for establishing confidence and trust in AI's decisions. To bridge this gap, we argue that instead of aiming to achieve Explainable AI, we need to develop Interpretable, Actionable, and Resilient AI (AI3). Our position is that aiming to provide military commanders and decision-makers with an understanding of how AI models make decisions risks constraining AI capabilities to only those reconcilable with human cognition. Instead, complex systems should be designed with features that build trust by bringing decision-analytic perspectives and formal tools into the AI development and application process. AI3 incorporates explicit quantifications and visualizations of user confidence in AI decisions. In doing so, it makes AI predictions examinable and testable, establishing a basis for trust in the system's decision-making and ensuring broad benefits from deploying and advancing its computational capabilities. This presentation provides a methodological framework and practical examples of integrating AI into mission-critical use cases and decision-analytical tools.
Keywords
artificial intelligence, trust, mission-critical AI