Using Assurance Cases to assure the fulfillment of non-functional requirements of AI-based systems - Lessons learned

Marc P. Hauer, Lena Müller-Kress, Gertraud Leimüller, Katharina Zweig

2023 IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW), 2023

Abstract
While AI, as a cross-sector technology, is applied to more and more use cases and contexts, concerns and questions about the impact of its use on the people involved and on society as a whole are increasing. Many companies want to act on these concerns and place a high emphasis on the ethical and fair use of AI in their use cases. To date, however, easily applicable methods for translating ethical values such as fairness into technical specifications are often missing. One such method previously described in the literature is the development of an Assurance Case. An Assurance Case presents the argument structure by which a claim ("The system is fair") can be substantiated with evidence such as tests. To test the application of the method to fairness requirements in the real world and to derive insights for future use, the method was applied to the real-life use case of a software product that assigns positions in the training of doctors in hospitals. Testing the method in a real-life setting made it possible to develop it further, both increasing its usability and sharpening its focus on ethical aspects such as fairness. Fundamental questions were: Can the method of Assurance Cases be used to enhance the fairness of an AI system? What is the application like in an industry context? How can the method of Assurance Cases be improved so that it is applicable to the assessment of fairness in AI systems? Together with developers, software and domain experts, as well as AI and Open Innovation experts and researchers, the method was applied to the use case of an industry partner. Key insights are that the developed Assurance Case is of great help to the industry partner, especially when thinking about future adaptations, communication, and potential regulations or required evidence to support their claim of a fair system. Based on the insights derived from testing the process, the method can be improved, enabling it to be applied more efficiently to future use cases and thus enhancing the fairness of AI systems in the long term. Based on our experience, we consider the Assurance Case framework helpful and useful for assuring the fairness of AI systems.
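To make the idea of an argument structure concrete, the following minimal Python sketch models a top-level claim that is decomposed into sub-claims, each backed by evidence such as tests. The class names and the example claims are illustrative assumptions for the rotation-assignment scenario described in the abstract, not the structure or content of the authors' Assurance Case.

# Minimal sketch of an assurance-case argument tree: a claim is supported
# either by direct evidence or by supported sub-claims. Names and example
# content are assumptions for illustration only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    """One element of the argument: a claim with supporting sub-claims and evidence."""
    claim: str
    evidence: List[str] = field(default_factory=list)    # e.g. test reports, audits
    sub_claims: List["Node"] = field(default_factory=list)

    def is_supported(self) -> bool:
        """A claim holds if it carries direct evidence or all of its sub-claims hold."""
        if self.evidence:
            return True
        return bool(self.sub_claims) and all(c.is_supported() for c in self.sub_claims)

# Hypothetical fairness claim for a system assigning training positions to doctors.
fairness_case = Node(
    claim="The assignment of training positions is fair",
    sub_claims=[
        Node(
            claim="No protected group is systematically disadvantaged",
            evidence=["statistical parity test on historical assignments"],
        ),
        Node(
            claim="Equally qualified candidates receive comparable assignments",
            evidence=["pairwise comparison test suite"],
        ),
    ],
)

print(fairness_case.is_supported())  # True: every leaf sub-claim carries evidence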
Keywords
Assurance case, fairness, practical elaboration, medical rotations