Human Value Requirements in AI Systems: Empirical Analysis of Amazon Alexa

2023 IEEE 31st International Requirements Engineering Conference Workshops (REW)

Abstract
The importance of incorporating human values (e.g., transparency, privacy, social recognition, tradition) into the Requirements Engineering (RE) process is well acknowledged, but there is a paucity of empirical research on integrating human values in RE. This shortfall becomes more pronounced when designing Artificial Intelligence (AI) systems because of their significant societal impact. Ignoring or violating human values in AI systems can lead to user dissatisfaction, negative socio-economic repercussions, and, in some instances, societal harm. However, there is little guidance on addressing human values within the RE process for specific contexts of AI system development. In this paper, we explore human value requirements derived from end-users' feedback on an AI system. We conduct an empirical analysis of the Amazon Alexa app as a case study, examining 1003 user reviews to identify relevant human values and to assess the extent to which these values are addressed or ignored in the app. We identified 34 values held by the end-users of Amazon Alexa. Of these, only one value (self-discipline) is addressed, while 23 (e.g., freedom, equality, obedience) are ignored in the app. The feedback reported mixed experiences (both addressed and ignored) for the remaining ten values. Through this analysis, we have tailored an approach for identifying human values from a specific type of AI system. We posit that this approach has potential utility across different AI systems and a broad range of contexts, providing guidance for developing human value requirements for values-based AI systems.
Keywords
Human values, Requirements, Artificial intelligence, App reviews, Empirical study