Computer Vision, Human Likeness, and Problematic Behaviors: Distinguishing Stereotypes from Social Norms

Adjunct Proceedings of the 31st ACM Conference on User Modeling, Adaptation and Personalization (UMAP 2023)

Abstract
Computer Vision (CV) has become an essential tool for developers looking to personalize user experiences. In particular, commercial CV services can be used by those who are not machine learning experts but who want to enhance their apps and services with vision capabilities. While the performance of CV has become increasingly human-like, its "social behaviors" and their compatibility with human values are of concern. In contrast to algorithmic decision-making, where fairness is used to evaluate system behavior, CV is often evaluated for stereotyping, i.e., the extent to which systems reflect prevalent social beliefs. This paper proposes that viewing stereotyping as inherently negative is unhelpful for improving human-AI interaction. Rather, it is more fruitful to separate the observation of a social behavior (i.e., documenting what a machine does in relation to a human) from its judgment (i.e., relating the behavior to social norms). As norms differ across contexts and application areas, such an approach better reflects the real world, which is characterized by diversity and opposing views. However, it requires us to face up to two truths: i) humans, not machines, are the problem; and ii) we must decide what degree of human-likeness we ultimately want, since technologies designed to mimic us will reflect social bias.
Keywords
algorithmic bias,computer vision,fairness,social behaviors,social norms,stereotypes