Tackle Cognitive Biases in Videosurveillance

Proceedings of the 2021 IEEE 30th International Symposium on Industrial Electronics (ISIE), 2021

Abstract
Artificial Intelligence and Deep Learning have developed rapidly over the past decade. New solutions have emerged, and many jobs, systems and processes have been transformed by these technologies. They have affected many professions positively, for instance by reducing repetitive and dangerous tasks, lowering operating costs and increasing productivity. Despite the real benefits of AI, limitations have arisen that can dramatically affect outcomes. Bias in AI is one of the major obstacles to the mass industrialization of such tools. Indeed, biases exist both in data and in people, and algorithms may perpetuate or amplify them, producing prejudiced decisions that raise societal concerns. Emerging research aims to reduce these inequalities, in particular through the study of ethics and fairness in Artificial Intelligence algorithms. In this paper, we propose a synthesis of the origins of bias and of how to tackle it through fairness and ethics, illustrated by the chosen use case of video surveillance in train stations.
Keywords
AI ethics, bias, fairness, video