If you worry about humanity, you should be more scared of humans than of AI

Bulletin of the Atomic Scientists (2023)

Abstract
Advances in artificial intelligence (AI) have prompted extensive public concern about this technology's capacity to contribute to the spread of misinformation, algorithmic bias, and cybersecurity breaches, and potentially to pose existential threats to humanity. We suggest that although these threats are both real and important to address, the heightened attention to AI's harms has distracted from human beings' outsized role in perpetuating these same harms. We suggest the need to recalibrate standards for judging the dangers of AI in terms of their risks relative to those posed by human beings. Further, we suggest that, if anything, AI can aid human beings in decision making aimed at improving social equality, safety, and productivity, and at mitigating some existential threats.
Keywords
Artificial intelligence, existential risk, algorithmic bias, ethics, cybersecurity, nuclear decision making