Political economy of superhuman AI

CoRR (2022)

Abstract
In this note, I study the institutions and game-theoretic assumptions that would prevent the emergence of "superhuman-level" artificial general intelligence, denoted AI*. These assumptions are (i) the "Freedom of the Mind," (ii) open-source "access" to AI*, and (iii) rationality of the representative human agent who competes against AI*. I prove that under these three assumptions no AI* can exist. This result yields two immediate recommendations for public policy. First, digital "cloning" of the human brain should be strictly regulated, and a hypothetical AI*'s access to the brain should be prohibited. Second, AI* research should be made widely, if not publicly, accessible.
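To make the logical shape of the claim explicit, here is a minimal schematic statement of the impossibility result in LaTeX. The propositional labels F, O, and R are illustrative shorthands for assumptions (i)–(iii), not the paper's own notation, and the full game-theoretic model behind the implication is developed in the paper itself.

```latex
\documentclass{article}
\usepackage{amsmath,amsthm}
\newtheorem{theorem}{Theorem}

\begin{document}
% Illustrative shorthands (assumed here, not the paper's notation):
%   F = the "Freedom of the Mind" holds
%   O = open-source access to AI* is available
%   R = the representative human agent competing against AI* is rational
\begin{theorem}[Impossibility of AI\textsuperscript{*}, schematic]
Under assumptions (i)--(iii),
\[
  F \,\wedge\, O \,\wedge\, R \;\Longrightarrow\; \neg\,\exists\,\mathrm{AI}^{*},
\]
i.e.\ no superhuman-level artificial general intelligence can emerge.
\end{theorem}
\end{document}
```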