On the Standardization of Behavioral Use Clauses and Their Adoption for Responsible Licensing of AI
CoRR (2024)
Abstract
Growing concerns over negligent or malicious uses of AI have increased the
appetite for tools that help manage the risks of the technology. In 2018,
licenses with behavioral-use clauses (commonly referred to as Responsible AI
Licenses) were proposed to give developers a framework for releasing AI assets
while restricting how users may apply them, in order to mitigate negative
applications. As of the end of 2023, on the order of 40,000 software and model
repositories have adopted responsible AI licenses. Notable models licensed
with behavioral-use clauses include BLOOM (language), LLaMA 2 (language),
Stable Diffusion (image), and GRID (robotics). This paper explores why and how
these licenses have been adopted, and why and how they have been adapted to
fit particular use cases. We use a mixed-methods approach combining
qualitative interviews, clustering of license clauses, and quantitative
analysis of license adoption. Based on this evidence, we take the position
that responsible AI licenses need standardization to avoid confusing users or
diluting their impact. At the same time, customization of behavioral
restrictions is also appropriate in some contexts (e.g., medical domains). We
advocate for “standardized customization” that can meet users' needs and can
be supported via tooling.