AI in radiology: Legal responsibilities and the car paradox

European Journal of Radiology (2024)

Abstract
The integration of AI in radiology raises significant legal questions about responsibility for errors. Radiologists fear AI may introduce new legal challenges, despite its potential to enhance diagnostic accuracy. AI tools, even those approved by regulators such as the FDA or carrying the CE mark, are not perfect and therefore carry a risk of failure. A key issue is how AI is implemented: as a stand-alone diagnostic tool or as an aid to radiologists. The latter approach could reduce undesired consequences. However, it remains unclear who should be held liable for AI failures, with potential candidates ranging from the engineers and radiologists involved in AI development to the companies and department heads who integrate these tools into clinical practice. The EU's AI Act, recognizing AI's risks, categorizes applications by risk level, and many radiology-related AI tools fall into the high-risk category. Legal precedents in autonomous vehicles offer some guidance on assigning responsibility. Meanwhile, the existing legal challenges in radiology, such as diagnostic errors, persist. AI's potential to improve diagnostics also raises questions about the legal implications of not using available AI tools. For instance, an AI tool that improves the detection of pediatric fractures could reduce legal risks. This situation parallels innovations like car turn signals, where ignoring available safety enhancements could lead to legal problems. The debate underscores the need for further research and regulation to clarify AI's role in radiology, balancing innovation with legal and ethical considerations.
Keywords
Artificial intelligence, Legal, Errors