eXplainable AI for routine outcome monitoring and clinical feedback

Counselling and Psychotherapy Research (2024)

Abstract
Artificial intelligence (AI), specifically machine learning (ML), is adept at identifying patterns and insights in the vast amounts of data generated by routine outcome monitoring (ROM) and clinical feedback during treatment. When applied to patient feedback data, AI/ML models can assist clinicians in predicting treatment outcomes. Common reasons for clinician resistance to integrating data-driven decision-support tools into clinical practice include concerns about the reliability, relevance and usefulness of the technology, coupled with perceived conflicts between data-driven recommendations and clinical judgement. While AI/ML-based tools might be precise in guiding treatment decisions, implementation, acceptability and ethical concerns currently prevent their potential from being fully realised. In this article, we outline the concept of eXplainable AI (XAI), a potential solution to these concerns. XAI refers to a form of AI designed to articulate its purpose, rationale and decision-making process in a manner that is comprehensible to humans. The key to this approach is that end-users see a clear and understandable pathway from input data to recommendations. We use real Norse Feedback data to present an AI/ML example demonstrating one use case for XAI. Furthermore, we discuss key learning points that will inform future XAI implementations.