Design and Usability Study of PRIMO: An Explainable Artificial Intelligence Software Tool for Weight Management Experts (Preprint)

crossref(2022)

Abstract
BACKGROUND: Obesity-attributable medical expenditures remain high, yet effective and economical interventions have not been adequately identified. Predicting the likelihood of weight loss success in interventions using machine learning (ML) models may enhance intervention effectiveness by enabling timely and dynamic modification of intervention components. However, a lack of understanding of and trust in these methods impedes adoption among weight management experts. Although many clinicians want to follow ML recommendations, recent literature has shown that clinicians with high ML familiarity are less likely to use them. Developments in explainable ML enable the generation of explanations that interpret the decisions of ML models, yet it remains unknown how such explanations can enhance model understanding, trust, and adoption among weight management experts.

OBJECTIVE: To build and evaluate an ML model that can predict weight loss early in an intervention, to assess whether providing ML-based explanations increases weight management experts' agreement with ML model predictions, and to identify factors that influence experts' understanding and trust of ML models in order to advance explainability in the early prediction of weight loss.

METHODS: We trained a random forest (RF) model on data from a 6-month technology-supported weight loss treatment program (N=419). Building on findings from existing explainability metrics, we developed PRIMO (Prime Implicant Maintenance of Outcome), an interactive tool for understanding the reasons behind the RF model's predictions. We asked 14 weight management experts to predict hypothetical participants' weight loss success before and after using PRIMO. We used generalized linear mixed effects models (GLMMs) to evaluate participants' agreement with the ML predictions and conducted likelihood ratio tests on nested models to examine the relationship between explainability methods and outcomes. We performed guided interviews and thematic analysis to study the tool's impact on experts' understanding of and trust in the model.

RESULTS: Our RF model predicted weight loss success by week 2 (the early timepoint) with 81% accuracy. Weight management experts were significantly more likely to agree with the model when using PRIMO (χ²=9.86, 2 df; P=0.007) than with the other 2 methods (log odds ratios of –1.56 and –1.13). Our study suggests that the software influenced not only experts' understanding and trust but also their decision-making process. Interviews surfaced several themes: (1) a preference for multiple explanation types; (2) a need to visualize uncertainty in the explanations provided by PRIMO; and (3) a need for model performance metrics on similar participant test instances.

CONCLUSIONS: Our results show the potential for weight management experts to agree with ML-based early prediction of success in weight loss treatment programs, enabling timely and dynamic modification of intervention components to enhance intervention effectiveness. Our findings further inform methods for advancing the understandability and trustworthiness of ML models among weight management experts.
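As a rough illustration of the early-prediction setup described in METHODS, the sketch below trains a random forest on simulated early-timepoint features to predict end-of-program weight loss success. The feature set, label definition, and data are illustrative assumptions; only the cohort size (N=419) and the reported 81% week-2 accuracy come from the abstract.

```python
# Minimal sketch of an early-prediction random forest, assuming hypothetical
# week-1/2 features (weigh-ins, app engagement, etc.). Not the paper's pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 419  # cohort size reported in the abstract

# Hypothetical early-timepoint features available by week 2.
X = rng.normal(size=(n, 4))
# Hypothetical binary label: 1 = achieved clinically meaningful weight loss.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

rf = RandomForestClassifier(n_estimators=500, random_state=0)
acc = cross_val_score(rf, X, y, cv=5, scoring="accuracy").mean()
print(f"cross-validated accuracy: {acc:.2f}")  # abstract reports 81% at week 2
rf.fit(X, y)
```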
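PRIMO's name points to prime implicants: minimal sets of feature conditions sufficient to maintain a model's outcome. The toy sketch below (reusing `rf` and `X` from the previous block) greedily searches for a small feature subset whose values empirically preserve the prediction when the remaining features are perturbed with values drawn from the data. This sampling-based stand-in is an assumption for illustration only, not the paper's actual algorithm.

```python
# Toy sufficient-subset search in the spirit of prime-implicant explanations.
# A feature is dropped if the prediction survives perturbation without it.
import numpy as np

def greedy_sufficient_subset(model, x, X_background, n_samples=200, seed=0):
    rng = np.random.default_rng(seed)
    target = model.predict(x.reshape(1, -1))[0]
    kept = set(range(len(x)))
    for j in range(len(x)):
        trial = kept - {j}
        # Draw background rows, then pin the still-kept features to x's values.
        samples = X_background[rng.integers(0, len(X_background), n_samples)].copy()
        samples[:, list(trial)] = x[list(trial)]
        if (model.predict(samples) == target).all():
            kept = trial  # feature j is not needed to maintain the outcome
    return sorted(kept)

explanation = greedy_sufficient_subset(rf, X[0], X)
print("features sufficient to maintain the prediction:", explanation)
```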
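The abstract's agreement analysis compares nested models with a likelihood ratio test (χ²=9.86 on 2 df). The sketch below reproduces that comparison mechanically on simulated data; for brevity it drops the random per-expert effects of the paper's GLMMs and compares plain logistic regressions, so the variable names, data, and agreement rates are all illustrative assumptions.

```python
# Likelihood ratio test between nested models of expert-ML agreement,
# simplified to fixed-effects logistic regression on simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2

rng = np.random.default_rng(1)
n_experts, n_cases = 14, 30  # 14 experts, as in the study; case count assumed
df = pd.DataFrame({
    "expert": np.repeat(np.arange(n_experts), n_cases),
    "method": rng.choice(["primo", "baseline1", "baseline2"], n_experts * n_cases),
})
# Hypothetical agreement rates: higher when PRIMO explanations are shown.
df["agree"] = (rng.random(len(df)) < np.where(df["method"] == "primo", 0.8, 0.6)).astype(int)

reduced = smf.logit("agree ~ 1", data=df).fit(disp=False)
full = smf.logit("agree ~ C(method)", data=df).fit(disp=False)

lr = 2 * (full.llf - reduced.llf)        # likelihood ratio statistic
dof = full.df_model - reduced.df_model   # 2 here, matching the abstract's 2 df
print(f"chi2 = {lr:.2f}, df = {dof}, p = {chi2.sf(lr, dof):.4f}")
```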