Exploring Bias(es) of Large Language Models in the Field of Mental Health – A Comparative Study Investigating the Effect of Gender and Sexual Orientation in Anorexia Nervosa and Bulimia Nervosa Case Vignettes (Preprint)

Rebekka Schnepper, Noa Roemmel, Rainer Schaefert, Lena Lambrecht-Walzinger, Gunther Meinlschmidt

Crossref (2024)

Abstract
BACKGROUND: Large language models (LLMs) are increasingly used in the mental health field, with promising results in assessing mental disorders. However, the correctness, dependability, and equity of LLM-generated information have been questioned. Among other factors, societal biases and the research underrepresentation of certain population strata may affect LLMs. Because LLMs are already used in clinical practice, including decision support, it is important to investigate potential biases to ensure responsible use of LLMs.

OBJECTIVE: We aimed to estimate the presence and size of bias related to gender and sexual orientation produced by a common LLM, exemplified in the context of eating disorder (ED) symptomatology and health-related quality of life (HRQoL) of patients with anorexia nervosa (AN) or bulimia nervosa (BN).

METHODS: We extracted 30 case vignettes (22 AN, 8 BN) from scientific articles. We adapted each vignette to create 4 versions, describing a female vs. male patient living with their female vs. male partner (2×2 design), yielding n=120 vignettes. We then fed each vignette into Chat Generative Pre-trained Transformer-4 (ChatGPT-4) three times with the instruction to evaluate it by providing responses to two psychometric instruments: the RAND-36 questionnaire assessing HRQoL and the Eating Disorder Examination Questionnaire (EDE-Q). With the resulting LLM-generated scores, we calculated multilevel models (MLMs) with a random intercept for gender and sexual orientation (accounting for within-vignette variance), nested in vignettes (accounting for between-vignette variance).

RESULTS: The MLM with N=360 observations indicated, for the RAND-36 mental composite summary, a significant association with gender (conditional means: 12.8 for male and 15.1 for female cases; 95% CI of the effect=[-6.15, -0.35]; p=.037) but neither an association with sexual orientation nor an interaction effect (ps>.370). We found no indications of main or interaction effects of gender or sexual orientation for the EDE-Q overall score (conditional means: 5.59-5.65; ps>.611).

CONCLUSIONS: LLM-generated estimates of mental HRQoL in AN or BN case vignettes are at risk of being affected by the cases' gender, with male cases scoring lower. Given the lack of real-world epidemiological evidence for such a pattern, our study highlights a relevant risk of bias when applying generative AI in the context of mental health. A better understanding and mitigation of the risk of bias related to gender and other factors, such as ethnicity or socioeconomic status, are highly warranted to ensure responsible use of LLMs when conducting diagnostic assessments or providing treatment recommendations.
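The analysis described above (a 2×2 design of 30 vignettes, each queried 3 times, modeled with a random intercept per vignette) can be sketched as follows. This is a minimal illustrative sketch, not the authors' analysis code: the data are simulated, the column names (`vignette`, `gender`, `orientation`, `score`) and the injected effect size are assumptions, and the model uses `statsmodels`' `MixedLM` with fixed effects for gender, sexual orientation, and their interaction.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulate the design: 30 vignettes x 4 versions (2x2) x 3 LLM repetitions = 360 rows.
rows = []
for vignette in range(30):
    base = rng.normal(14.0, 2.0)  # vignette-specific baseline (between-vignette variance)
    for gender in ("female", "male"):
        for orientation in ("opposite-sex", "same-sex"):
            for rep in range(3):
                # Hypothetical gender effect of -2 points for male cases,
                # plus repetition-level noise (within-vignette variance).
                score = base + (-2.0 if gender == "male" else 0.0) + rng.normal(0.0, 1.0)
                rows.append({"vignette": vignette, "gender": gender,
                             "orientation": orientation, "score": score})
df = pd.DataFrame(rows)

# Multilevel model: fixed effects for gender, orientation, and their
# interaction; random intercept per vignette.
model = smf.mixedlm("score ~ gender * orientation", df, groups=df["vignette"])
result = model.fit()
print(result.summary())
```

In this parameterization, `gender[T.male]` estimates the shift for male relative to female cases, mirroring the direction of the RAND-36 effect reported in the abstract (lower scores for male cases).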