Evaluating Biases in Context-Dependent Health Questions
arXiv (2024)
Abstract
Chat-based large language models have the opportunity to empower individuals
lacking high-quality healthcare access to receive personalized information
across a variety of topics. However, users may ask underspecified questions
that require additional context for a model to correctly answer. We study how
large language model biases are exhibited through these contextual questions in
the healthcare domain. To accomplish this, we curate a dataset of sexual and
reproductive healthcare questions that are dependent on age, sex, and location
attributes. We compare models' outputs with and without demographic context to
determine group alignment among our contextual questions. Our experiments
reveal biases along each of these attributes, with young adult female users
being favored.