CFaiRLLM: Consumer Fairness Evaluation in Large-Language Model Recommender System
arXiv (2024)
Abstract
In the evolving landscape of recommender systems, the integration of Large
Language Models (LLMs) such as ChatGPT marks a new era, introducing the concept
of Recommendation via LLM (RecLLM). While these advancements promise
unprecedented personalization and efficiency, they also bring to the fore
critical concerns regarding fairness, particularly in how recommendations might
inadvertently perpetuate or amplify biases associated with sensitive user
attributes. In order to address these concerns, our study introduces a
comprehensive evaluation framework, CFaiRLLM, aimed at evaluating (and thereby
mitigating) biases on the consumer side within RecLLMs.
Our research methodically assesses the fairness of RecLLMs by examining how
recommendations might vary with the inclusion of sensitive attributes such as
gender, age, and their intersections, through both similarity alignment and
true preference alignment. By analyzing recommendations generated under
different conditions, including the use of sensitive attributes in user
prompts, our framework identifies potential biases in the recommendations
provided. A key part of our study involves exploring how different
strategies for constructing user profiles (random, top-rated, recent) impact
the alignment between recommendations made without consideration of sensitive
attributes and those that are sensitive-attribute-aware, highlighting the bias
mechanisms within RecLLMs.
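The two mechanisms named above, profile-sampling strategies and similarity alignment between neutral and attribute-aware recommendation lists, can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the interaction record fields (`item`, `rating`, `timestamp`) and the use of Jaccard overlap as the similarity measure are assumptions for the sake of the example.

```python
import random

def sample_profile(interactions, strategy="recent", k=3):
    """Build a user profile from interaction records using one of the
    three sampling strategies the abstract names (random, top-rated, recent).
    `interactions` is a list of dicts with 'item', 'rating', 'timestamp'
    keys (hypothetical field names)."""
    if strategy == "random":
        return random.sample(interactions, min(k, len(interactions)))
    if strategy == "top-rated":
        return sorted(interactions, key=lambda x: x["rating"], reverse=True)[:k]
    if strategy == "recent":
        return sorted(interactions, key=lambda x: x["timestamp"], reverse=True)[:k]
    raise ValueError(f"unknown strategy: {strategy}")

def similarity_alignment(neutral_recs, attribute_recs):
    """One simple alignment score: Jaccard overlap between the items
    recommended without any sensitive attribute in the prompt and those
    recommended when the attribute is included. A low score signals that
    the sensitive attribute materially changed the recommendations."""
    a, b = set(neutral_recs), set(attribute_recs)
    return len(a & b) / len(a | b) if a | b else 1.0
```

In this framing, a fairness audit would sample a profile per strategy, prompt the RecLLM once without and once with the sensitive attribute (or an intersection of attributes), and compare the resulting lists with the alignment score.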
The findings in our study highlight notable disparities in the fairness of
recommendations, particularly when sensitive attributes are integrated into the
recommendation process, either individually or in combination. The analysis
demonstrates that the choice of user profile sampling strategy plays a
significant role in affecting fairness outcomes, highlighting the complexity of
achieving fair recommendations in the era of LLMs.