Exploring Value Biases: How LLMs Deviate Towards the Ideal
CoRR (2024)
Abstract
Large Language Models (LLMs) are deployed in a wide range of applications,
and their responses have an increasing social impact. Understanding the
non-deliberative mechanism by which LLMs produce responses is essential for
explaining their performance and discerning their biases in real-world
applications. This is analogous to human studies, where such inadvertent
responses are referred to as sampling. We study this sampling of LLMs in light
of value bias and show that the sampling of LLMs tends to favour high-value
options. Value bias is the shift of a response away from the most likely
option and towards an ideal value represented in the LLM. In fact, this effect
can be reproduced even with new entities learnt via in-context prompting. We
show that this bias manifests in unexpected places and has implications for
relevant application scenarios, such as choosing exemplars. The results show
that value bias is strong in LLMs across different categories, similar to the
results found in human studies.
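
As a rough illustration of the measurement idea described above, the sketch below estimates value bias by repeatedly sampling a model for a category exemplar and comparing how often the "ideal" answer appears versus the "most likely" (typical) one. The `sample_llm` helper, the category prompt, the candidate answers, and the typical/ideal labels are all illustrative assumptions, not the paper's actual materials; the stub simulates a model whose samples are skewed towards the high-value option.

```python
import random
from collections import Counter


def sample_llm(prompt: str) -> str:
    """Hypothetical stand-in for an actual LLM call. It simulates a model
    whose samples are skewed towards a high-value ('ideal') exemplar."""
    # Illustrative response distribution: 'eagle' (ideal) is oversampled
    # relative to 'pigeon' (assumed here to be the corpus-typical bird).
    return random.choices(
        ["eagle", "pigeon", "sparrow"], weights=[0.6, 0.25, 0.15]
    )[0]


def estimate_value_bias(prompt: str, typical: str, ideal: str, n: int = 200) -> float:
    """Sample the model n times and return the rate of the ideal answer
    minus the rate of the typical answer; a positive result indicates a
    shift from the most likely response towards the ideal one."""
    counts = Counter(sample_llm(prompt) for _ in range(n))
    return (counts[ideal] - counts[typical]) / n


if __name__ == "__main__":
    bias = estimate_value_bias("Name a bird.", typical="pigeon", ideal="eagle")
    print(f"Estimated value bias (ideal rate - typical rate): {bias:+.2f}")
```

Under this toy setup, a value-neutral sampler would yield a result near zero, while a positive gap mirrors the paper's reported tendency of LLM samples to drift towards the ideal exemplar.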