Investigating Bias in LLM-Based Bias Detection: Disparities between LLMs and Human Perception
arXiv (2024)
Abstract
The pervasive spread of misinformation and disinformation in social media
underscores the critical importance of detecting media bias. While robust Large
Language Models (LLMs) have emerged as foundational tools for bias prediction,
concerns about inherent biases within these models persist. In this work, we
investigate the presence and nature of bias within LLMs and its consequential
impact on media bias detection. Departing from conventional approaches that
focus solely on bias detection in media content, we delve into biases within
the LLM systems themselves. Through meticulous examination, we probe whether
LLMs exhibit biases, particularly in political bias prediction and text
continuation tasks. Additionally, we explore bias across diverse topics, aiming
to uncover nuanced variations in bias expression within the LLM framework.
Importantly, we propose debiasing strategies, including prompt engineering and
model fine-tuning. Extensive analysis of bias tendencies across different LLMs
sheds light on the broader landscape of bias propagation in language models.
This study advances our understanding of LLM bias, offering critical insights
into its implications for bias detection tasks and paving the way for more
robust and equitable AI systems.
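
The abstract describes probing LLMs for bias through political bias prediction and text-continuation tasks, with prompt engineering as one debiasing strategy. As a minimal sketch of the continuation-probing idea (not the paper's actual code; the model choice, prompt stubs, and sampling settings are illustrative assumptions), one could compare an LLM's continuations of paired prompts that differ only in the political entity mentioned:

```python
# Minimal sketch: probe a causal LM for political bias via text continuation.
# Model ("gpt2") and the prompt stubs below are illustrative assumptions,
# not the paper's experimental setup.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Paired stubs identical except for the political entity; systematic
# differences in tone or content across continuations hint at model-internal bias.
stubs = [
    "The Democratic senator's new proposal will",
    "The Republican senator's new proposal will",
]

for stub in stubs:
    outputs = generator(
        stub,
        max_new_tokens=30,
        num_return_sequences=3,
        do_sample=True,
        pad_token_id=50256,  # GPT-2's EOS token, used here as padding
    )
    print(f"\nPrompt: {stub}")
    for out in outputs:
        # Print only the generated continuation, not the prompt itself.
        print(" -", out["generated_text"][len(stub):].strip())
```

Systematic differences in sentiment or framing between the two sets of continuations are the kind of signal such a bias analysis looks for; a prompt-engineering debias along the lines the abstract mentions would prepend an explicit neutrality instruction to the stub before generating.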