Can Large Language Models (or Humans) Disentangle Text?
arXiv (2024)
Abstract
We investigate the potential of large language models (LLMs) to disentangle text variables: to remove the textual traces of an undesired forbidden variable, a task sometimes known as text distillation that is closely related to the fairness-in-AI and causal inference literatures. We employ a range of LLM approaches in an attempt to disentangle text by identifying and removing information about a target variable while preserving other relevant signals. We show that in the strong test of removing sentiment, the statistical association between the processed text and sentiment remains detectable to machine learning classifiers after LLM disentanglement. Furthermore, we find that human annotators also struggle to disentangle sentiment while preserving other semantic content. This suggests there may be limited separability between concept variables in some text contexts, highlighting limitations of methods relying on text-level transformations and also raising questions about the robustness of disentanglement methods that achieve statistical independence in representation space.
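The core check the abstract describes is whether the forbidden variable stays statistically detectable after rewriting. A minimal sketch of such a test, assuming paired data of LLM-rewritten texts and the original sentiment labels (the paper's actual classifiers and features are not specified here, so the model choice below is illustrative):

```python
# Minimal sketch: test whether sentiment is still statistically
# detectable in LLM-"disentangled" text. Assumes `texts` holds the
# post-rewrite documents and `labels` the original binary sentiment
# labels; the paper's actual evaluation setup may differ.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline


def residual_sentiment_auc(texts, labels):
    """Cross-validated ROC AUC of a bag-of-words classifier predicting
    the (supposedly removed) sentiment from disentangled text.

    AUC near 0.5 is consistent with successful removal; clearly higher
    values indicate the statistical association survived the rewrite.
    """
    clf = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2), min_df=2),
        LogisticRegression(max_iter=1000),
    )
    return cross_val_score(clf, texts, labels, cv=5, scoring="roc_auc").mean()


# Hypothetical usage with rewritten texts and their original labels:
# auc = residual_sentiment_auc(rewritten_texts, sentiment_labels)
# print(f"Residual sentiment AUC: {auc:.3f}")
```

Reporting a cross-validated score rather than a single train/test split makes the detectability estimate less sensitive to one particular partition of the data.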