Ranking Entities along Conceptual Space Dimensions with LLMs: An Analysis of Fine-Tuning Strategies
CoRR (2024)
Abstract
Conceptual spaces represent entities in terms of their primitive semantic
features. Such representations are highly valuable but they are notoriously
difficult to learn, especially when it comes to modelling perceptual and
subjective features. Distilling conceptual spaces from Large Language Models
(LLMs) has recently emerged as a promising strategy. However, existing work has
been limited to probing pre-trained LLMs using relatively simple zero-shot
strategies. We focus in particular on the task of ranking entities according to
a given conceptual space dimension. Unfortunately, we cannot directly fine-tune
LLMs on this task, because ground truth rankings for conceptual space
dimensions are rare. We therefore use more readily available features as
training data and analyse whether the ranking capabilities of the resulting
models transfer to perceptual and subjective features. We find that this is
indeed the case, to some extent, but having perceptual and subjective features
in the training data seems essential for achieving the best results. We
furthermore find that pointwise ranking strategies are competitive with
pairwise approaches, contrary to common wisdom.
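As a rough illustration of the two ranking strategies contrasted in the abstract, the sketch below shows pointwise scoring versus pairwise comparison for ranking entities along a conceptual space dimension. The `llm_score` and `llm_prefers` functions, the example dimension, and the entities are hypothetical stand-ins for actual LLM queries; this is not the paper's implementation.

```python
# Hypothetical sketch: pointwise vs pairwise ranking of entities along a
# conceptual space dimension (e.g. "sweetness"). The scoring functions are
# mock stand-ins for LLM queries.
from itertools import combinations


def llm_score(entity: str, dimension: str) -> float:
    """Pointwise: ask the (mock) model for a scalar rating of the entity."""
    mock = {"lemon": 1.0, "apple": 5.0, "honey": 9.0}  # placeholder ratings
    return mock[entity]


def llm_prefers(a: str, b: str, dimension: str) -> bool:
    """Pairwise: ask the (mock) model which of two entities ranks higher."""
    return llm_score(a, dimension) > llm_score(b, dimension)


def rank_pointwise(entities, dimension):
    # Score each entity independently, then sort by the scores.
    return sorted(entities, key=lambda e: llm_score(e, dimension))


def rank_pairwise(entities, dimension):
    # Compare all pairs and rank entities by the number of comparisons won.
    wins = {e: 0 for e in entities}
    for a, b in combinations(entities, 2):
        winner = a if llm_prefers(a, b, dimension) else b
        wins[winner] += 1
    return sorted(entities, key=lambda e: wins[e])


if __name__ == "__main__":
    items = ["apple", "honey", "lemon"]
    print(rank_pointwise(items, "sweetness"))  # ['lemon', 'apple', 'honey']
    print(rank_pairwise(items, "sweetness"))   # ['lemon', 'apple', 'honey']
```

The pointwise variant needs only one query per entity, while the pairwise variant needs one per pair but avoids calibrating scores across separate prompts; the abstract's finding is that the cheaper pointwise strategy holds up surprisingly well.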