Multi-Conditional Ranking with Large Language Models
arXiv (2024)
Abstract
Utilizing large language models (LLMs) to rank a set of items has become a
common approach in recommendation and retrieval systems. Typically, these
systems focus on ordering a substantial number of documents in a monotonic
order based on a given query. However, real-world scenarios often present a
different challenge: ranking a comparatively smaller set of items, but
according to a variety of diverse and occasionally conflicting conditions. In
this paper, we define and explore the task of multi-conditional ranking by
introducing MCRank, a benchmark tailored for assessing multi-conditional
ranking across various item types and conditions. Our analysis of LLMs using
MCRank indicates a significant decrease in performance as the number and
complexity of items and conditions grow. To overcome this limitation, we
propose a novel decomposed reasoning method, consisting of EXtracting and
Sorting the conditions, and then Iteratively Ranking the items (EXSIR). Our
extensive experiments show that this decomposed reasoning method enhances LLMs'
performance significantly, achieving up to a 12% improvement over existing
LLMs. We also provide a detailed analysis of LLMs' performance across various
condition categories and examine the effectiveness of the decomposition step.
Furthermore, we compare our method with existing approaches such as
Chain-of-Thought and an encoder-type ranking model, demonstrating the
superiority of our approach and the complexity of the MCR task. We released our dataset
and code.
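The decomposed EXSIR pipeline described in the abstract (EXtract the conditions, Sort them, then Iteratively Rank the items) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the paper uses an LLM at each step, whereas here a toy keyword scorer and a naive length-based priority stand in for the model, and all function names (`extract_conditions`, `score_item`, `exsir_rank`) are illustrative assumptions.

```python
def extract_conditions(query: str) -> list[str]:
    """Split a compound query into individual conditions.
    (In the paper, an LLM performs this extraction.)"""
    return [c.strip() for c in query.split(";") if c.strip()]


def sort_conditions(conditions: list[str]) -> list[str]:
    """Order conditions by an assumed priority; here, highest priority first
    using a trivial length heuristic. (In the paper, an LLM decides this.)"""
    return sorted(conditions, key=len)


def score_item(item: str, condition: str) -> int:
    """Toy stand-in for an LLM relevance judgment: count how many of the
    condition's words appear in the item."""
    return sum(word in item.lower() for word in condition.lower().split())


def exsir_rank(items: list[str], query: str) -> list[str]:
    """Iteratively re-rank the items, one condition at a time.
    Lower-priority conditions are applied first so that the highest-priority
    sort happens last and dominates; Python's stable sort then keeps the
    earlier orderings as tie-breakers."""
    ranking = list(items)
    for condition in reversed(sort_conditions(extract_conditions(query))):
        ranking.sort(key=lambda it: -score_item(it, condition))
    return ranking


if __name__ == "__main__":
    items = ["red large ball", "blue small ball", "red small cube"]
    print(exsir_rank(items, "prefer red items; prefer small items"))
```

The design point the sketch illustrates is the decomposition itself: instead of asking one model call to satisfy several, possibly conflicting conditions at once, each condition is isolated and applied in a controlled order, which is what the abstract credits for the performance gain.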