No more optimization rules: LLM-enabled policy-based multi-modal query optimizer (version 1)
arXiv (2024)
Abstract
Large language models (LLMs) have marked a pivotal moment in the field of
machine learning and deep learning. Recently, their capability for query
planning has been investigated, for both single-modal and multi-modal queries.
However, no work has examined the query optimization capability of LLMs. As a
critical (arguably the most important) step that significantly impacts the
execution performance of a query plan, such analysis should not be missed.
From another angle, existing query optimizers are usually rule-based or
rule-based + cost-based, i.e., they depend on manually created rules to
rewrite and transform the query plan. Given that modern optimizers include
hundreds to thousands of rules, designing a multi-modal query optimizer in a
similar way is prohibitively time-consuming, since as many multi-modal
optimization rules as possible would have to be enumerated, a problem that
remains unaddressed today. In this paper, we investigate the query
optimization ability of LLMs and use an LLM to design LaPuda, a novel LLM- and
policy-based multi-modal query optimizer. Instead of enumerating specific,
detailed rules, LaPuda needs only a few abstract policies to guide the LLM
during optimization, saving much time and human effort. Furthermore, to
prevent the LLM from making mistakes or performing negative optimization, we
borrow the idea of gradient descent and propose a guided cost descent (GCD)
algorithm, which keeps the optimization moving in the correct direction. In
our evaluation, our methods outperform the baselines in most cases. For
example, the optimized plans generated by our methods achieve 1-3x higher
execution speed than those produced by the baselines.
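The gradient-descent analogy behind GCD can be sketched as a simple accept-if-cheaper loop. The sketch below is an illustration only, not the paper's implementation: `propose_rewrites` (an LLM-suggested candidate generator) and `estimate_cost` (a plan cost model) are hypothetical stand-ins for the components the abstract describes, and a candidate rewrite is kept only if it lowers the estimated cost, so the search never moves in a negative direction.

```python
def guided_cost_descent(plan, propose_rewrites, estimate_cost, max_steps=10):
    """Greedy cost-descent over plan rewrites (illustrative sketch).

    propose_rewrites(plan) -> iterable of candidate rewritten plans
    estimate_cost(plan)    -> numeric cost estimate of a plan
    """
    best_cost = estimate_cost(plan)
    for _ in range(max_steps):
        improved = False
        for candidate in propose_rewrites(plan):
            cost = estimate_cost(candidate)
            if cost < best_cost:  # accept only cost-decreasing rewrites
                plan, best_cost = candidate, cost
                improved = True
                break
        if not improved:  # no candidate lowers the cost: stop descending
            break
    return plan, best_cost
```

On a toy cost model (e.g., cost = number of operators in the plan), the loop strictly decreases the estimated cost each step and terminates when no rewrite improves it, which mirrors the "keep the optimization in the correct direction" guarantee described above.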