Exploring the Impact of Table-to-Text Methods on Augmenting LLM-based Question Answering with Domain Hybrid Data
CoRR (2024)
Abstract
Augmenting Large Language Models (LLMs) for Question Answering (QA) with
domain-specific data has attracted wide attention. However, domain data often
exists in a hybrid format, including text and semi-structured tables, posing
challenges for the seamless integration of information. Table-to-Text
Generation offers a promising solution by transforming hybrid data into a
uniformly text-formatted corpus. Although this technique has been widely
studied by the NLP community, there is currently no comparative analysis of
how corpora generated by different table-to-text methods affect the
performance of QA systems. In this paper, we address this research gap in two
steps. First, we innovatively integrate table-to-text generation into the
framework of enhancing LLM-based QA systems with domain hybrid data. Then, we
apply this framework to real-world industrial data, conducting extensive
experiments on two types of QA systems (DSFT and RAG frameworks) with four
representative methods: Markdown format, Template serialization, TPLM-based
method, and LLM-based method. Based on the experimental results, we draw
several empirical findings and explore the underlying reasons behind the
success of some methods. We hope the findings of this work will provide a
valuable reference for the academic and industrial communities in developing
robust QA systems.
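To make the first two table-to-text methods named in the abstract concrete, here is a minimal illustrative sketch (not taken from the paper; the function names and the toy table are invented for illustration) of serializing the same table as a Markdown table versus as template-generated sentences:

```python
# Illustrative sketch of two table-to-text strategies the abstract names:
# Markdown formatting and template serialization, applied to a toy table.
# Function names and data are hypothetical, not from the paper.

def to_markdown(headers, rows):
    """Render a table as a GitHub-style Markdown table."""
    lines = ["| " + " | ".join(headers) + " |",
             "| " + " | ".join("---" for _ in headers) + " |"]
    for row in rows:
        lines.append("| " + " | ".join(str(c) for c in row) + " |")
    return "\n".join(lines)

def to_template(headers, rows):
    """Serialize each row as a sentence: 'The <header> is <value>, ...'."""
    sentences = []
    for row in rows:
        parts = [f"the {h} is {v}" for h, v in zip(headers, row)]
        sentence = ", ".join(parts) + "."
        sentences.append(sentence[0].upper() + sentence[1:])
    return "\n".join(sentences)

headers = ["product", "price"]
rows = [["Widget", "$5"], ["Gadget", "$9"]]
print(to_markdown(headers, rows))
print(to_template(headers, rows))
```

Either output is plain text, so the resulting corpus can be fed uniformly to a fine-tuning (DSFT) pipeline or indexed for retrieval (RAG), which is the integration the paper studies.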