LLM Instruction-Example Adaptive Prompting (LEAP) Framework for Clinical Relation Extraction.

Huixue Zhou, Mingchen Li, Yongkang Xiao, Han Yang, Rui Zhang

medRxiv: the preprint server for health sciences (2023)

Abstract
Objective: To investigate demonstration design in large language models (LLMs) for clinical relation extraction. We focus on two types of adaptive demonstration, instruction adaptive prompting and example adaptive prompting, to understand their impact and effectiveness.

Materials and Methods: The study unfolds in two stages. Initially, we explored a range of demonstration components vital to LLMs' clinical data extraction, such as task descriptions and examples, and tested their combinations. Subsequently, we introduced the Instruction-Example Adaptive Prompting (LEAP) framework, which integrates two types of adaptive prompts: one preceding the instruction and another preceding the examples. This framework is designed to systematically explore both adaptive task descriptions and adaptive examples within the demonstration. We evaluated the LEAP framework's performance on the DDI and BC5CDR chemical interaction datasets, applying it across LLMs such as Llama2-7b, Llama2-13b, and MedLLaMA_13B.

Results: The study revealed that the Instruction + Options + Examples mode and its expanded form substantially raised F1 scores over the standard Instruction + Options mode. The LEAP framework excelled, especially with example adaptive prompting, which outperformed traditional instruction tuning across models. Notably, the MedLLaMA_13B model scored an impressive 95.13 F1 on the BC5CDR dataset with this method. Significant improvements were also seen on the DDI 2013 dataset, confirming the method's robustness in sophisticated data extraction.

Conclusion: The LEAP framework presents a promising avenue for refining LLM training strategies, steering away from extensive fine-tuning towards more contextually rich and dynamic prompting methodologies.
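To make the compared prompt modes concrete, the sketch below assembles a demonstration in the Instruction + Options (+ Examples) style described in the abstract, with an optional adaptive prefix before the instruction. This is an illustrative assumption, not the authors' implementation: the function name `build_prompt`, the placeholder strings, and the example sentence are hypothetical, though the option labels mirror the DDI 2013 relation classes.

```python
# Illustrative sketch (hypothetical, not the LEAP authors' code): composing
# the prompt variants the paper compares for clinical relation extraction.

def build_prompt(instruction, options, examples=None, adaptive_prefix=None):
    """Compose a demonstration in the Instruction + Options (+ Examples) style."""
    parts = []
    if adaptive_prefix:
        # LEAP-style adaptive prompt placed before the instruction
        parts.append(adaptive_prefix)
    parts.append(instruction)
    parts.append("Options: " + ", ".join(options))
    if examples:
        # few-shot demonstrations, one "Example: sentence -> label" per line
        parts.extend(f"Example: {text} -> {label}" for text, label in examples)
    return "\n".join(parts)

prompt = build_prompt(
    instruction="Classify the relation between the two drugs in the sentence.",
    options=["mechanism", "effect", "advise", "int", "none"],  # DDI 2013 labels
    examples=[("Aspirin may increase the effect of warfarin.", "effect")],
)
print(prompt)
```

Dropping the `examples` argument reproduces the plain Instruction + Options baseline, so the same helper covers both modes the study contrasts.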