Towards Automatic Evaluation for LLMs' Clinical Capabilities: Metric, Data, and Algorithm
arXiv (2024)
Abstract
Large language models (LLMs) are gaining increasing interest for improving
clinical efficiency in medical diagnosis, owing to their unprecedented
performance in modelling natural language. To ensure safe and reliable
clinical applications, the evaluation of LLMs becomes critical for
mitigating potential risks, e.g., hallucinations. However, current
evaluation methods rely heavily on labor-intensive human participation to
achieve human-preferred judgements. To overcome this challenge, we propose an
automatic evaluation paradigm tailored to assess LLMs' capabilities in
delivering clinical services, e.g., disease diagnosis and treatment. The
evaluation paradigm contains three basic elements: metric, data, and algorithm.
Specifically, inspired by professional clinical practice pathways, we formulate
an LLM-specific clinical pathway (LCP) to define the clinical capabilities that
a doctor agent should possess. Then, Standardized Patients (SPs) from
medical education are introduced as the guideline for collecting medical data
for evaluation, which ensures the completeness of the evaluation
procedure. Building on these elements, we develop a multi-agent framework to
simulate the interactive environment between SPs and a doctor agent, which is
equipped with a Retrieval-Augmented Evaluation (RAE) to determine whether the
behaviors of the doctor agent accord with the LCP. The above paradigm can
be extended to any similar clinical scenario to automatically evaluate
LLMs' medical capabilities. Applying this paradigm, we construct an evaluation
benchmark in the field of urology, including an LCP, an SP dataset, and an
automated RAE. Extensive experiments demonstrate the
effectiveness of the proposed approach, providing insights for LLMs' safe
and reliable deployment in clinical practice.
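To make the described architecture concrete, the sketch below illustrates one plausible shape of such a multi-agent evaluation loop. It is not the authors' implementation: the SP agent, the doctor agent, and the retrieval-based judge are all toy stand-ins (the names lcp_steps, sp_case, sp_agent, doctor_agent, and rae_judge are hypothetical), and string similarity substitutes for the paper's actual RAE mechanism. A real setup would call the LLM under test and retrieve over a full LCP.

```python
# Conceptual sketch only (not the paper's code): a minimal multi-agent
# evaluation loop. An SP agent answers from a scripted case, a doctor agent
# stands in for the LLM under test, and a toy RAE-style judge retrieves the
# closest LCP step for each doctor action and scores adherence to it.

from difflib import SequenceMatcher

# LLM-specific clinical pathway (LCP): ordered steps a doctor agent should follow.
lcp_steps = [
    "ask about chief complaint and duration of symptoms",
    "order urinalysis and relevant imaging",
    "state a differential diagnosis",
    "propose a treatment plan consistent with guidelines",
]

# Standardized Patient (SP) case script: canned answers keyed by topic.
sp_case = {
    "complaint": "I have had burning urination for three days.",
    "tests": "Urinalysis shows elevated white blood cells.",
}

def sp_agent(question: str) -> str:
    """SP agent: answer from the case script; deflect off-script questions."""
    if "symptom" in question or "complaint" in question:
        return sp_case["complaint"]
    if "test" in question or "urinalysis" in question:
        return sp_case["tests"]
    return "I'm not sure what you mean, doctor."

def doctor_agent(turn: int) -> str:
    """Stand-in for the LLM under evaluation; a real setup would query the model."""
    script = [
        "What symptoms brought you in, and how long have you had them?",
        "Let's run a urinalysis to check for infection.",
        "This looks like a urinary tract infection; other causes seem less likely.",
        "I recommend a short course of antibiotics per guidelines.",
    ]
    return script[turn]

def rae_judge(action: str) -> tuple[str, float]:
    """Toy RAE: retrieve the most similar LCP step and use the similarity
    ratio as a crude adherence score."""
    best = max(lcp_steps,
               key=lambda s: SequenceMatcher(None, s, action.lower()).ratio())
    score = SequenceMatcher(None, best, action.lower()).ratio()
    return best, score

if __name__ == "__main__":
    for t in range(4):
        action = doctor_agent(t)
        reply = sp_agent(action.lower())
        step, score = rae_judge(action)
        print(f"turn {t}: SP said '{reply}'; "
              f"matched LCP step '{step}' (adherence ~{score:.2f})")
```

In the paper's framing, the retrieval step would draw on the LCP and case records rather than simple string matching, and the judge would be an LLM grounded by the retrieved evidence; the loop structure above only conveys how metric (LCP), data (SP case), and algorithm (RAE) fit together.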