SPUQ: Perturbation-Based Uncertainty Quantification for Large Language Models
Conference of the European Chapter of the Association for Computational Linguistics (2024)
Abstract
In recent years, large language models (LLMs) have become increasingly
prevalent, offering remarkable text generation capabilities. However, a
pressing challenge is their tendency to make confidently wrong predictions,
highlighting the critical need for uncertainty quantification (UQ) in LLMs.
While previous works have mainly focused on addressing aleatoric uncertainty,
the full spectrum of uncertainties, including epistemic, remains inadequately
explored. Motivated by this gap, we introduce a novel UQ method, sampling with
perturbation for UQ (SPUQ), designed to tackle both aleatoric and epistemic
uncertainties. The method entails generating a set of perturbations for LLM
inputs, sampling outputs for each perturbation, and incorporating an
aggregation module that generalizes the sampling uncertainty approach for text
generation tasks. Through extensive experiments on various datasets, we
investigate different perturbation and aggregation techniques. Our findings
show a substantial improvement in model uncertainty calibration, reducing the
Expected Calibration Error (ECE) by 50% on average, and suggest that the
proposed UQ method offers a promising step toward enhancing the reliability
and trustworthiness of LLMs.
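The pipeline the abstract describes (perturb the input, sample one output per perturbation, aggregate agreement into a confidence score) can be sketched as follows. This is a minimal illustration of the general idea, not the authors' implementation: the paraphrase templates, the `spuq_confidence` function, and the exact-match similarity are all hypothetical stand-ins for the perturbation and aggregation techniques the paper actually studies.

```python
def perturb(prompt, k):
    """Generate k perturbed variants of the input prompt.
    These paraphrase templates are illustrative only; the paper
    explores several perturbation techniques."""
    templates = [
        "{p}",
        "Please answer: {p}",
        "{p} Respond concisely.",
        "Question: {p}",
        "{p} (answer briefly)",
    ]
    return [t.format(p=prompt) for t in templates[:k]]

def spuq_confidence(llm, prompt, k=5, similarity=None):
    """Sample one output per perturbed input and aggregate inter-sample
    agreement into a confidence score (a sketch of the SPUQ idea).
    `llm` is any callable mapping a prompt string to an output string."""
    if similarity is None:
        # Hypothetical aggregation: exact-match agreement. The paper's
        # aggregation module generalizes this to free-form text generation.
        similarity = lambda a, b: 1.0 if a.strip().lower() == b.strip().lower() else 0.0
    outputs = [llm(p) for p in perturb(prompt, k)]
    reference = outputs[0]  # answer to the unperturbed prompt
    # Agreement of the perturbed samples with the reference answer
    # serves as the confidence estimate; disagreement signals uncertainty.
    score = sum(similarity(reference, o) for o in outputs[1:]) / (len(outputs) - 1)
    return reference, score
```

A model that answers identically under every perturbation would receive confidence 1.0; a model whose answers change under small input perturbations (a sign of epistemic uncertainty) would score lower.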