How to evaluate uncertainty estimates in machine learning for regression?

Neural Networks (2024)

Abstract
As neural networks become more popular, the need for accompanying uncertainty estimates increases. There are currently two main approaches to test the quality of these estimates. Most methods output a density, and these can be compared by evaluating their log-likelihood on a test set. Other methods output a prediction interval directly; these are often tested by examining the fraction of test points that fall inside the corresponding prediction intervals. Intuitively, both approaches seem logical. However, we demonstrate through both theoretical arguments and simulations that both ways of evaluating the quality of uncertainty estimates have serious flaws. Firstly, neither approach can disentangle the separate components that jointly create the predictive uncertainty, making it difficult to evaluate the quality of the estimates of these components. Specifically, the quality of a confidence interval cannot reliably be tested by estimating the performance of a prediction interval. Secondly, the log-likelihood does not allow a comparison between methods that output a prediction interval directly and methods that output a density. A better log-likelihood also does not necessarily guarantee better prediction intervals, which is what the methods are often used for in practice. Moreover, the current approach to testing prediction intervals directly has additional flaws. We show why testing a prediction or confidence interval on a single test set is fundamentally flawed: at best, marginal coverage is measured, implicitly averaging out overconfident and underconfident predictions. A much more desirable property is pointwise coverage, which requires correct coverage for each individual prediction. We demonstrate through practical examples that these effects can lead to favouring, on the basis of the predictive uncertainty, a method whose confidence or prediction intervals behave undesirably. Finally, we propose a simulation-based testing approach that addresses these problems while still allowing easy comparison between different methods. This approach can be used for the development of new uncertainty quantification methods.
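
To make the coverage argument concrete, here is a minimal Python sketch of the two checks the abstract contrasts. The data-generating process, the deliberately misspecified interval method, and the function names (true_mean, true_std, predict_interval) are illustrative assumptions, not the paper's code; the point is only that a single-test-set check measures marginal coverage, whereas a simulation with a known data-generating process can check coverage separately at each input.

```python
import numpy as np

rng = np.random.default_rng(0)

# Known data-generating process: heteroscedastic Gaussian noise around a sine.
def true_mean(x):
    return np.sin(2 * np.pi * x)

def true_std(x):
    return 0.1 + 0.4 * x  # noise level grows with x

def sample_y(x):
    return true_mean(x) + true_std(x) * rng.normal(size=x.shape)

# Hypothetical interval method (illustrative, not from the paper): a central
# 90% prediction interval built from a constant, misspecified noise estimate,
# so it is too wide for small x and too narrow for large x.
def predict_interval(x):
    z = 1.645           # ~95th percentile of the standard normal
    sigma_hat = 0.33    # constant noise estimate
    half_width = z * sigma_hat
    return true_mean(x) - half_width, true_mean(x) + half_width

# (a) The common check: coverage on a single test set (marginal coverage).
x_test = rng.uniform(0.0, 1.0, size=20_000)
y_test = sample_y(x_test)
lo, hi = predict_interval(x_test)
print(f"marginal coverage: {np.mean((y_test >= lo) & (y_test <= hi)):.3f}")

# (b) Simulation-based check: because the data-generating process is known,
# y can be redrawn many times at a fixed x, giving pointwise coverage.
for x0 in np.linspace(0.0, 1.0, 5):
    x_rep = np.full(5_000, x0)
    y_rep = sample_y(x_rep)
    lo, hi = predict_interval(x_rep)
    print(f"x = {x0:.2f}: pointwise coverage = "
          f"{np.mean((y_rep >= lo) & (y_rep <= hi)):.3f}")
```

On this toy problem the marginal estimate comes out close to the nominal 90%, because over-coverage at small x and under-coverage at large x average out, while the pointwise check exposes the miscalibration at both ends. This is precisely the averaging effect the abstract warns about.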
Keywords
Neural networks, Uncertainty, Bootstrap, Dropout, Regression