Reducing Over-confident Errors outside the Known Distribution

arXiv: Computer Vision and Pattern Recognition (2018)

Abstract
Intuitively, unfamiliarity should lead to a lack of confidence. In reality, current algorithms often make highly confident yet wrong predictions when faced with test samples from an unknown distribution different from training. Unlike domain adaptation methods, we cannot gather an unexpected dataset prior to test, and unlike novelty detection methods, a best-effort prediction on the original task is still expected. We compare a number of methods from related fields such as calibration and epistemic uncertainty modeling, as well as two proposed methods that reduce overconfident errors on samples from an unknown distribution without drastically increasing evaluation time: (1) G-distillation, which trains an ensemble of classifiers and then distills it into a single model using both labeled and unlabeled examples, and (2) NCR, which reduces prediction confidence based on a novelty detection score. Experimentally, we investigate the overconfidence problem and evaluate our solutions by creating familiar and novel test splits, where familiar splits are identically distributed with the training data and novel splits are not. We find that calibration using temperature scaling on familiar data is the best single-model method for improving confidence, followed by our proposed methods. In addition, some methods' NLL performance is roughly equivalent to that of a regularly trained model with a certain degree of smoothing. Calibration can also reduce confident errors; for example, it reduces confident errors in gender recognition by 95% on demographic groups different from the training data.
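
For context on the baselines named in the abstract, below is a minimal Python/PyTorch sketch of temperature scaling (fitting a single temperature on familiar validation data and applying it at test time) and one plausible form of novelty-based confidence reduction (interpolating the softmax output toward a uniform distribution). The function names and the exact NCR interpolation are illustrative assumptions, not the authors' released code.

# Minimal sketch of temperature scaling and a novelty-weighted confidence
# reduction. Illustrative only; the paper's exact NCR formulation may differ.
import torch
import torch.nn.functional as F

def fit_temperature(val_logits, val_labels, lr=0.01, steps=200):
    """Fit a single temperature T on held-out (familiar) validation logits
    by minimizing NLL. val_logits: (N, C) tensor, val_labels: (N,) tensor."""
    log_t = torch.zeros(1, requires_grad=True)  # optimize log T so T stays positive
    optimizer = torch.optim.Adam([log_t], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        loss = F.cross_entropy(val_logits / log_t.exp(), val_labels)
        loss.backward()
        optimizer.step()
    return log_t.exp().item()

def calibrated_probs(logits, temperature):
    """Apply the fitted temperature before the softmax at test time."""
    return F.softmax(logits / temperature, dim=1)

def novelty_reduced_probs(probs, novelty_score):
    """Assumed NCR-style adjustment: blend predictions toward uniform in
    proportion to a novelty score in [0, 1] (1 = highly novel sample)."""
    num_classes = probs.shape[1]
    uniform = torch.full_like(probs, 1.0 / num_classes)
    return (1.0 - novelty_score) * probs + novelty_score * uniform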