The Challenge of Classification Confidence Estimation in Dynamically-Adaptive Neural Networks

International Conference / Workshop on Embedded Computer Systems: Architectures, Modeling and Simulation (SAMOS)(2021)

Abstract
An emerging trend to improve the power efficiency of neural network computations consists of dynamically adapting the network architecture or parameters to different inputs. In particular, many such dynamic network models are able to output 'easy' samples at early exits if a certain confidence-based criterion is satisfied. Traditional methods to estimate inference confidence of a monitored neural network, or of intermediate predictions thereof, include the maximum element of the Soft-Max output (score), or the difference between the largest and the second largest score values (score margin). Such methods only rely on a small and position-agnostic subset of the available information at the output of the monitored neural network classifier. For the first time, this paper reports on the lessons learned while trying to extrapolate confidence information from the whole distribution of the classifier outputs rather than from the top scores only. Our experimental campaign indicates that capturing specific patterns associated with misclassifications is nontrivial due to counterintuitive empirical evidence. Rather than disqualifying the approach, this paper calls for further fine-tuning to unfold its potential, and is a first step toward a systematic assessment of confidence-based criteria for dynamically-adaptive neural network computations.
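The two baseline confidence criteria the abstract names, the maximum softmax output (score) and the gap between the two largest outputs (score margin), can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function names, the early-exit helper, and the threshold value are our own assumptions.

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of class logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def score(probs):
    # "Score": the maximum softmax probability, a classic confidence proxy.
    return max(probs)

def score_margin(probs):
    # "Score margin": difference between the largest and the
    # second-largest softmax probabilities.
    top2 = sorted(probs, reverse=True)[:2]
    return top2[0] - top2[1]

def exit_early(logits, threshold=0.9):
    # Hypothetical early-exit criterion: accept the intermediate
    # prediction when the score exceeds a threshold (the threshold
    # value here is illustrative, not taken from the paper).
    return score(softmax(logits)) >= threshold
```

Both criteria ignore everything below the top one or two entries of the output distribution, which is exactly the position-agnostic limitation the paper investigates.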
Keywords
Neural network, Runtime adaptivity, Inference confidence, Monitoring neural network