An Inverse Chance-Constrained Approach to the Calibration of Robust Models.

2023 IEEE International Conference on Systems, Man, and Cybernetics (SMC)

Abstract

This paper proposes a strategy to calibrate computational models according to uncertain input-output data. To this end, uncertainty in the data is first characterized using adversarial data sets. Samples drawn from such sets are then mapped from the input-output space to the parameter space using an inverse mapping. This mapping minimizes the collective output spread of an ensemble of point predictions while satisfying a set of individual data-matching requirements. The distribution of the resulting parameter points, which often exhibits strong parameter dependencies, is then modeled using Sliced Normal distributions. The chance-constrained formulation used to learn this distribution enables the analyst to trade off a greater likelihood for most of the data against a lower likelihood for some of the data, thereby relaxing the conservatism of the calibrated model. This formulation neglects the worst-performing quantiles of each adversarial distribution and eliminates the detrimental effects that outliers have on the resulting model. This calibration approach not only has a considerably lower computational cost than the standard forward approach but also allows for the identification of suitable distribution classes, which in turn yield better calibrated models.
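The chance-constrained trimming idea described above can be sketched in a minimal form: fit a Sliced-Normal-style density (a Gaussian over monomial features of the data), score each sample by its Mahalanobis cost in feature space, discard the worst-performing quantile, and refit. This is an illustrative approximation only, not the paper's implementation; the function names, the degree-2 feature map, and the `drop_frac` parameter are assumptions for illustration.

```python
import numpy as np

def monomial_features(x, degree=2):
    """Degree-2 monomial lift Z(x) = [x1, ..., xn, x1^2, x1*x2, ..., xn^2]."""
    feats = [x]
    if degree >= 2:
        n = x.shape[1]
        feats.append(np.stack([x[:, i] * x[:, j]
                               for i in range(n) for j in range(i, n)], axis=1))
    return np.concatenate(feats, axis=1)

def fit_sliced_normal(samples, degree=2, drop_frac=0.1):
    """Sketch of a Sliced-Normal-style fit with quantile trimming.

    Fits a Gaussian over monomial features, then refits after dropping the
    `drop_frac` fraction of samples with the highest Mahalanobis cost --
    a crude stand-in for the chance-constrained relaxation that neglects
    the worst-performing quantiles and suppresses outliers.
    """
    Z = monomial_features(samples, degree)
    mu = Z.mean(axis=0)
    cov = np.cov(Z, rowvar=False)
    P = np.linalg.pinv(cov + 1e-8 * np.eye(cov.shape[0]))
    # Mahalanobis cost per sample in feature space (negative log-likelihood
    # up to additive/multiplicative constants).
    d = Z - mu
    cost = np.einsum('ij,jk,ik->i', d, P, d)
    keep = cost <= np.quantile(cost, 1.0 - drop_frac)
    # Refit on the retained (well-performing) quantiles only.
    Zk = Z[keep]
    mu2 = Zk.mean(axis=0)
    cov2 = np.cov(Zk, rowvar=False)
    P2 = np.linalg.pinv(cov2 + 1e-8 * np.eye(cov2.shape[0]))
    return mu2, P2, keep
```

Because the lift is nonlinear, the fitted density can capture curved parameter dependencies (e.g. points lying near a parabola) that a plain Gaussian in the original space cannot.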
Keywords

data, uncertainty, outliers, risk, robustness