Input Uncertainty Quantification via Simulation Bootstrapping

Manjing Zhang, Yulin He, Guangwu Liu, Shan Dai

Winter Simulation Conference (2023)

Input uncertainty, which refers to the output variability arising from statistical noise in specifying the input models, has been intensively studied in recent years. Ignoring input uncertainty often leads to poor estimates of system performance. In the non-parametric setting, input uncertainty is commonly estimated via the bootstrap, but the performance of traditional bootstrap resampling is compromised when simulation uncertainty is also present. Nested simulation has been studied to improve performance by taking variance estimation into account, but it imposes a substantial burden in required simulation effort. To tackle these problems, this paper introduces a non-nested method for building asymptotically valid confidence intervals for input uncertainty quantification. Convergence properties are studied, establishing statistical guarantees for the proposed estimators in terms of the real-data size and the bootstrap budget. An easy-to-implement algorithm is also provided. Numerical examples show that the estimated confidence intervals perform satisfactorily at the given confidence levels.
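To make the baseline concrete, the following is a minimal sketch of the traditional non-parametric bootstrap for input uncertainty that the abstract contrasts with (not the paper's proposed non-nested method). The toy `simulate` function and the exponential input data are hypothetical placeholders for a real simulation model and real-world data: each bootstrap replicate resamples the empirical input distribution and reruns the simulation, and a percentile interval is formed over the replicated outputs.

```python
import numpy as np

def simulate(input_sample, n_draws=100, rng=None):
    # Hypothetical toy simulation: the output is the mean of draws from the
    # empirical distribution of the supplied input sample. A real model
    # (e.g. a queueing simulation) would go here.
    rng = np.random.default_rng(rng)
    draws = rng.choice(input_sample, size=n_draws, replace=True)
    return draws.mean()

def bootstrap_percentile_ci(data, n_boot=500, alpha=0.05, seed=0):
    # Traditional bootstrap for input uncertainty: resample the real-world
    # data to mimic input-model noise, rerun the simulation on each
    # resample, then take empirical quantiles of the outputs.
    rng = np.random.default_rng(seed)
    outputs = []
    for _ in range(n_boot):
        resample = rng.choice(data, size=len(data), replace=True)
        outputs.append(simulate(resample, rng=rng))
    lo, hi = np.quantile(outputs, [alpha / 2, 1 - alpha / 2])
    return lo, hi

# Stand-in for limited real-world input data (assumed, for illustration only).
data = np.random.default_rng(42).exponential(scale=2.0, size=200)
lo, hi = bootstrap_percentile_ci(data)
```

Note that each replicate's output here mixes input noise with simulation noise from the finite `n_draws`, which is exactly the confounding the abstract says compromises this plain bootstrap and that nested or non-nested corrections aim to address.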
Key words
Uncertainty Quantification, Input Uncertainty, Confidence Interval, Confidence Level, Variance Estimates, Model Input, Bootstrap Resampling, Confidence Intervals For Estimates, Statistical Noise, Simulation Uncertainty, Random Variables, Numerical Results, Bootstrap Samples, Statistical Distribution, Real-world Data, Empirical Distribution, Stock Price, Bootstrap Confidence Intervals, Central Limit Theorem, Conditional Expectation, Unknown Distribution, Coverage Probability, Limited Amount Of Data, Input Distribution, Joint Probability Distribution, Delta Method, Standard Intervals