Explainability in Predictive Process Monitoring: When Understanding Helps Improving

BPM 2020

Abstract
Predictive business process monitoring techniques aim to predict the future state of running business process executions, such as the remaining execution time, the next activity to be executed, or the final outcome with respect to a set of possible outcomes. In general, however, the accuracy of a predictive model is not perfect, so that in some cases the predictions it provides are wrong. Moreover, state-of-the-art predictive process monitoring techniques do not explain which features induced the predictive model to provide wrong predictions, making it difficult to understand why the model was mistaken. In this paper, we propose a novel approach to explain why a predictive model for outcome-oriented predictions provides wrong predictions, and eventually to improve its accuracy. The approach leverages post-hoc explainers and different encodings to identify the features that most commonly induce the predictor to make mistakes. By reducing the impact of those features, the accuracy of the predictive model is increased. The approach has been validated on both synthetic and real-life logs.
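The core idea described above — using a post-hoc explainer to find the features that most often lie behind a model's wrong outcome predictions — can be illustrated with a minimal, hypothetical sketch. All names, the toy trace encoding, and the deliberately flawed "model" below are invented for illustration; the paper's actual explainers and encodings are not reproduced here. The sketch uses a simple permutation-style blame score: for each feature, its values are shuffled among the misclassified traces, and the number of predictions that flip is counted.

```python
# Hypothetical sketch: permutation-style blame over misclassified traces.
# Not the paper's actual method; data, features, and model are invented.
import random

random.seed(42)

# Toy event-log encoding: each trace -> 3 numeric features, binary outcome.
# Feature 1 drives the true label; the flawed model wrongly relies on feature 2.
def make_trace():
    f0 = random.random()          # e.g. normalised trace duration
    f1 = random.random()          # e.g. count of a rework activity
    f2 = random.random()          # pure noise feature
    label = 1 if f0 + 0.5 * f1 > 0.75 else 0
    return [f0, f1, f2], label

data = [make_trace() for _ in range(400)]
train, test = data[:300], data[100:]

# A deliberately imperfect hand-made "predictor" that uses f2 instead of f1.
def predict(x):
    return 1 if x[0] + 0.5 * x[2] > 0.75 else 0

# Collect the wrongly predicted traces from the test set.
wrong = [(x, y) for x, y in test if predict(x) != y]

# Blame score: shuffle one feature's values among the misclassified traces
# and count how many predictions flip. Features whose perturbation flips
# many wrong predictions are the likely culprits.
def blame(feature):
    flips = 0
    values = [x[feature] for x, _ in wrong]
    random.shuffle(values)
    for (x, _), v in zip(wrong, values):
        perturbed = list(x)
        perturbed[feature] = v
        if predict(perturbed) != predict(x):
            flips += 1
    return flips

scores = {f: blame(f) for f in range(3)}
culprit = max(scores, key=scores.get)
print("misclassified traces:", len(wrong))
print("blame scores:", scores)
print("most suspicious feature:", culprit)
```

In this toy setting the predictor ignores feature 1 entirely, so its blame score is zero, and the explainer points at the features the model actually (mis)uses. A follow-up step in the spirit of the paper would then reduce the impact of the blamed features, for instance by re-encoding or retraining without them, and check whether accuracy improves.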
Keywords
predictive process monitoring, explainability