Discovering Key Sub-Trajectories to Explain Traffic Prediction.

Sensors (2023)

Abstract
Flow prediction has attracted extensive research attention; however, achieving both reliable efficiency and interpretability within a unified model remains challenging. In the literature, the Shapley method provides a principled framework for interpreting model predictions. Nevertheless, applying Shapley values directly to traffic prediction raises two issues. On the one hand, the correlation between the positive and negative regions of fine-grained interpretation areas is difficult to understand. On the other hand, computing exact Shapley values over grid-based inputs is NP-hard, since the number of possible coalitions grows exponentially. Therefore, in this paper, we propose Trajectory Shapley, an approximate Shapley approach that decomposes the input flow tensor into a set of trajectories and outputs each trajectory's Shapley value for a given region. However, individual trajectories occur largely at random, which makes the interpretation results unstable. We therefore propose a feature-based submodular algorithm that summarizes representative Shapley patterns. The summarization method quickly generates a summary of the Shapley distributions over all trajectories, so that users can understand the mechanisms of the deep model. Experimental results show that our algorithm identifies multiple traffic trends on different arterial roads together with their Shapley distributions. Our approach was evaluated on real-world taxi trajectory datasets and outperformed explainable baseline models.
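The two ingredients named in the abstract — approximating Shapley values with trajectories as the "players", and greedily summarizing the resulting patterns with a submodular objective — can be illustrated with a minimal sketch. This is not the paper's implementation: the Monte Carlo permutation estimator, the toy additive `value_fn`, and the facility-location coverage function are standard stand-ins chosen here for illustration.

```python
import random

def shapley_monte_carlo(trajectories, value_fn, n_samples=200, seed=0):
    """Estimate each trajectory's Shapley value by Monte Carlo
    permutation sampling: average its marginal contribution to
    value_fn (e.g., a model's predicted flow) over random orderings."""
    rng = random.Random(seed)
    n = len(trajectories)
    phi = [0.0] * n
    for _ in range(n_samples):
        order = list(range(n))
        rng.shuffle(order)
        coalition = []
        prev = value_fn(coalition)
        for idx in order:
            coalition.append(trajectories[idx])
            cur = value_fn(coalition)
            phi[idx] += cur - prev  # marginal contribution of idx
            prev = cur
    return [p / n_samples for p in phi]

def greedy_summary(features, similarity, k):
    """Pick k representative items by greedily maximizing a
    facility-location (submodular) coverage: every item should be
    similar to at least one selected representative."""
    n = len(features)
    selected = []
    best_sim = [0.0] * n  # best similarity of each item to the summary so far
    for _ in range(min(k, n)):
        def gain(j):
            if j in selected:
                return -1.0
            return sum(max(best_sim[i], similarity(features[i], features[j]))
                       - best_sim[i] for i in range(n))
        j_star = max(range(n), key=gain)
        selected.append(j_star)
        best_sim = [max(best_sim[i], similarity(features[i], features[j_star]))
                    for i in range(n)]
    return selected
```

For an additive game (the value of a coalition is the sum of its members' weights), the estimator recovers the weights exactly; for a real traffic model, `value_fn` would re-render the flow tensor from the coalition of trajectories and query the predictor, and `features` would be the trajectories' Shapley patterns to be summarized.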
Keywords
Trajectory, explainable, neural networks, submodular