Explainability techniques applied to road traffic forecasting using Graph Neural Network models

Information Sciences (2023)

Abstract
In recent years, several new Artificial Intelligence methods have been developed to make models more explainable and interpretable. These techniques essentially aim to bring transparency and traceability to black-box machine learning methods. "Black box" refers to the inability to explain why a model turns a given input into its output, which may be problematic in some fields. To overcome this problem, our approach provides a comprehensive combination of predictive and explainability techniques. First, we compared statistical regression, classic machine learning, and deep learning models, concluding that models based on deep learning exhibit greater accuracy. Among the wide variety of deep learning models, the best predictive model on spatio-temporal traffic datasets was found to be the Adaptive Graph Convolutional Recurrent Network. Regarding the explainability technique, GraphMask shows a notably higher fidelity metric than other methods. The integration of both techniques was tested by means of experimental results, concluding that our approach improves deep learning model accuracy while making such models more transparent and interpretable. It allows us to discard up to 95% of the nodes used, facilitating an analysis of the model's behavior and thus improving our understanding of the model.
Keywords
Graph neural networks, Deep learning, Data analysis, Explainability, Traffic flow