Explainable Mobile Traffic Classification: the Case of Incremental Learning.

SAFE '23: Proceedings of the 2023 on Explainable and Safety Bounded, Fidelitous, Machine Learning for Networking (2023)

Abstract
The surge in mobile network usage has contributed to the adoption of Deep Learning (DL) techniques for Traffic Classification (TC) to ensure efficient network management. However, DL-based classifiers still face challenges due to the frequent release of new apps (making them outdated) and the lack of interpretability (limiting their adoption). In this regard, Class Incremental Learning and eXplainable Artificial Intelligence have emerged as fundamental methodological tools. This work aims to reduce the gap between DL models' performance and their interpretability in the TC domain. We examine, from different perspectives, the differences between classifiers trained from scratch and those trained incrementally. Using Deep SHAP, we derive global explanations to emphasize disparities in input importance. We comprehensively analyze the base classifiers' behavior to understand the starting point of the incremental process, and we examine the updated models to uncover the architectural features that result from incremental training. The analysis is based on MIRAGE19, an open dataset focused on mobile app traffic.
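The abstract describes deriving global Deep SHAP explanations to compare input importance between classifiers trained from scratch and classifiers updated incrementally. The snippet below is a minimal sketch of that kind of comparison, assuming PyTorch classifiers and the shap library's DeepExplainer; the stand-in models, the 512-feature payload-byte input representation, and all variable names are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
import shap
import torch
import torch.nn as nn

# Stand-in classifiers (the paper's actual architectures are not reproduced here):
# one trained from scratch vs. one updated via class-incremental learning.
def make_classifier(n_classes: int) -> nn.Module:
    return nn.Sequential(nn.Linear(512, 128), nn.ReLU(), nn.Linear(128, n_classes))

scratch_model = make_classifier(40).eval()
incremental_model = make_classifier(40).eval()

def global_importance(model: nn.Module,
                      background: torch.Tensor,
                      samples: torch.Tensor) -> np.ndarray:
    """Mean |SHAP value| per input position, averaged over samples and classes."""
    explainer = shap.DeepExplainer(model, background)
    sv = explainer.shap_values(samples)
    if isinstance(sv, list):               # older shap versions: one array per class
        sv = np.stack(sv, axis=-1)         # -> (n_samples, n_features, n_classes)
    return np.abs(sv).mean(axis=(0, -1))   # global per-feature importance profile

# Hypothetical input representation: 512 normalized payload-byte features per flow.
background = torch.rand(100, 512)          # reference set for DeepExplainer
samples = torch.rand(300, 512)             # traffic samples to explain

imp_scratch = global_importance(scratch_model, background, samples)
imp_incremental = global_importance(incremental_model, background, samples)

# Highlight where the two training regimes place input importance differently.
divergence = np.abs(imp_scratch - imp_incremental)
print("Most diverging input positions:", np.argsort(divergence)[::-1][:10])
```

Averaging absolute SHAP values over samples and classes yields one global importance profile per model, so subtracting the two profiles is one simple way to surface the disparities in input importance that the abstract refers to.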