Detecting Deep Neural Network Defects with Data Flow Analysis

arXiv (2021)

Abstract
Deep neural networks (DNNs) have proven to be promising solutions to many challenging artificial intelligence tasks, including object recognition, natural language processing, and even unmanned driving. A DNN model, generally built on a statistical summarization of in-house training data, aims to predict the correct output for inputs encountered in the wild. Because of this probabilistic nature, 100% precision is in general impossible. For DNN practitioners, it is very hard, if not impossible, to figure out whether the low precision of a DNN model is an inevitable result or is caused by defects such as a bad network design or an improper training process. This paper aims to address this challenging problem. We approach it with a careful categorization of the root causes of low precision. We find that the internal data flow footprints of a DNN model can provide insights to locate the root cause effectively. We then develop a tool, namely DeepMorph (DNN Tomography), to analyze the root cause, which can instantly guide a DNN developer to improve the model. Case studies on four popular datasets show the effectiveness of DeepMorph.
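To make the notion of "internal data flow footprints" concrete, the sketch below shows one common way to capture per-layer activation statistics of a model during inference. This is a minimal illustration assuming PyTorch, a toy classifier, and mean/std summaries; it is not the authors' DeepMorph implementation, whose exact footprint definition and analysis are described in the paper.

```python
# Minimal sketch (assumption: PyTorch; not the authors' tool): capture
# per-layer activation "footprints" with forward hooks, so that statistics
# for correct vs. incorrect predictions could later be compared.
import torch
import torch.nn as nn

# Hypothetical toy classifier standing in for the model under analysis.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 128), nn.ReLU(),
    nn.Linear(128, 10),
)

footprints = {}  # layer name -> list of (mean, std) activation summaries


def make_hook(name):
    def hook(module, inputs, output):
        # Record a coarse footprint: mean and std of this layer's output.
        footprints.setdefault(name, []).append(
            (output.detach().mean().item(), output.detach().std().item())
        )
    return hook


# Attach a hook to every ReLU layer to observe intermediate data flow.
for name, module in model.named_modules():
    if isinstance(module, nn.ReLU):
        module.register_forward_hook(make_hook(name))

# One forward pass on a dummy batch populates the footprints.
x = torch.randn(4, 1, 28, 28)
with torch.no_grad():
    logits = model(x)
print(footprints)
```

Collected over a labeled validation set, such per-layer summaries could be split by prediction outcome and compared, which is the general flavor of data-flow-based defect diagnosis the abstract describes.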
Keywords
data flow analysis,artificial intelligence,object recognition,natural language processing,unmanned driving,statistical summarization,in-house training data,probabilistic nature,network design,internal data flow footprints,DNN developer,DNN tomography,deep neural network defects detection,DeepMorph