Towards secure and robust stateful deep learning systems with model-based analysis

2019

Abstract
We increasingly embrace the convenience and effectiveness of rapidly advancing artificial intelligence (AI) technologies in our lives and across industries. Within this revolution, deep learning (DL), one of the key innovations in AI, has made significant progress over the past decades. However, even state-of-the-art DL systems are susceptible to minor adversarial perturbations and suffer from quality, reliability and security problems, preventing the deployment of DL systems in safety- and security-critical applications. An early-stage assessment of DL systems is crucial for discovering defects and improving overall product quality. Mature analysis processes and techniques have been established for traditional software, but it is highly non-trivial to apply them directly to DL systems. These challenges have motivated researchers to investigate testing, verification and adversarial sample detection for feed-forward neural networks, but little has been done on recurrent neural network (RNN)-based stateful DL systems. In this thesis, we initiate the first major effort on white-box RNN analysis using a model-based approach, focusing on security and robustness properties, and demonstrate its usefulness with applications to test case generation, attack generation, and adversarial sample detection. To further protect DL systems, we propose an efficient monitoring algorithm that could potentially be used to shield DL systems against adversarial samples at runtime, based on the RNN behaviors reflected by the abstract models. The first part of the thesis focuses on RNN model extraction and offline analysis on the security …