Systemization of Knowledge: Robust Deep Learning using Hardware-software co-design in Centralized and Federated Settings

ACM TRANSACTIONS ON DESIGN AUTOMATION OF ELECTRONIC SYSTEMS (2023)

Abstract
Deep learning (DL) models are enabling a significant paradigm shift in a diverse range of fields, including natural language processing and computer vision, as well as the design and automation of complex integrated circuits. While deep models, and optimizations built on them such as Deep Reinforcement Learning (RL), demonstrate superior performance and a strong capability for automated representation learning, earlier works have revealed the vulnerability of DL to various attacks. These vulnerabilities include adversarial samples, model poisoning, and fault injection attacks. On the one hand, such security threats can divert the behavior of a DL model and lead to incorrect decisions in critical tasks. On the other hand, the susceptibility of DL to potential attacks may thwart trustworthy technology transfer as well as reliable DL deployment. In this work, we investigate existing defense techniques that protect DL against the above-mentioned security threats. In particular, we review end-to-end defense schemes for robust deep learning in both centralized and federated learning settings. Our comprehensive taxonomy and horizontal comparisons reveal that defense strategies developed using DL/software/hardware co-design outperform their DL/software-only counterparts, and we show how they can achieve efficient, latency-optimized defenses for real-world applications. We believe our systemization of knowledge sheds light on the promising performance of hardware-software co-design of DL security methodologies and can guide the development of future defenses.
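To make the model-poisoning threat in the federated setting concrete, the sketch below (not taken from the paper; all names, values, and the aggregation rule are illustrative assumptions) contrasts plain federated averaging with a classical software-only robust aggregation rule, coordinate-wise median. A single malicious client update is enough to drag the averaged model, while the median stays close to the honest updates.

```python
# Illustrative sketch only -- not the paper's method.
# Shows why plain FedAvg is fragile under model poisoning and how a
# robust aggregation rule (coordinate-wise median) mitigates it.
import numpy as np


def fedavg(updates):
    """Plain federated averaging: sensitive to a single poisoned update."""
    return np.mean(np.stack(updates), axis=0)


def coordinate_median(updates):
    """Coordinate-wise median: tolerates a minority of malicious clients."""
    return np.median(np.stack(updates), axis=0)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Nine benign client updates clustered near zero (hypothetical values).
    honest = [rng.normal(0.0, 0.1, size=4) for _ in range(9)]
    # One poisoned update with a large magnitude (hypothetical attacker).
    poisoned = [np.full(4, 50.0)]
    updates = honest + poisoned

    print("FedAvg:        ", fedavg(updates))            # pulled toward the attacker
    print("Coord. median: ", coordinate_median(updates)) # stays near the honest updates
```

Software-only rules like this one trade robustness for extra computation at the aggregator; the survey's comparison concerns how hardware-software co-designed defenses reduce that latency overhead.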
Keywords
Machine learning, federated learning, security, robustness