RoCo-NAS: Robust and Compact Neural Architecture Search

2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN)

Abstract
Deep model compression has been studied widely, and state-of-the-art methods can now achieve high compression ratios with minimal accuracy loss. Recent advances in adversarial attacks reveal the inherent vulnerability of deep neural networks to slightly perturbed images, called adversarial examples. Since then, extensive efforts have been made to enhance the robustness of deep networks via specialized loss functions and learning algorithms. Previous work suggests that network size and robustness against adversarial examples are usually at odds. In this paper, we investigate how to optimize the compactness and adversarial robustness of neural network architectures while maintaining accuracy, using multi-objective neural architecture search. We propose using accuracy on previously generated adversarial examples as one objective, to evaluate robustness, and the number of floating-point operations as another, to assess model complexity, i.e., compactness. Experiments on several recent neural architecture search algorithms show that, due to their limited search spaces, they fail to find robust and compact architectures. With the proposed neural architecture search method (RoCo-NAS), we were able to evolve an architecture that is up to 7% more accurate on adversarial samples than its more complex counterpart. The results thus indicate that architectures can be inherently robust regardless of their size. This opens up a new range of possibilities for the exploration and design of deep neural networks using automatic architecture search.
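
The abstract describes a two-objective evaluation: robustness measured as accuracy on a fixed pool of previously generated adversarial examples, and compactness measured by the number of floating-point operations. The sketch below is a minimal, hypothetical illustration of such an evaluation in PyTorch; the helper names (`adversarial_accuracy`, `count_flops`, `evaluate_architecture`) and the hook-based FLOP proxy are assumptions made for illustration, not the paper's actual code or search procedure.

```python
import torch
import torch.nn as nn


def adversarial_accuracy(model: nn.Module,
                         adv_images: torch.Tensor,
                         labels: torch.Tensor) -> float:
    """Fraction of pre-generated adversarial examples classified correctly."""
    model.eval()
    with torch.no_grad():
        preds = model(adv_images).argmax(dim=1)
    return (preds == labels).float().mean().item()


def count_flops(model: nn.Module, input_shape: tuple) -> int:
    """Rough FLOP proxy for Conv2d/Linear layers via forward hooks (illustrative only)."""
    flops = 0
    hooks = []

    def conv_hook(module, inputs, output):
        nonlocal flops
        kernel_ops = (module.in_channels // module.groups
                      * module.kernel_size[0] * module.kernel_size[1])
        flops += 2 * output.numel() * kernel_ops

    def linear_hook(module, inputs, output):
        nonlocal flops
        flops += 2 * module.in_features * module.out_features

    for m in model.modules():
        if isinstance(m, nn.Conv2d):
            hooks.append(m.register_forward_hook(conv_hook))
        elif isinstance(m, nn.Linear):
            hooks.append(m.register_forward_hook(linear_hook))

    with torch.no_grad():
        model(torch.zeros(1, *input_shape))
    for h in hooks:
        h.remove()
    return flops


def evaluate_architecture(model, adv_images, adv_labels, input_shape):
    # Two objectives for a Pareto-based multi-objective search:
    # maximize robust accuracy, minimize FLOPs.
    robust_acc = adversarial_accuracy(model, adv_images, adv_labels)
    flops = count_flops(model, input_shape)
    return robust_acc, flops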
Keywords
RoCo-NAS, robust architecture search, compact neural architecture search, deep model compression, high compression ratios, adversarial attacks, inherent vulnerability, deep neural networks, loss functions, neural network architectures, model complexity, automatic architecture search, multi-objective neural architecture search algorithms