Multi-task Learning with Self-defined Tasks for Adversarial Robustness of Deep Networks

IEEE Access (2024)

Abstract
Despite the considerable progress made in the development of deep neural networks (DNNs), their vulnerability to adversarial attacks remains a major hindrance to their practical application. Consequently, there has been a surge of interest and investment in research on adversarial attacks and defense mechanisms, with considerable focus on understanding the properties of adversarial robustness. Among these studies, a few works show that multi-task learning can enhance the adversarial robustness of DNNs. Building on these works, we propose an efficient way to improve the adversarial robustness of a given main task in a more practical multi-task learning scenario by leveraging self-defined auxiliary tasks. The core idea is not simply to jointly train predefined auxiliary tasks but to manually define auxiliary tasks from the built-in labels of the given data, which allows users to perform multi-task learning efficiently without needing predefined auxiliary tasks. The newly generated self-defined tasks remain “hidden” from attackers and play a supplementary role in improving the adversarial accuracy of the main task. In addition, the hidden auxiliary tasks enable the construction of a rejection module that uses their predictions to enhance the reliability of the prediction results. Through experiments on five benchmark datasets, we confirm that multi-task learning with self-defined hidden tasks can be actively employed to enhance both adversarial robustness and reliability.
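To make the idea concrete, the following is a minimal sketch (not the paper's exact formulation) of multi-task training with a self-defined auxiliary task derived from the built-in class labels, together with a consistency-based rejection rule. The grouping function, the loss weight, and the rejection criterion here are illustrative assumptions.

```python
# Sketch: a shared backbone with a main head and a "hidden" auxiliary head
# whose labels are a fixed, privately chosen regrouping of the main labels.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 10        # main-task classes (e.g., CIFAR-10)
NUM_AUX_CLASSES = 5     # assumed: coarse groups derived from the main labels
AUX_WEIGHT = 0.5        # assumed weight of the auxiliary loss term

# Self-defined auxiliary labels: a deterministic mapping from main labels to
# groups, kept hidden from the attacker (a simple modulo grouping here).
aux_map = torch.arange(NUM_CLASSES) % NUM_AUX_CLASSES

class MultiTaskNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.main_head = nn.Linear(32, NUM_CLASSES)
        self.aux_head = nn.Linear(32, NUM_AUX_CLASSES)

    def forward(self, x):
        feat = self.backbone(x)
        return self.main_head(feat), self.aux_head(feat)

def joint_loss(model, x, y):
    """Main-task loss plus the self-defined auxiliary-task loss."""
    main_logits, aux_logits = model(x)
    y_aux = aux_map.to(y.device)[y]           # auxiliary label derived from y
    return (F.cross_entropy(main_logits, y)
            + AUX_WEIGHT * F.cross_entropy(aux_logits, y_aux))

@torch.no_grad()
def predict_with_rejection(model, x):
    """Reject inputs whose auxiliary prediction contradicts the main one."""
    main_logits, aux_logits = model(x)
    y_hat = main_logits.argmax(dim=1)
    aux_hat = aux_logits.argmax(dim=1)
    accept = aux_hat == aux_map.to(x.device)[y_hat]   # consistency check
    return y_hat, accept

# Usage sketch with random data
model = MultiTaskNet()
x = torch.randn(8, 3, 32, 32)
y = torch.randint(0, NUM_CLASSES, (8,))
joint_loss(model, x, y).backward()
preds, accepted = predict_with_rejection(model, x)
```

Because the auxiliary grouping is chosen by the defender and never exposed, an attacker optimizing only against the main head has no direct signal for the hidden task; the rejection rule then flags inputs whose two predictions disagree.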
Keywords
Adversarial attack, adversarial robustness, multi-task learning, adversarial training, self-defined auxiliary tasks