Vulnerability of Feature Extractors in 2D Image-Based 3D Object Retrieval

IEEE Transactions on Multimedia (2023)

Abstract
Recent advances in 3D modeling software and 3D capture devices have made large-scale collections of 3D objects widely available. Together with the prevalence of deep neural networks (DNNs), DNN-based 3D object retrieval systems are widely deployed, especially systems that take 2D images as queries to retrieve 3D objects. Although DNNs have been shown to be vulnerable to adversarial attacks in classification, the vulnerability of DNN-based 3D object retrieval systems remains under-explored. In this paper, we formulate the problem of attacking DNN-based feature extractors in 2D image-based 3D object retrieval systems. Specifically, we consider attacks under a realistic scenario in which the candidate 3D object database is unknown to the adversary, which makes adversarial example generation challenging. To tackle this difficulty, we set up a reasonable hypothesis about the information accessible to the adversary, and then propose two effective perturbation generation methods: one corrupts domain-level alignment (CDA) and the other corrupts class-level alignment (CCA). Conversely, we propose a novel progressive adversarial training (PAT) method to improve feature extractor robustness, which effectively and stably mitigates both CDA and CCA attacks. Experimental results demonstrate that a typical feature extractor can be effectively compromised by these attacks. Moreover, the transferability of the adversarial queries illustrates the possibility of realistic black-box attacks. The successful defense against both CDA and CCA attacks by PAT validates the superiority of the proposed defense method.
Keywords
3D object retrieval, feature extractors, image-based