Benchmarking Adversarial Robustness of Image Shadow Removal with Shadow-adaptive Attacks
ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2024
Abstract
Shadow removal is a task aimed at erasing regional shadows present in images
and reinstating visually pleasing natural scenes with consistent illumination.
While recent deep learning techniques have demonstrated impressive performance
in image shadow removal, their robustness against adversarial attacks remains
largely unexplored. Furthermore, many existing attack frameworks typically
allocate a uniform budget for perturbations across the entire input image,
which may not be suitable for attacking shadow images. This is primarily due to
the unique characteristic of spatially varying illumination within shadow
images. In this paper, we propose a novel approach, called shadow-adaptive
adversarial attack. Different from standard adversarial attacks, our attack
budget is adjusted based on the pixel intensity in different regions of shadow
images. Consequently, the optimized adversarial noise in the shadowed regions
becomes visually less perceptible while permitting a greater tolerance for
perturbations in non-shadow regions. The proposed shadow-adaptive attacks
naturally align with the varying illumination distribution in shadow images,
resulting in perturbations that are less conspicuous. Building on this, we
conduct a comprehensive empirical evaluation of existing shadow removal
methods, subjecting them to various levels of attack on publicly available
datasets.
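The abstract describes a perturbation budget that varies with pixel intensity: smaller in dark (shadowed) regions, larger in bright regions. A minimal sketch of this idea is shown below, assuming a simple linear mapping from intensity to budget and a single signed-gradient (FGSM-style) step; the paper's exact budget mapping, attack iterations, and loss are not specified here, so the function names and parameters are illustrative assumptions.

```python
import numpy as np

def shadow_adaptive_budget(image, eps_min=1.0 / 255, eps_max=8.0 / 255):
    """Per-pixel perturbation budget scaled by pixel intensity.

    Assumption: a linear interpolation between eps_min and eps_max based on
    mean channel intensity in [0, 1]; darker (shadowed) pixels receive a
    smaller budget so the noise there stays less perceptible.
    """
    intensity = image.mean(axis=-1, keepdims=True)  # luminance proxy in [0, 1]
    return eps_min + (eps_max - eps_min) * intensity

def shadow_adaptive_attack_step(image, grad, eps_map):
    """One signed-gradient step, clipped to the spatially varying budget."""
    adv = image + eps_map * np.sign(grad)
    # Project back into the per-pixel budget and the valid image range.
    adv = np.clip(adv, image - eps_map, image + eps_map)
    return np.clip(adv, 0.0, 1.0)

# Toy demo: top half dark (shadow), bottom half bright (non-shadow).
rng = np.random.default_rng(0)
img = np.zeros((4, 4, 3))
img[:2] = 0.1   # shadowed region
img[2:] = 0.9   # non-shadow region
grad = rng.standard_normal(img.shape)  # stand-in for a model's gradient

eps_map = shadow_adaptive_budget(img)
adv = shadow_adaptive_attack_step(img, grad, eps_map)
noise = np.abs(adv - img)
# The perturbation in the shadowed half is strictly smaller.
print(noise[:2].max() < noise[2:].max())
```

In a full attack this step would be iterated (PGD-style) with gradients from the shadow removal network's loss; the key difference from a standard attack is only the per-pixel `eps_map` replacing a uniform epsilon.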
Keywords
Shadow Removal, Adversarial Robustness, Shadow-adaptive Attack