A Bayesian Approach to OOD Robustness in Image Classification
CVPR 2024
Abstract
An important and unsolved problem in computer vision is to ensure that the
algorithms are robust to changes in image domains. We address this problem in
the scenario where we have access to images from the target domains but no
annotations. Motivated by the challenges of the OOD-CV benchmark where we
encounter real world Out-of-Domain (OOD) nuisances and occlusion, we introduce
a novel Bayesian approach to OOD robustness for object classification. Our work
extends Compositional Neural Networks (CompNets), which have been shown to be
robust to occlusion but degrade badly when tested on OOD data. We exploit the
fact that CompNets contain a generative head defined over feature vectors
represented by von Mises-Fisher (vMF) kernels, which correspond roughly to
object parts, and can be learned without supervision. We obverse that some vMF
kernels are similar between different domains, while others are not. This
enables us to learn a transitional dictionary of vMF kernels that are
intermediate between the source and target domains and train the generative
model on this dictionary using the annotations on the source domain, followed
by iterative refinement. This approach, termed Unsupervised Generative
Transition (UGT), performs very well in OOD scenarios even when occlusion is
present. UGT is evaluated on different OOD benchmarks including the OOD-CV
dataset, several popular datasets (e.g., ImageNet-C [9]), artificial image
corruptions (including adding occluders), and synthetic-to-real domain
transfer, and does well in all scenarios, outperforming SOTA alternatives (e.g., up to 10).
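The core ideas above (vMF kernel activations over unit-normalized features, and a transitional dictionary that keeps domain-specific kernels while merging kernels shared across domains) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the cosine-similarity matching, the averaging rule, and the threshold `sim_thresh` are all assumptions made for exposition.

```python
import numpy as np

def vmf_activation(features, mu, kappa=20.0):
    """vMF kernel response, up to a normalizing constant.

    features: (N, D) unit-normalized feature vectors.
    mu:       (K, D) unit-normalized vMF kernel centers.
    Returns an (N, K) matrix of exp(kappa * <mu_k, f_n>).
    """
    return np.exp(kappa * features @ mu.T)

def transitional_dictionary(mu_src, mu_tgt, sim_thresh=0.7):
    """Illustrative construction of an intermediate (transitional) dictionary.

    Kernels that are similar between source and target domains are averaged
    into an intermediate kernel; dissimilar (domain-specific) kernels are
    kept from the source. The threshold is a hypothetical choice.
    """
    sim = mu_src @ mu_tgt.T              # cosine similarity (vectors are unit norm)
    best = sim.argmax(axis=1)            # closest target kernel per source kernel
    out = []
    for k, j in enumerate(best):
        if sim[k, j] >= sim_thresh:
            # Shared part appearance: interpolate and re-project to the sphere.
            m = mu_src[k] + mu_tgt[j]
            out.append(m / np.linalg.norm(m))
        else:
            # Domain-specific kernel: retain the source version.
            out.append(mu_src[k])
    return np.stack(out)
```

In this sketch, the transitional dictionary could seed the generative head trained with source-domain annotations, and iterative refinement would then re-estimate the kernels on target-domain images.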