Strike (with) a Pose: Neural Networks Are Easily Fooled by Strange Poses of Familiar Objects

CVPR 2019, pages 4845-4854. arXiv:1811.11553.


Abstract:

Despite excellent performance on stationary test sets, deep neural networks (DNNs) can fail to generalize to out-of-distribution (OoD) inputs, including natural, non-adversarial ones, which are common in real-world settings. In this paper, we present a framework for discovering DNN failures that harnesses 3D renderers and 3D models.
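
The abstract describes searching over 3D poses of a rendered object for configurations that a classifier misrecognizes. Below is a minimal sketch of that idea, not the paper's actual pipeline: the renderer call `render_object` is a hypothetical placeholder (the paper uses a full 3D renderer and 3D models), while the classifier and preprocessing use the standard torchvision API, assuming a recent torchvision release.

```python
import random

import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image


def render_object(obj_path: str, yaw: float, pitch: float, roll: float) -> Image.Image:
    """Hypothetical stand-in for a 3D renderer: returns an RGB image of the
    object at the given orientation (degrees). Plug in a real renderer here."""
    raise NotImplementedError


# Standard ImageNet preprocessing for the pretrained classifier.
preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2).eval()


def find_adversarial_poses(obj_path: str, true_class: int, n_samples: int = 1000):
    """Randomly sample object orientations and keep those the classifier gets wrong."""
    failures = []
    with torch.no_grad():
        for _ in range(n_samples):
            pose = tuple(random.uniform(0.0, 360.0) for _ in range(3))  # yaw, pitch, roll
            img = render_object(obj_path, *pose)
            pred = model(preprocess(img).unsqueeze(0)).argmax(dim=1).item()
            if pred != true_class:
                failures.append((pose, pred))
    return failures
```

Random pose sampling is only the simplest instantiation; the paper's framework also covers guided search over renderer parameters.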

