Cross-Domain Video Anomaly Detection without Target Domain Adaptation

WACV (2023)

Abstract
Most cross-domain unsupervised Video Anomaly Detection (VAD) works assume that at least a few task-relevant target-domain training samples are available for adaptation from the source to the target domain. However, this requires laborious model tuning by the end-user, who may prefer a system that works out-of-the-box. To address such practical scenarios, we identify a novel target-domain (inference-time) VAD task where no target-domain training data are available. To this end, we propose a new `Zero-shot Cross-domain Video Anomaly Detection (zxVAD)' framework that includes a future-frame prediction generative model setup. Different from prior future-frame prediction models, our model uses a novel Normalcy Classifier module to learn the features of normal event videos by learning how such features differ relative to features in pseudo-abnormal examples. A novel Untrained Convolutional Neural Network (CNN) based Anomaly Synthesis module crafts these pseudo-abnormal examples by adding foreign objects to normal video frames with no extra training cost. With our novel relative normalcy feature learning strategy, zxVAD generalizes and learns to distinguish between normal and abnormal frames in a new target domain without adaptation during inference. Through evaluations on common datasets, we show that zxVAD outperforms the state-of-the-art (SOTA), regardless of whether task-relevant (i.e., VAD) source training data are available or not. Lastly, zxVAD also beats SOTA methods on inference-time efficiency metrics, including model size, total parameters, GPU energy consumption, and GMACs.
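The pseudo-abnormal synthesis idea described above can be sketched as follows. This is a minimal NumPy illustration of the core operation (pasting a foreign object into a normal frame), not the paper's Untrained-CNN module; the function and variable names are hypothetical.

```python
import numpy as np

def synthesize_pseudo_abnormal(frame: np.ndarray, obj: np.ndarray,
                               rng: np.random.Generator) -> np.ndarray:
    """Paste a foreign-object crop at a random location in a normal frame.

    Hypothetical sketch of the Anomaly Synthesis idea: like the paper's
    untrained-CNN module, this requires no training, but here the object
    is inserted with a plain paste rather than a learned-free CNN blend.
    """
    h, w = obj.shape[:2]
    H, W = frame.shape[:2]
    y = int(rng.integers(0, H - h + 1))   # random top-left corner
    x = int(rng.integers(0, W - w + 1))
    out = frame.copy()
    out[y:y + h, x:x + w] = obj           # overwrite the region with the object
    return out

rng = np.random.default_rng(0)
normal_frame = np.zeros((64, 64, 3), dtype=np.uint8)     # stand-in normal frame
foreign_obj = np.full((16, 16, 3), 255, dtype=np.uint8)  # stand-in foreign object
pseudo_abnormal = synthesize_pseudo_abnormal(normal_frame, foreign_obj, rng)
```

Such synthesized frames serve as negatives for the Normalcy Classifier, letting it learn normal features relative to abnormal ones without any real anomaly data.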
Keywords
adaptation, detection, video, cross-domain