Latent-Insensitive Autoencoders for Anomaly Detection and Class-Incremental Learning

arXiv (2021)

Citations: 3 | Views: 2
Abstract
Reconstruction-based approaches to anomaly detection tend to fall short when applied to complex datasets with target classes that possess high inter-class variance. Similar to the idea of self-taught learning used in transfer learning, many domains are rich with similar unlabeled datasets that could be leveraged as a proxy for out-of-distribution samples. In this paper we introduce the Latent-Insensitive Autoencoder (LIS-AE), where unlabeled data from a similar domain is utilized as negative examples to shape the latent layer (bottleneck) of a regular autoencoder such that it is only capable of reconstructing one task. Since the underlying goal of LIS-AE is to reconstruct only in-distribution samples, it is naturally applicable to class-incremental learning. We treat class-incremental learning as multiple anomaly detection tasks by adding a different latent layer for each class and using the other classes available in each task as negative examples to shape each latent layer. We evaluate our model in multiple anomaly detection and class-incremental settings, presenting quantitative and qualitative analyses that showcase its accuracy and flexibility for both anomaly detection and class-incremental learning.
Keywords
anomaly detection,autoencoders,one-class classification,principal components analysis,self-taught learning,negative examples
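The central mechanism described in the abstract, using negative examples from a similar domain so that the bottleneck reconstructs only in-distribution data, can be illustrated with a toy linear autoencoder. The sketch below is not the authors' architecture: the hinge margin, the weight `lam`, the choice to apply the negative-phase gradient only to the encoder (standing in for the latent layer), and the synthetic 2-D data are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: "in-distribution" points spread along the [1, 1] direction,
# "negative" (proxy out-of-distribution) points along [1, -1].
pos = rng.normal(size=(200, 1)) * np.array([1.0, 1.0]) + rng.normal(scale=0.05, size=(200, 2))
neg = rng.normal(size=(200, 1)) * np.array([1.0, -1.0]) + rng.normal(scale=0.05, size=(200, 2))

d, k = 2, 1                               # input dim, bottleneck dim
E = rng.normal(scale=0.1, size=(k, d))    # encoder weights (the layer being "shaped")
D = rng.normal(scale=0.1, size=(d, k))    # decoder weights
lr, lam, margin = 0.01, 0.1, 1.0          # assumed hyperparameters

def recon_err(X):
    """Per-sample squared reconstruction error."""
    R = X @ E.T @ D.T - X
    return (R ** 2).sum(axis=1)

for _ in range(500):
    # Positive phase: minimise reconstruction error on in-distribution data.
    Z = pos @ E.T                  # latent codes
    R = Z @ D.T - pos              # residuals
    gD = 2 * R.T @ Z / len(pos)
    gE = 2 * (R @ D).T @ pos / len(pos)

    # Negative phase: hinge loss pushes negatives' error above a margin,
    # applied only to the encoder so the bottleneck becomes insensitive to them.
    active = recon_err(neg) < margin
    if active.any():
        Zn = neg[active] @ E.T
        Rn = Zn @ D.T - neg[active]
        gE -= lam * 2 * (Rn @ D).T @ neg[active] / active.sum()

    D -= lr * gD
    E -= lr * gE

# After training, in-distribution points reconstruct well while negatives do not,
# so reconstruction error works as an anomaly score.
print(recon_err(pos).mean(), recon_err(neg).mean())
```

Thresholding `recon_err` then separates the two distributions; in the class-incremental setting described above, one would keep a separately shaped latent layer per class and score a sample against each.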