Robust and structural sparsity auto-encoder with L21-norm minimization

Neurocomputing (2021)

Abstract
The mean square error (MSE), the most commonly used cost function for auto-encoders, is sensitive to outliers and impulsive noise in real-world applications, which may misguide the training process. At the same time, a stacked auto-encoder (SAE) is a fully connected network, so the number of parameters grows rapidly as nodes and layers are added, which may cause over-fitting, high computational complexity, and large storage overhead. The robustness and sparsity of the auto-encoder therefore need further investigation. In this paper, we develop a robust and structurally sparse stacked auto-encoder with an L21-norm loss function and regularization (LR21-SAE). The L21-norm loss function alleviates the negative impact of outlier samples and thus yields superior robustness. The L21-norm regularization forces some rows/columns of the weight matrices to shrink entirely to zero, promoting sparse feature learning and a compact network. We validate the LR21-SAE model on several common datasets. Experimental results show that LR21-SAE is significantly robust to outlier noise in real-world data; it also produces a deep neural network with sparse node connections and notably fewer parameters than the original non-sparse network, while maintaining outstanding performance.
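
For readers unfamiliar with the norm: for a matrix M with rows m_i, ||M||_{2,1} = sum_i ||m_i||_2, i.e., the sum of the row-wise Euclidean norms. The abstract does not spell out the objective in closed form, so the following is only a minimal NumPy sketch of how an L21 reconstruction loss plus an L21 weight regularizer could be assembled; the function names (l21_norm, lr21_objective), the row-wise orientation, and the single trade-off parameter lam are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def l21_norm(M):
        """L2,1-norm of a matrix: the sum of the Euclidean norms of its rows."""
        return np.sum(np.sqrt(np.sum(M ** 2, axis=1)))

    def lr21_objective(X, X_hat, weights, lam):
        """Sketch of an LR21-SAE-style objective (hypothetical helper).

        X, X_hat : (n_samples, n_features) inputs and reconstructions.
        weights  : list of weight matrices of the stacked auto-encoder.
        lam      : regularization strength (assumed single scalar).
        """
        # L2,1 loss: each sample contributes the unsquared L2 norm of its
        # reconstruction error, so an outlier grows the objective linearly
        # rather than quadratically as it would under MSE.
        loss = l21_norm(X - X_hat)
        # L2,1 regularizer: penalizing the sum of row norms drives entire
        # rows of each weight matrix to zero, pruning whole connections.
        reg = sum(l21_norm(W) for W in weights)
        return loss + lam * reg

Because the per-sample error enters through an unsquared norm, a single corrupted sample shifts the objective only linearly, which is the source of the robustness claimed in the abstract; the row-wise regularizer is what yields structured (node-level) rather than entry-level sparsity.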
Keywords
Auto-encoder, L21-norm regularization, L21-norm loss, Sparsity, Robust