Gram regularization for sparse and disentangled representation

Pattern Analysis and Applications (2022)

Abstract
The relationship between samples is often ignored when training neural networks for classification tasks. If properly utilized, this information can bring many benefits to the trained models. On the one hand, neural networks trained without regard to inter-sample similarities may place samples from different classes close together in the feature space, which undermines the discriminative ability of the trained models. On the other hand, regularizing inter-class and intra-class similarities in the feature space during training can effectively disentangle the representations of different classes and make the representation sparse. To achieve this, a new regularization method is proposed that penalizes positive inter-class similarities and negative intra-class similarities in the feature space. Experimental results show that the proposed method not only obtains sparse and disentangled representations but also improves the performance of the trained models on many datasets.
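The penalty described in the abstract can be sketched directly: with L2-normalized features, the Gram matrix of a mini-batch holds pairwise cosine similarities, and the regularizer sums the positive entries for pairs from different classes and the negative entries for pairs from the same class. The PyTorch snippet below is a minimal illustration under those assumptions; the function name `gram_regularizer` and the weight `lam` in the usage line are hypothetical, and the authors' exact formulation may differ.

```python
import torch
import torch.nn.functional as F

def gram_regularizer(features, labels):
    """Sketch of a Gram-style penalty: positive inter-class and
    negative intra-class similarities in the feature space are penalized."""
    # Normalize features so the Gram matrix holds cosine similarities in [-1, 1].
    z = F.normalize(features, dim=1)
    gram = z @ z.t()  # (B, B) pairwise similarities

    # Boolean masks for same-class pairs and for off-diagonal entries.
    same_class = labels.unsqueeze(0).eq(labels.unsqueeze(1))
    off_diag = ~torch.eye(len(labels), dtype=torch.bool, device=labels.device)

    # Penalize positive similarities between samples of different classes ...
    inter_penalty = gram.clamp(min=0)[(~same_class) & off_diag].sum()
    # ... and negative similarities between samples of the same class.
    intra_penalty = (-gram).clamp(min=0)[same_class & off_diag].sum()

    return (inter_penalty + intra_penalty) / len(labels)
```

In use, such a term would simply be added to the classification loss, e.g. `loss = criterion(logits, labels) + lam * gram_regularizer(feats, labels)`, where `feats` are the penultimate-layer features and `lam` is a weighting hyperparameter.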
Keywords
Regularization, Sparse representation, Disentangled representation, Decision margin