Cross-Layer Contrastive Learning of Latent Semantics for Facial Expression Recognition

Weicheng Xie, Zhibin Peng, Linlin Shen, Wenya Lu, Yang Zhang, Siyang Song

IEEE Transactions on Image Processing (2024)

Abstract
Convolutional neural networks (CNNs) have achieved significant improvements on the task of facial expression recognition. However, current training still suffers from inconsistent learning intensities across layers, i.e., the feature representations in shallow layers are not learned as sufficiently as those in deep layers. To this end, this work proposes a contrastive learning framework that aligns the feature semantics of shallow and deep layers, followed by an attention module that represents the multi-scale features in a weight-adaptive manner. The proposed algorithm has three main merits. First, the learning intensity, defined as the magnitude of the backpropagation gradient, of the shallow-layer features is enhanced by the cross-layer contrastive learning. Second, the latent semantics in the shallow-layer and deep-layer features are explored and aligned during contrastive learning, so that the fine-grained characteristics of expressions are taken into account in feature representation learning. Third, by integrating the multi-scale features from multiple layers with an attention module, our algorithm achieves state-of-the-art accuracies of 92.21%, 89.50%, and 62.82% on three in-the-wild expression databases (RAF-DB, FERPlus, and SFEW, respectively), and the second-best accuracy of 65.29% on AffectNet. Our code will be made publicly available.
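The cross-layer alignment described in the abstract can be illustrated with a small sketch. Below is a minimal, hypothetical PyTorch example of an InfoNCE-style contrastive loss between pooled shallow-layer and deep-layer features projected into a shared embedding space; the feature dimensions, projection size, and temperature are assumptions made for illustration, not the authors' exact design.

# Hypothetical sketch of a cross-layer contrastive objective: shallow- and
# deep-layer feature maps are pooled, projected into a shared embedding space,
# and aligned with a symmetric InfoNCE loss, so gradients from the deep layer
# also strengthen shallow-layer learning. All sizes below are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossLayerContrast(nn.Module):
    def __init__(self, shallow_dim, deep_dim, embed_dim=128, temperature=0.1):
        super().__init__()
        # Separate projection heads map pooled shallow / deep features to one space.
        self.proj_shallow = nn.Linear(shallow_dim, embed_dim)
        self.proj_deep = nn.Linear(deep_dim, embed_dim)
        self.temperature = temperature

    def forward(self, shallow_feat, deep_feat):
        # Global-average-pool spatial maps (B, C, H, W) -> (B, C).
        s = F.adaptive_avg_pool2d(shallow_feat, 1).flatten(1)
        d = F.adaptive_avg_pool2d(deep_feat, 1).flatten(1)
        s = F.normalize(self.proj_shallow(s), dim=1)
        d = F.normalize(self.proj_deep(d), dim=1)
        # Cosine-similarity logits; matching (same-image) pairs lie on the diagonal.
        logits = s @ d.t() / self.temperature
        targets = torch.arange(s.size(0), device=s.device)
        # Symmetric InfoNCE: shallow -> deep and deep -> shallow directions.
        return 0.5 * (F.cross_entropy(logits, targets) +
                      F.cross_entropy(logits.t(), targets))

if __name__ == "__main__":
    loss_fn = CrossLayerContrast(shallow_dim=256, deep_dim=512)
    shallow = torch.randn(8, 256, 28, 28)   # e.g. an intermediate CNN stage output
    deep = torch.randn(8, 512, 7, 7)        # e.g. the last stage output
    print(loss_fn(shallow, deep).item())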
Keywords
Semantics, Cross-layer design, Face recognition, Self-supervised learning, Representation learning, Faces, Task analysis, Facial expression recognition, contrastive learning, latent semantic alignment, multi-layer attention