Valence and Arousal Estimation Based on Multimodal Temporal-Aware Features for Videos in the Wild

2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW 2022)

Abstract
This paper presents our submission to the Valence-Arousal Estimation Challenge of the 3rd Affective Behavior Analysis in-the-wild (ABAW) competition. Based on multimodal feature representations that fuse visual and aural information, we utilize two types of temporal encoders, a transformer-based encoder and an LSTM-based encoder, to capture temporal context in the video. With these temporal context-aware representations, we employ fully-connected layers to predict the valence and arousal values of the video frames. In addition, smoothing is applied to refine the initial predictions, and a model ensemble strategy combines results from different model setups. Our system achieves a Concordance Correlation Coefficient (CCC) of 0.606 for valence and 0.602 for arousal, with a mean CCC of 0.601, ranking first in the challenge.
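
The abstract describes the pipeline at a high level only. Below is a minimal sketch, not the authors' implementation, of how fused audio-visual frame features could be passed through an LSTM-based or transformer-based temporal encoder and a fully-connected head to regress per-frame valence and arousal, along with the CCC metric used for evaluation. All module names and dimensions (e.g. `TemporalVAHead`, `feat_dim`, `hidden_dim`) are illustrative assumptions.

```python
# Minimal sketch (PyTorch), NOT the authors' code: fused audio-visual frame
# features -> temporal encoder (LSTM or transformer) -> FC head regressing
# per-frame valence/arousal. Dimensions and names are assumptions.
import torch
import torch.nn as nn


class TemporalVAHead(nn.Module):
    def __init__(self, feat_dim=512, hidden_dim=256, encoder="lstm"):
        super().__init__()
        if encoder == "lstm":
            # Bidirectional LSTM over the frame sequence.
            self.encoder = nn.LSTM(feat_dim, hidden_dim, num_layers=2,
                                   batch_first=True, bidirectional=True)
            out_dim = 2 * hidden_dim
        else:
            # Transformer encoder over the frame sequence.
            layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=8,
                                               batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=4)
            out_dim = feat_dim
        # Fully-connected head predicting (valence, arousal) per frame.
        self.fc = nn.Linear(out_dim, 2)

    def forward(self, x):                 # x: (batch, frames, feat_dim)
        if isinstance(self.encoder, nn.LSTM):
            x, _ = self.encoder(x)
        else:
            x = self.encoder(x)
        return torch.tanh(self.fc(x))     # values in [-1, 1]


def ccc(pred, gold):
    """Concordance Correlation Coefficient between two 1-D tensors."""
    pred_mean, gold_mean = pred.mean(), gold.mean()
    pred_var, gold_var = pred.var(unbiased=False), gold.var(unbiased=False)
    cov = ((pred - pred_mean) * (gold - gold_mean)).mean()
    return 2 * cov / (pred_var + gold_var + (pred_mean - gold_mean) ** 2)


# Example: 4 clips of 64 frames with 512-dim fused features.
feats = torch.randn(4, 64, 512)
model = TemporalVAHead(encoder="lstm")
va = model(feats)                          # (4, 64, 2): valence, arousal
print(ccc(va[..., 0].flatten(), torch.rand(4 * 64) * 2 - 1))
```

In the same spirit, the smoothing step mentioned in the abstract could be, for example, a moving average over the per-frame predictions, and the ensemble a simple average of outputs from the different encoder setups; the paper itself should be consulted for the exact choices.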
Keywords
temporal-aware features,Valence-Arousal Estimation Challenge,3rd Affective Behavior Analysis,multimodal feature representations,visual information,aural information,temporal encoder,temporal context information,transformer based encoder,temporal context-aware representations,arousal values,video frames,initial predictions,model ensemble strategy