Multi-Modal Emotion Classification in Virtual Reality Using Reinforced Self-Training

Yi Liu, Jianzhang Li, Dewen Cui, Eri Sato-Shimokawara

JOURNAL OF ADVANCED COMPUTATIONAL INTELLIGENCE AND INTELLIGENT INFORMATICS (2023)

Abstract
Affective computing focuses on recognizing emotions using a combination of psychology, computer science, and biomedical engineering. With virtual reality (VR) becoming more widely accessible, affective computing has become increasingly important for supporting social interactions on online virtual platforms. However, accurately estimating a person's emotional state in VR is challenging because conditions differ from the real world; for example, facial expressions are unavailable behind a headset. This research proposes a self-training method that uses unlabeled data and a reinforcement learning approach to select and label data more accurately. Experiments on a dataset of dialogues of VR players show that the proposed method achieved an accuracy of over 80% on dominance and arousal labels and outperformed previous techniques in the few-shot classification of emotions based on physiological signals.
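The abstract describes a self-training loop in which a policy decides which pseudo-labeled examples to trust. The sketch below is illustrative only: the paper's actual selection policy is learned by reinforcement learning, whereas here a simple confidence threshold over a nearest-centroid classifier stands in for it; all function names and data are hypothetical.

```python
# Minimal self-training sketch. The confidence threshold below is a stand-in
# for the paper's learned (RL-based) selection policy; everything here is
# illustrative, not the authors' implementation.
import math

def centroid(points):
    """Mean of a list of 2-D feature vectors (e.g., arousal/dominance features)."""
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(2)]

def predict(centroids, x):
    """Return (label, confidence) from distances to per-class centroids."""
    dists = {lab: math.dist(c, x) for lab, c in centroids.items()}
    lab = min(dists, key=dists.get)
    farthest = max(dists.values())  # crude normalizer for a [0, 1) confidence
    conf = 1.0 - dists[lab] / (dists[lab] + farthest + 1e-9)
    return lab, conf

def self_train(labeled, unlabeled, threshold=0.6, rounds=3):
    """Iteratively pseudo-label unlabeled points the classifier is confident about."""
    labeled = {lab: list(pts) for lab, pts in labeled.items()}
    pool = list(unlabeled)
    for _ in range(rounds):
        cents = {lab: centroid(pts) for lab, pts in labeled.items()}
        deferred = []
        for x in pool:
            lab, conf = predict(cents, x)
            if conf >= threshold:       # selection step (an RL policy in the paper)
                labeled[lab].append(x)  # pseudo-label accepted into the train set
            else:
                deferred.append(x)      # left for a later round
        pool = deferred
    return labeled
```

A toy run: starting from two labeled clusters, confident points near each cluster are absorbed, while an ambiguous midpoint is deferred rather than mislabeled, which is the behavior a better selection policy is meant to improve.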
Keywords
emotional states classification, physiological signals, self-training, reinforcement learning