Learn2smile: Learning Non-Verbal Interaction Through Observation

2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2017

Citations: 36 | Views: 159
Abstract
Interactive agents are becoming increasingly common in many application domains, such as education, healthcare, and personal assistance. The success of such embodied agents relies on their ability to have sustained engagement with their human users. Such engagement requires agents to be socially intelligent, equipped with the ability to understand and reciprocate both verbal and non-verbal cues. While there has been tremendous progress in verbal communication, mostly driven by the success of speech recognition and question-answering, teaching agents to appropriately react to facial expressions has received less attention. In this paper, we focus on non-verbal facial cues for face-to-face communication between a user and an embodied agent. We propose a method that automatically learns to update the agent's facial expressions based on the user's expressions. We adopt a learning scheme and train a deep neural network on hundreds of videos, containing pairs of people engaging in a conversation, without external human supervision. Our experimental results show the efficacy of our model in sustained long-term prediction of the agent's facial landmarks. We present comparative results showing that our model significantly outperforms baseline approaches, and provide insightful human studies to better understand our model's qualitative performance. We release our dataset to further encourage research in this field.
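To make the setup the abstract describes more concrete, below is a minimal sketch of one plausible form of such a model: a recurrent network trained to predict an agent's facial landmarks from the interlocutor's landmark trajectory, using paired conversation recordings as self-generated supervision. Everything here (PyTorch, the 68-point landmark layout, the LSTM architecture, and all names and hyperparameters) is an illustrative assumption; the abstract does not specify the paper's actual network.

```python
# Illustrative sketch only -- NOT the paper's implementation.
# A recurrent model that maps the user's facial-landmark sequence to
# the agent's landmarks one frame ahead, trained on aligned landmark
# tracks extracted from videos of two people in conversation.
import torch
import torch.nn as nn

NUM_LANDMARKS = 68              # assumed: standard 68-point face landmarks
FEAT_DIM = NUM_LANDMARKS * 2    # (x, y) per landmark

class LandmarkPredictor(nn.Module):
    def __init__(self, hidden_dim=256):
        super().__init__()
        # Encode the user's landmark trajectory over time.
        self.rnn = nn.LSTM(FEAT_DIM, hidden_dim, batch_first=True)
        # Decode each hidden state into the agent's landmark coordinates.
        self.head = nn.Linear(hidden_dim, FEAT_DIM)

    def forward(self, user_landmarks):
        # user_landmarks: (batch, time, FEAT_DIM), normalized coordinates
        h, _ = self.rnn(user_landmarks)
        return self.head(h)  # predicted agent landmarks per frame

# Training-loop sketch: pairs of aligned landmark tracks (user, agent)
# serve as input and target, so no manual labels are required beyond
# the recordings themselves. Dummy tensors stand in for real data.
model = LandmarkPredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

user = torch.randn(8, 100, FEAT_DIM)    # 8 clips, 100 frames each
agent = torch.randn(8, 100, FEAT_DIM)   # agent track, shifted one frame
for _ in range(3):
    opt.zero_grad()
    loss = loss_fn(model(user), agent)
    loss.backward()
    opt.step()
```

At inference time, a model of this shape can be rolled out autoregressively, feeding the user's live landmarks in and rendering the predicted agent landmarks frame by frame, which is consistent with the "sustained long-term prediction" evaluation the abstract mentions.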
Keywords
facial expressions,nonverbal facial cues,face-to-face communication,embodied agent,learning scheme,external human supervision,learn2Smile,nonverbal interaction,interactive agents,healthcare,nonverbal cues,verbal communication,speech recognition,question-answering,user expressions,deep neural network