A Multifaceted Study On Eye Contact Based Speaker Identification In Three-Party Conversations

PROCEEDINGS OF THE 2017 ACM SIGCHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS (CHI'17), 2017

Abstract
To precisely understand human gaze behaviors in three-party conversations, this work investigates whether the speaker can be reliably identified among the interlocutors on the basis of interactive eye-contact behaviors alone, without access to speech signals. Building on a pre-recorded, multimodal, three-party conversational behavior dataset, a statistical framework is proposed to determine who the speaker is from the interactive eye-contact behaviors. Additionally, with the aid of virtual-human technologies, a user study is conducted to examine whether subjects can distinguish the speaker from the listeners based solely on the interlocutors' gaze behaviors. Our results show that eye contact provides a reliable cue for identifying the speaker in three-party conversations.
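The paper's statistical framework is not reproduced here; as a rough, hypothetical sketch of the general idea only, the snippet below guesses the speaker in a three-party exchange from binary per-frame eye-contact observations, under the illustrative assumption that listeners direct eye contact at the speaker more often than at anyone else. The data layout, window length, and scoring rule are all assumptions for illustration, not the authors' method.

    # Hypothetical illustration only -- not the framework proposed in the paper.
    # gaze[i][j] is a list of 0/1 flags per video frame: 1 if person i makes
    # eye contact with person j in that frame (persons are indexed 0, 1, 2).

    def received_gaze_share(gaze, person, n_frames):
        """Fraction of frames in which the other two interlocutors gaze at `person`."""
        others = [p for p in range(3) if p != person]
        received = sum(gaze[o][person][t] for o in others for t in range(n_frames))
        return received / (2 * n_frames)

    def guess_speaker(gaze, n_frames):
        """Pick the person who attracts the most eye contact within the window."""
        scores = {p: received_gaze_share(gaze, p, n_frames) for p in range(3)}
        return max(scores, key=scores.get)

    # Toy example: person 1 is gazed at most of the time by persons 0 and 2.
    n = 5
    gaze = {i: {j: [0] * n for j in range(3)} for i in range(3)}
    gaze[0][1] = [1, 1, 1, 0, 1]
    gaze[2][1] = [1, 1, 0, 1, 1]
    gaze[1][0] = [1, 0, 0, 0, 0]
    print(guess_speaker(gaze, n))  # -> 1

In practice such a heuristic would be replaced by a proper statistical model estimated from the conversational dataset, but the example conveys why mutual-gaze patterns can carry speaker information even without audio.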
Keywords
eye gaze,eye contact,face-to-face communication,multiparty conversation,human-human interaction,head gestures,eye-head coordination,perception of gaze,nonverbal behaviors