Is it Possible to Evaluate the Contribution of Visual Information to the Process of Speech Comprehension?

AVSP (1998)

Abstract
We report in this paper the results of a series of comprehension tests run with the aim of investigating the contribution of visual information to the comprehension of conversational speech. The methodology we designed was presented in a previous work [1], in which we also reported the results of a pilot test confirming our original hypothesis that the comprehension of conversational speech decreases when passing from bimodal to unimodal transmission. To further investigate the contribution of visual information to speech comprehension, we ran a new series of comprehension tests consisting of three phases: 1. presentation of the multimodal speech signal (auditory + visual); 2. presentation of the sample in the auditory modality only (i.e. without the integration of visual cues); 3. presentation of the sample in the visual modality only (without the integration of auditory cues). As sample material we used a short conversation between two male speakers, edited from an Italian TV soap opera. We tested three groups of 12 people with no sight or hearing pathologies, as well as a smaller group of 5 congenitally deaf people, who served as a kind of “control” group in the third phase. It is clear from our results that visual cues help the subjects to understand the main topic of the conversation and to remember some of its details. Moreover, they seem to play an important role in the interpretation of the emotional state of the speakers. In some cases, however, visual cues appear to be misleading.
Keywords
visual information, comprehension, speech