Audio-visual scene analysis in conditions with head- and eye-steered beamformers in virtual reality

Journal of the Acoustical Society of America (2023)

Abstract
In crowded social settings, listeners often face the challenge of following a conversation in the presence of other conversations. Several factors influence the difficulty of this task, including the number of talkers, the amount of reverberation, and the listener's hearing status. Beamformers in hearing aids have the potential to mitigate these factors by improving the signal-to-noise ratio, but their effectiveness in real-world settings has not yet been clearly demonstrated. Here, we used virtual reality to investigate the effect of head- and eye-steered beamformers on participants' ability to analyze complex audio-visual scenes. The participants' task was to locate an ongoing target story within a mixture of other stories, in scenes that differed in the number of concurrent talkers and the amount of reverberation. The talkers were distributed in the frontal hemisphere between ±105°. The primary outcome measure was the time taken to identify the location of the target talker. Preliminary results show shorter response times with beamforming than with an omnidirectional setting, especially when more talkers were present. This framework provides a new means of examining the effects of hearing technologies on behavior in complex audio-visual scenes.
Keywords
virtual reality, beamformers, audio-visual, eye-steered