Dialocalization: Acoustic speaker diarization and visual localization as joint optimization problem

TOMCCAP (2010)

Abstract
The following article presents a novel audio-visual approach for unsupervised speaker localization in both time and space and systematically analyzes its unique properties. Using recordings from a single, low-resolution room overview camera and a single far-field microphone, a state-of-the-art audio-only speaker diarization system (speaker localization in time) is extended so that both acoustic and visual models are estimated as part of a joint unsupervised optimization problem. The speaker diarization system first automatically determines the speech regions and estimates “who spoke when”; in a second step, the visual models are used to infer the location of the speakers in the video. We call this process “dialocalization.” The experiments were performed on real-world meetings using 4.5 hours of the publicly available AMI meeting corpus. The proposed system exploits audio-visual integration not only to improve the accuracy of a state-of-the-art (audio-only) speaker diarization system, but also to add visual speaker localization at little incremental engineering and computation cost. The combined algorithm exhibits properties, such as increased robustness, that cannot be observed in algorithms based on a single modality. The article describes the algorithm, presents benchmarking results, explains its properties, and systematically discusses the contributions of each modality.
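The abstract outlines a two-step pipeline: an unsupervised audio-only diarization pass assigns each frame to a speaker cluster (or non-speech), and per-cluster visual models estimated from the same labels are then used to localize each speaker in the overview video. The Python sketch below illustrates that flow only in outline; the function names, the simple motion-energy visual model, and the assumption that labels are already aligned to the video frame rate are illustrative stand-ins, not the authors' implementation.

# Conceptual sketch of the two-step "dialocalization" flow described above:
# an audio-only diarization pass labels each frame with a speaker cluster,
# and per-cluster visual models are then used to localize each speaker in
# the low-resolution overview video. All names and the motion-energy visual
# model are illustrative assumptions.

import numpy as np

def diarize_audio(audio_features, n_speakers):
    """Placeholder for the unsupervised audio-only diarization step.

    Expected to return one label per video-rate frame: a cluster index
    in [0, n_speakers) for speech, or -1 for non-speech.
    """
    raise NotImplementedError("plug in an audio diarization backend here")

def estimate_visual_models(video_frames, frame_labels, n_speakers):
    """Accumulate frame-difference motion energy per speaker cluster.

    video_frames: list of 2-D grayscale arrays from the overview camera.
    frame_labels: per-frame speaker labels aligned to the video rate.
    """
    height, width = video_frames[0].shape
    motion_maps = np.zeros((n_speakers, height, width))
    for t in range(1, len(video_frames)):
        speaker = frame_labels[t]
        if speaker < 0:
            continue  # skip non-speech frames
        diff = np.abs(video_frames[t].astype(float)
                      - video_frames[t - 1].astype(float))
        motion_maps[speaker] += diff
    return motion_maps

def localize_speakers(motion_maps):
    """Return the (row, col) of maximum accumulated motion for each speaker."""
    return [np.unravel_index(np.argmax(m), m.shape) for m in motion_maps]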
Keywords
unsupervised speaker localization, state-of-the-art audio-only speaker diarization, visual localization, visual speaker localization, multimodal integration, joint optimization problem, single modality, speech, speaker diarization, visual model, speaker diarization system, proposed system, single far-field microphone, speaker localization, acoustic speaker diarization, optimization problem, low resolution