Toward Visual Voice Activity Detection for Unconstrained Videos

International Conference on Image Processing (2019)

Abstract
Prevalent audio-based Voice Activity Detection (VAD) systems are challenged by the presence of ambient noise and are sensitive to variations in the type of noise. The use of information from the visual modality, when available, can help overcome some of the problems of audio-based VAD. Existing visual-VAD systems, however, do not operate directly on the whole image but require intermediate face detection, facial landmark detection, and subsequent facial feature extraction from the lip region. In this work we present an end-to-end trainable Hierarchical Context Aware (HiCA) architecture for visual-VAD on videos obtained in unconstrained environments, which can be trained with videos as input and audio speech labels as output. The network is designed to account for local and global temporal information in a video sequence. In contrast to existing visual-VAD systems, our proposed approach does not rely on face detection and subsequent facial feature extraction. It achieves a VAD accuracy of 66% on a dataset of Hollywood movie videos using visual information alone. Further analysis of the representations learned by our visual-VAD system shows that the network learns to localize human faces, and sometimes specifically speaking faces. Our quantitative analysis of the effectiveness of face localization shows that our system outperforms sound-localization networks designed for unconstrained videos.
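The abstract describes a model that couples a frame-level encoder with local and global temporal context and is trained end-to-end with frame-level speech labels. The sketch below illustrates that general idea only; it is not the authors' HiCA network. The per-frame CNN encoder, the 1D-convolutional local module, the recurrent global module, and all layer sizes are assumptions made for this example.

```python
# Minimal sketch of an end-to-end visual-VAD model in PyTorch.
# Layer choices and sizes are illustrative assumptions, not the HiCA
# configuration from the paper: a per-frame CNN encoder, a local temporal
# module (1D convolution over short windows), and a global temporal module
# (bidirectional GRU over the whole clip) producing per-frame speech logits.
import torch
import torch.nn as nn


class VisualVADSketch(nn.Module):
    def __init__(self, feat_dim=128, hidden_dim=128):
        super().__init__()
        # Frame-level encoder: operates on whole frames, no face detection.
        self.frame_encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        # Local temporal context: short-range patterns across nearby frames.
        self.local_context = nn.Conv1d(feat_dim, feat_dim, kernel_size=5, padding=2)
        # Global temporal context: clip-level dynamics.
        self.global_context = nn.GRU(feat_dim, hidden_dim, batch_first=True,
                                     bidirectional=True)
        # Per-frame speech / non-speech logit.
        self.classifier = nn.Linear(2 * hidden_dim, 1)

    def forward(self, clip):
        # clip: (batch, time, 3, H, W) video tensor.
        b, t, c, h, w = clip.shape
        feats = self.frame_encoder(clip.reshape(b * t, c, h, w)).reshape(b, t, -1)
        local = torch.relu(self.local_context(feats.transpose(1, 2))).transpose(1, 2)
        global_feats, _ = self.global_context(local)
        return self.classifier(global_feats).squeeze(-1)  # (batch, time) logits


if __name__ == "__main__":
    model = VisualVADSketch()
    video = torch.randn(2, 16, 3, 112, 112)        # two 16-frame clips
    labels = torch.randint(0, 2, (2, 16)).float()  # frame-level speech labels
    loss = nn.BCEWithLogitsLoss()(model(video), labels)
    loss.backward()
    print(loss.item())
```

Because supervision is per frame (speech labels obtained from the audio track), such a model can be trained on unconstrained video without any face bounding boxes or lip landmarks, which is the property the abstract emphasizes.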
Keywords
Cross-modal learning, visualization, localization, Visual-VAD