Learning Spatial Features from Audio-Visual Correspondence in Egocentric Videos
arXiv (2023)
Abstract
We propose a self-supervised method for learning representations based on
spatial audio-visual correspondences in egocentric videos. Our method uses a
masked auto-encoding framework to synthesize masked binaural (multi-channel)
audio through the synergy of audio and vision, thereby learning useful spatial
relationships between the two modalities. We use our pretrained features to
tackle two downstream video tasks requiring spatial understanding in social
scenarios: active speaker detection and spatial audio denoising. Through
extensive experiments, we show that our features are generic enough to improve
over multiple state-of-the-art baselines on both tasks on two challenging
egocentric video datasets that offer binaural audio, EgoCom and EasyCom.
Project: http://vision.cs.utexas.edu/projects/ego_av_corr.
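The core pretraining idea — masking portions of the binaural audio and reconstructing them with help from vision — can be illustrated with the masking step alone. Below is a minimal numpy sketch of randomly masking time-frequency patches of a two-channel spectrogram; the function name, patch size, and mask ratio are assumptions for illustration, not the paper's actual values.

```python
import numpy as np

rng = np.random.default_rng(0)

def mask_binaural_patches(spec, patch=16, mask_ratio=0.75, rng=rng):
    """Randomly mask square time-frequency patches of a 2-channel
    (binaural) spectrogram, in the style of masked auto-encoding.
    Returns the masked spectrogram and a boolean grid marking which
    patches stay visible (True = visible)."""
    C, F, T = spec.shape             # channels, freq bins, time frames
    pf, pt = F // patch, T // patch  # patch grid dimensions
    n = pf * pt
    keep = rng.permutation(n) < int(n * (1 - mask_ratio))
    keep = keep.reshape(pf, pt)
    masked = spec.copy()
    for i in range(pf):
        for j in range(pt):
            if not keep[i, j]:
                # Zero out masked patches in both binaural channels;
                # the decoder would be trained to reconstruct them.
                masked[:, i*patch:(i+1)*patch, j*patch:(j+1)*patch] = 0.0
    return masked, keep

# Toy binaural spectrogram: 2 channels, 64 freq bins, 64 time frames.
spec = rng.standard_normal((2, 64, 64))
masked, keep = mask_binaural_patches(spec)
print(keep.sum(), "of", keep.size, "patches left visible")  # → 4 of 16
```

In the paper's full framework, the reconstruction of the masked patches is conditioned on visual features as well, which is what forces the model to learn spatial audio-visual relationships rather than audio-only statistics.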