Siamese Vision Transformers are Scalable Audio-visual Learners
arXiv (2024)
Abstract
Traditional audio-visual methods rely on independent audio and visual
backbones, which is costly and does not scale. In this work, we investigate using
an audio-visual siamese network (AVSiam) for efficient and scalable
audio-visual pretraining. Our framework uses a single shared vision transformer
backbone to process audio and visual inputs, improving its parameter
efficiency, reducing the GPU memory footprint, and allowing us to scale our
method to larger datasets and model sizes. We pretrain our model using a
contrastive audio-visual matching objective with a multi-ratio random masking
scheme, which enables our model to process larger batches of audio-visual
instances, a property that benefits contrastive learning. Unlike prior audio-visual methods,
our method can robustly handle audio, visual, and audio-visual inputs with a
single shared ViT backbone. Furthermore, despite using the shared backbone for
both modalities, AVSiam achieves competitive or even better results than prior
methods on AudioSet and VGGSound for audio-visual classification and retrieval.
Our code is available at https://github.com/GenjiB/AVSiam
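To make the abstract's two core ideas concrete, below is a minimal PyTorch sketch of a Siamese setup in the spirit described: one shared transformer trunk encodes both modalities (only the patch embeddings are modality-specific), tokens are dropped at a ratio sampled from a small set ("multi-ratio" random masking), and the pooled audio and visual embeddings are aligned with a symmetric contrastive (InfoNCE) matching loss. This is an illustrative sketch, not the authors' implementation; all module names, dimensions, and the specific masking ratios are assumptions.

```python
# Minimal sketch (not the authors' code) of the ideas in the AVSiam abstract:
# a single shared ViT trunk for audio and video, multi-ratio random masking,
# and a contrastive audio-visual matching loss. Dimensions and ratios are
# illustrative assumptions.
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedViTSiamese(nn.Module):
    def __init__(self, dim=768, depth=12, heads=12):
        super().__init__()
        # Modality-specific patch embeddings; the transformer itself is shared.
        self.audio_patch = nn.Conv2d(1, dim, kernel_size=16, stride=16)  # log-mel spectrogram
        self.video_patch = nn.Conv2d(3, dim, kernel_size=16, stride=16)  # RGB frames
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4,
                                           batch_first=True, norm_first=True)
        self.backbone = nn.TransformerEncoder(layer, depth)  # single shared ViT trunk
        self.norm = nn.LayerNorm(dim)

    def tokenize(self, x, embed):
        # (B, C, H, W) -> (B, N, dim) patch tokens
        return embed(x).flatten(2).transpose(1, 2)

    def mask_tokens(self, tokens, ratio):
        # Keep a random subset of tokens; higher ratios shrink the sequence,
        # letting larger batches fit in GPU memory during pretraining.
        B, N, D = tokens.shape
        keep = max(1, int(N * (1 - ratio)))
        idx = torch.rand(B, N, device=tokens.device).argsort(dim=1)[:, :keep]
        return torch.gather(tokens, 1, idx.unsqueeze(-1).expand(-1, -1, D))

    def encode(self, tokens):
        # Mean-pooled embedding from the shared backbone.
        return self.norm(self.backbone(tokens).mean(dim=1))

    def forward(self, audio, video, ratios=(0.0, 0.5, 0.75)):
        ratio = random.choice(ratios)  # multi-ratio random masking (assumed ratios)
        a = self.encode(self.mask_tokens(self.tokenize(audio, self.audio_patch), ratio))
        v = self.encode(self.mask_tokens(self.tokenize(video, self.video_patch), ratio))
        return a, v

def contrastive_matching_loss(a, v, temperature=0.07):
    # Symmetric InfoNCE: matched audio-visual pairs are positives,
    # all other pairs in the batch are negatives.
    a, v = F.normalize(a, dim=-1), F.normalize(v, dim=-1)
    logits = a @ v.t() / temperature
    targets = torch.arange(a.size(0), device=a.device)
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))
```

Because masking shrinks the token sequences before they enter the shared backbone, each step can fit more audio-visual pairs in memory, which is why the abstract notes that the masking scheme helps contrastive learning: more in-batch negatives per update.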