Self-supervised audiovisual representation learning for remote sensing data

International Journal of Applied Earth Observation and Geoinformation (2023)

Abstract
Many deep learning approaches make extensive use of backbone networks pretrained on large datasets like ImageNet, which are then fine-tuned for the task at hand. In remote sensing, the lack of comparably large annotated datasets and the diversity of sensing platforms impede similar developments. To contribute towards the availability of pretrained backbone networks in remote sensing, we devise a self-supervised approach for pretraining deep neural networks. By exploiting the correspondence between co-located imagery and audio recordings, this is done entirely label-free, without the need for manual annotation. For this purpose, we introduce the SoundingEarth dataset, which consists of co-located aerial imagery and crowd-sourced audio samples from all around the world. Using this dataset, we pretrain ResNet models to map samples from both modalities into a common embedding space, encouraging the models to understand key properties of a scene that influence both its visual and auditory appearance. To validate the usefulness of the proposed approach, we compare the transfer learning performance of the resulting pretrained weights against weights obtained through other means. By fine-tuning the models on a number of commonly used remote sensing datasets, we show that our approach outperforms existing pretraining strategies for remote sensing imagery. The dataset, code and pretrained model weights are available at https://github.com/khdlr/SoundingEarth.
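The core idea of mapping co-located image and audio samples into a common embedding space is typically realized with a contrastive objective. The sketch below illustrates one common choice, a symmetric InfoNCE loss over a batch, using plain NumPy on placeholder embeddings; the function name, temperature value, and the specific loss form are illustrative assumptions, not necessarily the exact formulation used in the paper.

```python
import numpy as np

def l2_normalize(x):
    # Project each embedding onto the unit sphere so that dot
    # products become cosine similarities
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def symmetric_infonce(img_emb, aud_emb, temperature=0.07):
    # Similarity logits between every image/audio pair in the batch
    img = l2_normalize(img_emb)
    aud = l2_normalize(aud_emb)
    logits = img @ aud.T / temperature
    n = logits.shape[0]
    # Co-located pairs sit on the diagonal and act as positives;
    # all other pairs in the batch are negatives
    labels = np.arange(n)

    def xent(l):
        # Numerically stable cross-entropy over rows
        l = l - l.max(axis=1, keepdims=True)
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[np.arange(n), labels].mean()

    # Average the image->audio and audio->image directions
    return 0.5 * (xent(logits) + xent(logits.T))

# Placeholder embeddings standing in for ResNet outputs
rng = np.random.default_rng(0)
img_emb = rng.normal(size=(8, 128))
aud_emb = rng.normal(size=(8, 128))
loss = symmetric_infonce(img_emb, aud_emb)
print(round(float(loss), 4))
```

Minimizing such a loss pulls the embeddings of co-located imagery and audio together while pushing apart unrelated pairs, which is what encourages the backbone to encode scene properties shared by both modalities.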
Keywords
Self-supervised learning, Multi-modal learning, Representation learning, Audiovisual dataset