MTP: Advancing Remote Sensing Foundation Model via Multi-Task Pretraining
arXiv (2024)
Abstract
Foundation models have reshaped the landscape of Remote Sensing (RS) by
enhancing various image interpretation tasks. Pretraining is an active research
topic, encompassing supervised and self-supervised learning methods to
initialize model weights effectively. However, transferring pretrained
models to downstream tasks may suffer from task discrepancy, because
pretraining is typically formulated as an image classification or object
discrimination task. In this study, we explore the Multi-Task Pretraining (MTP) paradigm for
RS foundation models to address this issue. Using a shared encoder and
task-specific decoder architecture, we conduct multi-task supervised
pretraining on the SAMRS dataset, encompassing semantic segmentation, instance
segmentation, and rotated object detection. MTP supports both convolutional
neural networks and vision transformer foundation models with over 300 million
parameters. The pretrained models are finetuned on various RS downstream tasks,
such as scene classification, horizontal and rotated object detection, semantic
segmentation, and change detection. Extensive experiments across 14 datasets
demonstrate the superiority of our models over existing ones of similar size
and their competitive performance compared to larger state-of-the-art models,
thus validating the effectiveness of MTP.
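The shared-encoder, task-specific-decoder design described above can be sketched conceptually as follows. This is a minimal illustration under assumed simplifications, not the authors' implementation: the encoder, the per-task heads, and the summed multi-task loss here are toy stand-ins for the actual backbone and segmentation/detection decoders.

```python
# Toy sketch (assumption, not the paper's code) of multi-task pretraining:
# a single shared encoder feeds several task-specific decoders, and the
# pretraining objective is the sum of the per-task losses.

class SharedEncoder:
    """Stand-in for a CNN/ViT backbone: maps an input to a feature."""
    def __call__(self, image):
        return sum(image)  # trivially reduce the input to one "feature"

class TaskDecoder:
    """Hypothetical task head: one scalar weight per task."""
    def __init__(self, weight):
        self.weight = weight

    def __call__(self, feature):
        return self.weight * feature

def multi_task_loss(encoder, decoders, image, targets):
    """Encode once, then sum squared errors over all task heads."""
    feature = encoder(image)
    return sum((decoders[t](feature) - y) ** 2 for t, y in targets.items())

encoder = SharedEncoder()
decoders = {
    "semantic_segmentation": TaskDecoder(1.0),
    "instance_segmentation": TaskDecoder(0.5),
    "rotated_object_detection": TaskDecoder(2.0),
}
loss = multi_task_loss(
    encoder, decoders,
    image=[1, 2, 3],
    targets={
        "semantic_segmentation": 6.0,
        "instance_segmentation": 3.0,
        "rotated_object_detection": 12.0,
    },
)
print(loss)  # 0.0: each head exactly matches its target here
```

The key point the sketch captures is that all three pretraining tasks share one set of encoder parameters, so gradients from every task update the same backbone, which is what the finetuned downstream models then inherit.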