Context-Self Contrastive Pretraining for Crop Type Semantic Segmentation

IEEE Transactions on Geoscience and Remote Sensing (2022)

Abstract
In this article, we propose a fully supervised pretraining scheme based on contrastive learning, tailored to dense classification tasks. The proposed context-self contrastive loss (CSCL) learns an embedding space in which semantic boundaries pop up, using a similarity metric between every location in a training sample and its local context. For crop type semantic segmentation from satellite image time series (SITS), we find performance at parcel boundaries to be a critical bottleneck and explain how CSCL tackles the underlying cause of that problem, improving the state-of-the-art performance on this task. Additionally, using images from the Sentinel-2 (S2) satellite missions, we compile the largest, to our knowledge, SITS dataset densely annotated by crop type and parcel identities, which we make publicly available together with the data generation pipeline. Using these data we find that CSCL, even with minimal pretraining, improves all respective baselines, and we present a process for semantic segmentation at a higher resolution than that of the input images, obtaining crop classes at a more granular level. The code and instructions to download the data can be found at https://github.com/michaeltrs/DeepSatModels.
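To make the core idea concrete, the sketch below illustrates one possible form of a context-self contrastive loss for dense features: each pixel embedding is compared with its local spatial context, and neighbours sharing the same crop label act as positives while differently labelled neighbours act as negatives, which encourages boundaries to separate in embedding space. This is a minimal PyTorch sketch under our own assumptions (window size, temperature, InfoNCE-style formulation, and the function name are illustrative), not the authors' implementation from DeepSatModels.

```python
# Hypothetical sketch of a context-self contrastive loss for dense features.
# Function name, window size, and temperature are illustrative assumptions.
import torch
import torch.nn.functional as F

def context_self_contrastive_loss(feats, labels, window=3, temperature=0.1):
    """
    feats:  (B, C, H, W) per-pixel embeddings from the segmentation backbone.
    labels: (B, H, W)   integer crop-type labels (fully supervised pretraining).
    Each location is compared with every neighbour in a local window; neighbours
    with the same label are positives, the rest negatives, so embeddings on
    opposite sides of a parcel boundary are pushed apart.
    """
    B, C, H, W = feats.shape
    feats = F.normalize(feats, dim=1)            # cosine-similarity embedding space
    pad = window // 2

    # Gather the local context of every location: (B, C*K, H*W) -> (B, C, K, H*W)
    ctx = F.unfold(feats, kernel_size=window, padding=pad) \
           .view(B, C, window * window, H * W)
    centre = feats.view(B, C, 1, H * W)

    # Similarity of each centre pixel to its K context locations.
    sim = (centre * ctx).sum(dim=1) / temperature          # (B, K, H*W)

    # Same-label mask over the same context window.
    # Note: zero-padding at the image border is handled naively in this sketch.
    lab = labels.view(B, 1, H, W).float()
    lab_ctx = F.unfold(lab, kernel_size=window, padding=pad) \
               .view(B, window * window, H * W)
    pos_mask = (lab_ctx == labels.view(B, 1, H * W).float()).float()

    # InfoNCE-style objective: positives against the full local context.
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_count = pos_mask.sum(dim=1).clamp(min=1.0)
    loss = -(pos_mask * log_prob).sum(dim=1) / pos_count
    return loss.mean()
```

As a usage example, calling the function on random inputs such as feats = torch.randn(2, 64, 24, 24) and labels = torch.randint(0, 10, (2, 24, 24)) returns a scalar that could be added to a supervised segmentation objective during pretraining; the actual loss used in the paper may differ in formulation and hyperparameters.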
Keywords
Crops, Image resolution, Satellites, Task analysis, Signal resolution, Semantics, Image segmentation, Contrastive learning, convolutional neural networks (CNNs), crop type segmentation, deep learning, pretraining, self-attention, semantic segmentation, Sentinel-2