Multifrequency Graph Convolutional Network With Cross-Modality Mutual Enhancement for Multisource Remote Sensing Data Classification

IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING (2024)

Abstract
The mining of meaningful features and the effective fusion of multisource remote sensing (RS) data have always been challenging research problems in the joint classification of hyperspectral image (HSI) and light detection and ranging (LiDAR) data. In this article, we propose a multifrequency graph convolutional network with cross-modality mutual enhancement (MFGCN-CME) for multisource RS data classification. Specifically, we design an adaptive multifrequency graph feature learning module to capture the low- and high-frequency multiscale features of HSI and LiDAR in parallel and then adaptively aggregate them. Next, we propose a bipartite graph (BG) enhancement learning module that obtains spatially enhanced HSI features and spectrally enhanced LiDAR features by propagating intermodality information. To the best of our knowledge, this is the first time the BG has been applied to the multisource RS data classification task. Furthermore, in contrast to traditional fusion methods, a gated fusion module is used to fully exploit the complementarity of the two data sources. Finally, a joint loss function combining a classification loss and a semisupervised contrastive loss is developed to improve model robustness. Comprehensive experiments on different HSI and LiDAR datasets demonstrate that the proposed method yields better performance than several state-of-the-art multisource RS data classification methods.
Keywords
Laser radar, Task analysis, Feature extraction, Data models, Logic gates, Convolutional neural networks, Bipartite graph, Bipartite graph (BG), contrastive learning, gated fusion, graph convolutional neural networks (CNNs), multifrequency, multisource remote sensing (RS) data classification
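
For readers who want a concrete picture of the components named in the abstract, the sketch below illustrates, in PyTorch, plausible forms of three of them: a multifrequency graph-convolution layer that adaptively mixes low- and high-pass filtered features, a gated fusion of HSI and LiDAR features, and a joint loss combining cross-entropy with a supervised-contrastive-style term on labeled nodes. This is not the authors' implementation; the layer structures, dimensions, the normalized adjacency `adj_norm`, and the `temperature` and `alpha` hyperparameters are all assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiFrequencyGCNLayer(nn.Module):
    """Graph convolution that adaptively mixes low- and high-frequency responses.
    `adj_norm` is assumed to be the symmetrically normalized adjacency with self-loops."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.lin_low = nn.Linear(in_dim, out_dim)
        self.lin_high = nn.Linear(in_dim, out_dim)
        self.alpha = nn.Parameter(torch.tensor(0.0))  # adaptive mixing logit

    def forward(self, x: torch.Tensor, adj_norm: torch.Tensor) -> torch.Tensor:
        low = adj_norm @ self.lin_low(x)                       # low-pass: smooth over neighbors
        high = self.lin_high(x) - adj_norm @ self.lin_high(x)  # high-pass: deviation from neighbors
        a = torch.sigmoid(self.alpha)
        return F.relu(a * low + (1.0 - a) * high)


class GatedFusion(nn.Module):
    """Fuse HSI and LiDAR features with a per-channel learned gate."""

    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, hsi_feat: torch.Tensor, lidar_feat: torch.Tensor) -> torch.Tensor:
        # g in (0, 1) weights each channel of the two modalities per sample.
        g = self.gate(torch.cat([hsi_feat, lidar_feat], dim=-1))
        return g * hsi_feat + (1.0 - g) * lidar_feat


def joint_loss(logits, fused, labels, labeled_mask, temperature=0.1, alpha=0.5):
    """Cross-entropy on labeled nodes plus a contrastive term that pulls together
    embeddings of labeled nodes sharing a class (hypothetical formulation)."""
    ce = F.cross_entropy(logits[labeled_mask], labels[labeled_mask])

    z = F.normalize(fused[labeled_mask], dim=-1)
    sim = z @ z.t() / temperature
    y = labels[labeled_mask]
    pos = (y.unsqueeze(0) == y.unsqueeze(1)).float()
    pos.fill_diagonal_(0.0)  # positives exclude the anchor itself
    self_mask = torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    log_prob = sim - torch.logsumexp(
        sim.masked_fill(self_mask, float("-inf")), dim=1, keepdim=True
    )
    per_anchor = -(pos * log_prob).sum(dim=1) / pos.sum(dim=1).clamp(min=1.0)
    return ce + alpha * per_anchor.mean()
```

In such a setup, the HSI and LiDAR branches would each stack `MultiFrequencyGCNLayer` blocks, their outputs would pass through the cross-modality enhancement step, and `GatedFusion` would produce the representation fed to the classifier and to `joint_loss` during training.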