MMAGL: Multi-objective Multi-view Attributed Graph Learning for Joint Clustering of Hyperspectral and LiDAR Data
IEEE Trans. Geosci. Remote Sens. (2025)
School of Artificial Intelligence
Abstract
The joint clustering of multimodal remote sensing (RS) data is a multi-objective optimization problem involving conflicting modality-specific objectives and diverse regularization objectives. Current multi-view subspace clustering (MVSC) approaches often oversimplify this task by reducing it to a weighted single-objective optimization problem, neglecting the intricate interactions between objectives and yielding suboptimal subspace representations. Moreover, the quadratic number of decision variables in MVSC makes the direct application of multi-objective evolutionary algorithms (MOEAs) to large-scale RS data impractical. To overcome these challenges, we propose a novel MVSC method termed Multi-objective Multi-view Attributed Graph Learning (MMAGL). Instead of optimizing every self-representation coefficient individually, our method transforms MVSC into a link-prediction task over a sparse attributed graph that fuses the different modalities. We incorporate superpixel-based sample reduction and proximity-based population coding, which leverage spatial and structural priors, respectively; the result is a significantly compressed decision space that MOEAs can optimize. To fully exploit the node attributes and graph structure, we redefine self-representation using contrastive learning and introduce efficient graph filtering via a generalized spectral graph convolution, enhancing clustering discriminability. MMAGL constitutes a hybrid and versatile framework, adaptable to any MOEA. Extensive experimental evaluations demonstrate that MMAGL surpasses the current state of the art on multimodal RS benchmarks in overall accuracy (e.g., gains of nearly 2% on Trento and 3% on Houston).
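The "graph filtering through a generalized spectral graph convolution" mentioned in the abstract typically means smoothing node attributes with a low-pass polynomial of the normalized graph Laplacian before clustering. The sketch below illustrates that generic idea with a filter of the form H = (I - alpha * L_sym)^k; the paper's exact filter, its order k, and its coefficient alpha are assumptions here, not the authors' specification.

```python
import numpy as np

def spectral_graph_filter(X, A, k=2, alpha=0.5):
    """Low-pass filter node attributes X over graph adjacency A.

    Applies H = (I - alpha * L_sym)^k, where L_sym is the symmetric
    normalized Laplacian of A with self-loops. This is a generic
    sketch of spectral graph filtering; the generalized convolution
    used in MMAGL may differ.
    """
    n = A.shape[0]
    A_hat = A + np.eye(n)                       # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L_sym = np.eye(n) - D_inv_sqrt @ A_hat @ D_inv_sqrt
    H = np.eye(n) - alpha * L_sym               # one filtering step
    X_filtered = X.copy()
    for _ in range(k):                          # k-order filtering
        X_filtered = H @ X_filtered
    return X_filtered

# toy usage: a 4-node chain graph with 3-dimensional attributes
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.random.default_rng(0).normal(size=(4, 3))
Xf = spectral_graph_filter(X, A)
```

Because the filter's eigenvalues lie in [0, 1], repeated application attenuates high-frequency components, so attributes of adjacent nodes become more similar — which is exactly what makes the subsequent self-representation and clustering more discriminative.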
Key words
Multi-objective optimization, multi-view subspace clustering, graph learning, hyperspectral and LiDAR data