Comparing Spatial and Spectral Graph Filtering for Preprocessing Neurophysiological Signals
2023 IEEE Signal Processing in Medicine and Biology Symposium (SPMB)
Centre for Computational Science and Mathematical Modelling
Abstract
Signals measured with multiple sensors simultaneously in time form multivariate signals and are commonly acquired in biomedical imaging. These temporal signals are generally not independent of each other, but exhibit a rich spatial structure. Graph filtering, either spatial or spectral, is a method that can leverage this spatial structure for various preprocessing tasks, such as graph denoising. Previous studies have focused on learning the parameters of spatial graph impulse response (GIR) filters, while neglecting spectral graph frequency response (GFR) filters, even though GFR filters offer unique advantages in terms of regularisation and interpretation. In this study, we therefore compare learning GIR filters and GFR filters as a trainable preprocessing step for two different neural networks on an Alzheimer’s classification task. We tested both a functional connectivity graph and a geometric graph as the base of each filter type, and varied the localisation of the spatial filter. As expected, the retrieved shapes of the trained filters suggest that GFR filters can be interpreted in terms of their graph structure, while the same does not hold for GIR filters. However, we found that only the geometric, highly localised GIR filter significantly outperforms the baseline, surpassing it by 3.8 percentage points. These findings suggest that the observed performance boost of a trained localised GIR filter may in fact not be due to the graph structure. Instead, we hypothesise that this boost is caused by favourable algebraic properties of the filter matrix.
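Illustration (not part of the original abstract): the two filter families compared above can be sketched in a few lines of NumPy. A GIR filter is a polynomial in the graph shift (adjacency) matrix, so it acts locally on the sensor graph up to K hops; a GFR filter shapes a response over the eigenvalues of the graph Laplacian, which is what makes it directly interpretable in terms of graph frequencies. The function names, the tap vector h, and the spectral response g below are illustrative assumptions, not the authors' implementation.

import numpy as np

def gir_filter(X, A, h):
    # Spatial graph impulse response (GIR) filter: a K-th order polynomial
    # in the graph shift matrix A with taps h[0..K]; localised to K hops.
    Y = np.zeros(X.shape)
    Ak = np.eye(A.shape[0])
    for hk in h:
        Y += hk * (Ak @ X)   # accumulate h_k * A^k X
        Ak = A @ Ak
    return Y

def gfr_filter(X, L, g):
    # Spectral graph frequency response (GFR) filter: apply a response
    # g(lambda) to the eigenvalues of the (symmetric) graph Laplacian L.
    lam, U = np.linalg.eigh(L)                 # graph Fourier basis
    return U @ (g(lam)[:, None] * (U.T @ X))   # filter in the spectral domain

# Example (assumed shapes): X is a channels-by-time EEG segment.
# A smooth low-pass GFR filter: Y = gfr_filter(X, L, lambda lam: np.exp(-2.0 * lam))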
Key words
electroencephalogram, graph signal processing, graph filtering, machine learning, Alzheimer’s disease