Brain functional networks and executive functions in children with attention-deficit/hyperactivity disorder
Crossref (2024)
West China Second University Hospital of Sichuan University
Abstract
Brain region dysfunctions associated with executive function abnormalities may contribute to the pathogenesis of attention-deficit/hyperactivity disorder (ADHD). We explored the underlying neural mechanisms through electroencephalography (EEG) studies of executive function and functional brain networks in children with ADHD. Executive function data were collected and resting-state EEG was recorded in 84 children with ADHD and 84 healthy children. Functional connectivity was assessed across all scalp channels in five frequency bands. Brain networks were constructed, and relevant metrics were calculated using graph theory. Children with ADHD showed varied executive function deficits. Connectivity in the frontal and parietal regions was reduced in both the eyes-open and eyes-closed states, particularly in the beta and gamma bands. Brain networks differed significantly in the beta band: characteristic path length (CPL) was reduced in the eyes-closed state, while in the eyes-open state global efficiency increased and CPL, clustering coefficient, and local efficiency decreased. Functional networks in children with ADHD correlated with executive function. Altered EEG connectivity and brain network topology may be underlying neural mechanisms of ADHD; thus, EEG network dysfunction could serve as a potential biomarker or treatment target for future research. This study provides new insights into the mechanisms of ADHD through EEG-based functional network analysis.
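
To make the described pipeline concrete, below is a minimal Python sketch of band-limited EEG functional connectivity followed by the four graph-theory metrics the abstract names (CPL, clustering coefficient, global and local efficiency). It is not the authors' actual pipeline: the sampling rate, channel count, coherence-based connectivity measure, and proportional threshold are all illustrative assumptions.

import numpy as np
import networkx as nx
from scipy.signal import coherence

FS = 250.0  # assumed sampling rate (Hz); not stated in the abstract

BANDS = {   # five conventional EEG frequency bands (Hz)
    "delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
    "beta": (13, 30), "gamma": (30, 45),
}

def band_coherence_matrix(eeg, fs, band):
    # Mean magnitude-squared coherence for every channel pair within a band.
    # eeg: array of shape (n_channels, n_samples).
    lo, hi = band
    n = eeg.shape[0]
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            f, cxy = coherence(eeg[i], eeg[j], fs=fs, nperseg=int(2 * fs))
            in_band = (f >= lo) & (f <= hi)
            A[i, j] = A[j, i] = cxy[in_band].mean()
    return A

def graph_metrics(A, density=0.2):
    # Keep the strongest `density` fraction of connections (an illustrative
    # proportional threshold), binarize, and compute the four metrics
    # named in the abstract.
    upper = A[np.triu_indices(A.shape[0], k=1)]
    thr = np.quantile(upper, 1.0 - density)
    G = nx.from_numpy_array((A >= thr).astype(int))
    cpl = (nx.average_shortest_path_length(G)
           if nx.is_connected(G) else float("inf"))  # CPL needs a connected graph
    return {
        "characteristic_path_length": cpl,
        "clustering_coefficient": nx.average_clustering(G),
        "global_efficiency": nx.global_efficiency(G),
        "local_efficiency": nx.local_efficiency(G),
    }

# Demo on synthetic data: 19 channels (10-20 montage), 60 s of noise.
rng = np.random.default_rng(0)
eeg = rng.standard_normal((19, int(60 * FS)))
beta_net = band_coherence_matrix(eeg, FS, BANDS["beta"])
print(graph_metrics(beta_net))

Proportional thresholding (retaining a fixed fraction of the strongest connections) is one common convention in EEG network studies; the paper itself may use a different connectivity measure or thresholding scheme.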