A Multi-Scale Deep Residual Network-Based Guided Wave Imaging Evaluation Method for Fatigue Crack Quantification
Measurement Science and Technology (2025)
Abstract
As a promising structural health monitoring technology, guided wave (GW) imaging is gaining increasing attention for crack monitoring of aircraft structures. However, actual fatigue crack propagation is a complex, dynamically evolving process affected by various sources of variability, and it remains challenging to accurately track and quantify it with GW imaging methods. To achieve more accurate fatigue crack quantification, this paper proposes a multi-scale deep residual network-based GW imaging evaluation method. A convolutional neural network (CNN) evaluates the entire pixel distribution of GW imaging maps, fusing damage-related information from multiple GW monitoring paths. By designing multi-scale convolutional kernels and deep residual learning, robust quantitative image features are extracted throughout the dynamic evolution of fatigue crack growth, and performance degradation as the CNN deepens is avoided, thereby improving quantification accuracy. The method is validated on a fatigue test of landing gear beams, which are important load-carrying aircraft structural components. The results demonstrate that the proposed method can extract multi-scale crack length-related features and accurately track fatigue crack propagation. For batch specimens, the maximum quantification error is reduced from the original 6.1 mm to 1.6 mm, a significant improvement.
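The abstract's core architectural ideas (parallel convolutional kernels of several sizes, fused and passed through an identity shortcut so that deeper stacking does not degrade performance) can be illustrated with a minimal sketch. The kernel sizes, averaging weights, and fusion-by-mean below are illustrative assumptions, not the paper's actual trained network:

```python
import numpy as np

def conv2d_same(img, kernel):
    """2D cross-correlation with zero padding ('same' output size)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def multi_scale_residual_block(img, kernel_sizes=(3, 5, 7)):
    """Extract features at several receptive-field sizes, fuse them,
    and add an identity shortcut (y = x + f(x)), the residual form
    that lets networks go deeper without performance degradation.
    Kernel sizes and the averaging kernels are placeholder choices."""
    feats = []
    for k in kernel_sizes:
        kern = np.full((k, k), 1.0 / (k * k))  # placeholder kernel weights
        feats.append(np.maximum(conv2d_same(img, kern), 0.0))  # ReLU
    fused = np.mean(feats, axis=0)  # fuse the multi-scale feature maps
    return img + fused  # residual (skip) connection

# Toy 8x8 "GW imaging map" with one bright pixel standing in for damage.
img = np.zeros((8, 8))
img[4, 4] = 1.0
out = multi_scale_residual_block(img)
print(out.shape)  # → (8, 8)
```

In the paper's method, stacks of such blocks feed a regression head that maps the fused features to a crack-length estimate; the toy block above only shows the multi-scale plus residual structure itself.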
Key words
fatigue crack damage quantification, multi-scale convolution, deep residual network, guided wave imaging, landing gear beam