The Added Value of Apparent Diffusion Coefficient and Microcalcifications to the Kaiser Score in the Evaluation of BI-RADS 4 Lesions
European Journal of Radiology (2023)
Southern Med Univ
Abstract
PURPOSE: To explore the added value of combining microcalcifications or the apparent diffusion coefficient (ADC) with the Kaiser score (KS) for diagnosing BI-RADS 4 lesions. METHODS: This retrospective study included 194 consecutive patients with 201 histologically verified BI-RADS 4 lesions. Two radiologists assigned a KS to each lesion. Adding microcalcifications, ADC, or both criteria to the KS yielded KS1, KS2, and KS3, respectively. The potential of all four scores to avoid unnecessary biopsies was assessed using sensitivity and specificity, and diagnostic performance was evaluated by the area under the curve (AUC) and compared between KS and KS1. RESULTS: The sensitivity of KS, KS1, KS2, and KS3 ranged from 77.1% to 100.0%. KS1 yielded significantly higher sensitivity than the other scores (P < 0.05), except KS3 (P > 0.05), especially for non-mass enhancement (NME) lesions. For mass lesions, the sensitivity of the four scores was comparable (P > 0.05). Specificity ranged from 56.0% to 69.4%, with no statistically significant differences (P > 0.05), except between KS1 and KS2 (P < 0.05). The AUC of KS1 (0.877) was significantly higher than that of KS (0.837; P = 0.0005), particularly for NME lesions (0.847 vs. 0.713; P < 0.0001). CONCLUSION: The KS can stratify BI-RADS 4 lesions to avoid unnecessary biopsies. Adding microcalcifications, but not ADC, improves its diagnostic performance, particularly for NME lesions; combining only microcalcifications with the KS is therefore most useful in clinical practice.
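As an illustrative aside (not from the paper, which does not publish its computation code): the metrics reported above follow from a simple threshold rule on a diagnostic score. The sketch below uses entirely synthetic scores and labels, and assumes a hypothetical biopsy cutoff of score > 4, to show how sensitivity, specificity, and AUC (via the Mann-Whitney U statistic) are obtained.

```python
# Illustrative sketch with synthetic data; the cutoff and all scores/labels
# are hypothetical, not taken from the study.

def sensitivity_specificity(scores, labels, threshold):
    """Classify score > threshold as positive (biopsy recommended);
    labels: 1 = malignant, 0 = benign."""
    tp = sum(1 for s, y in zip(scores, labels) if s > threshold and y == 1)
    fn = sum(1 for s, y in zip(scores, labels) if s <= threshold and y == 1)
    tn = sum(1 for s, y in zip(scores, labels) if s <= threshold and y == 0)
    fp = sum(1 for s, y in zip(scores, labels) if s > threshold and y == 0)
    return tp / (tp + fn), tn / (tn + fp)

def auc(scores, labels):
    """AUC via the Mann-Whitney U statistic: the probability that a random
    malignant lesion scores higher than a random benign one (ties count 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

if __name__ == "__main__":
    # Synthetic example scores and histology labels (hypothetical data)
    scores = [9, 8, 7, 4, 5, 4, 3, 2, 8, 3]
    labels = [1, 1, 1, 1, 0, 0, 0, 0, 1, 0]
    sens, spec = sensitivity_specificity(scores, labels, threshold=4)
    print(f"sensitivity={sens:.2f} specificity={spec:.2f} "
          f"AUC={auc(scores, labels):.2f}")
```

Comparing two such scores on the same lesions (as the paper does for KS vs. KS1) additionally requires a paired test for correlated AUCs, such as the DeLong test.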
Key words
Breast neoplasms, Decision support systems, Diffusion magnetic resonance imaging, Microcalcification