AI Based PD-L1 CPS Quantifier Software to Identify More Patients for Checkpoint Therapy in Gastric Cancer at Pathologist-Level Interobserver Concordance.
Journal of Clinical Oncology (2024)
Emory University School of Medicine
Abstract
2633 Background: To determine whether gastric cancer (GC) patients are eligible for immunotherapy, PD-L1 expression is analyzed by immunohistochemistry (IHC) using the "Combined Positive Score" (CPS). This requires quantification of PD-L1-stained tumor and tumor-associated immune cells as well as all viable tumor cells. However, manual CPS scoring on whole-slide images (WSIs) is time-consuming and prone to error, as evidenced by low interobserver concordance. While the use of artificial intelligence (AI) holds promise to address this key challenge in clinical practice, AI models have not yet met the accuracy thresholds required for PD-L1 CPS scoring on GC biopsies.
Methods: We investigated the use of AI-based PD-L1 CPS quantifier software to support pathologists in standardized PD-L1 IHC assessment on GC biopsies. AI software for automated PD-L1 CPS scoring was deployed on WSIs from GC biopsies (n = 97) stained for PD-L1 with the 28-8 pharmDx assay and scanned on a 3DHistech P1000 scanner. Manual CPS scores from 12 pathologists on all 97 slides were available for comparison. Pairwise correlation was calculated for continuous scores using Lin's concordance correlation coefficient (CCC). Pairwise concordance was measured for scores binarized at the clinically relevant positivity cutoff of CPS ≥ 5 using unweighted Cohen's kappa.
Results: For continuous CPS scores, the CCC between AI scores and pathologists' scores was higher (0.59) than the mean correlation among pathologists (0.56). In the majority of cases, the AI scores were within the range of the pathologists' scores but slightly above the pathologist median. At a cutoff of CPS ≥ 5, the concordance between AI scores and pathologists' manual scores (κ = 0.45) was higher than the mean concordance among pathologists' manual scores (κ = 0.39) (p < 0.05). Substantial variability was seen among pathologists when categorizing patients as positive (CPS ≥ 5): on average, 30.3 ± 5.0 patients were classified as positive by manual scoring, whereas the AI model categorized 46 patients (>50% more) as positive.
Conclusions: An AI model for the assessment of PD-L1 expression in GC using CPS was applied successfully without human intervention. Both the correlation of continuous CPS scores and the concordance of clinical categories with all pathologists were higher for the AI model than for individual pathologists on average, while at the same time the AI model identified more positive patients. This suggests that using AI may identify more patients eligible for PD-L1-targeted treatment while ensuring a level of concordance non-inferior to that of pathologists.
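For illustration only (not part of the original abstract), the sketch below shows, under stated assumptions, how the CPS formula described in the Background and the two agreement metrics named in the Methods (Lin's CCC for continuous scores and unweighted Cohen's kappa at the CPS ≥ 5 cutoff) could be computed in Python; the per-slide scores and helper function names are hypothetical.

# Illustrative sketch, not the authors' software: the CPS formula and the two
# agreement metrics from the Methods, applied to hypothetical per-slide scores.
import numpy as np
from sklearn.metrics import cohen_kappa_score

def combined_positive_score(pdl1_tumor_cells, pdl1_immune_cells, viable_tumor_cells):
    # CPS = (PD-L1-stained tumor cells + PD-L1-stained tumor-associated immune cells)
    #       / viable tumor cells x 100, conventionally capped at 100.
    return min(100.0, 100.0 * (pdl1_tumor_cells + pdl1_immune_cells) / viable_tumor_cells)

def lins_ccc(x, y):
    # Lin's concordance correlation coefficient for two sets of continuous scores.
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    covariance = np.mean((x - x.mean()) * (y - y.mean()))
    return 2.0 * covariance / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

# Hypothetical continuous CPS scores for five slides (values are invented).
ai_scores = [2.0, 7.5, 12.0, 1.0, 30.0]
pathologist_scores = [1.5, 6.0, 15.0, 0.5, 22.0]

print("Lin's CCC:", round(lins_ccc(ai_scores, pathologist_scores), 3))

# Binarize at the clinically relevant cutoff (CPS >= 5) and compute unweighted kappa.
ai_positive = [int(s >= 5) for s in ai_scores]
pathologist_positive = [int(s >= 5) for s in pathologist_scores]
print("Cohen's kappa at CPS >= 5:", round(cohen_kappa_score(ai_positive, pathologist_positive), 3))

In the study, these pairwise comparisons were made between the AI model and each of the 12 pathologists as well as among the pathologists themselves; the sketch shows only a single AI-pathologist pair.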