Immune Profile and Routine Laboratory Indicator-Based Machine Learning for Prediction of Lung Cancer
Computers in Biology and Medicine (2025)
Abstract
Introduction: Early diagnosis of lung cancer remains a challenge with current diagnostic methods.
Objectives: This study explores the use of host immune parameters, in combination with conventional laboratory tests, for the early prediction of lung cancer.
Methods: Immune profiles were assessed by flow cytometry in 221 patients, and machine learning algorithms, using either combined indicators or routine indicators alone, were applied to classify lung cancer stages.
Results: The study revealed significant alterations in immune profiles across the stages of lung cancer. Notably, the percentages of effector memory CD8+ T cells and polymorphonuclear MDSCs increased progressively from healthy controls to patients with benign lesions, early-stage cancer, and late-stage cancer, whereas the percentages of naive CD8+ T cells, DCs, and NKG2D+ NK cells decreased along the same progression. Accordingly, the gradual differentiation of effector CD8+ T cells and the accumulation of inhibitory polymorphonuclear MDSCs, together with the progressive impairment of innate and adaptive immunity, were the most prominent immune features of lung cancer progression. By combining selected conventional laboratory and immune indicators, machine learning models, particularly SVC and logistic regression, predicted the presence of lung cancer and its stage with high accuracy.
Conclusion: We depict the immune landscape in patients with benign disease and at different stages of lung cancer. Combining routine and immune indicators with machine learning shows potential for predicting the presence of lung cancer and its stage.
Key words
Lung cancer, Predictive model, Immune indicators, Machine learning, CD8+ T cells
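The abstract does not include the modeling pipeline itself. As a minimal sketch, assuming a feature matrix X of combined routine laboratory and immune indicators and stage labels y, the SVC and logistic regression classifiers mentioned in the Results could be set up in scikit-learn as shown below; the placeholder data, hyperparameters, and cross-validation scheme are illustrative assumptions, not the authors' actual settings.

```python
# Minimal sketch (not the authors' code): predicting lung cancer stage from
# combined routine laboratory and immune indicators with SVC and logistic regression.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Hypothetical data: rows = patients, columns = selected indicators
# (e.g. effector memory CD8+ T-cell %, PMN-MDSC %, routine blood counts).
rng = np.random.default_rng(0)
X = rng.normal(size=(221, 12))        # 221 patients, 12 indicators (placeholder)
y = rng.integers(0, 4, size=221)      # 0=healthy, 1=benign, 2=early-stage, 3=late-stage (placeholder)

# Standardize indicators before fitting, since they are on very different scales.
models = {
    "SVC": make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)),
    "LogisticRegression": make_pipeline(
        StandardScaler(), LogisticRegression(max_iter=1000)
    ),
}

# Stratified cross-validation keeps the stage proportions similar in each fold.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
    print(f"{name}: mean CV accuracy = {scores.mean():.3f}")
```

With real indicator data, the same pipelines could also be restricted to routine indicators alone to reproduce the paper's comparison between combined and routine-only feature sets.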