Is the Validity of Logistic Regression Models Developed with a National Hospital Database Inferior to Models Developed from Clinical Databases to Analyze Surgical Lung Cancers?

Cancers (2024)

Simple Summary: In national hospital databases, certain prognostic factors cannot be taken into account. Our objective was to estimate the performance of two models, one based on the Epithor clinical database and one on the French hospital database. Model performance was assessed with the Brier score, the area under the receiver operating characteristic (AUC ROC) curve, and the calibration of the model. For the Epithor and hospital databases, the training dataset (70% of the initial data) included 10,516 patients (with, respectively, 227 (2.16%) and 283 (2.7%) deaths), and the validation dataset (30% of the initial data) included 4507 patients (with, respectively, 93 (2%) and 119 (2.64%) deaths). The Brier score values were similar for the models of the two databases. On the validation data, the AUC ROC was 0.73 [0.68-0.78] for Epithor and 0.80 [0.76-0.84] for the hospital database. The slope of the calibration plot was less than 1 for both databases. This work showed that the performance of a model developed from a national hospital database is nearly as good as that obtained with Epithor, although the hospital database lacks crucial clinical variables.

Abstract: In national hospital databases, certain prognostic factors cannot be taken into account. The main objective was to estimate the performance of two models based on two databases: the Epithor clinical database and the French hospital database. For each of the two databases, we randomly sampled a training dataset with 70% of the data and a validation dataset with the remaining 30%. Model performance was assessed with the Brier score, the area under the receiver operating characteristic (AUC ROC) curve, and the calibration of the model. For Epithor and the hospital database, the training dataset included 10,516 patients (with, respectively, 227 (2.16%) and 283 (2.7%) deaths) and the validation dataset included 4507 patients (with, respectively, 93 (2%) and 119 (2.64%) deaths). A total of 15 predictors were selected in the models (including FEV1, body mass index, ASA score, and TNM stage for Epithor). The Brier score values were similar for the models of the two databases. On the validation data, the AUC ROC was 0.73 [0.68-0.78] for Epithor and 0.80 [0.76-0.84] for the hospital database. The slope of the calibration plot was less than 1 for both databases. This work showed that the performance of a model developed from a national hospital database is nearly as good as that obtained with Epithor, although the hospital database lacks crucial clinical variables such as FEV1, ASA score, and TNM stage.
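As a rough illustration of the evaluation described in the abstract, the sketch below shows how a 70/30 split, a logistic regression model, the Brier score, the AUC ROC, and the calibration slope could be computed with scikit-learn. It is not the authors' code: the simulated dataset, the predictor names, and the outcome definition are hypothetical placeholders standing in for the Epithor and hospital database variables.

# Minimal, hypothetical sketch of the evaluation described in the abstract
# (70/30 split, logistic regression, Brier score, AUC ROC, calibration slope).
# The simulated data and variable names are placeholders, not the authors' data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import brier_score_loss, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
age = rng.normal(65, 10, n)
fev1 = rng.normal(80, 15, n)          # stand-ins for the clinical predictors
bmi = rng.normal(25, 4, n)
X = np.column_stack([age, fev1, bmi])

# Simulate a rare binary outcome (roughly 2-3% mortality, as in the two cohorts).
linpred = -4.0 + 0.03 * (age - 65) - 0.02 * (fev1 - 80)
y = rng.binomial(1, 1 / (1 + np.exp(-linpred)))

# 70% training / 30% validation split, as in the study.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.30, stratify=y, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
p_val = model.predict_proba(X_val)[:, 1]      # predicted probability of death

print("Brier score:", brier_score_loss(y_val, p_val))
print("AUC ROC:", roc_auc_score(y_val, p_val))

# Calibration slope: logistic regression of the observed outcome on the linear
# predictor (logit of the predicted probability); a slope below 1 indicates
# predictions that are too extreme, a common sign of over-fitting.
p_clip = np.clip(p_val, 1e-6, 1 - 1e-6)
logit = np.log(p_clip / (1 - p_clip)).reshape(-1, 1)
slope = LogisticRegression(C=1e6, max_iter=1000).fit(logit, y_val).coef_[0, 0]
print("Calibration slope:", slope)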
Keywords
model performance, hospital database, clinical database, Brier score, area under the receiver operating characteristic, discrimination, calibration