The Worth of Words: Decision Support via Natural Language Processing of Trainee Data (Preprint)

Semantic Scholar (2021)

Abstract
BACKGROUND Residents receive a numeric performance rating (e.g., on a 1-7 scale) along with narrative (i.e., qualitative) feedback based on their performance in each workplace-based assessment (WBA). Aggregated qualitative data from WBAs can be overwhelming to process and fairly adjudicate as part of a global decision about learner competence. Current approaches to qualitative data require a human rater to maintain attention and appropriately weigh various data inputs within the constraints of working memory before rendering a global judgment of performance. OBJECTIVE This study evaluates the accuracy of a decision support system for raters that uses natural language processing (NLP) and machine learning (ML). METHODS NLP was performed retrospectively on a complete dataset of narrative comments (i.e., text-based feedback to residents based on their performance on a task) derived from WBAs completed by faculty members from multiple hospitals associated with a single, large residency program at McMaster University, Canada. Narrative comments were vectorized into quantitative features using a bag-of-n-grams technique with three input types: unigrams, bigrams, and trigrams. Supervised machine learning models using linear regression were trained for two outputs: the original ratings and dichotomized ratings (at risk or not). Sensitivity, specificity, and accuracy metrics are reported. RESULTS The database consisted of 7,199 unique direct-observation assessments, each containing both narrative comments and a rating from 3 to 7 in an imbalanced distribution (3-5: 726 ratings; 6-7: 4,871 ratings). A total of 141 unique raters from five different hospitals and 45 unique residents participated over the course of five academic years. When comparing the three input types for diagnosing whether a trainee would be rated low (i.e., 1-5) or high (i.e., 6 or 7), accuracy was 87% for trigrams, 86% for bigrams, and 82% for unigrams.
We also found that all three input types had better prediction accuracy when using a bimodal cut (i.e., lower or higher) than when predicting performance along the full 7-point scale (50%-52%). CONCLUSIONS The ML models can accurately identify underperforming residents from the narrative comments provided in workplace-based assessments. The words generated in WBAs can be a worthy dataset to augment human decisions for educators tasked with processing large volumes of narrative assessments. CLINICALTRIAL N/A
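The pipeline described in the METHODS section, bag-of-n-grams vectorization followed by a linear regression model whose output is thresholded for the dichotomized (at risk or not) outcome, can be sketched as below. This is a minimal illustration, not the study's implementation: the toy comments, ratings, and 0.5 decision threshold are assumptions for demonstration, and the actual work would also hold out data and report sensitivity and specificity rather than evaluate on the training set.

```python
import numpy as np

def ngrams(tokens, n):
    """Return the n-grams (as space-joined strings) of a token list."""
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def vectorize(comments, max_n=3):
    """Bag-of-n-grams: map each comment to a count vector over all
    unigrams, bigrams, and trigrams observed in the corpus."""
    vocab, per_comment = {}, []
    for text in comments:
        tokens = text.lower().split()
        counts = {}
        for n in range(1, max_n + 1):
            for gram in ngrams(tokens, n):
                counts[gram] = counts.get(gram, 0) + 1
                vocab.setdefault(gram, len(vocab))
        per_comment.append(counts)
    X = np.zeros((len(comments), len(vocab)))
    for i, counts in enumerate(per_comment):
        for gram, c in counts.items():
            X[i, vocab[gram]] = c
    return X, vocab

# Hypothetical WBA comments and 3-7 ratings (not from the study's dataset).
comments = [
    "excellent history taking and clear plan",
    "needs improvement in differential diagnosis",
    "strong communication with patient and family",
    "struggled to prioritize unstable patient",
]
ratings = [7, 4, 6, 3]

X, vocab = vectorize(comments)
# Dichotomize the outcome: low / at risk (1-5) vs. high (6-7).
y = np.array([0 if r <= 5 else 1 for r in ratings])

# Linear regression by ordinary least squares; threshold the continuous
# prediction at 0.5 (an assumed cut) to obtain a binary classification.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = (X @ w >= 0.5).astype(int)
accuracy = float((pred == y).mean())
```

In practice a library vectorizer (e.g., scikit-learn's `CountVectorizer` with `ngram_range=(1, 3)`) would replace the hand-rolled `vectorize`, and comparing `ngram_range=(1, 1)`, `(2, 2)`, and `(3, 3)` would reproduce the unigram/bigram/trigram comparison reported in the RESULTS.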