Comparison of ChatGPT 3.5 Turbo and Human Performance in taking the European Board of Ophthalmology Diploma (EBOD) Exam

Anna Maino, Jakub Klikowski, Brendan Strong, Wahid Ghaffari, Michał Woźniak, Tristan Bourcier, Andrzej Grzybowski

Crossref (2024)

Abstract

Background/Objectives: This paper aims to assess ChatGPT's performance in answering European Board of Ophthalmology Diploma (EBOD) examination papers and to compare these results to pass benchmarks and candidate results.

Methods: This cross-sectional study used a sample of past exam papers from the 2012, 2013, and 2020–2023 EBOD examinations. It analysed ChatGPT's responses to 392 Multiple Choice Questions (MCQ), each containing 5 true/false statements (1432 statements in total), and 48 Single Best Answer (SBA) questions.

Results: ChatGPT scored an average of 64.39% on MCQ questions, and its strongest MCQ metric was precision (68.76%). It performed best on Pathology MCQ questions (Grubbs test, p < .05), while Optics and Refraction was the lowest-scoring MCQ topic across all metrics. ChatGPT's SBA performance averaged 28.43%, again with precision as its strongest metric (29.36%). Pathology SBA questions were consistently the lowest-scoring topic across most metrics. ChatGPT chose option 1 more often than the other options (p = 0.19). On SBA questions, human candidates scored higher than ChatGPT in all metric areas measured.

Conclusion: ChatGPT performed better on true/false questions, achieving a pass mark in most instances. Performance was poorer on SBA questions, especially as ChatGPT was more likely to choose the first of the four answer options. Our results suggest that ChatGPT's ability in information retrieval is better than its knowledge integration.
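To illustrate the kind of analysis the abstract describes, the sketch below shows one plausible way to compute per-topic precision over the true/false MCQ statements and to run a Grubbs-style outlier check on the per-topic scores. This is a minimal, hypothetical reconstruction, not the authors' code: the record format, topic names, and the choice of "True" as the positive class are assumptions, and SciPy has no built-in Grubbs test, so the statistic is computed directly from its standard formula.

```python
# Hedged sketch: per-topic precision on true/false MCQ statements plus a
# Grubbs outlier check on topic scores. Data and record layout are invented.
from collections import defaultdict
import numpy as np
from scipy import stats

def precision(tp: int, fp: int) -> float:
    """Precision = TP / (TP + FP); returns 0.0 when undefined."""
    return tp / (tp + fp) if (tp + fp) else 0.0

# Hypothetical records: (topic, model_answer, correct_answer) per statement.
records = [
    ("Pathology", True, True),
    ("Pathology", False, True),
    ("Optics and refraction", True, False),
    ("Optics and refraction", False, False),
    # ... one entry per true/false statement
]

# Tally true/false positives per topic, treating "True" as the positive class.
counts = defaultdict(lambda: {"tp": 0, "fp": 0})
for topic, predicted, actual in records:
    if predicted:
        counts[topic]["tp" if actual else "fp"] += 1

topic_precision = {t: precision(c["tp"], c["fp"]) for t, c in counts.items()}
print(topic_precision)

# Grubbs test (two-sided, alpha = 0.05) on the per-topic precision scores:
# G = max|x_i - mean| / s, compared against the critical value derived from
# the Student t distribution. Needs at least 3 topics with nonzero spread.
scores = np.array(list(topic_precision.values()))
n = len(scores)
if n >= 3 and scores.std(ddof=1) > 0:
    g = np.max(np.abs(scores - scores.mean())) / scores.std(ddof=1)
    t_crit = stats.t.ppf(1 - 0.05 / (2 * n), n - 2)
    g_crit = (n - 1) / np.sqrt(n) * np.sqrt(t_crit**2 / (n - 2 + t_crit**2))
    print("outlier topic present:", g > g_crit)
```

With real per-statement data for all topics, the same loop would also support recall and accuracy tallies; precision is shown here only because it is the metric the abstract highlights.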