Assessment of ChatGPT success with specialty medical knowledge using anaesthesiology board examination practice questions

Denys Shay, Bhawesh Kumar, David Bellamy, Anil Palepu, Mark Dershwitz, Jens M. Walz, Maximilian S. Schaefer, Andrew Beam

British Journal of Anaesthesia (2023)

Abstract
Editor—An artificial intelligence (AI) system known as ChatGPT has recently demonstrated outstanding capabilities across a range of tasks, including diagnostic medicine.[1,2] ChatGPT has been shown to contain broad medical knowledge, performing reasonably well on medical licensing examination questions[3] and on board examination style questions, such as those in neurosurgery.[4] We aimed to characterise ChatGPT's specialty medical knowledge using anaesthesiology board examination practice questions.

References

1. Schulman J, Zoph B, Kim C, et al. ChatGPT: optimizing language models for dialogue. 2022. Available from: https://openai.com/blog/chatgpt (accessed 30 January 2023).
2. Levine DM, Tuwani R, Kompa B, et al. The diagnostic and triage accuracy of the GPT-3 artificial intelligence model. medRxiv [Preprint]. 2023 Feb.
3. Kung TH, Cheatham M, Medinilla A, et al. Performance of ChatGPT on USMLE: potential for AI-assisted medical education using large language models. PLOS Digit Health 2023; 2: e0000198.
4. Hopkins BS, Nguyen VN, Dallas J, et al. ChatGPT versus the neurosurgical written boards: a comparative analysis of artificial intelligence/machine learning performance on neurosurgical board-style questions. J Neurosurg 2023: 1-8.
Keywords
artificial intelligence,board examination,ChatGPT,large language models,medical knowledge,multiple choice questions,specialty qualifications