Evaluation of multiple choice questions using item analysis tool: a study from a medical institute of Ahmedabad, Gujarat

International Journal of Community Medicine and Public Health (2017)

Abstract
Background: Multiple choice question (MCQ) assessments have become a popular means of assessing knowledge in many screening examinations across several fields, including medicine. Single best answer MCQs can also test higher-order thinking skills; hence, MCQs remain a useful assessment tool.
Objectives: 1) To evaluate multiple choice questions for quality. 2) To explore the association of the difficulty index (p-value) and discrimination index (DI) with distractor efficiency (DE). 3) To study the occurrence of functioning distractors in MCQs.
Methods: Five MCQ test sessions were conducted among interns of a medical institute in Ahmedabad city, Gujarat, from April 2016 to March 2017, as part of their compulsory rotating postings in the department. On average, 17 interns participated in each session, for a total of 85 interns enrolled. For each test session, the questionnaire consisted of forty MCQs with four options, including a single best answer. The MCQs were analyzed for difficulty index (DIF-I, p-value), discrimination index (DI), and distractor efficiency (DE).
Results: In total, 85 interns attended the tests, which comprised 200 MCQ items (questions) from four major medical disciplines: Medicine, Surgery, Obstetrics & Gynecology, and Community Medicine. Mean test scores ranged from 36.0% to 45.8%. Test reliability, measured by the Kuder-Richardson (KR) 20 coefficient, ranged from 0.29 to 0.52, and the standard error of measurement ranged from 2.59 to 2.79. Of the 200 MCQs, seventy-nine (n=79) had a discrimination index (DI) <0.15 (poor) and 61 had DI ≥0.35 (excellent). Easy items had an average DE of 20.1% across all tests.
Conclusions: Items of average difficulty and high discrimination with functioning distractors should be incorporated into tests to improve the validity of the assessment.
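For readers unfamiliar with the metrics named in the abstract, the following is a minimal sketch (not taken from the paper) of how the difficulty index (p-value), discrimination index (DI), distractor efficiency (DE), KR-20 reliability, and standard error of measurement are conventionally computed for a single-best-answer MCQ test. The function name, variable names, and thresholds (upper/lower 27% groups, 5% cutoff for a functioning distractor) are illustrative assumptions, not the authors' code.

```python
import numpy as np

def item_analysis(responses: np.ndarray, key: np.ndarray, n_options: int = 4):
    """Conventional item-analysis metrics (illustrative sketch, not the paper's code).

    responses: (n_students, n_items) array of chosen option indices (0..n_options-1)
    key:       (n_items,) array of correct option indices
    """
    correct = (responses == key).astype(float)       # 1 if the student answered the item correctly
    n_students, n_items = correct.shape

    # Difficulty index (p-value): proportion of students answering each item correctly.
    p = correct.mean(axis=0)

    # Discrimination index: upper 27% minus lower 27% of students ranked by total score
    # (27% split is a common convention; assumption here).
    totals = correct.sum(axis=1)
    order = np.argsort(totals)
    k = max(1, int(round(0.27 * n_students)))
    lower, upper = order[:k], order[-k:]
    di = correct[upper].mean(axis=0) - correct[lower].mean(axis=0)

    # Distractor efficiency: percentage of distractors that are "functioning",
    # i.e. chosen by more than 5% of students (threshold is an assumption).
    de = np.empty(n_items)
    for j in range(n_items):
        distractors = [opt for opt in range(n_options) if opt != key[j]]
        functioning = [(responses[:, j] == opt).mean() > 0.05 for opt in distractors]
        de[j] = 100.0 * sum(functioning) / len(distractors)

    # Kuder-Richardson 20 reliability of the whole test.
    var_total = totals.var(ddof=1)
    kr20 = (n_items / (n_items - 1)) * (1 - (p * (1 - p)).sum() / var_total)

    # Standard error of measurement derived from KR-20.
    sem = totals.std(ddof=1) * np.sqrt(1 - kr20)
    return p, di, de, kr20, sem
```

Called on a (students x items) response matrix and an answer key, the function returns per-item p-values, DI, and DE alongside the test-level KR-20 and standard error of measurement, which correspond to the quantities reported in the Results above.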