Evaluating Predictive Models Of Student Success: Closing The Methodological Gap

JOURNAL OF LEARNING ANALYTICS (2018)

Abstract
Model evaluation - the process of making inferences about the performance of predictive models - is a critical component of predictive modelling research in learning analytics. We survey the state of the practice with respect to model evaluation in learning analytics, which overwhelmingly uses only naive methods for model evaluation or statistical tests that are not appropriate for predictive model evaluation. We conduct a critical comparison of both null hypothesis significance testing (NHST) and a preferred Bayesian method for model evaluation. Finally, we apply three methods - the naive average commonly used in learning analytics, NHST, and Bayesian - to a predictive modelling experiment on a large set of MOOC data. We compare 96 different predictive models, including different feature sets, statistical modelling algorithms, and tuning hyperparameters for each, using this case study to demonstrate the different experimental conclusions these evaluation techniques provide.
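The comparison the abstract describes can be illustrated with a small sketch. The code below is not the paper's exact procedure; it simply contrasts three ways of deciding between two models from per-fold cross-validation accuracies: a naive average, an NHST paired t-test, and a Bayesian correlated t-test in the style of Benavoli et al. (2017). The accuracy arrays and the assumed fold correlation are hypothetical placeholders.

```python
import numpy as np
from scipy import stats

# Hypothetical per-fold accuracies for two models over the same 10 folds.
acc_a = np.array([0.71, 0.74, 0.69, 0.72, 0.73, 0.70, 0.75, 0.72, 0.71, 0.74])
acc_b = np.array([0.70, 0.72, 0.70, 0.71, 0.72, 0.69, 0.73, 0.71, 0.70, 0.72])

# 1) Naive average: pick whichever mean accuracy is higher, ignoring variance.
print("naive winner:", "A" if acc_a.mean() > acc_b.mean() else "B")

# 2) NHST: paired t-test on the per-fold differences (ignores the correlation
#    induced by overlapping training sets across folds).
t_stat, p_value = stats.ttest_rel(acc_a, acc_b)
print(f"paired t-test: t={t_stat:.3f}, p={p_value:.3f}")

# 3) Bayesian correlated t-test: the posterior over the mean difference is a
#    Student-t whose scale is inflated by an assumed fold correlation
#    rho = n_test / (n_train + n_test), taken here as 1/k for k-fold CV.
diff = acc_a - acc_b
n = len(diff)
rho = 1.0 / n                                   # assumed correlation for 10-fold CV
scale = np.sqrt(diff.var(ddof=1) * (1.0 / n + rho / (1.0 - rho)))
posterior = stats.t(df=n - 1, loc=diff.mean(), scale=scale)
print(f"P(model A better than B) = {1.0 - posterior.cdf(0.0):.3f}")
```

The naive average always declares a winner, the t-test only reports whether the difference is "significant," while the Bayesian posterior gives a direct probability that one model outperforms the other, which is the kind of inferential distinction the paper examines across its 96 models.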
Keywords
Model evaluation, model selection, feature selection, Bayesian, MOOCs