When alternative analyses of the same data come to different conclusions: A tutorial using DeclareDesign with a worked real-world example

Crossref (2024)

Abstract
Recent studies in psychology have documented how analytic flexibility can lead to different results from the same dataset. Here we demonstrate DeclareDesign, a package for the R programming language that uses simulated data to diagnose the properties of analytic designs. To illustrate features of the package, we contrast two analyses of a randomised controlled trial (RCT) of GraphoGame, an intervention to help children learn to read. The initial analysis (NFER) found that the intervention was ineffective, but a subsequent reanalysis (Cambridge) concluded that GraphoGame significantly improved children’s reading. With DeclareDesign we can simulate data where the truth is known, and thus identify which analysis is optimal for estimating the intervention effect, using “diagnosands” such as bias, precision, and power. The simulations showed that the NFER analysis accurately estimated intervention effects, whereas selection of a subset of data in the Cambridge analysis introduced substantial bias, overestimating effect sizes. This problem was exacerbated by the inclusion of multiple outcome measures in the Cambridge analysis. Much has been written about the dangers of reanalysing RCT data in ways that violate the randomisation of participants to conditions; simulated data make this message clear and quantify the extent to which such practices introduce bias. The simulations confirm the original NFER conclusion that the intervention has no benefit over “business as usual”. In this tutorial we demonstrate several features of DeclareDesign; the package can simulate observational as well as experimental research designs, allowing us to make principled decisions about which analysis to prefer.
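To make the workflow concrete, a minimal sketch of a DeclareDesign declaration for a simplified two-arm trial follows. It assumes the current declare_model()/diagnose_design() API (DeclareDesign v1.0+); the sample size, the true effect of 0.2, and the outcome-based subsetting rule are illustrative assumptions, not the actual NFER or Cambridge model specifications. The second estimator shows schematically how discarding data after randomisation on an outcome-dependent rule can bias the estimate.

```r
# Minimal sketch, assuming the current DeclareDesign API; parameters are illustrative.
library(DeclareDesign)

design <-
  # Model: simulated pupils with a known "true" intervention effect of 0.2
  declare_model(
    N = 500,
    U = rnorm(N),
    potential_outcomes(Y ~ 0.2 * Z + U)
  ) +
  # Inquiry: the average treatment effect, known because we simulated it
  declare_inquiry(ATE = mean(Y_Z_1 - Y_Z_0)) +
  # Random assignment to intervention (Z = 1) or business as usual (Z = 0)
  declare_assignment(Z = complete_ra(N, prob = 0.5)) +
  declare_measurement(Y = reveal_outcomes(Y ~ Z)) +
  # Analysis 1: difference in means on the full randomised sample
  declare_estimator(Y ~ Z, .method = difference_in_means,
                    inquiry = "ATE", label = "full sample") +
  # Analysis 2 (hypothetical subsetting rule): the same estimator after
  # discarding cases on the basis of the realised outcome, which breaks
  # the original randomisation
  declare_estimator(Y ~ Z, .method = difference_in_means, subset = Y > 0,
                    inquiry = "ATE", label = "post-hoc subset")

# Diagnosands (bias, RMSE, power, coverage, ...) computed over many simulations
diagnosis <- diagnose_design(design, sims = 500)
diagnosis
```

Because the true ATE is built into the simulation, diagnose_design() can report how far each estimator’s average estimate departs from it (bias), alongside its precision and power, which is the logic the tutorial applies to the competing NFER and Cambridge analyses.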