Determining Negligible Associations in Regression

The Quantitative Methods for Psychology (2023)

Abstract
Psychological research is rife with studies that inappropriately conclude there is "no effect" between predictors and the outcome in regression models following statistically nonsignificant results. This approach is methodologically flawed because failing to reject the null hypothesis with traditional, difference-based tests does not mean the null is true, and it leads to high rates of incorrect conclusions in the psychological literature. This paper introduces a novel, methodologically sound alternative. We demonstrate how an equivalence testing approach can be applied to multiple regression (referred to here as "negligible effect testing") to evaluate whether a predictor (measured in standardized or unstandardized units) has a negligible association with the outcome. In the first part of the paper, we evaluate the performance of two equivalence-based techniques and compare them to the traditional, difference-based test via a Monte Carlo simulation study. In the second part, we use examples from the literature to illustrate how researchers can implement the recommended negligible effect testing methods in their own work using open-access, user-friendly tools (the negligible R package and Shiny app). Finally, we discuss how to report and interpret results from negligible effect testing and provide practical recommendations for best research practices based on the simulation results. All materials, including R code, results, and additional resources, are publicly available on the Open Science Framework (OSF): osf.io/w96xe/.
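To make the core idea concrete, below is a minimal base-R sketch of the two one-sided tests (TOST) logic that equivalence testing applies to a regression slope: the association is declared negligible only if the coefficient is significantly greater than the lower equivalence bound and significantly less than the upper bound. This is an illustration of the general technique, not the paper's recommended implementation (the authors point to the negligible R package and Shiny app); the simulated data, the predictor x1, and the equivalence bound delta are assumptions chosen for the example.

# Simulated data: x1 has a small (near-negligible) association with y
set.seed(1)
dat <- data.frame(x1 = rnorm(100), x2 = rnorm(100))
dat$y <- 0.05 * dat$x1 + 0.40 * dat$x2 + rnorm(100)

fit   <- lm(y ~ x1 + x2, data = dat)
b     <- coef(summary(fit))["x1", "Estimate"]
se    <- coef(summary(fit))["x1", "Std. Error"]
df    <- fit$df.residual
delta <- 0.15   # assumed smallest association of interest (unstandardized)
alpha <- 0.05

# Two one-sided tests: is b significantly above -delta AND below +delta?
p_lower <- pt((b + delta) / se, df, lower.tail = FALSE)
p_upper <- pt((b - delta) / se, df, lower.tail = TRUE)

if (max(p_lower, p_upper) < alpha) {
  cat("Negligible association: slope falls within (-delta, delta)\n")
} else {
  cat("Cannot conclude the association is negligible.\n")
}

# Equivalent check: the (1 - 2*alpha) CI for x1 must lie inside the bounds
confint(fit, "x1", level = 1 - 2 * alpha)

Note the contrast with the difference-based approach: a nonsignificant traditional t test on x1 would not license the "no effect" conclusion, whereas passing both one-sided tests above provides positive evidence that the association is smaller than the chosen bound.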
Keywords
Equivalence testing, negligible effect, linear regression, lack of association