A standardized framework to test event-based experiments

Alex Lepauvre, Lucia Melloni, Rony Hirschhorn, Liad Mudrik, Katarina Bendtz

Crossref (2024)

Abstract
The replication crisis in experimental psychology and neuroscience has received much attention recently. This has led to wide acceptance of measures to improve scientific practices, such as preregistration and registered reports. Less effort has been devoted to performing, and reporting the results of, systematic tests of the functioning of the experimental setup itself. Yet inaccuracies in the performance of the experimental setup may affect the results of a study, lead to replication failures, and, importantly, impede the ability to integrate results across studies. Prompted by challenges we experienced when deploying studies across six laboratories collecting EEG/MEG, fMRI, and intracranial EEG (iEEG) data, here we describe a framework for both testing and reporting the performance of the experimental setup. In addition, 100 researchers were surveyed to provide a snapshot of current common practices and community standards concerning the testing of experimental setups in published studies. Most researchers reported testing their experimental setups; almost none, however, published the tests they performed or their results. The tests were diverse, targeting different aspects of the setup. Through simulations, we demonstrate how even slight inaccuracies can impact the final results. We end with a standardized, open-source, step-by-step protocol for testing (visual) event-related experiments, shared via protocols.io. The protocol aims to provide researchers with a benchmark for future replications and insights into research quality, helping to improve the reproducibility of results, accelerate multi-center studies, increase robustness, and enable integration across studies.
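The kind of effect the abstract alludes to can be illustrated with a short simulation. The sketch below is a toy model, not the paper's actual simulation code: it assumes a hypothetical Gaussian evoked component, and all parameters (component latency and width, trial counts, jitter levels) are made up for illustration. It averages epochs whose stimulus onsets carry small Gaussian timing errors, showing how even tens of milliseconds of onset jitter attenuates and broadens the trial-averaged peak.

```python
# Illustrative sketch only (not the authors' simulation): effect of
# small stimulus-onset timing errors on a trial-averaged evoked response.
# All parameters below are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
fs = 1000                          # sampling rate in Hz
t = np.arange(-0.1, 0.5, 1 / fs)   # epoch time axis in seconds

def erp(t, latency=0.17, width=0.02, amp=1.0):
    """Toy evoked component: a Gaussian peak at `latency` seconds."""
    return amp * np.exp(-0.5 * ((t - latency) / width) ** 2)

def average_with_jitter(jitter_sd, n_trials=200, noise_sd=0.5):
    """Average n_trials noisy epochs whose true onsets are shifted by
    Gaussian timing error with standard deviation `jitter_sd` (seconds)."""
    trials = [
        erp(t - rng.normal(0, jitter_sd)) + rng.normal(0, noise_sd, t.size)
        for _ in range(n_trials)
    ]
    return np.mean(trials, axis=0)

for jitter_ms in (0, 10, 30):
    avg = average_with_jitter(jitter_ms / 1000)
    print(f"onset jitter sd = {jitter_ms:2d} ms -> peak amplitude {avg.max():.3f}")

# Even 10-30 ms of onset jitter visibly attenuates and smears the averaged
# peak, biasing amplitude and latency estimates -- one concrete way an
# untested setup can distort results and hinder integration across labs.
```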