Interpreting Model Comparison Requires Understanding Model-Stimulus Relationships

Computational Brain & Behavior (2019)

Abstract
Lee et al. (Computational Brain & Behavior, 2019) discuss ways to improve research practices for evaluating quantitative cognitive models. We propose the additional research practices of careful consideration, documentation, and analysis of the stimuli used to generate responses. Current modeling practice too often fails to acknowledge how the stimuli used to generate responses from research participants can influence the results of model comparisons. We recommend researchers (a) disclose how the research stimuli were selected and (b) uncover and report the diagnosticity of the stimuli for the models being tested. To demonstrate the importance of this recommendation, we present lessons learned from model testing in judgment and decision-making research. We focus on the documentation and reporting of model-stimulus relationships, specifically diagnosticity, and demonstrate how transparent documentation of diagnosticity facilitates interpretation of the evidence it generates. We conclude with recommendations regarding research tools available to achieve these goals.
Keywords
Model testing, Model distinguishability, Stimulus selection, Diagnosticity, Choice modeling