Developing a Longitudinal Scale for Language: Linking Across Developmentally Different Versions of the Same Test.

Journal of Speech, Language, and Hearing Research (2019)

Abstract
Purpose: Many language tests use different versions that are not statistically linked or do not have a developmental scaled score. The current article illustrates the problems of scores that are not linked or equated, followed by a statistical model to derive a developmental scaled score. Method: Using an accelerated cohort design of 890 students in Grades 1-5, a confirmatory factor model was fit to 6 subtests of the Test of Language Development-Primary and Intermediate: Fourth Edition (Hammill & Newcomer, 2008a, 2008b). The model allowed for linking the subtests to a general factor of language and equating their measurement characteristics across grades and cohorts of children. A sequence of models was fit to evaluate the appropriateness of the linking assumptions. Results: The models fit well, with reasonable support for the validity of the tests to measure a general factor of language on a longitudinally consistent scale. Conclusion: Although total and standard scores were problematic for longitudinal relations, the results of the model suggest that language grows in a relatively linear manner among these children, regardless of which set of subtests they received. Researchers and clinicians interested in longitudinal inferences are advised to design research or choose tests that can provide a developmental scaled score.
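The method described above centers on a single-factor confirmatory factor model in which six subtests load on a general language factor. Below is a minimal, hypothetical sketch of that kind of one-factor CFA in Python using the semopy package; the subtest column names (sub1 through sub6) are placeholders, and the study's actual multi-group constraints across grades and cohorts (the linking and equating steps) are not reproduced here.

```python
import pandas as pd
import semopy

# Hypothetical data file: one row per student, one column per subtest score.
# Column names are placeholders, not the actual TOLD-P:4 / TOLD-I:4 subtests.
data = pd.read_csv("subtest_scores.csv")

# One general language factor measured by six subtest indicators.
model_desc = """
language =~ sub1 + sub2 + sub3 + sub4 + sub5 + sub6
"""

model = semopy.Model(model_desc)
model.fit(data)

# Loadings and variance estimates for the fitted model.
print(model.inspect())

# Global fit statistics (e.g., CFI, RMSEA), assuming semopy.calc_stats is available.
print(semopy.calc_stats(model))
```

Extending this sketch to the design in the article would require fitting the model simultaneously across grade/cohort groups with equality constraints on loadings and intercepts, which is what places the scores on a common developmental scale.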