Validation of an Automated Procedure for Calculating Core Lexicon From Transcripts

Journal of Speech, Language, and Hearing Research (2022)

Abstract
Purpose: The aim of this study was to advance the use of structured, monologic discourse analysis by validating an automated scoring procedure for core lexicon (CoreLex) using transcripts.

Method: Forty-nine transcripts from persons with aphasia and 48 transcripts from persons with no brain injury were retrieved from the AphasiaBank database. Five structured monologic discourse tasks were scored manually by trained scorers and via automation using a newly developed CLAN command based upon previously published lists for CoreLex. Point-to-point (or word-by-word) accuracy and reliability of the two methods were calculated. Scoring discrepancies were examined to identify errors. Time estimates for each method were calculated to determine if automated scoring improved efficiency.

Results: Intraclass correlation coefficients for the tasks ranged from .978 to .998, indicating excellent intermethod reliability. Automated scoring using CLAN represented a significant time savings for an experienced CLAN user and for inexperienced CLAN users following step-by-step instructions.

Conclusions: Automated scoring of CoreLex is a valid and reliable alternative to the current gold standard of manually scoring CoreLex from transcribed monologic discourse samples. The downstream time savings of this automated analysis may allow for more efficient and broader utilization of this discourse measure in aphasia research. To further encourage the use of this method, go to https://aphasia.talkbank.org/discourse/CoreLexicon/ for materials and the step-by-step instructions utilized in this project.

Supplemental Material: https://doi.org/10.23641/asha.20399304