INDEPENDENT REVIEW & MONITORING IMPROVES QUALITY OF PANSS DATA IN GLOBAL CLINICAL TRIALS

Barbara Echevarria, Cong Liu, Selam Negash, Mark Opler, Patricio Molero, Gianna Capodilupo

Schizophrenia Bulletin (2020)

Abstract

Background: The Positive and Negative Syndrome Scale (PANSS) (1) is the most widely used endpoint for measuring change in schizophrenia clinical trials. A set of flags has been developed by an ISCTM expert working group to identify potential scoring errors in PANSS assessments (2). Sponsors (pharmaceutical industry) have taken measures aimed at increasing scoring reliability and data quality, such as the use of Independent Review (IRev). We evaluated changes in data quality when site raters are no longer recorded and monitored via IRev by comparing two studies with the same cohort of raters, one with independent review and one without.

Methods: Data from PANSS assessments in two global multisite schizophrenia clinical trials were analyzed. We selected data from raters participating in both studies (which ran concurrently for a significant period of time). Raters were rigorously trained on administration and scoring conventions and certified prior to the study through demonstration of adequate interrater reliability. In addition to these steps, raters in study A were required to audio record all PANSS assessments, with a selected subset of visits subject to IRev. PANSS assessments in study B were neither recorded nor monitored via IRev. Data quality after study completion was examined by calculating the frequency of anomalous data patterns identified as "high" (very probable or definite error) by the ISCTM Working Group in both studies. Additionally, we examined the percentage of assessments with lower than expected PANSS interview duration as captured via an eCOA platform.

Results: There were 9441 eCOA PANSS assessments in study A and 6178 in study B included in this analysis. The proportions of flags representing highly probable/definite error differed significantly between the studies (9% vs 18% for studies A and B, respectively, p<.01). The largest differences in ISCTM flags were related to overly consistent scoring patterns (27 or more items scored identically to the prior visit), which occurred with higher frequency in study B. Study B also had a significantly higher frequency of assessments flagged for low interview duration (<15 minutes) (1% vs 4% for studies A and B, respectively, p<.01).

Discussion: Initial rater training is necessary but not sufficient to ensure adequate data quality in schizophrenia trials. Implementation of additional in-study oversight through Independent Review or similar methods reduces the probability of data error in PANSS assessments, including the appearance of improbable rating patterns and decreased time spent interviewing study subjects. One potential limitation is that study A is a double-blind study whereas study B is an open-label extension of study A.
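The abstract does not state which statistical procedure produced the p<.01 comparisons of flag rates between the two studies. As a rough illustration only, the sketch below shows how a between-study comparison of flag proportions could be carried out with a standard two-proportion z-test; the flag counts are hypothetical placeholders chosen to approximate the reported rates (~9% vs ~18%), since the abstract gives only percentages and total assessment counts.

```python
# Minimal sketch (not from the paper): two-proportion z-test comparing the share
# of assessments carrying "high"-severity ISCTM flags in study A vs study B.
from statistics import NormalDist

def two_proportion_z_test(x1: int, n1: int, x2: int, n2: int):
    """Return (z, two-sided p-value) for H0: p1 == p2, using the pooled estimate."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = (p_pool * (1 - p_pool) * (1 / n1 + 1 / n2)) ** 0.5
    z = (p1 - p2) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical flag counts; only the denominators (9441 and 6178 assessments)
# come from the abstract, the numerators are illustrative.
z, p = two_proportion_z_test(x1=850, n1=9441, x2=1112, n2=6178)
print(f"z = {z:.2f}, two-sided p = {p:.3g}")
```

With proportions of this size and these sample sizes, any reasonable test of proportions (z-test or chi-square) yields p far below .01, consistent with the significance reported in the Results.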