A timely review for systematic reviews.

JBI Evidence Synthesis (2023)

Abstract
A great deal of time and effort has been invested in the pursuit and advancement of the evidence-based health care (EBHC) movement over recent decades. Aligned with this growth, multiple and diverse organizations, including JBI, have emerged to carry forward the banner of EBHC. Systematic reviews have been a fundamental element of the supporting narrative for EBHC and, arguably, the principal focus of continued international investment in research activity in this field since its inception. The efforts of many individuals, including those affiliated with these organizations, have contributed a large part of this investment through methodological research toward best practice in the conduct of evidence syntheses. Countless more researchers and authors have invested in applied research, employing systematic review methodology and methods to answer their questions, with the intent of providing the most trustworthy evidence to inform policy and practice.

A timely manuscript in this month's issue of JBI Evidence Synthesis[1] offers an opportunity for reviewers and readers to take stock of and reflect on where the field of evidence synthesis, and the organizational directives applicable to the field, including those from JBI, have arrived to date. Kolaski et al.[1] highlight current deficiencies and confusion across terminology, methods of synthesis, and the application of available methods by review authors, all of which rightfully cast doubt on the trustworthiness of many systematic reviews and call into question their authoritative claim to most appropriately guide decision-making.[1] Noteworthy among these issues, Kolaski et al.[1] identify problems associated with the classification of primary study designs and the varying taxonomies and algorithms available to assist with classification. This issue is not exclusive to primary research; it is also apparent at the secondary research level, where the waters are further muddied by evolving methodologies of synthesis that continue to emerge. While work has commenced to tackle this problem and develop a universal taxonomy,[2] it is critical that reviewers are provided with guidance to ensure that they follow the most appropriate approach to answer their health care question. In the interim, online tools such as Right Review (https://rightreview.knowledgetranslation.net/) may provide a helpful starting point for reviewers.

An additional concern the authors identify is that of redundant reviews, that is, reviews that overlap and may be deemed wasteful and unnecessary.[1] This topic has been discussed repeatedly in the literature, most recently by Puljak and Lund.[3] It is a critical issue that we will explore further in a future editorial. Coupled with all of this is the astute realization that undertaking a systematic review and applying these standards is an onerous task and a pathway that can be fraught with difficulty, owing to the potential not only for misapplication of methods but also for application of methods that, while available, may not be the most appropriate for the job at hand.[1] In light of these observations, Kolaski et al.[1] respond by bringing together the latest methodological advances and best practices to point readers toward solutions and the way forward in the conduct and reporting of systematic reviews.
While some readers may feel it is unnecessary or too simplistic to emphasize the differences between reporting guidance and guidance for conduct, as Kolaski et al.[1] note, we fully support this conversation. As editors of a journal that specializes in evidence syntheses, and as educators who also teach synthesis methodologies, it is remarkably clear to us that reviewers continue to struggle to distinguish the nature and utility of guidance for the conduct of a systematic review from reporting standards.[4] While both are important to ensuring trustworthiness, they are discrete and should not be considered interchangeable; a well-reported review does not equate to the best standards of conduct. Ongoing discourse such as this is needed to ensure that both elements are considered when undertaking any type of evidence synthesis.

This independent assessment of the field[1] offers timely insight for JBI and our program of evidence synthesis.[5] The solutions provided by the authors reinforce the importance of ongoing investment in our training and education programs for reviewers.[5] Furthermore, necessary points for the advancement of our own methods (eg, continued development of the JBI critical appraisal tools for the majority of our quantitative study designs) are identified.[1] JBI's program of methodological development, under the auspices of the JBI Scientific Committee, acknowledged similar issues in 2021, which spawned a program for the revision of these tools.[6] The results of this undertaking are now bearing fruit, with revised appraisal tools available for reviewers who wish to continue using the popular JBI appraisal tools to facilitate the conduct of their review.[7] These reviewers may rest assured that transparent processes underpin the development of the tools they are using to assess methodological quality and risk of bias.[8] Ongoing investment in, and integration of, best practices into software tools (eg, JBI SUMARI) that facilitate the conduct of systematic reviews will help meet the demands for methodological adherence in systematic review conduct.[5]

Reflecting their rise in popularity, the majority of completed reviews presented in this issue, as with others across recent volumes of JBI Evidence Synthesis, are scoping reviews. At some point in the near future, a similar reflection on deficiencies in the application and conduct of scoping reviews, together with solutions aligned to best-practice standards of conduct and reporting, will be both informative and necessary.[9]

Systematic reviews remain fundamental pillars of EBHC to guide decision-making in health care. Standards for their conduct have been developed, yet confusion persists among reviewers. Such confusion can cause errors in the conduct of these demanding and often complex research undertakings, to the extent that many published reviews are flawed.[1] Considering this sobering realization, we encourage knowledge users to be vigilant when reading and interpreting the results of systematic reviews and to apply the same mechanisms of critique as they would to the results of any research. Despite all of this, when completed according to best practices in the field, the systematic review can live up to its lofty expectations and inform the way forward to improved outcomes in health.
Completion and publication of a systematic review involves multiple stakeholders, including editors and peer reviewers, whose participation alongside the authors in the dissemination of scientific research and knowledge also carries a responsibility toward high standards of quality. On reflection, and acknowledging that a great deal of work, guidance, education, and facilitation remains to be done across methods and with diverse stakeholders, JBI and JBI Evidence Synthesis are proud to contribute to the advancement of the science of synthesis.