Welcome to the third and final post of a three-part Prism Academy series on systematic reviews! If you haven't yet read the earlier posts in this series, be sure to check them out before continuing.
For those of you who don’t know, my name is Maria, and I am the Head of Operations at Prism. Prior to joining the team, I was working in academia, coordinating research projects and spending a lot of time synthesizing evidence. I want to thank you for joining me on this series, where we are exploring the basics of systematic reviews, their current state, and how we must adapt them to keep pace with modern research.
In today’s post, we are going to explore a few efforts aimed at addressing some of the weaknesses of systematic reviews. In the end, I hope you walk away with a better understanding of efforts to improve systematic reviews and why such efforts are crucial for the everlasting pursuit of evidence-based practice.
Background and Refresher
Before diving in, let’s revisit some of the weaknesses of systematic reviews that were highlighted in the previous post:
- Despite the presence of standardized guidelines and practices, many systematic reviews are still conducted without rigor and report their methods in obscure language. Meta-analyses are also often conducted inappropriately and misuse statistical methods.
- Some researchers estimate that nearly 80 systematic reviews are published every day. Given the sheer quantity of this work, there is a substantial amount of unnecessary duplication.
- Systematic reviews take several months to complete; by the time the results are published, the findings may be out of date. Even when attempts are made to update a review, these too may take many months.
1. Use existing rubrics to evaluate the quality of a systematic review
As a consumer of systematic reviews, it can be hard to tell which reviews are trustworthy and which have serious methodological flaws. It can be a dangerous trap to assume that a review is of good quality just because it was peer-reviewed and published.
A lot of time and effort has gone into addressing issues related to systematic review quality and reporting consistency. Tools such as the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) were developed to promote not only quality but also standardization across the systematic review ecosystem. Another tool, the Grading of Recommendations, Assessment, Development and Evaluation (GRADE) approach, was developed to systematically appraise the quality of evidence used in a meta-analysis. Using the GRADE appraisal as a lens for interpreting meta-analysis results, reviewers rate both the strength of evidence for a given practice recommendation and the strength of the recommendation overall.
Other tools include the Risk of Bias in Non-randomized Studies (ROBINS-I) tool, developed (as the name states) to assess risk of bias in non-randomized studies, and the Risk of Bias 2 (RoB 2) tool, developed to assess risk of bias in randomized studies. AMSTAR 2 (A MeaSurement Tool to Assess systematic Reviews) was developed for assessing the overall quality of a systematic review by external reviewers. As a consumer of reviews, it is important to familiarize yourself with these common tools. We have included in-text links here to help you get familiar with the tools mentioned above.
At the simplest level, when looking at a systematic review for the first time, it is helpful to check the methods section to see whether the authors indicate use of the PRISMA checklist and other standard methodologies or assessment tools. Use of these standards and tools can increase your confidence in the quality and completeness of the review. For a more in-depth investigation, the AMSTAR 2 tool allows you to assess review quality yourself and can even serve as the basis for performing a systematic review of systematic reviews.
In short, the best way to benefit from efforts to address review quality and reporting consistency is to familiarize yourself with PRISMA, GRADE, ROBINS-I, RoB 2, and AMSTAR 2. While this is not a complete list, it is a great place to start. If you run across a tool you aren’t familiar with, look it up and read about the conditions under which it should be used. If those conditions do not align with the review’s, and the misalignment is not explained, proceed with caution. Finally, if you’re reading a systematic review and no mention is made of any of these methods or tools, try using AMSTAR 2 to assess its quality yourself. If you’re short on time, consider checking out another review instead.
A few final thoughts on this weakness: while there has been significant effort to improve review quality and reporting consistency, the space is crowded. There is a lot of nuance in appraising the quality of a systematic review article, and it remains a time-intensive process. I don’t claim to have the answer to this issue, but I think it is important to name it as a persistent pain point.
If I can leave you with any advice on coping with this weakness, it would be to educate yourself on common methods and tools; arm yourself with knowledge and strive to be an informed consumer of systematic reviews.
2. Quickly scan the systematic review ecosystem
Some researchers estimate that nearly 80 systematic reviews are published every day. Unfortunately, not every review author has broad enough access to the literature to determine whether their review is duplicative.
Thankfully, a non-profit organization called Epistemonikos curates a specialized database of healthcare-focused systematic reviews. Within this database, systematic review articles are linked based on the underlying studies they include. In this way, an expert can quickly gain an understanding of the systematic review ecosystem surrounding a given study or topic, which helps them determine whether their hypothesis has already been assessed elsewhere. If another review exists but is several years old, the question of updating the existing review becomes relevant, and it is one we will discuss while addressing our last weakness.
3. Use tools to reduce the time and effort needed to create or update a systematic review
As I’ve emphasized throughout this series, screening articles and extracting data for a systematic review is labor-intensive. On top of the time spent screening and extracting, there is the substantial challenge of data management: keeping track of the reasons for each article’s exclusion, the stage of review at which it was excluded, and any disagreements that arose between paired reviewers. Keeping track of all of this information can prove to be a significant time investment.
Covidence is a platform that helps alleviate this pain point by providing workflow software for reviewers to perform screening and data extraction in duplicate. During each round of screening, reviewers tag articles with the criteria on which they were excluded. If the reviewers disagree, the software flags the article for discussion. Once screening is complete, Covidence provides a form for double-reviewer extraction. When there is disagreement in the extracted content, this too is flagged for discussion.
Performing a systematic review using software like Covidence is still a labor-intensive process, but it does alleviate some data management pain points.
But what about updating an existing review? Can this process also be made more efficient?
Efforts such as the ES3 initiative out of the University of Sydney are attempting to use “human in the loop” AI tools to assist authors with updating their existing systematic reviews. Similarly, Cochrane recently began publishing living systematic reviews. In this type of review, the original authors (or a working group) periodically update a review as new studies appear in the literature. Such efforts can certainly cut down on review duplication, but they still require labor-intensive processes that not every author has the time or resources to sustain.
At Prism, we offer a product that can cut down on some of these labor-intensive activities—specifically, our data analytics platform, Prism Living Review. With Prism Living Review, reviewers can not only visualize their original work but also easily update their dataset as new papers are published. This is a powerful tool for busy academics, clinicians, and practitioners looking to keep on top of the literature. It allows research staff to quickly update the database as new articles come out, immediately analyze results, and write addendums to the original report as required.
The lag time between the publication of a new, relevant study and its incorporation into an existing evidence synthesis is thus dramatically reduced, allowing users to stay current rather than being held back by lengthy revision cycles. To see an example of Prism Living Review in action, check out this project on Opioid Use Disorder Treatment by a team at NYU Langone Health.
Beyond systematic reviews, Prism Living Review is currently being employed by a large research collaboration to communicate the outcomes of multiple evidence synthesis efforts. If you’re interested in exploring some of Prism’s other projects, go to app.prism.bio.
Thank you for joining me on this journey! I hope you walk away from this series with a better understanding of systematic reviews, their strengths, their weaknesses, and the many efforts underway to make them better and more useful.
But above all, I hope I’ve succeeded in fostering your appreciation for the importance of continually improving evidence-based practice.
If you want to stay up to date on Prism, send us an email at email@example.com and let us know you’d like to be added to our email list. Trust me, we aren’t spammy; it’s just one email a month. You can learn more about us at https://prism.bio. Finally, if you have an idea for a project or evidence synthesis collaboration, comments on this series, or just want to say hi, drop me a line at firstname.lastname@example.org. I look forward to hearing from you!
References and Resources
- Hoffmann F, Allers K, Rombey T, et al. Nearly 80 systematic reviews were published each day: Observational study on trends in epidemiology and reporting over the years 2000-2019. J Clin Epidemiol. 2021;138:1-11. doi:10.1016/j.jclinepi.2021.05.022
- Naudet F, Schuit E, Ioannidis JPA. Overlapping network meta-analyses on the same topic: survey of published studies. Int J Epidemiol. 2017;46(6):1999-2008. doi:10.1093/ije/dyx138
- Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA)
- Grading of Recommendations, Assessment, Development and Evaluation (GRADE)
- Risk of Bias in Non-randomized Studies (ROBINS-I)
- Risk of Bias 2 (RoB 2)
- A MeaSurement Tool to Assess systematic Reviews (AMSTAR 2)