Welcome to post two of a three-part Prism Academy series on systematic reviews! If you are thinking to yourself, “wait...post two?” be sure to check out the first post here. Trust me, it will help!
For those of you who don’t already know, my name is Maria, and I am the Head of Operations at Prism. Prior to joining the team, I was working in academia, coordinating research projects and spending a lot of time thinking about and synthesizing evidence. I want to thank you for joining me on this three-part series, where we are exploring the basics of systematic reviews, their current state, and how we must adapt them to keep pace with the world today. In today’s post, we are going to explore the strengths and weaknesses of systematic reviews and why bringing them up to speed with the pace of modern research is crucial to the everlasting pursuit of evidence-based practice.
But before we get too into the weeds, let's quickly discuss a bit of history:
Systematic reviews, as we know them today, were popularized in the 1970s when researchers at Oxford began synthesizing evidence to summarize the effectiveness of health care interventions. This work laid the foundation for Cochrane, "a global independent network of researchers, professionals, patients, carers and people interested in health." Given their rigorous methodology, Cochrane reviews are considered among the highest-quality systematic reviews, informing evidence-based practice for researchers and practitioners around the globe.
In recent years, systematic reviews have rapidly increased in popularity, with some estimates suggesting that by 2019, nearly 80 systematic reviews were published every day. That represents a ton of work - and we’ll come back to that in a moment.
As previously mentioned, systematic reviews are considered by many to be the gold standard of evidence synthesis. There are several good reasons for this:
- Systematic reviews utilize rigorous methods which aim to minimize bias in the review of evidence from individual studies.
- Systematic reviews attempt to be comprehensive in their search strategy, enabling reviewers to look at all of the available evidence and combine it accordingly.
- Systematic reviews, when paired with an appropriately conducted meta-analysis, can allow reviewers to better understand the efficacy and effect size of a given intervention, analyze safety risks and benefits more comprehensively, extrapolate findings to the larger population, and examine sub-populations.
- Systematic reviews tend to include only high-quality studies, such as randomized controlled trials (RCTs), though reviews of other study types are also conducted.
- Systematic reviews are often conducted using standardized quality-assurance practices and guidelines such as PRISMA, the Cochrane Handbook for Systematic Reviews of Interventions, and GRADE.
Given all of the above, it is easy to see why systematic reviews have been relied upon for decades as a source of evidence-based answers to questions that critically affect the practice of researchers, clinicians, or practitioners.
Yet, systematic reviews are far from a perfect solution to evidence synthesis. Indeed, their drawbacks quickly become apparent when one examines them within the context of today’s biomedical research ecosystem. Some of the weaknesses include:
- Systematic reviews usually take several months to complete, and by the time the results are published, the findings may be out of date. Even if attempts to update the review are made, these too may take many months.
- As mentioned above, some researchers estimate that nearly 80 systematic reviews are published every day. Given that many reviews run 20-80+ pages, even one new review per week in a particular field means the consumers of this knowledge (i.e., busy academics and clinicians) will likely struggle to keep up.
- Given the sheer quantity of systematic reviews being published, there is a substantial amount of unnecessary duplication.
- Despite the presence of standardized guidelines and practices, many systematic reviews are still conducted without rigor and report methodology in obscure language. Meta-analyses are also sometimes conducted inappropriately or use incorrect or questionable statistical methods.
- Systematic reviews may ask questions that hold little to no value for the practice of other researchers, clinicians, or practitioners. It has even been suggested that some reviews are conducted simply to boost an author’s publication numbers.
Taken together, the strengths and weaknesses of systematic reviews reveal an interesting tension at the root of evidence synthesis:
When conducted using appropriate methods, systematic reviews are a high-quality source of evidence-based answers. However, reviews take many months to complete and often struggle to keep pace with the rate at which new data are published.
Add to this the fact that dozens of reviews of varying methodological quality pass peer review and are published every day. This leaves it to the consumers (i.e., those busy academics, clinicians, and practitioners) to read each review’s methods carefully and decide whether the data are trustworthy and the conclusions are sound.
The result is a perfect storm of wasted time, misinterpretation of findings, and missed opportunities for implementation of evidence-based practice. Despite all the great things that systematic reviews can do, there is simply too much out there (of variable quality) for the experts to properly analyze, digest, and reliably fold into their practice.
That’s All for Now…
This is a dour note to end on, perhaps - but it also represents a great opportunity. I know I am not alone in the belief that we can do better. Many notable efforts are underway to ensure quality evidence-based decisions are at the heart of practice. In our next post, we will explore some of these efforts and how they each aim to amplify the strengths of systematic reviews while addressing some of their weaknesses. In the end, I hope you will find a way in which you can help systematic reviews meet the needs, capacities, and realities of the modern biomedical research ecosystem.
Additional Resources and References
- Hoffmann F, Allers K, Rombey T, et al. Nearly 80 systematic reviews were published each day: Observational study on trends in epidemiology and reporting over the years 2000-2019. J Clin Epidemiol. 2021;138:1-11. doi:10.1016/j.jclinepi.2021.05.022
- Naudet F, Schuit E, Ioannidis JPA. Overlapping network meta-analyses on the same topic: survey of published studies. Int J Epidemiol. 2017;46(6):1999-2008. doi:10.1093/ije/dyx138
- Ioannidis JP. The Mass Production of Redundant, Misleading, and Conflicted Systematic Reviews and Meta-analyses. Milbank Q. 2016;94(3):485-514. doi:10.1111/1468-0009.12210