My name is Maria Dellapina, and I am the Head of Operations at Prism. Prior to joining the team, I worked in academia, coordinating research projects and spending a great deal of time thinking about and synthesizing evidence. I invite you to join me in this three-part series, where we will explore the basics of systematic reviews, their current state, and how we must adapt them to keep pace with the world today. By the time we are done, I hope you will understand not only why systematic reviews are important, but also why bringing them up to speed with the pace of modern research is crucial to the everlasting pursuit of evidence-based practice.
The Basics of Systematic Reviews
In today’s post, we are covering the basics, so let’s get started!
What is a systematic review?
A systematic review is a rigorous method for synthesizing evidence that is intended to answer a specific research question while minimizing bias. Generally, the goal of a systematic review is to find an evidence-based answer to a research question that critically affects the practice of other researchers, clinicians, or practitioners.
How is a systematic review conducted?
The graphic to the right outlines the process for conducting a systematic review. Executing this process typically takes many months, or sometimes even years, to complete.
The first step in conducting a systematic review is developing the research question. Research questions for systematic reviews are usually quite specific, following the form: “What is the effect of intervention X on disease Y?”
Sometimes the question for a systematic review comes from the completion of a preliminary scoping review (see below for a description of scoping reviews). Regardless of how it’s developed, the specifics of the research question will shape the rest of the review process.
Once the question is set, the next step is to develop a protocol. The protocol spells out the details for the rest of the process, such as how the search for evidence will be conducted, how potentially relevant evidence will be screened, how it will be analyzed, etc.
Once the protocol is worked out, reviewers may then proceed to gather the evidence. This evidence often comes from literature published in academic journals. However, so-called “grey literature”—which includes policy reports, newspaper articles, and white papers—can be included as well.
To find all this evidence, the protocol will include a search strategy that describes the sources that are likely to contain the evidence needed to answer the research question, how those sources will be accessed, and how the potentially relevant results from the sources will be reviewed. A typical search strategy will yield thousands of results and is often conducted in collaboration with a subject-matter librarian.
To narrow down the search results, reviewers will screen them using a pre-set list of inclusion and exclusion criteria. Once the final set of results, or “records,” has been selected, each record is carefully scrutinized, and key details such as study design, population, intervention, and outcomes are extracted. At this stage, the reviewers will assess the quality of the evidence by examining the study design, methods, and statistical approaches.
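The screening step above can be sketched as a simple filter over records. The field names and criteria below are purely hypothetical, chosen only to illustrate how pre-set inclusion and exclusion criteria are applied mechanically to every record; a real protocol would define its own fields and rules.

```python
# Hypothetical search records; in practice these would come from
# database exports (e.g., thousands of rows from a literature search).
records = [
    {"title": "Drug X in adults with disease Y", "year": 2019, "design": "RCT"},
    {"title": "Drug X case report", "year": 2021, "design": "case report"},
    {"title": "Early trial of drug X", "year": 1995, "design": "RCT"},
]

def meets_criteria(record):
    # Illustrative inclusion criteria, fixed before screening begins:
    # randomized controlled trials published in 2000 or later.
    return record["design"] == "RCT" and record["year"] >= 2000

# Apply the same pre-set criteria to every record.
included = [r for r in records if meets_criteria(r)]
print(len(included), "record(s) survive screening")
```

In a real review, each record is typically screened independently by two reviewers, with disagreements resolved by discussion or a third reviewer; the point here is only that the criteria are fixed in advance and applied uniformly.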
At this point—once the thousands of records have been whittled down, analyzed, and turned into a body of evidence—the systematic reviewers will summarize their findings, sometimes using a methodology known as a meta-analysis (discussed below). Ultimately, data interpretations and recommendations for practice are published in an article.
What is a meta-analysis?
As previously mentioned, in some cases, a systematic review is paired with a meta-analysis. A meta-analysis is a statistical analysis conducted on the pooled results from multiple studies. Running statistical analyses on pooled results can potentially detect important effects that would not be apparent in smaller, single studies. Meta-analyses can also allow reviewers to better understand the efficacy and effect size of a given intervention, analyze safety risks and benefits more comprehensively, extrapolate findings to the larger population, and examine sub-populations.
In order to perform a valid meta-analysis, reviewers need to make sure the studies they are combining are similar enough in design and execution that it actually makes sense to pool all the data. Sometimes a team may include a meta-analysis in their systematic review protocol, but later abandon this pursuit if they discover that the studies included in their final body of evidence are too dissimilar.
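To make the pooling idea concrete, here is a minimal sketch of a fixed-effect meta-analysis using inverse-variance weighting. The effect sizes and standard errors are made-up numbers for three hypothetical studies, and real meta-analyses involve considerably more (random-effects models, heterogeneity statistics, and so on); this only illustrates why pooling can sharpen an estimate.

```python
import math

# (effect size, standard error) from three hypothetical studies.
studies = [(0.30, 0.10), (0.25, 0.15), (0.40, 0.12)]

# Each study is weighted by the inverse of its variance (1 / se^2),
# so more precise studies contribute more to the pooled estimate.
weights = [1 / se**2 for _, se in studies]
pooled = sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"Pooled effect: {pooled:.3f} (SE {pooled_se:.3f})")
```

Note that the pooled standard error is smaller than that of any single study, which is how a meta-analysis can detect effects too small to reach significance in the individual studies.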
How are systematic reviews documented?
The methods employed in a systematic review need to be carefully documented in a protocol that is established before the review is conducted. As a means of promoting transparency and quality, some reviewers register their protocol with PROSPERO, an international registry of systematic review protocols.
Regardless of whether or not a protocol is registered, the end result of most systematic reviews is a publication in an academic journal. However, it often takes many months for a completed systematic review to finally appear in print. This means that despite all the hard work the team has put in, the final results are always at least a few months out of date by the time they are presented. Sometimes there will be a plan to update a systematic review over time as new evidence emerges, but this is not typically the case.
Other Types of Reviews
No discussion of systematic reviews would be complete without discussing other types of evidence reviews!
Literature or Narrative Reviews
Literature reviews (also called narrative reviews) summarize the previous work of others, do not follow rigorous, pre-determined methods, and may not be designed to answer a specific research question. These types of reviews do not safeguard against author bias and are not comprehensive in their evidence search. You often see literature reviews in the sciences being used as justification for future research on a particular subject or included as part of a grant application. If you've taken a college composition course, you've probably written a literature review.
Scoping Reviews
A scoping review is more rigorous than a literature review, but less rigorous than a systematic review. Scoping reviews are often done to understand the state of the evidence and sometimes to determine whether performing a systematic review is a worthwhile pursuit. Scoping reviews take less time, and unlike systematic reviews, which often narrow their focus to very specific types of studies, scoping reviews will often include an array of different types of articles and studies.
Like systematic reviews, scoping reviews attempt to be comprehensive in their evidence search and employ inclusion and exclusion criteria for screening, though the criteria are less rigid than those used in a systematic review. Furthermore, scoping reviews often ask broad research questions, compared to the very narrow and specific questions typically asked in a systematic review. Scoping reviews are often useful for identifying research gaps, summarizing evidence in emerging fields of study, or reviewing evidence in a field that employs a variety of study designs and methodologies.
That’s All for Now...
I hope this post has helped you understand the basics of systematic reviews and how they differ from other types of evidence reviews. In the end, it's important to remember:
- Systematic reviews are a rigorous approach to combining evidence, answering research questions, and informing the practice of other researchers, clinicians, and professionals.
- Systematic reviews are considered by many to be the gold standard of evidence synthesis and inform decision-making around the globe.
In my next post we are going to explore how systematic reviews became the gold standard of evidence synthesis and discuss why, in today’s world, they are at risk of being demoted to bronze.