Once screening (article selection) is completed, the Quality Assessment part of the systematic review process may begin. Quality assessment may be performed before or after data extraction.
Whichever order you choose, assessing the quality of the included studies is essential: even a well-conducted systematic review can produce misleading results if this step is skipped. Why? Because of bias.
What is bias?
Bias is systematic error introduced into sampling or testing by favoring one outcome over another. If a study is biased, it can understate or overstate the true effect of an intervention. And if that study makes it into a systematic review, the results of the review will, of course, be biased. That’s a big deal, given that the entire purpose of a systematic review is to give a reliable estimate of intervention effect.
The good news? Some studies, such as randomized controlled trials, are designed to minimize bias. The bad news? They can’t eliminate it completely.
What is quality assessment?
From inadequate blinding (which might lead to trial participants finding out whether they are on the treatment or the control drug) to selective reporting (e.g. writing up only the positive results of a trial), bias can creep into the most carefully designed studies. As a systematic reviewer, you can’t change how the studies in your review were conducted. Your mission, should you choose to accept it, is to look carefully at each study report and make a set of judgements about the risk of bias of each one. This is study quality assessment.
Since systematic reviews rely on data from other studies, the evidence in a systematic review is only as good as, or as free from bias as, the included studies. Therefore, the methodological quality of each individual study included in a systematic review should be assessed. This process involves appraising, judging, and documenting potential risks of bias.
Why do we need it?
A formal assessment of study quality helps review teams decide what to do with the study data they find, for example whether or not to include them in a synthesis. Information on the risk of bias can be presented alongside study results in a meta-analysis to show any flaws in the data that were used to produce the overall result. Sometimes the risk of bias varies across studies in a meta-analysis and review teams decide to include only those at low risk of bias. If this happens, sensitivity analysis can be used to explore how including or excluding certain studies affects the result of the meta-analysis.
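To make the idea of a sensitivity analysis concrete, here is a minimal sketch in Python. The study names, effect estimates, standard errors, and risk-of-bias judgements below are invented for illustration, and real reviews would typically use dedicated meta-analysis software rather than hand-rolled code. The sketch pools effect estimates with a simple fixed-effect, inverse-variance model, first for all studies and then only for those judged at low risk of bias, so the two results can be compared.

```python
# Hypothetical sensitivity analysis: compare a fixed-effect, inverse-variance
# pooled estimate with and without studies judged to be at high risk of bias.
# All study data below are invented for illustration.

# Each study: (name, effect estimate, standard error, risk-of-bias judgement)
studies = [
    ("Study A", 0.42, 0.10, "low"),
    ("Study B", 0.35, 0.15, "low"),
    ("Study C", 0.80, 0.20, "high"),            # e.g. inadequate blinding
    ("Study D", 0.50, 0.12, "some concerns"),   # e.g. selective reporting
]

def pooled_estimate(subset):
    """Fixed-effect inverse-variance pooled estimate for a list of studies."""
    weights = [1 / se**2 for _, _, se, _ in subset]
    effects = [est for _, est, _, _ in subset]
    total_weight = sum(weights)
    pooled = sum(w * e for w, e in zip(weights, effects)) / total_weight
    pooled_se = (1 / total_weight) ** 0.5
    return pooled, pooled_se

low_risk_only = [s for s in studies if s[3] == "low"]

for label, subset in [("All studies", studies),
                      ("Low risk of bias only", low_risk_only)]:
    est, se = pooled_estimate(subset)
    # 95% confidence interval under a normal approximation
    print(f"{label}: {est:.2f} "
          f"(95% CI {est - 1.96 * se:.2f} to {est + 1.96 * se:.2f})")
```

If the pooled estimate shifts noticeably when the higher-risk studies are excluded, that difference should be reported and discussed in the review.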
Quality assessment and the attempt to minimize bias are important features of the systematic review process and should be considered carefully before work gets underway on identifying studies for inclusion.
Each study design should be appraised using a tool suited to that design. For help in deciding which risk of bias / quality assessment tool(s) to use, consult one of the following resources.
Download the spreadsheet named "Repository of Quality Assessment and Risk of Bias Tools" (OSF). Use the "Study Type/Intended Use" dropdown menu in column F to find recommended tools based on study design.
Cochrane Risk of Bias (ROB) 2.0 Tool
Templates are tailored to randomized parallel-group trials, cluster-randomized parallel-group trials (including stepped-wedge designs), and randomized cross-over trials and other matched designs.
CASP - Randomized Controlled Trial Appraisal Tool
A checklist for randomized controlled trials (RCTs) created by the Critical Appraisal Skills Programme (CASP).
Checklist for Randomized Controlled Trials (JBI)
A critical appraisal checklist from the Joanna Briggs Institute (JBI).
The Newcastle-Ottawa Scale (NOS) for assessing the quality of nonrandomised studies in meta-analyses
A validated tool for assessing case-control and cohort studies.
CASP - Cohort Study Checklist
A checklist created by the Critical Appraisal Skills Programme (CASP) to assess key criteria relevant to cohort studies.
Checklist for Cohort Studies (JBI)
A checklist for cohort studies from the Joanna Briggs Institute.
A checklist for quality assessment of case-control, cohort, and cross-sectional studies.
CASP - Case-Control Study Checklist
A checklist created by the Critical Appraisal Skills Programme (CASP) to assess key criteria relevant to case-control studies.
Tool to Assess Risk of Bias in Case Control Studies by the CLARITY Group at McMaster University
A quality assessment tool for case-control studies from the CLARITY Group at McMaster University.
JBI Checklist for Case-Control Studies
A checklist created by the Joanna Briggs Institute.