Unbiased, up-to-date systematic reviews of all the relevant, reliable evidence are needed to provide a trustworthy basis for choices in practice and policy.
One of the twentieth-century pioneers of fair tests of treatments, Austin Bradford Hill, noted that readers of reports of research want answers to four questions: ‘Why did you start?’, ‘What did you do?’, ‘What did you find?’, and ‘What does it mean anyway?’ (Hill 1965). The quality of the answer to Hill’s last question is particularly important, because this is the element of a research report that is most likely to influence actual choices and decisions about treatments.
Only very rarely will a single fair test of a treatment yield sufficiently strong evidence to provide a confident answer to the question ‘What does it mean?’ A fair test of a treatment is usually one of several tests addressing the same question. For a reliable answer to the question ‘What does it mean?’, then, it is important to interpret the evidence from a particular fair test in the context of a careful assessment of all the evidence from fair tests that have addressed the question concerned.
Lord Rayleigh, President of the British Association for the Advancement of Science, expressed the need to observe this principle more than a century ago:
“If, as is sometimes supposed, science consisted in nothing but the laborious accumulation of facts, it would soon come to a standstill, crushed, as it were, under its own weight…. Two processes are thus at work side by side, the reception of new material and the digestion and assimilation of the old… The work which deserves, but I am afraid does not always receive, the most credit is that in which discovery and explanation go hand in hand, in which not only are new facts presented, but their relation to old ones is pointed out.” (Rayleigh 1885)
Very few reports of fair tests of treatments discuss their results in the context of a systematic assessment of all the other relevant evidence (Clarke and Hopewell 2013). As a result, it is usually difficult for readers to obtain a reliable answer to the question ‘What does it mean?’ from reports of new research.
As noted in an earlier explanatory essay, embarking on new tests of treatments without first reviewing systematically what can be learnt from existing research is dangerous, wasteful and unethical (see Why comparisons must address genuine uncertainties). Reporting the results of new tests without interpreting new evidence in the light of systematic assessments of other relevant evidence is also dangerous because it results in delays in the identification of both useful and harmful treatments (Antman et al. 1992). For example, between the 1960s and the early 1990s, over 50 fair tests of drugs to reduce heart rhythm abnormalities in people having heart attacks were done before it was realised that these drugs were killing people. Had each report assessed the results of new tests in the context of all the relevant evidence, the lethal effects of the drugs could have been identified a decade earlier, and many unnecessarily premature deaths could have been avoided (Clarke et al. 2014).
In an age in which research papers are increasingly made freely available online, it should be possible to address the limitations found in most reports of new research (Chalmers and Altman 1999; Smith and Chalmers 2001). Rather than basing conclusions about treatments on one or a few individual studies, users of research evidence are turning to online, up-to-date systematic reviews of all the relevant, reliable evidence, because these are increasingly recognised as providing the best basis for conclusions about the effects of treatments.
Just as it is important to take steps to avoid being misled by biases and the play of chance in planning, conducting, analysing and interpreting individual fair tests of treatments, similar steps must also be taken in planning, conducting, analysing and interpreting systematic reviews. This entails:
- specifying the question to be addressed by the systematic review
- defining eligibility criteria for studies to be included
- identifying (all) potentially eligible studies
- applying eligibility criteria in ways that limit bias
- assembling as high a proportion as possible of the relevant information from the studies
- analysing this information, if appropriate and possible, using meta-analysis and a variety of additional analyses (a minimal illustrative sketch of such a meta-analysis follows this list)
- preparing a structured report
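To make the meta-analysis step more concrete, the sketch below shows one common approach: fixed-effect, inverse-variance pooling of study results. It is an illustration only, not part of the essay’s guidance; the study names, effect estimates and standard errors are hypothetical, and real systematic reviews use purpose-built statistical software and a wider range of analyses.

```python
# Illustrative sketch only: a minimal fixed-effect (inverse-variance) meta-analysis,
# the kind of calculation the meta-analysis step in the list above refers to.
# The study names and numbers below are hypothetical.
import math

# Each hypothetical study contributes an effect estimate (here, a log odds ratio)
# and its standard error.
studies = [
    ("Study A", -0.35, 0.20),
    ("Study B", -0.10, 0.15),
    ("Study C", -0.25, 0.30),
]

# Weight each study by the inverse of its variance, so more precise studies
# contribute more to the pooled estimate.
weights = [1 / se**2 for _, _, se in studies]
pooled = sum(w * est for (_, est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# 95% confidence interval for the pooled log odds ratio, then back-transform.
low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"Pooled log odds ratio: {pooled:.3f} (95% CI {low:.3f} to {high:.3f})")
print(f"Pooled odds ratio: {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(low):.2f} to {math.exp(high):.2f})")
```

The inverse-variance weights give more influence to the more precise studies; when there is important variation between studies, other approaches (for example, random-effects models) are needed.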
One manifestation of the increasing recognition of the crucial importance of systematic reviews for assessing the effects of treatments has been the rapid evolution of methods to improve the reliability of reviews. The first edition of a book entitled Systematic Reviews was less than 100 pages long (Chalmers and Altman 1995): only six years later, the second edition weighed in at nearly 500 pages and included rapidly evolving strategies for increasing the information obtained from research (Egger et al. 2001).
There continue to be important developments in the methods used for preparing systematic reviews, including those needed to identify unanticipated effects of treatments and for incorporating the results of research describing and analysing the experiences of people giving and receiving treatments (Thomas et al. 2004; Jefferson et al. 2014).
The text in these essays may be copied and used for non-commercial purposes on condition that explicit acknowledgement is made to The James Lind Library (www.jameslindlibrary.org).
References
Antman EM, Lau J, Kupelnick B, Mosteller F, Chalmers TC (1992). A comparison of results of meta-analyses of randomized control trials and recommendations of clinical experts. JAMA 268:240-48.
Chalmers I, Altman DG (1995). Systematic Reviews. London: BMJ Publications.
Chalmers I, Altman DG (1999). How can medical journals help prevent poor medical research? Some opportunities presented by electronic publishing. Lancet 353:490-493.
Clarke M, Hopewell S (2013). Many reports of randomised trials still don’t begin or end with a systematic review of the relevant evidence. J Bahrain Med Soc 24: 145-148.
Clarke M, Brice A, Chalmers I (2014). Accumulating research: a systematic account of how cumulative meta-analyses would have provided knowledge, improved health, reduced harm and saved resources. PLoS ONE 9(7): e102670. doi:10.1371/journal.pone.0102670.
Egger M, Davey Smith G, Altman D (2001). Systematic Reviews in Health Care: meta-analysis in context. 2nd Edition of Systematic Reviews. London: BMJ Books.
Hill AB (1965). Cited in ‘The reasons for writing’. BMJ 4:870.
Jefferson T, Jones MA, Doshi P, Del Mar CB, Hama R, Thompson MJ, Spencer EA, Onakpoya I, Mahtani KR, Nunan D, Howick J, Heneghan CJ (2014). Neuraminidase inhibitors for preventing and treating influenza in healthy adults and children. Cochrane Database of Systematic Reviews 2014, Issue 4. Art. No.: CD008965. DOI: 10.1002/14651858.CD008965.pub4.
Rayleigh, Lord (1885). Address by the Rt. Hon. Lord Rayleigh. In: Report of the fifty-fourth meeting of the British Association for the Advancement of Science; held at Montreal in August and September 1884, London: John Murray.
Smith R, Chalmers I (2001). Britain’s gift: a ‘Medline’ of synthesized evidence. BMJ 323:1437-1438.
Thomas J, Harden A, Oakley A, Oliver S, Sutcliffe K, Rees R, Brunton G, Kavanagh J (2004). Integrating qualitative research with trials in systematic reviews. BMJ 328:1010-1012.