I arrived at my interest in systematic reviews and meta-analysis through a circuitous route. In 1976 I joined the Epidemic Intelligence Service (EIS) Program at the Communicable Disease Center (now the Centers for Disease Control and Prevention, CDC). The Epidemic Intelligence Service was established in 1951 by Alexander D Langmuir in response to concerns about the threat of biological weapons at the time of the Korean conflict (Langmuir and Andrews 1952). The Service is modeled along the lines of a clinical residency, with the ‘resident’ learning on the job in a mentored experience in applied epidemiology. EIS officers are known as ‘the disease detectives’ because they investigate epidemics of disease, the health effects of disasters, and trends over time in infectious disease, environmental health, chronic disease, violence, and unintentional injuries, as well as maternal and child health (Thacker et al. 2001). I was assigned to the health department in Washington, DC, but spent my first few weeks with a team investigating the epidemic of Legionnaires’ disease in Pennsylvania. Subsequently, I led investigations of diverse problems in hospitals, schools, restaurants, nursing homes, an institution for the mentally disabled, and communities, including a study of the effects of a severe drought in Haiti. However, it was an investigation of a small cluster of febrile morbidity in a Washington, DC, hospital for women that led to my first systematic review and meta-analysis, although I had heard of neither of those terms at the time.
While in medical school at the Mount Sinai School of Medicine, I had worked on projects with David Banta, who was on the faculty of the Department of Community Medicine there. David had moved on to the Office of Technology Assessment (OTA) in Washington, DC. Staff there analyzed the implications of different technologies for the US Congress, and David was head of its Health Program. At a Sunday brunch at his house, he and I were talking about our jobs, and I mentioned my investigation of febrile morbidity among women after childbirth. He asked me about electronic fetal monitoring (EFM) because, at the suggestion of the Nobel Laureate Frederick Robbins, Chair of the OTA Health Advisory Committee, David had done a quick review of the literature on the topic and had found no evidence that it had any beneficial effects. He suggested that we do a more systematic review of the literature on EFM, believing that this might be a model for future technology assessments.
That conversation led us to spend a considerable amount of time systematically reviewing Index Medicus, MEDLINE, obstetrical texts, and journals — in total, approximately 600 books and articles. We located only four randomized clinical trials (RCTs). It was during our consultations with experts that I first came in contact with Iain Chalmers, who at the time was Director of the National Perinatal Epidemiology Unit in Oxford. He had presented a meta-analysis of the EFM trials at a meeting of the European Society of Perinatal Medicine in 1978 (Chalmers 1979).
David Banta and I published a government report in 1979 (Banta and Thacker 1979a) and made many presentations before women’s groups and medical audiences, and at national medical conferences. These provoked strong interest from the press, and David dealt with their questions and with those from women’s advocacy groups. He also presented our work at a National Institutes of Health Consensus Conference on antenatal diagnosis (NIH 1979), and this experience sensitized me to the limitations and biases of collective expert opinion. I found myself dealing primarily with obstetricians. It was difficult to persuade them to listen to what we were saying rather than to what they thought we had said, and they were not happy with our finding that there was little evidence to suggest beneficial effects of EFM.
Publication of our report in peer-reviewed journals was, initially, quite a challenge, and the comments from reviewers were caustic (Hobbins et al. 1979a; 1979b). Readers did not like our conclusion that insufficient evidence existed to support the routine use of EFM and that side effects and associated costs should be considered. We published an account of our work in an obstetrical journal in late 1979 (Banta and Thacker 1979b), and later published several follow-up articles in books and journals. One of these, written after 10 RCTs had been completed, was the lead article in an issue of the American Journal of Obstetrics and Gynecology (Thacker 1987). Years later, we reviewed our experience in an attempt to place the controversy in an historical context (Banta and Thacker 2001). In 1983, we published a systematic review on a low-technology procedure, episiotomy, which was met with a much more positive response (Thacker and Banta 1983).
While all this was happening, I had gone to work permanently at the Center for Disease Control (now the Centers for Disease Control and Prevention) as an epidemiologist. David Banta had gone to work at the Pan American Health Organization and later moved to the Netherlands, where he continued his work in technology assessment. We had begun to learn about meta-analysis and to read the social sciences literature on the topic. In particular, meta-analysis was being used increasingly in education and psychology. It was about this time that I began to lecture EIS officers and others on improving the scientific quality of reviews, including the use of meta-analysis as a methodology.
In the fall of 1983, I enrolled in the Masters Programme in Epidemiology at the London School of Hygiene and Tropical Medicine. The curriculum took us to Wales to meet with Archie Cochrane and hear his views on RCTs. Also during that year I met Iain Chalmers and his colleagues in Oxford and learned of their interests in RCTs and in developing methods for reviewing them more systematically. Iain stressed the distinction between the steps needed to reduce biases in reviews (he had a particular interest in reporting biases) and meta-analysis for quantitative synthesis of results. Although I had always separated these two elements in my lectures on reviews and meta-analysis, I had not come up with a term to denote the first of the two elements. One of my classmates at the London School of Hygiene, Cindy Mulrow, covered it in the title of her pivotal paper – ‘The medical review article: state of the science’ (Mulrow 1987; Huth 2008). Cindy and I, along with her colleague, Jacqueline Pugh, later wrote a paper together proposing more informative abstracts for review articles (Mulrow et al. 1988).
On my return to the United States, I resumed my work in epidemiology but continued to work in technology assessment and to conduct meta-analyses in my proverbial ‘free time’. The article published in JAMA in 1988 – ‘Meta-analysis: a quantitative approach to research integration’ (Thacker 1988) – arose from a conversation with Bruce Dan, a graduate of the EIS Program who was a senior editor at JAMA during 1984–1992. He was at that time also the medical editor for ABC News in Chicago, Illinois, and we had been bringing him to Atlanta to train the EIS officers in a half-day course on dealing with the media. During one of our conversations about meta-analysis, he suggested that JAMA might be interested in an article introducing practicing clinicians to the topic. A few months later, after successfully negotiating the peer-review process, the article was published (Thacker 1988).
As an epidemiologist, I can use my ‘science of review’ tools across a wide variety of health research. Although I have been an author of more than a dozen published meta-analyses, as well as invited articles and book chapters on the topic, two activities seem likely to have a more lasting impact than any of these papers. The first has been through teaching the method to EIS officers and other fellows; the second has been through helping to launch The Guide to Community Preventive Services, which has now published approximately 200 systematic reviews of population-based interventions (Task Force on Community Preventive Services 2009).
Systematic review and meta-analysis are essentially simple applications of scientific principles to the synthesis of research evidence. Using the best available evidence to inform treatment choices should be the norm, and these tools make this more feasible than ever before. The failure to conduct such reviews before funding new research or population-based intervention programs is unwise and inefficient at best, and negligent in the eyes of many. The systematic review should be done rigorously, thoroughly, and without bias; and the quantitative synthesis should have that same rigor. Investigators as well as users now have guidelines available to make this possible (Moher et al. 1999; Stroup et al. 2000), and these will continue to evolve as we use and improve these methods (www.equator-network.org).
Stephen Thacker died on 15 February 2013 from complications of Creutzfeldt-Jakob disease.
This James Lind Library article has been republished in two parts in the Journal of the Royal Society of Medicine 2022;115:273-275.
References
Banta HD, Thacker SB (1979a). Costs and benefits of electronic fetal monitoring: a review of the literature. Washington, DC: National Academies Press. National Center for Health Services Research Report Series.
Banta HD, Thacker SB (1979b). Assessing the costs and benefits of electronic fetal monitoring. Obstetrical and Gynecological Survey 34:627-642.
Banta HD, Thacker SB (2001). A historical controversy in health technology assessment: the case of electronic fetal monitoring. Obstetrical and Gynecological Survey 56:707-719.
Chalmers I (1979). Randomized controlled trials of fetal monitoring 1973-1977. In: Thalhammer O, Baumgarten K, Pollak A, eds. Perinatal Medicine. Stuttgart: Georg Thieme:260-265.
Hobbins JC, Freeman R, Queenan JT (1979a). The fetal monitoring debate. Obstetrics and Gynecology 54:103-109.
Hobbins JC, Freeman R, Queenan JT (1979b). The fetal monitoring debate. Pediatrics 63:942-951.
Langmuir AD, Andrews JM (1952). Biological warfare defense. Part 2. The Epidemic Intelligence Service of the Communicable Disease Center. American Journal of Public Health 42:235-238.
Moher D, Cook DJ, Eastwood S, Olkin I, Rennie D, Stroup DF (1999). Improving the quality of reports of meta-analyses of randomized controlled trials: the QUOROM statement. Lancet 354:1896-1900.
Mulrow CD (1987). The medical review article: state of the science. Annals of Internal Medicine 106:485-488.
Mulrow CD, Thacker SB, Pugh J (1988). A proposal for more informative abstracts of review articles. Annals of Internal Medicine 108:613-615.
NIH Consensus Statement Online (1979). Antenatal Diagnosis. Mar 5-7;2:11-15.
Stroup DF, Berlin JA, Morton S, Olkin I, Williamson GD, Rennie D, Moher D, Becker BJ, Sipe T, Thacker SB (2000). Meta-analysis of observational studies in epidemiology: a proposal for reporting. Journal of the American Medical Association 283:2008-2012.
Task Force on Community Preventive Services (2009). Guide to Community Preventive Services. Atlanta, GA: US Department of Health and Human Services, Centers for Disease Control and Prevention. Available at: https://www.thecommunityguide.org/task-force-findings
Thacker SB (1987). The efficacy of intrapartum electronic fetal monitoring. American Journal of Obstetrics and Gynecology 156:24-30.
Thacker SB (1988). Meta-analysis: a quantitative approach to research integration. Journal of the American Medical Association 259:1685-1689.
Thacker SB, Banta HD (1983). Benefits and risks of episiotomy: an interpretative review of the English-language literature, 1860-1980. Obstetrical and Gynecological Survey 38:322-338.
Thacker SB, Dannenberg AL, Hamilton DH (2001). The Epidemic Intelligence Service of the Centers for Disease Control and Prevention: 50 years of training and service in applied epidemiology. American Journal of Epidemiology 154:985-992.