Oliver S (2014). Advantages of concurrent preparation and reporting of systematic reviews of quantitative and qualitative evidence.

© Sandy Oliver, UCL Institute of Education, London, WC1H 0AL, UK. Email: s.oliver@ioe.ac.uk


Cite as: Oliver S (2014). Advantages of concurrent preparation and reporting of systematic reviews of quantitative and qualitative evidence. JLL Bulletin: Commentaries on the history of treatment evaluation (https://www.jameslindlibrary.org/articles/advantages-of-concurrent-preparation-and-reporting-of-systematic-reviews-of-quantitative-and-qualitative-evidence/)


Introduction

Two systematic reviews played a key role in bringing together different types of evidence to inform policy decisions. The first was a review of process evaluations and controlled trials of young people promoting healthy behaviour to their peers (Harden et al. 1999). The second synthesised qualitative studies of children’s views of healthy eating and integrated the findings with a meta-analysis of controlled trials of healthy eating interventions (Thomas et al. 2004).

These systematic reviews were conducted against a backdrop of arguments about the value of different ways of knowing. Ann Oakley’s historical analysis of social science methods, their advocates and detractors, revealed power struggles between the alternatives of qualitative and quantitative social science (Oakley 2000). When quantitative and experimental methods were considered particularly influential, they were also seen by some qualitative researchers as inappropriate for investigating the social domain. Conversely, some adherents of quantitative or experimental methods saw qualitative methods as inadequate for informing policy or practice decisions. Competition and debate between the different standpoints were sometimes fierce, even abusive, and often unconstructive. The growth of systematic reviews, which initially addressed only questions about intervention effects, attracted critics who argued against whole bodies of literature, not just single studies.

Systematic reviews for action and systematic reviews for understanding

Until recently, most systematic reviews about health care were syntheses of controlled trials, often randomised, which drew conclusions about the effects of offering different treatments, preventive care, or rehabilitation. As the body of literature synthesising studies with experimental designs grew, so the methodology guiding their systematic identification, appraisal and synthesis advanced. Less widely known were syntheses of a broader range of study designs, which drew conclusions about what could be understood from qualitative research and studies using mixed methods. This was a growing literature of social science syntheses, which identified themes and lines of argument to offer explanations about, for example, education, health, or family life. While systematic review teams tended to specialise in synthesising either quantitative or qualitative data, the Evidence for Policy and Practice Information and Coordinating Centre (the EPPI-Centre) at the Institute of Education, University of London, found itself spanning these disparate approaches, making use both of methods for assessing the likelihood of causal relationships and of methods for advancing understanding of different social perspectives.

The EPPI-Centre’s design of systematic reviews involved translating the clinical model that had initially been developed for evidence-based medicine into a more social model to support evidence-informed health promotion. Initial efforts prompted the kind of criticism that had hitherto marked debates about primary studies. Advocates of qualitative social science for enhancing the understanding of social problems rejected systematic reviews as misguided, inappropriate and unthoughtful. Systematic reviewers adhering to more established synthesis methods complained about a lack of rigour and the risk of bringing systematic reviews into disrepute. One driver for our work was this academic debate, but another important one was the interest in systematic reviews among research funders, initially the Medical Research Council and the local NHS research commissioning bodies, and then the Department of Health, which had responsibility for improving public health in England.

Health promotion reviews reported by the mid-1990s had, frustratingly, found little sound evidence to answer questions about ‘what works?’. Not to be put off taking an evidence-informed approach to decision making, the Department of Health asked the EPPI-Centre to tackle the problem differently, by considering instead the question ‘what might work?’ The Department asked us to find evidence about the people involved in interventions: what influenced the implementation of interventions (practitioners’ behaviour), and what influenced behaviour in the personal and working lives of the people targeted by the interventions (the behaviour of the wider public).

For these reviews, the first step was not to draw on existing systematic review methodology, but rather to think more fundamentally about the types of studies that could contribute to answering these questions. We recognised research designs that were strong for attributing cause and effect, those that were strong for monitoring implementation, and those capable of contributing data about people’s experiences. We took a more pragmatic approach to systematic reviewing, focussing on questions posed by policy makers, solving research problems step-by-step, and using our knowledge of research methodology to review different types of studies. Thus we identified process evaluations to investigate the implementation of interventions; observational studies to identify factors associated with healthy and unhealthy behaviour; and qualitative studies to understand how these factors and interventions are experienced and understood.

Juxtaposing evidence from different types of studies

Our first venture into reviewing this wider literature was about young people delivering health promotion interventions to their peers (Harden et al. 1999). Process evaluations, including qualitative research, were considered alongside controlled trials employing quantitative methods. Comparing the results of the two approaches allowed conclusions to be drawn about how the implementation and acceptability of interventions might influence their effects. The mixed findings of controlled trials were complemented, for example, by ‘evidence of negative views about peer education and of conflicts between schools and the idea of young people and adults working in partnership’. In the 2001 update of its guidance for undertaking systematic reviews in health care, the University of York’s Centre for Reviews and Dissemination used this EPPI-Centre review as an example of how to include different study designs in systematic reviews (Centre for Reviews and Dissemination 2001).

In a subsequent series of systematic reviews about young people’s mental health, physical activity and healthy eating, we sought and appraised studies that could identify barriers to and facilitators of behaviour change. We argued that associated factors could be identified by: quantitative observational studies (for example, surveys, case-control studies, non-randomised cohort studies); qualitative studies of people’s views; and intervention studies addressing barriers or building on facilitators of behaviour change. The Steering Group we set up for these reviews advised us to include, alongside outcome evaluations, only studies of young people’s views (often qualitative), in order to identify factors perceived by young people themselves as influencing their behaviour.

Comparing these two sources of evidence revealed interventions that were supported both by young people’s perceptions (for example, schools not offering healthy food) and by controlled trials (for example, school meals made healthier); these interventions were considered both appropriate and effective (Shepherd et al. 2006). Interventions that reflected young people’s perceptions but for which no well-conducted controlled trials were identified (for example, better labelling of food at eating places) were in need of further evaluation (Shepherd et al. 2006). This approach of piecing evidence together like a jigsaw created new knowledge at the interface of the different methodologies, in this case, where qualitative studies met controlled trials.

This approach overcame the problem of a dearth of rigorous studies for assessing effectiveness questions. The solution was to choose research questions that could be justifiably addressed by the types of studies available, and to develop appropriate methods to review and synthesise them systematically.

Integrating evidence from different types of studies

In the early 2000s, James Thomas, now associate director of the EPPI-Centre, and Angela Harden (associate director of the EPPI-Centre until 2008), led a team of systematic reviewers (of which I was a member) to prepare and report innovative systematic reviews about children and healthy eating (Thomas et al. 2004). Two methodological innovations were introduced: first, the application of thematic analysis to reports of completed studies; second, a mixed-method synthesis that encompassed studies measuring effectiveness and studies investigating people’s views and experiences.

This review thus combined two parallel approaches to generating new knowledge: meta-analysis of well-conducted controlled trials provided evidence about the effects of interventions; thematic synthesis yielded information on people’s views and experiences.

Synthesising the findings across these different types of investigations did more than juxtapose the findings from each set of studies for researchers to draw out conclusions. The synthesis used the implications for interventions drawn from the studies of views to interrogate the meta-analysis of controlled trials through subgroup analyses. The result was a quantitative estimate of the effect size of interventions sharing a characteristic that, according to children’s views, was implicated in their eating behaviour. This new knowledge differed from previous meta-analyses. Instead of providing estimates of the effects of interventions judged by researchers to be similar, it provided estimates of effects for interventions judged similar in terms of research participants’ perspectives.

At the time we conducted this review (the early 2000s), it was an extension of how we had been working for ten years: navigating the evidence available and drawing out any learning that could be justified by our knowledge of research methodology. Instead of answering the research question by seeking data to fit our research methods, we sought research questions and methods to fit the data available. Looking back after another ten years, it is reasonable to claim that the report of this work (Thomas et al. 2004) occupies an important place in the history of research synthesis in health care and public policy. By 2003, synthesis methods were evolving differently in different research centres and university departments: thematic synthesis, meta-ethnography, grounded theory, and so on. It took time for systematic reviewers to understand the similarities and differences of their methods (Barnett-Page and Thomas 2009). More recently, a spectrum of approaches has been described, distinguishing systematic reviews that aggregate the findings of similar studies to estimate the magnitude and precision of a hypothesised relationship from systematic reviews that configure the findings of assorted studies to develop coherent theory (Sandelowski et al. 2012; Gough et al. 2012).

The study by Thomas and his colleagues (2004) adopted approaches from both ends of the methodological spectrum and then integrated the two in order to reveal and estimate simultaneously the magnitude and precision of a relationship. In doing so it combined the strengths of traditional quantitative research methods (for example, comparison of equivalent population samples) with the strengths of qualitative social science (for example, data collection methods that helped children express their views). Synthesis of the qualitative studies of children’s views suggested that interventions should treat fruit and vegetables in different ways, and should not focus on health warnings. When this finding was used to structure a subgroup analysis of the RCTs, it became clear that interventions which were in line with these suggestions tended to be more effective than those that were not. This conclusion could not have been drawn from synthesising either set of studies alone.
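The logic of this integration can be sketched in simple computational terms. The following minimal Python example is purely illustrative and hypothetical: the trial names, effect sizes, standard errors and subgroup labels are invented, and it does not reproduce the data or analysis of Thomas et al. (2004). It shows only the general idea of pooling trials within subgroups defined by a characteristic derived from a synthesis of views, and then comparing the pooled effects.

# Illustrative sketch with invented data: a characteristic suggested by a synthesis
# of qualitative studies (here, whether an intervention is consistent with
# children's views) defines subgroups within a meta-analysis of trials.
from math import sqrt

# Each tuple: (trial name, standardised effect size, standard error,
#              intervention consistent with the qualitative theme?)
trials = [
    ("Trial A", 0.30, 0.10, True),
    ("Trial B", 0.25, 0.12, True),
    ("Trial C", 0.05, 0.09, False),
    ("Trial D", 0.10, 0.11, False),
]

def pooled_fixed_effect(studies):
    # Inverse-variance fixed-effect pooled estimate and its standard error.
    weights = [1 / se ** 2 for _, _, se, _ in studies]
    estimates = [es for _, es, _, _ in studies]
    total_w = sum(weights)
    pooled = sum(w * es for w, es in zip(weights, estimates)) / total_w
    return pooled, sqrt(1 / total_w)

for label, flag in [("Consistent with children's views", True),
                    ("Not consistent with children's views", False)]:
    subgroup = [t for t in trials if t[3] == flag]
    est, se = pooled_fixed_effect(subgroup)
    print(f"{label}: pooled effect {est:.2f} "
          f"(95% CI {est - 1.96 * se:.2f} to {est + 1.96 * se:.2f})")

Comparing the two pooled estimates is what allows a conclusion of the form reached in the review: interventions sharing the qualitatively derived characteristic tended, in this invented example, to show larger effects.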

Choosing methodologies not as alternatives but as synergistic

Taking such a problem-solving approach to learning from the research literature requires investigators to cross the boundaries between different methodologies to make use of their respective tools. The value of doing so has been nicely illustrated by Choi and Pak (2006), who wrote in terms of research that crosses academic disciplines. Their systematic review distinguished different degrees of combining disciplines: drawing on a range of distinct disciplines to use their different strengths for different purposes (multidisciplinary); making links between different disciplines and creating additional knowledge where they meet (interdisciplinary); and dissolving the boundaries between and around them (transdisciplinary).

Multidisciplinary research: Our first effort at cross-disciplinary reviewing took the multidisciplinary approach: it adopted parallel procedures for reviewing process evaluations and controlled trials of peer-delivered health promotion, then discussed their findings together, the former providing some explanations for the latter. Thus process evaluations, which revealed how teachers often undermined peer delivery by retaining control, explained why controlled trials showed the interventions were not working, but they could not offer solutions.

Interdisciplinary research: Our subsequent series of systematic reviews of barriers and facilitators is characteristic of interdisciplinary research, where new learning emerges at the interface. Comparing the findings of qualitative studies with those of studies evaluating the effects of interventions – some with sound designs and others with flawed designs – revealed appropriate interventions that were ready for policy consideration or for rigorous evaluation, thereby allowing specific recommendations to be made for both policy and research.

Transdisciplinary research: The children and healthy eating review provides an example of the advantages of merging two methodologies into a single coherent product, which was published as a single report in a widely read medical journal. The overarching synthesis presents the statistical meta-analysis in terms determined by the synthesis of children’s ‘views’. In the final synthesis and conclusion, the contributions of the experimental designs and the qualitative studies underpinning the work are indistinguishable, but the relevance to children and policy is clear. Moreover, the methodological development itself, a response to pressures from outside academia, transcended academic disciplines.

Conclusion

The science of research synthesis, and the children and healthy eating review in particular, has shifted cross-disciplinary discussions from competition between alternative methodologies towards application of complementary approaches. Subsequent EPPI-Centre systematic reviews have employed transdisciplinary approaches to make use of available evidence about, for instance, plain tobacco packaging (Moodie et al. 2012), cosmetic procedures (Brunton et al. 2013), and errors when medicating children (Sutcliffe et al. 2014). The beneficiaries of these methodological developments are the people facing real-world problems who wish to apply research findings to advance their understanding and make wise decisions, as well as all those who are at the receiving end of policy strategies and interventions.

Acknowledgments

I am grateful to Ginny Brunton, Angela Harden, Kristin Liabo, Ann Oakley, Katy Sutcliffe and James Thomas for comments on earlier drafts of this paper.

This James Lind Library article has been republished in the Journal of the Royal Society of Medicine 2015;108:108-111.


References

Barnett-Page E, Thomas J (2009). Methods for the synthesis of qualitative research: a critical review. BMC Medical Research Methodology 9:59. doi:10.1186/1471-2288-9-59.

Brunton G, Paraskeva N, Caird J, Schucan Bird K, Kavanagh J, Kwan I, Stansfield C, Rumsey N, Thomas J (2013). Psychosocial predictors, assessment and outcomes of cosmetic interventions: a systematic rapid evidence review. London: EPPI-Centre, Social Science Research Unit, Institute of Education, University of London.

Centre for Reviews and Dissemination (2001). Undertaking systematic reviews of research on effectiveness: CRD’s guidance for carrying out or commissioning reviews (2nd edition). CRD Report 4. York: University of York.

Choi BCK, Pak AWP (2006). Multidisciplinarity, interdisciplinarity and transdisciplinarity in health research, services, education and policy: 1. Definitions, objectives, and evidence of effectiveness. Clinical and Investigative Medicine 29:351-364.

Gough D, Oliver S, Thomas J (2012). Clarifying differences between review designs and methods. Systematic Reviews 1:28. doi:10.1186/2046-4053-1-28.

Harden A, Weston R, Oakley A (1999). A review of the effectiveness and appropriateness of peer-delivered health promotion interventions for young people. London: EPPI-Centre, Social Science Research Unit, Institute of Education, University of London.

Moodie C, Stead M, Bauld L, McNeill A, Angus K, Hinds K, Kwan I, Thomas J, Hastings G, O’Mara-Eves A (2012). Plain tobacco packaging: a systematic review. London: Public Health Research Consortium.

Oakley A (2000). Experiments in knowing. Cambridge: Polity Press.

Sandelowski M, Voils CI, Leeman J, Crandell JL (2012). Mapping the mixed methods–mixed research synthesis terrain. Journal of Mixed Methods Research 6:317-331.

Shepherd J, Harden A, Rees R, Brunton G, Garcia J, Oliver S, Oakley A (2006). Young people and healthy eating: a systematic review of research on barriers and facilitators. Health Education Research 21:239-257.

Sutcliffe K, Stokes G, O’Mara-Eves A, Caird J, Hinds K, Bangpan M, Kavanagh J, Dickson K, Stansfield C, Hargreaves K, Thomas J (2014). Paediatric medication error: a systematic review of the extent and nature of the problem in the UK and international interventions to address it. London: EPPI-Centre, Social Science Research Unit, Institute of Education, University of London.

Thomas J, Harden A, Oakley A, Oliver S, Sutcliffe K, Rees R, Brunton G, Kavanagh J (2004). Integrating qualitative research with trials in systematic reviews. BMJ 328:1010-1012.