Biased reporting of research occurs when the direction or statistical significance of results influences whether and how research is reported.
Avoiding biased comparisons entails using systematic reviews to identify and take account of all the relevant, reliable evidence. This is challenging in many ways, particularly because the evidence available may have been shaped by biased decisions about which results of research are submitted and accepted for publication. Studies that have yielded ‘disappointing’ or ‘negative’ results are less likely to be reported than others. This is often called ‘publication bias’ or ‘reporting bias’. Reporting bias may also arise from biased analyses of studies after their results are known.
These reporting biases have been recognized for centuries (Dickersin and Chalmers 2010). In 1792, for example, the Scottish physician John Ferriar stressed the importance of recording treatment failures as well as treatment successes (Ferriar 1792). This principle was reiterated in an editorial published in the Boston Medical and Surgical Journal just over a century later (Editorial 1909).
There is now a large amount of evidence confirming that reporting bias is a substantial problem. There is also evidence that reporting bias results principally from researchers not writing up or submitting reports of research for publication, rather than from biased rejection of their reports by journal editors (Dickersin 2004). Recent research has revealed an additional problem: if the observed effects of treatments on some of the outcomes studied do not support the (hoped-for) conclusions of researchers, these data sometimes go unreported as well (Chan et al. 2004).
For example, had all the studies of the effects of giving drugs to reduce heart rhythm abnormalities in patients having heart attacks been reported, tens of thousands of deaths from these drugs could have been avoided. In 1993, Alan Cowley and his colleagues pointed out how an unpublished study done 13 years previously might have “provided an early warning of trouble ahead”. Nine patients had died among the 49 assigned to the anti-arrhythmic drug (lorcainide), compared with only one patient among a similar number given placebos. “When we carried out our study in 1980”, they reported, “we thought that the increased death rate was an effect of chance… The development of lorcainide was abandoned for commercial reasons, and this study was therefore never published; it is now a good example of ‘publication bias’” (Cowley et al. 1993).
Reporting biases tend to lead to conclusions that medical treatments are more useful and freer of side effects than they actually are. As a consequence, they can result in unnecessary suffering and death, and in resources wasted on ineffective or dangerous treatments (Chalmers 2004). People who agree to researchers’ requests to participate in tests of treatments assume that their participation will lead to an increase in knowledge. This implied contract between researchers and research participants is breached by researchers who do not make the results of their research public.
Biased under-reporting of research is scientific misconduct and unethical (Chalmers 1990). Selective reporting of studies sponsored by the pharmaceutical industry is a particular problem (Hemminki 1980; Melander et al. 2003), although the problem is not limited to those with commercial vested interests. Research ethics committees, medical ethicists and research funders have so far not done enough to protect patients and the public from the adverse effects of reporting biases (Savulescu et al. 1996). Fair testing of treatments – particularly those in which there is commercial interest – will remain compromised as long as this form of research misconduct is tolerated by governments and others who should be protecting the interests of the public.
Among others, the World Health Organization has coordinated efforts to address the problems of unidentifiable research and publication (or dissemination) bias. First, it established standards for trial registration and for the exchange of registration data. Second, it proposed that research protocols be registered, before patient recruitment starts, in databases that meet those standards. Finally, it established a freely accessible portal (www.who.int/ictrp) that collates the data from all national and regional registers, making it easier for people to learn about planned, ongoing and completed research.
Although registration addresses the problem of unidentifiable research by letting people know what research is planned, ongoing or completed, publication bias can only be overcome by making the results of that research available. In recent years, some research registries have started to include study findings, but researchers’ uptake of this option remains too incomplete to ensure that the findings of all trials are publicly available (www.alltrials.net).
The text in these essays may be copied and used for non-commercial purposes on condition that explicit acknowledgement is made to The James Lind Library (www.jameslindlibrary.org).