© Dr Kay Dickersin, Dept of Epidemiology, Johns Hopkins Bloomberg School of Public Health, 615 N. Wolfe St, Mail Rm W5010, Baltimore, Maryland 21205, USA. Email: firstname.lastname@example.org
Dickersin K, Chalmers I (2010). Recognising, investigating and dealing with incomplete and biased reporting of clinical research: from Francis Bacon to the World Health Organisation. JLL Bulletin: Commentaries on the history of treatment evaluation (www.jameslindlibrary.org). [Brief history]
Why is incomplete reporting of research a problem?
Under-reporting of the results of research in any field of scientific enquiry is scientific misconduct because it delays discovery and understanding. In the field of clinical research, incomplete and biased reporting has resulted in patients suffering and dying unnecessarily (Cowley et al. 1993). Reliance on an incomplete evidence base for decision-making can lead to imprecise or incorrect conclusions about an intervention’s effects. Biased reporting of clinical research can result in overestimates of beneficial effects (Sterne et al. 2008) and suppression of harmful effects of treatments (Explanatory essay). Furthermore, planners of new research are unable to benefit from all relevant past research.
Failure to publish is also unethical. Participants in clinical research are usually assured that their involvement will contribute to knowledge; but this does not happen if the research is not reported publicly and accessibly. Moreover, failure to publish is simply a waste of precious research and other resources (Chalmers and Glasziou 2009a). Every year, an estimated 12,000 clinical trials that should be fully reported are not, wasting just under a million tonnes of carbon dioxide annually: the carbon emission equivalent of about 800,000 round-trip flights between London and New York (Chalmers and Glasziou 2009b).
In brief, failure to report research findings is not only unscientific but also unethical (Chalmers 1985; 1990; Antes and Chalmers 2003; World Medical Association 2008). How did this problem come to be recognised and investigated, and what steps are being taken today to deal with it?
Evidence of biased reporting of studies
‘Reporting bias’ occurs when the nature and direction of the results of research influence their dissemination (Explanatory essay). Research results that are not statistically significant (‘negative’) tend to be under-reported (Hopewell et al. 2009), while results that are regarded as exciting or statistically significant (‘positive’) tend to be over-reported (Rochon et al. 1994; Tramèr et al. 1997; Von Elm et al. 2004). The nature and direction of research results can influence whether or not research is reported at all (Hopewell et al. 2009; Dwan et al. 2009), and if so, in which forms (Scherer et al. 2007). They can also influence the speed at which results are reported (Stern and Simes 1997; Dickersin et al. 2002; Hopewell et al. 2007a), the language in which they are published (Egger et al. 1997; Juni et al. 2002), and the likelihood that the research will be cited (Gøtzsche 1987; Ravnskov 1992; Ravnskov 1995; Kjaergaard and Gluud 2002; Schmidt and Gøtzsche 2005; Nieminen et al. 2007).
Failure to publish research findings is pervasive (Dickersin 2005; Song et al. 2010). Studies demonstrating failure to publish have included research conducted in many countries, including Australia, France, Germany, Spain, Switzerland, the United Kingdom and the United States. For example, a systematic review of studies following up 29,729 research reports initially available only in abstract form found that fewer than half of the studies went on to full publication, and that ‘positive’ results were positively associated with full publication, regardless of whether ‘positive’ results had been defined as any statistically significant result or as a result favouring the experimental treatment (Scherer et al. 2007).
Recognition and investigation of biased reporting of research
The problem of reporting bias has been recognised for hundreds of years. In the 17th century, Francis Bacon noted that “The human intellect …is more moved by affirmatives than by negatives” (Bacon 1645); and Robert Boyle, the chemist, lamented the common tendency among scientists not to publish their results until they had a “system” worked out, with the result that “many excellent notions or experiments are, by sober and modest men, suppressed” (Boyle 1661, cited in Hall 1965). Other scientists, across many fields, have also recognised the problem over the years (Gregory 1772; Alanson 1782; Withering 1785; Currie 1797; Carlisle 1839; Holmes 1861; Bennett 1865; Editorial 1909; Earp 1927; Pratt et al. 1940; Hill 1959; Feynman 1985; Gould 1987).
For example, the bronze statue of Albert Einstein outside the US National Academy of Sciences is inscribed with a quotation from a letter that he wrote on 3 March 1954, for a conference of the Emergency Civil Liberties Committee:
Academic freedom as I understand it means having the right to seek the truth and to publish and teach what is believed to be true. Naturally this right comes together with the duty not to withhold a part of what is believed to be true. It is clear that any restriction on academic freedom hinders the dissemination of knowledge in the population and therefore restrains rational judgement and action. (Einstein 1954).
In 1959, the father of medical statistics in Britain, Austin Bradford Hill, wrote:
A negative result may be dull but often it is no less important than the positive; and in view of that importance it must, surely, be established by adequate publication of the evidence. (Hill 1959).
And in the same year, Seymour Kety, an American psychiatrist wrote:
A positive result is exciting and interesting and gets published quickly. A negative result, or one which is inconsistent with current opinion, is either unexciting or attributed to some error and is not published. So that at first in the case of a new therapy there is a clustering toward positive results with fewer negative results being published. Then some brave or naïve or nonconformist soul, like the little child who said that the emperor had no clothes, comes up with a negative result which he dares to publish. That starts the pendulum swinging in the other direction, and now negative results become popular and important. (Kety 1959)
Although the importance of reporting biases had been recognised for centuries, it was not until the second half of the 20th century that researchers began to investigate the phenomenon. The impetus for these investigations came from the development of research synthesis, first by social scientists (Light 1983), then by health researchers (Hunt 1997; Begg and Berlin 1988; Vandenbroucke 1988; Djulbegovic et al. 2000; Chalmers I et al. 2002; O’Rourke 2006). Unsurprisingly, researchers who have exposed reporting biases are often those who have also been involved in the application of methods for research synthesis.
Investigations of biased reporting of research began with surveys of journal articles, which revealed improbably high proportions of published studies showing statistically significant differences (Sterling 1959; Smart 1964; Chalmers TC et al. 1965; Light and Pillemer 1984; Song et al. 2010). Subsequent surveys of authors and peer reviewers showed that research that had yielded ‘negative’ results was less likely than other research to be submitted or recommended for publication (Greenwald 1975; Coursol and Wagner 1986; Dickersin et al. 1987; Shadish et al. 1989). These findings have been reinforced by the results of experimental studies, which showed that studies with no reported statistically significant differences were less likely to be accepted for publication (Mahoney 1977; Peters and Ceci 1982; Epstein 1990; Emerson et al. 2010).
The most direct evidence of publication bias in the medical field has come from following up cohorts of studies identified at the time of funding (Dickersin and Min 1993), ethics approval (Easterbrook et al. 1991; Dickersin et al. 1992), submission for drug licences (Hemminki 1980; Melander et al. 2003; Rising et al. 2008; Turner et al. 2008), or when they were reported in summary form, for example in conference abstracts (Scherer et al. 1994; 2007). Systematic reviews of this body of evidence have shown that ‘positive findings’ are the principal factor associated with subsequent publication: a systematic review of data from five cohort studies following research projects from inception found that, overall, the odds of publication for studies with ‘positive’ findings were about two and a half times greater than the odds of publication for studies with ‘negative’ or ‘null’ results, and that study results were the principal factor explaining these differences in reporting (Dwan et al. 2008; Hopewell et al. 2009; Song et al. 2009).
Even when studies are eventually reported in substantive publications, ‘negative’ findings take longer to appear in print (Stern and Simes 1997; Ioannidis 1998; Misakian and Bero 1998; Hopewell et al. 2007a): on average, clinical trials with ‘positive results’ are published about a year sooner than trials with ‘null or negative results’. There is also evidence that, compared to negative or null results, statistically significant results tend to be published in journals with higher impact factors (Easterbrook et al. 1991), and that publication in the mainstream (‘non-grey’) literature is associated with an overall 9 per cent larger estimate of treatment effects compared to reports in the grey literature (Hopewell et al. 2007b). Articles reporting negative findings for efficacy, or reporting adverse events associated with an exposure, may be published but ‘hidden’ in harder-to-access sources (Bero et al. 1996). Furthermore, even when studies initially published in abstract form are published in full, ‘negative’ results are less likely to be published in high impact journals than ‘positive’ results (Timmer et al. 2002).
Selective reporting of suspected or confirmed adverse treatment effects is an area of particular concern because of the potential for patient harm. In a study of adverse drug events submitted to Scandinavian drug licensing authorities, subsequently published studies were less likely than unpublished studies to have recorded adverse events (Hemminki 1980). The lay and scientific media have drawn attention to failures to report accurately the adverse events associated with drugs, for example, selective serotonin reuptake inhibitors for depression (Healy 2006; Bass 2008), rosiglitazone for diabetes (Drazen et al. 2007), and rofecoxib for arthritis pain (de Angelis and Fontanarosa 2008).
Biased reporting of data within studies
Even when substantive reports of research are published, there may be biased reporting of outcome data within the reports (Hahn et al. 2002; Chan et al. 2004a; 2004b; Rising et al. 2008; Turner et al. 2008; Dwan et al. 2008; Vedula et al. 2009). Comparisons of published articles with the study protocols approved by an ethics committee in Denmark found that in nearly two thirds of trial reports at least one planned outcome had been changed, introduced, or omitted in the published article (Chan et al. 2004b). In a similar comparison of randomized trials funded by the Canadian Institutes of Health Research, primary outcomes differed between the protocol and published article 40% of the time (Chan et al. 2004a). In both of these studies, outcomes that were statistically significant in favour of an experimental intervention had a higher chance of being published in full compared to those that were not statistically significant. Other analyses have shown important discrepancies between journal articles and information supplied for trial registration (Ross et al. 2009; Al-Marzouki et al. 2008; Chan et al. 2008; Mathieu et al. 2009).
Biased outcome reporting has also been shown in a comparison of data on 12 antidepressant agents submitted for review to the Food and Drug Administration (FDA) with the subsequent publications of those data (Turner et al. 2008). Nearly a third (31 per cent) of the 74 FDA-registered studies had not been published, and publication was associated with a ‘positive’ outcome (as determined by the FDA). Studies that the FDA had considered ‘negative’ or ‘questionable’ (n=36) were either not published (22 studies), reported with a positive interpretation (11 studies), or reported in a manner consistent with the FDA interpretation (3 studies). In summary, evidence from the published literature suggested that 94 per cent of studies had positive findings, while the FDA analysis concluded that only 51 per cent had positive findings.
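The 94 and 51 per cent figures follow directly from the study counts reported by Turner and colleagues. As a sketch of that arithmetic, the snippet below reconstructs the shares; note that the count of 37 published FDA-‘positive’ studies is taken from the original Turner report and is not stated in the summary above.

```python
# Arithmetic behind the Turner et al. (2008) discrepancy between the
# published literature and the FDA's own analysis of 74 registered studies.
total_studies = 74
fda_positive = 38          # studies the FDA judged 'positive' (74 - 36)
published_positive = 37    # FDA-'positive' studies reaching full publication
                           # (count from the original report, assumed here)
published_spun = 11        # 'negative'/'questionable' studies published
                           # with a positive interpretation
published_consistent = 3   # published consistently with the FDA verdict

published_total = published_positive + published_spun + published_consistent

# Share of published studies that appeared 'positive': 48/51, about 94%
apparent_positive_share = (published_positive + published_spun) / published_total

# Share of all registered studies the FDA judged 'positive': 38/74, about 51%
fda_positive_share = fda_positive / total_studies
```

The contrast between `apparent_positive_share` and `fda_positive_share` is the quantitative core of the outcome-reporting bias described above.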
Who is responsible for biased reporting of clinical research?
Reporting bias can be due to researchers and sponsors failing to submit study findings for publication, or to journal editors and others rejecting submitted reports. Numerous surveys of investigators have left little doubt that almost all failure to publish is due to investigators not submitting reports for publication (Timmer et al. 2002; Godlee and Dickersin 2003); only a small proportion of studies remain unpublished because of rejection by journals (Olson et al. 2002), although positive-outcome bias has been demonstrated among peer reviewers (Emerson et al. 2010). Qualitative studies of editorial discussion indicate that a study’s scientific rigour is the area of greatest concern to editors (Dickersin et al. 2007). Researchers report that the reason they do not write up and submit reports of their research is usually that they are “not interested” in the results; “editorial rejection by journals” is only rarely given as a cause of failure to publish. Even investigators who have initially published their results as conference abstracts are less likely to submit their findings for full publication unless the results are ‘significant’ (Scherer et al. 2007).
It is now also well established that biased reporting of research studies is associated with the source of funding. In particular, research funded by the pharmaceutical industry has been shown to be less likely to be published than research funded from other sources (Lexchin et al. 2003; Sismondo 2008), and studies sponsored by pharmaceutical companies are more likely to have outcomes favouring the sponsor than studies with other sponsors (Als-Nielsen et al. 2003; Bhandari et al. 2004). There are several possible explanations for the association between industry support and failure to publish ‘negative’ results. Industry may selectively publish findings supporting a product’s efficacy. It is also possible that industry is more likely to design studies with a high likelihood of a positive outcome, for example, by selecting a comparison population likely to yield results favouring the product (Djulbegovic et al. 2000; Mann and Djulbegovic 2004). This is clearly unethical.
The practice of hiring a commercial firm to write up the results from a clinical trial is common in industry trials (Sismondo 2007). It has been estimated that 75% of industry-initiated studies approved by two ethics committees in Denmark had ghost authors (Gøtzsche et al. 2007). In these cases, the named authors rarely included the hired writer. The World Association of Medical Editors has made clear that it considers such ghost authorship to be dishonest (http://www.wame.org/resources/policies, accessed August 1, 2008). Unnamed, paid medical writers may be asked to present research methods and results in ways that serve commercial interests. When a sufficiently large proportion of articles is produced by paid medical writers, the literature, and thus opinion about a drug, may be influenced (Healy and Cattell 2003).
Because industry is the main funder of clinical research, it must inevitably shoulder a high proportion of the blame for this unscientific and unethical behaviour. The responsibility for biased reporting of clinical research does not lie solely with industry, however. As long ago as 1998, the Ethics Committee of the Faculty of Pharmaceutical Medicine, which represents physicians working in industry in particular, declared that: “Pharmaceutical physicians … have a particular ethical responsibility to ensure that the evidence on which doctors should make their prescribing decisions is freely available…the outcome of all clinical trials on a medicine should be reported” (Faculty of Pharmaceutical Medicine 1998).
Dealing with incomplete and biased reporting of research
Investigations of incomplete and biased reporting of clinical research conducted over the past three decades have made clear that this is a serious and extensive problem, which threatens the best interests of patients, undermines the scientific enterprise, and wastes resources.
Various attempts have been made to overcome the effects of reporting biases. These have included statistical adjustments of the results of published studies (Rosenthal 1979; Light and Pillemer 1984; Vandenbroucke 1988), surveys of investigators in attempts to locate unpublished studies (Hetherington et al. 1989), editorial ‘amnesties’ for unpublished trials (Smith and Roberts 1997; Roberts 1998), and journals and journal sections (Editorial 1962; Shields 2000; BioMedCentral 2002) specifically designated for reporting ‘negative results’, itself a misconceived notion (Chalmers 1985). None of these approaches has proved satisfactory, however.
In 1986, John Simes showed that analyses of treatments for ovarian cancer based on the results of trials that had been registered before their results were known showed no statistically significant differences, while analyses based on all published reports of trials did. He postulated that these differences reflected biased under-reporting of trials, and suggested that this problem should be addressed by establishing an international registry of clinical trials (Simes 1986). Over the following two decades pressure to register trials gradually increased (Meinert 1988; Ad Hoc Working Party of the International Collaborative Group on Clinical Trials Registries 1993; Dickersin 1997; Wager et al. 2003; Dickersin and Rennie 2003; Chalmers 2006).
It took a public scandal in 2004 to provide the momentum needed to lead to a consensus that clinical trial registration, which had been called for repeatedly over the previous two decades, should become mandatory. In June of that year, Eliot Spitzer, the Attorney General of the State of New York, sued GlaxoSmithKline, makers of an anti-depressant drug (paroxetine), for suppressing evidence of possible serious harmful effects, thus depriving physicians of the information needed to assess the drug’s risks (Healy 2006; Bass 2008). A systematic review of the relevant published and unpublished data showed that the favourable impression created by the published studies was negated when unpublished data were included (Whittington et al. 2004).
The scandal prompted the International Committee of Medical Journal Editors to announce that their journals would require, as a condition of considering reports of clinical trials for publication, that the studies had been registered prior to enrolling participants (De Angelis et al. 2005). Furthermore, under the aegis of the World Health Organisation (WHO), it was agreed that basic information about all clinical trials should be registered, at inception, and that this information should be publicly accessible through the WHO International Clinical Trials Registry Platform (Gülmezoglu et al. 2005).
Public availability of full study protocols, either at trial inception (Horton 1997; BioMedCentral 2003) or at registration (Krleža-Jeric et al. 2005; Vedula et al. 2009), or alongside reports of trials (Siegel 1990), is also gaining momentum (Chan 2008; Miller et al. 2010). This development has been fuelled by evidence of biased reporting of outcomes within studies (Hahn et al. 2002; Chan et al. 2004a; 2004b; Turner et al. 2008; Dwan et al. 2008; Vedula et al. 2009; Kirkham et al. 2009), and is reflected in the development of reporting guidelines for protocols (SPIRIT Initiative 2010).
It remains to be seen how well these measures will deal with a serious problem recognised nearly four centuries ago by Francis Bacon (1645).
We are grateful to Doug Altman and Mike Clarke for drawing our attention to relevant historical material; to Ze’ev Rosenkranz, for providing an image of Einstein’s letter, and to Harry Hemingway and Claudia Langenberg for translating it from the original German; and to Doug Altman, An-Wen Chan and Sally Hopewell for commenting on earlier drafts of this brief history of reporting biases.
This James Lind Library commentary has been republished in the Journal of the Royal Society of Medicine 2011;104:532-538.
Alanson E (1782). Practical observations on amputation, and the after-treatment, 2nd Ed. London: Joseph Johnson.
Al-Marzouki S, Roberts I, Evans S, Marshall T (2008). Selective reporting in clinical trials: analysis of trial protocols accepted by The Lancet. Lancet 372:201.
Als-Nielsen B, Chen W, Gluud C, Kjærgaard LL (2003). Association of funding and conclusions in randomized drug trials: A reflection of treatment effects or adverse events? JAMA 290:921-928.
Antes G, Chalmers I (2003). Under-reporting of clinical trials is unethical. Lancet 361:978-979.
Bacon F (1645). Franc. Baconis de Verulamio / Summi Angliae Cancellarii /Novum organum scientiarum. [Francis Bacon of St. Albans Lord Chancellor of England. A 'New Instrument' for the sciences] Lugd. Bat: apud Adrianum Wiingaerde, et Franciscum Moiardum. Aphorism XLVI (pages 45-46).
Bass A (2008). Side effects: a prosecutor, a whistleblower, and a bestselling antidepressant on trial. Boston: Algonquin.
Begg CB, Berlin JA (1988). Publication bias: a problem in interpreting medical data. Journal of the Royal Statistical Society, Series A 151:419-463.
Bennett JH (1865). The restorative treatment of pneumonia. Edinburgh, AC Black.
Bero LA, Rennie D (1996). Influences on the quality of published drug studies. Int J Technol Assess Health Care 12:209-237.
Bhandari M, Busse JW, Jackowski D, Montori VM, Schünemann H, Sprague S, Mears D, Schemitsch EH, Heels-Ansdell D, Devereaux PJ (2004). Association between industry funding and statistically significant pro-industry findings in medical and surgical randomized trials. CMAJ 170:477-480.
BioMedCentral. Information for authors: Publish your study protocols. Available at http://www.biomedcentral.com/authors/protocols. Accessed 8 March 2010.
BioMedCentral (2002). Journal of negative results in biomedicine. http://www.jnrbm.com/about
Boyle R (1661). Cited in Hall MB. In defense of experimental essays. In: Robert Boyle on natural philosophy. Bloomington: Indiana University Press:119-131. [Boyle R (1661). In defense of Experimental Essays. In: Certain Physiological Essays.]
Carlisle A (1839). On the production of representations of objects by the action of light. Mechanics Journal p 329.
Chalmers I (1985). Proposal to outlaw the term 'negative trial'. BMJ 290:1002.
Chalmers I (1990). Underreporting research is scientific misconduct. JAMA 263:1405-1408.
Chalmers I (2006). From optimism to disillusion about commitment to transparency in the medico-industrial complex. Journal of the Royal Society of Medicine 99:337-341.
Chalmers I, Hedges LV, Cooper H (2002). A brief history of research synthesis. Evaluation and the Health Professions 25:12-37.
Chalmers I, Glasziou P (2009a). Avoidable waste in the production and reporting of research evidence. Lancet 374:86-89. doi:10.1016/S0140-6736(09)60329-9.
Chalmers I, Glasziou PG (2009b). The environmental, scientific and ethical scandal of biased under-reporting of research. http://www.bmj.com/content/339/bmj.b4187.
Chalmers TC, Koff RS, Grady GF (1965). A note on fatality in serum hepatitis. Gastroenterology 49:22-26.
Chan AW (2008). Bias, spin, and misreporting: time for full access to trial protocols and results. PLoS Medicine 5(11): e230. DOI:10.1371/journal.pmed.0050230.
Chan AW, Krleža-Jeric K, Schmid I, Altman D (2004a). Outcome reporting bias in randomized trials funded by the Canadian Institutes of Health Research. CMAJ 171:735-40.
Chan AW, Hróbjartsson A, Haahr MT, Gøtzsche PC, Altman DG (2004b). Empirical evidence for selective reporting of outcomes in randomized trials. Comparison of protocols to published articles. JAMA 291:2457-2465.
Coursol A, Wagner EE (1986). Effect of positive findings on submission and acceptance rates: A note on meta-analysis bias. Professional Psychology: Research and Practice 17:136-137.
Cowley AJ, Skene A, Stainer K, Hampton JR (1993). The effect of lorcainide on arrhythmias and survival in patients with acute myocardial infarction: an example of publication bias. International Journal of Cardiology 40:161-166.
Currie J (1797). Medical reports on the effects of water, cold and warm, as a remedy in fever, and febrile diseases. Liverpool and London: M’Creery and Cadell.
DeAngelis CD, Drazen JD, Frizelle FA, Haug C, Hoey J, Horton R, Kotzin S, Laine C, Marusic A, Overbeke AJPM, Schroeder TV, Sox HC, Van Der Weyden MB (2005). Is This Clinical Trial Fully Registered? A Statement From the International Committee of Medical Journal Editors. JAMA 293:2927-2929. doi:10.1001/jama.293.23.jed50037.
Dickersin K (1997). How important is publication bias? A synthesis of available data. AIDS Educ Prev 9 (1 Suppl):15-21.
Dickersin K (2005). Publication bias: Recognizing the problem, understanding its origins and scope, and preventing harm. In: Rothstein H, Sutton A, Borenstein M (eds). Publication bias in meta-analysis: prevention, assessment, and adjustments. London: Wiley, p 11-33.
Dickersin K, Min Y-I (1993). NIH clinical trials and publication bias. Online Journal of Current Clinical Trials, 28 April [Doc No. 50].
Dickersin K, Rennie D (2003). Registering clinical trials. JAMA 290:516-523.
Dickersin K, Chan S, Chalmers TC, Sacks HS, Smith H (1987). Publication bias and clinical trials. Control Clin Trials 8:343-53.
Dickersin K, Min YI, Meinert CL (1992). Factors influencing publication of research results. Follow-up of applications submitted to two institutional review boards. JAMA 267:374-8.
Dickersin K, Olson CM, Rennie D, Cook D, Flanagin A, Zhu Q, Reiling J, Pace B (2002). Association between time interval to publication and statistical significance. JAMA 287:2829-2831.
Dickersin K, Ssemanda E, Mansell C, Rennie D (2007). What do JAMA editors say when they discuss manuscripts that they are considering for publication? Developing a schema for classifying the content of editorial discussion. BMC Medical Research Methodology 7:44.
Djulbegovic B, Lacevic M, Cantor A, Fields KK, Bennett CL, Adams JR, Kuderer NM, Lyman GH (2000). The uncertainty principle and industry-sponsored research. Lancet 356:635-638.
Drazen JM, Morrissey S, Curfman GD (2007). Rosiglitazone--continued uncertainty about safety. N Engl J Med 357:63-64.
Dwan K, Altman DG, Arnaiz JA, Bloom J, Chan A-W, Cronin E, Decullier E, Easterbrook PJ, Von Elm E, Gamble C, Ghersi D, Ioannidis JPA, Simes J, Williamson PR (2008). Systematic review of the empirical evidence of study publication bias and outcome reporting bias. PLoS ONE 3:e3081.
Earp JR (1927). The need for reporting negative results. JAMA 88:119.
Easterbrook PJ, Berlin JA, Gopalan R, Matthews DR (1991). Publication bias in clinical research. Lancet 337:867-872.
Editorial (1909). The reporting of unsuccessful cases. Boston Medical and Surgical Journal 161:263-264.
Editorial (1962). Negative results section. JAMA 181:42-43.
Egger M, Zellweger-Zahner T, Schneider M, Junker C, Lengeler C, Antes G (1997). Language bias in randomised controlled trials published in English and German. Lancet 350:326–329.
Einstein A (1954). Statement for a conference of the Emergency Civil Liberties Committee, 3 March. Albert Einstein Archives, Hebrew University of Jerusalem, 28-1025.
Emerson GB, Warme WJ, Wolf FM, Heckman JD, Brand RA, Leopold SS (2010). Testing for the presence of positive-outcome bias in peer review. Archives of Internal Medicine 170:1934-1939.
Epstein WM (1990). Confirmational response bias among social work journals. Science, Technology, & Human Values 15:9-37.
Faculty of Pharmaceutical Medicine (1998). Ethical Issues Working Group. Ethics in pharmaceutical medicine. International Journal of Pharmaceutical Medicine 12:193-8.
Feynman RP (1985). Surely You’re Joking Mr. Feynman. New York: Norton.
Godlee F, Dickersin K (2003). Bias, subjectivity, chance, and conflict of interest in editorial decisions. In: Godlee F, Jefferson T, eds. Peer review in health sciences, 2nd edition. London: BMJ Books.
Gøtzsche PC (1987). Reference bias in reports of drug trials. BMJ 295:654-656.
Gøtzsche PC, Hrobjartsson A, Johansen HK, Haahr MT, Altman DG, Chan A-W (2007). Ghost authorship in industry-initiated randomised trials. PloS Medicine 4:47-52.
Gould SJ (1987). Urchin in the storm. Essays about books and ideas. New York: Norton.
Greenwald AG (1975). Consequences of prejudice against the null hypothesis. Psychol Bull 82:1-20.
Gregory J (1772). Lectures on the duties and qualifications of a physician. London: Strahan and Cadell.
Gülmezoglu AM, Pang T, Horton R, Dickersin K (2005). WHO facilitates international collaboration in setting standards for clinical trial registration. Lancet 365:1829-31.
Hahn S, Williamson PR, Hutton JL (2002). Investigation of within-study selective reporting in clinical research: follow-up of applications submitted to a local research ethics committee. Journal of Evaluation in Clinical Practice 8:353-359.
Healy D (2006). Did regulators fail over selective serotonin reuptake inhibitors? BMJ 333:92-95.
Healy D, Cattell D (2003). Interface between authorship, industry and science in the domain of therapeutics. Br J Psychiatry 183:22-27.
Hemminki E (1980). Study of information submitted by drug companies to licensing authorities. BMJ 280:833–836.
Hetherington J, Dickersin K, Chalmers I, Meinert CL (1989). Retrospective and prospective identification of unpublished controlled trials: lessons from a survey of obstetricians and pediatricians. Pediatrics 84:374-380.
Hill AB (1959). Discussion of a paper by DJ Finney. Journal of the Royal Statistical Society, Series A 119:19-20.
Holmes OW (1861). Currents and countercurrents in medical science with other addresses and essays. Boston: Ticknor and Fields, 1861.
Hopewell S, Clarke M, Stewart L, Tierney J (2007a). Time to publication for results of clinical trials. Cochrane Database of Systematic Reviews, Issue 2. Art. No.: MR000011. DOI: 10.1002/14651858.MR000011.pub2.
Hopewell S, McDonald S, Clarke M, Egger M (2007b). Grey literature in meta-analyses of randomized trials of health care interventions. Cochrane Database of Systematic Reviews, Issue 2. Art. No.: MR000010. DOI: 10.1002/14651858.MR000010.pub3.
Hopewell S, Loudon K, Clarke MJ, Oxman AD, Dickersin K (2009). Publication bias in clinical trials due to statistical significance or direction of trial results. Cochrane Database of Systematic Reviews, Issue 1. Art. No.: MR000006. DOI: 10.1002/14651858.MR000006.pub3.
Horton R (1997). Pardonable revisions and protocol reviews. Lancet 349:6.
Ioannidis JP (1998). Effect of the statistical significance of results on the time to completion and publication of randomized efficacy trials. JAMA 279:281-286.
Juni P, Holenstein F, Sterne J, Bartlett C, Egger M (2002). Direction and impact of language bias of controlled trials: An empirical study. International Journal of Epidemiology 31: 115-123.
Kety S (1959). Comment. In: Cole JO, Gerard RW, eds. Psychopharmacology. Problems in Evaluation. Publication 583. Washington DC: National Academy of Sciences, p 651-2.
Kirkham J, Dwan KM, Altman DG, Gamble C, Dodd S, Smyth R, Williamson PR (2009). The impact of outcome reporting bias in randomised controlled trials on a cohort of systematic reviews. BMJ 340:c365. DOI: 10.1136/bmj.c365.
Kjaergaard LL, Gluud C (2002). Citation bias of hepato-biliary randomized clinical trials. J Clin Epidemiol 55:407-410.
Krleža-Jerić K, Chan A-W, Dickersin K, Sim I, Grimshaw J, Gluud C, for the Ottawa Group (2005). Principles for international registration of protocol information and results from human trials of health-related interventions. Ottawa Statement (Part 1). BMJ 330:956-958.
Lexchin J, Bero LA, Djulbegovic B, Clark O (2003). Pharmaceutical industry sponsorship and research outcome and quality: systematic review. BMJ 326:1-10.
Light RJ (1983). Evaluation Studies Review Annual Vol 8. Beverley Hills, CA: Sage.
Light RJ, Pillemer DB (1984). Summing up. Cambridge: Harvard University Press.
Mahoney MJ (1977). Publication prejudices: An experimental study of confirmatory bias in the peer review system. Cognitive Therapy and Research 1:161-175.
Mann H, Djulbegovic B (2004). Why comparisons must address genuine uncertainties. James Lind Library (www.jameslindlibrary.org).
Mathieu S, Boutron I, Moher D, Altman DG, Ravaud P (2009). Comparison of registered and published primary outcomes in randomized controlled trials. JAMA 302:977-984.
Meinert CL (1988). Toward prospective registration of clinical trials. Controlled Clin Trials 9:1-5.
Melander H, Ahlqvist-Rastad J, Meijer G, Beermann B (2003). Evidence-b(i)ased medicine – selective reporting from studies sponsored by pharmaceutical industry: review of studies in new drug applications. BMJ 326:1171-1173.
Miller JD (2010). Registering clinical trial results: the next step. JAMA 303: 773-774.
Misakian AL, Bero LA (1998). Publication bias and research on passive smoking. Comparison of published and unpublished studies. JAMA 280:250-253.
Nieminen P, Rücker G, Miettunen J, Carpenter J, Schumacher M (2007). Statistically significant papers in psychiatry were cited more often than others. J Clin Epidemiol 60:939-946.
Olson CM, Rennie D, Cook D, Dickersin K, Flanagin A, Hogan J, Zhu Q, Reiling J, Pace B (2002). Publication bias in editorial decision making. JAMA 287:2825-2828.
O’Rourke K (2006). An historical perspective on meta-analysis: dealing quantitatively with varying study results. The James Lind Library (www.jameslindlibrary.org).
Peters D, Ceci S (1982). Peer review practice of psychologic journals: The fate of published articles submitted again. Behav Brain Sci 5:187-195.
Pratt JG, Rhine JB, Smith BM, Stuart CE, Greenwood JA (1940). Extra-sensory perception after sixty years: a critical appraisal of the research in extra-sensory perception. New York: Henry Holt.
Ravnskov U (1992). Frequency of citation and outcome of cholesterol lowering trials. BMJ 305:717.
Ravnskov U (1995). Quotation bias in reviews of the diet heart idea. J Clin Epidemiol 48:713-719.
Rising K, Bacchetti P, Bero L (2008). Reporting bias in drug trials submitted to the Food and Drug Administration: review of publication and presentation. PLoS Med 5(11):e217. doi:10.1371/journal.pmed.0050217.
Roberts I (1998). An amnesty for unpublished trials. BMJ 317:763-764.
Rochon PA, Gurwitz JH, Simms RW, Fortin PR, Felson DT, Minaker KL, Chalmers TC (1994). A study of manufacturer supported trials of non-steroidal anti-inflammatory drugs in the treatment of arthritis. Arch Intern Med 154:157-163.
Rosenthal R (1979). The ‘file drawer problem’ and tolerance for null results. Psychological Bulletin 86:638-641.
Ross JS, Mulvey GK, Hines EM, Nissen SE, Krumholz HM (2009). Trial publication after registration in clinicaltrials.gov: a cross-sectional analysis. PLoS Med 6: e1000144. doi:10.1371/journal.pmed.1000144.
Scherer RW, Dickersin K, Langenberg P (1994). Full publication of results initially presented in abstracts. JAMA 272:158-162.
Scherer RW, Langenberg P, Von Elm E (2007). Full publication of results initially presented in abstracts. Cochrane Database of Systematic Reviews, Issue 2. Art. No.: MR000005. DOI: 10.1002/14651858.MR000005.pub3.
Schmidt LM, Gøtzsche PC (2005). Of mites and men: reference bias in narrative review articles: a systematic review. J Fam Practice 54:334-8.
Shadish WR, Doherty M, Montgomery LM (1989). How many studies are in the file drawer? An estimate from the family/marital psychotherapy literature. Clin Psychol Rev 9:589-603.
Shields PG (2000). Publication bias is a scientific problem with adverse ethical outcomes: the case for a section for null results. Cancer Epidemiology, Biomarkers and Prevention 9:771-772.
Siegel J (1990). Editorial review of protocols for clinical trials. N Engl J Med 323: 1355.
Simes RJ (1986). Publication bias: the case for an international registry of clinical trials. J Clin Oncol 4:1529-1541.
Sismondo S (2008). Pharmaceutical company funding and its consequences: A qualitative systematic review. Contemp Clin Trials 29:109-113.
Smart RG (1964). The importance of negative results in psychological research. Canadian Psychologist 5:225-232.
Smith R, Roberts I (1997). An amnesty for unpublished trials. BMJ 315:622.
Song F, Parekh-Bhurke S, Hooper L, Loke YK, Ryder JJ, Sutton AJ, Hing CB, Harvey I (2009). Extent of publication bias in different categories of research cohorts: a meta-analysis of empirical studies. BMC Med Res Methodol 9:79.
Song F, Parekh S, Hooper L, Loke YK, Ryder JJ, Sutton AJ, Hing CB, Kwok CS, Pang C, Harvey I (2010). Dissemination and publication of research findings: an updated review of related biases. Health Technology Assessment 14(8).
SPIRIT Initiative. http://www.equator-network.org/library/reporting-guidelines-under-development/ (Accessed February 15, 2010).
Sterling TD (1959). Publication decisions and their possible effects on inferences drawn from tests of significance - or vice versa. Journal of the American Statistical Association 54:30-34.
Stern JM, Simes RJ (1997). Publication bias: evidence of delayed publication in a cohort study of clinical research projects. BMJ 315:640-645.
Sterne J, Egger M, Moher D on behalf of the Cochrane Bias Methods Group, eds (2008). Chapter 10. Addressing reporting biases. In: Higgins JPT, Green S, eds. Cochrane Handbook for Systematic Reviews of Interventions. Version 5.0.0 [updated February 2008]. The Cochrane Collaboration, 2008. Available from www.cochrane-handbook.org.
Tramèr M, Reynolds DJ, Moore RA, McQuay HJ (1997). Impact of covert duplicate publication on meta-analysis: A case study. BMJ 315:635-640.
Turner EH, Matthews AM, Linardatos E, Tell RA, Rosenthal R (2008). Selective publication of antidepressant trials and its influence on apparent efficacy. N Engl J Med 358:252-260.
Vandenbroucke JP (1988). Passive smoking and lung cancer: a publication bias? BMJ 296:391-392.
Vedula S, Bero L, Scherer RW, Dickersin K (2009). Outcome reporting in industry-sponsored trials of gabapentin for off-label use. N Engl J Med 361:1963-1971.
Von Elm E, Poglia G, Walder B, Tramèr MR (2004). Different patterns of duplicate publication. An analysis of articles used in systematic reviews. JAMA 291:974-980.
Wager E, Field EA, Grossman L (2003). Good publication practice for pharmaceutical companies. Current Medical Research and Opinion 19:149-54.
Whittington CJ, Kendall T, Fonagy P, Cottrell D, Cotgrove A, Boddington E (2004). Selective serotonin reuptake inhibitors in childhood depression: systematic review of published versus unpublished data. Lancet 363:1341-1345.
Withering W (1785). An account of the foxglove and some of its medical uses: with practical remarks on dropsy and other diseases. London: J and J Robinson.
World Medical Association (2008). Declaration of Helsinki: ethical principles for medical research involving human subjects.