Mann H, Djulbegovic B (2012). Comparator bias: why comparisons must address genuine uncertainties.

© Howard Mann, Department of Radiology, 1A71 University Hospital, 50 North Medical Drive, Salt Lake City, UT 84132, USA.

Cite as: Mann H, Djulbegovic B (2012). Comparator bias: why comparisons must address genuine uncertainties. JLL Bulletin: Commentaries on the history of treatment evaluation.

Controlled trials are for reducing uncertainties about the relative merits of different treatments

Researchers may believe – and patients and physicians may hope – that a particular treatment (perhaps because it is new) is better than other available treatments; but it may often turn out to be worse (Silverman 2003; Djulbegovic et al. 2012). When the British Medical Research Council’s controlled trial of streptomycin for pulmonary tuberculosis was conceived in 1946 (MRC 1948), none of the therapies used to treat the disease had been shown in controlled clinical trials to be useful; indeed, one controlled trial had shown gold salt therapy to do more harm than good (Amberson et al. 1931). Although streptomycin was known to be useful in forms of tuberculosis which had previously always been fatal (MRC 1948), there was uncertainty about how useful the new drug would be in pulmonary tuberculosis, from which patients often recovered after treatment with bed rest alone. Patients in the MRC trial were accordingly randomized either to bed rest alone, or to bed rest and streptomycin.

The same reasoning is applicable when controlled trials are designed today. After considering systematic reviews of the relevant existing evidence, patients and their doctors must be substantially uncertain about which among the treatment options – including no active treatment – is preferable. This implies ensuring that no patient who agrees to participate in the trial will knowingly be disadvantaged, whichever one of the comparison treatments the patient is assigned to receive.

Clinical trials are done to reduce uncertainties, and they should only be done if clinicians and their patients are uncertain which of the existing alternatives is preferable (Hill 1963; Djulbegovic et al. 2000; Djulbegovic 2001; Djulbegovic et al. 2011). This requirement is sometimes referred to as “the uncertainty principle” (Peto and Baigent 1998) or “equipoise” (Freedman 1987; Edwards et al. 1998).

If one or more of the treatments selected for the comparison in a trial is known to be worse than others, not only will some participants in the trial be denied effective treatment, but this ‘comparator bias’ will result in unfair tests of treatments. Even if other sources of bias have been well controlled in such studies, their results will mislead patients and their doctors. Unfortunately, comparator bias is sometimes deliberately introduced for just this purpose, usually with a view to showing that new treatments are preferable to existing alternatives (Sackett and Oxman 2003; Montori et al. 2004).

Inappropriate use of inactive comparators

Comparator bias is introduced when treatments known to be beneficial are withheld from patients participating in controlled trials. The reason that bed rest alone was an acceptable treatment for half the patients in the MRC trial of streptomycin for pulmonary tuberculosis was that there was no known effective treatment for the condition. When systematic reviews of existing evidence show that existing treatments are more helpful than doing nothing, or than using placebos, comparator bias will result if patients are denied these effective treatments, thus giving the active treatments in the trial an unfair advantage.

Although users of clinical research evidence are usually interested in the relative merits and disadvantages of active alternative treatments (Garattini and Chalmers 2009; Sox 2010), available comparative efficacy data were used in only half of 100 applications for marketing licences for new molecular entities approved by the US Food and Drug Administration between 2000 and 2010 (Goldberg et al. 2011).

Estellat and Ravaud (2012) have shown that placebos were used as comparators in four out of five (81/102) trials of biologic disease-modifying drugs for rheumatoid arthritis done during the past decade. In 54 (86%) of 63 trials involving patients with a high level of active disease, placebos (or treatments known to have been ineffective) were used, with the result that potentially helpful treatments were being withheld from 9,224 out of 13,095 patients randomized to the control arms.

Even though the efficacy of erythropoietin in preventing anemia in cancer patients had been convincingly demonstrated, some researchers continued to assign participants in clinical trials to placebos instead of testing the drug’s effects on other outcomes (Clark et al. 2002). Uncertainty about the effect of the drug on survival continues more than 20 years after the drug was approved for clinical use in 1989.

Using inappropriate ‘active’ comparators

Predictable results favouring new treatments can be obtained when inappropriate ‘active’ comparators are used. For example, Psaty et al. (2006) noted that three out of four large industry-sponsored trials evaluating newer antihypertensive drugs used the beta-blocker atenolol as the comparator, even though this drug had been shown to be inferior to a low-dose thiazide diuretic.

Comparator bias can also result when a treatment is compared with an inappropriately low dose of a comparator intervention. This occurred in comparisons of newer non-steroidal anti-inflammatory agents used for arthritis with older drugs in the same class (Rochon et al. 1994). Inappropriately low doses can also result when treatments are given by an inappropriate route, for example, by comparing intravenous administration of one drug with oral administration of another that is poorly absorbed from the gastro-intestinal tract (Johansen and Gøtzsche 1999).

The net usefulness of treatments often requires trade-offs between wanted and unwanted effects. Treatments may be preferable if, although their beneficial effects are no better than alternatives, they have fewer adverse effects. Some of the newer drugs for treating schizophrenia, for example, may be preferable to established drugs for this reason. However, this apparent advantage may be because the newer agents have been compared with inappropriately high doses of the older comparator drugs. Safer (2002) reported eight trials sponsored by three different drug companies which compared newer second-generation neuroleptic agents to a fixed high dose (20 mg/day) of haloperidol. Predictably, patients receiving the new agents had fewer extrapyramidal side effects.

Rheumatological research provides a further example of the use of inappropriate active comparators. In the MEDAL (Multinational Etoricoxib and Diclofenac Arthritis Long-term) trial, in which 24,913 patients with osteoarthritis and 9,787 patients with rheumatoid arthritis were randomly assigned to receive the COX-2 inhibitor etoricoxib or the COX-1 inhibitor diclofenac (Cannon et al. 2006), no difference was detected in the frequency of bleeding or adverse cardiovascular events. However, Psaty and Weiss (2007) have noted that the results were predictable because diclofenac is known to have a toxicity profile similar to that of the COX-2 inhibitor celecoxib. They suggested that naproxen would have been an appropriate comparator because it was known to be associated with a lower risk of cardiovascular events: a meta-analysis of 121 placebo-controlled trials of COX-2 inhibitors yielded a relative risk of vascular events of 0.92 (95% CI 0.81-1.05) for COX-2s when diclofenac was used as the comparator, compared with 1.57 (95% CI 1.21-2.03) when naproxen was used as the comparator (Kearney et al. 2006).

How can comparator bias be reduced?

Reducing some forms of bias is straightforward: allocation bias, for example, is controlled by strict random allocation of patients to treatment comparison groups. Comparator bias cannot be dealt with so straightforwardly. In fact, a precise mathematical solution to the choice of an appropriate comparator is theoretically impossible (Djulbegovic 2007).

However, comparator bias would be less of a problem if the choice of comparison groups in controlled trials were routinely and transparently informed by systematic reviews of relevant existing evidence. A 2005 survey of authors of clinical trials which had recently been added to systematic reviews revealed that less than half were even aware of the relevant reviews when they designed their new studies (Cooper et al. 2005); and a 2011 analysis of clinical trials reported over four decades showed that, regardless of the number of relevant previous trials, fewer than a quarter of them – a median of only two – had been cited in trial reports (Robinson and Goodman 2011). When, in the light of the existing evidence and other considerations, patients and doctors are uncertain which among treatment options is preferable (Djulbegovic 2001; Djulbegovic et al. 2011), the preconditions for avoiding comparator bias exist (Mann and Djulbegovic 2003).

The choice of comparators in clinical trials inevitably involves judgements and values that go beyond scientific considerations. It is not surprising, therefore, that researchers, sponsors, patients and government regulators may have different views on the selection of comparators (Rothman et al. 2000; FDA 2001; Estellat and Ravaud 2012; Pearson 2012). Some authors believe that, as long as drugs are listed in national pharmacotherapy reference books, comparison against such treatments may be justified even if it is not supported by evidence-based clinical guidelines published in the literature (van Luijn et al. 2008). Avoidance of inappropriate use of inactive and active comparators would seem most likely to result from greater involvement of the patients and clinicians for whom research should be producing relevant knowledge (Evans et al. 2011), working with those who prioritise, fund and design clinical research, and the entities that approve the marketing of new interventions.


We thank Drs. Adams, Power, Paul, Alexander, Price and Singh for providing some of the examples of comparator bias cited in this article.

This James Lind Library commentary has been republished in the Journal of the Royal Society of Medicine 2013;106:30-33.


Cannon CP, Curtis SP, FitzGerald GA, Krum H, Kaur A, Bolognese JA, Reicin AS, Bombardier C, Weinblatt ME, van der Heijde D, Erdmann E, Laine L (2006) Cardiovascular outcomes with etoricoxib and diclofenac in patients with osteoarthritis and rheumatoid arthritis in the Multinational Etoricoxib and Diclofenac Arthritis Long-term (MEDAL) programme: a randomised comparison. Lancet 368:1771-81.

Clark O, Adams JR, Bennett CL, Djulbegovic B (2002). Erythropoietin, uncertainty principle and cancer related anaemia. BMC Cancer 2(2):23.

Cooper N, Jones D, Sutton A (2005). The use of systematic reviews when designing studies. Clinical Trials 2:260-264.

Djulbegovic B (2001). Acknowledgment about uncertainty: a fundamental means to ensure scientific and ethical validity in research. Current Oncology Reports 3:389-395.

Djulbegovic B, Lacevic M, Cantor A, Fields KK, Bennett CL, Adams JR, Kuderer NM, Lyman GH (2000). The uncertainty principle and industry-sponsored research. Lancet 356:635-638.

Djulbegovic B (2007). Articulating and responding to uncertainties in clinical research. J Med Philos 32:79-98.

Djulbegovic B, Hozo I, Greenland S (2011). Uncertainty in clinical Medicine. In: Gifford F. (ed.) Philosophy of Medicine (Handbook of the Philosophy of Science). London: Elsevier.

Djulbegovic B, Kumar A, Glasziou PP, Perera R, Reljic T, Dent L, Raftery J, Johansen M, Di Tanna GL, Miladinovic M, Soares HP, Vist GE, Chalmers I (2012). New treatments compared to established treatments in randomized trials. Cochrane Database of Systematic Reviews (in press).

Edwards SJL, Lilford RJ, Braunholtz DA, Jackson JC, Hewison J, Thornton J (1998). Ethical issues in the design and conduct of randomized controlled trials. Health Technol Assessment 2:1-130.

Estellat C, Ravaud P (2012). Lack of head-to-head trials and fair control arms: randomized controlled trials of biologic treatment for rheumatoid arthritis. Arch Intern Med 172:237-44.

Evans I, Thornton H, Chalmers I, Glasziou P (2011). Testing treatments. London: Pinter and Martin.

Garattini S, Chalmers I (2009). Patients and the public deserve big changes in evaluation of drugs. BMJ 338:804-806.

Goldberg NH, Schneeweiss S, Kowal MK, Gagne JJ (2011). Availability of comparative efficacy data at the time of drug approval in the United States. JAMA 305:1786-9.

FDA (2001). Guidance for Industry: E10 Choice of Control Group and Related Issues in Clinical Trials. ICH. Available at: http://www.fda.gov/downloads/Drugs/GuidanceComplianceRegulatoryInformation/Guidances/UCM073139.pdf (accessed July 12, 2012).

Freedman B (1987). Equipoise and the ethics of clinical research. N Engl J Med 317:141-5.

Hill AB (1963). Medical ethics and controlled trials. BMJ 1:1043-1049.

Hunter R, Kennedy E, Song F, Gadon L, Irving CB. Risperidone versus typical antipsychotic medication for schizophrenia. Cochrane Database of Systematic Reviews 2003, Issue 2. Art. No.: CD000440. DOI: 10.1002/14651858.CD000440.

Johansen HK, Gøtzsche PC (1999). Problems in the design and reporting of trials of antifungal agents encountered during meta-analysis. JAMA 282:1752-9.

Kearney PM, Baigent C, Godwin J, Halls H, Emberson JR, Patrono C (2006). Do selective cyclo-oxygenase-2 inhibitors and traditional non-steroidal anti-inflammatory drugs increase the risk of atherothrombosis? Meta-analysis of randomised trials. BMJ 332:1302-8.

Mann H, Djulbegovic B (2003). Choosing a control intervention for a randomised clinical trial. BMC Med Res Methodol 3:7.

Montori VM, Jaeschke R, Schunemann HJ, Bhandari M, Brozek JL, Devereaux PJ, Guyatt GH (2004). Users’ guide to detecting misleading claims in clinical research reports. BMJ. 329:1093-6.

Pearson SD (2012). Placebo-controlled trials, ethics, and the goals of comparative effectiveness research: comment on “lack of head-to-head trials and fair control arms”. Arch Intern Med 172:244-5.

Peto R, Baigent C (1998). Trials: the next 50 years. BMJ 317:1170-1171.

Psaty BM, Weiss NS, Furberg CD (2006). Recent trials in hypertension: compelling science or commercial speech? JAMA 295:1704-06.

Psaty BM, Weiss NS (2007). NSAID trials and the choice of comparators – questions of public health importance. N Engl J Med 356:328-30.

Robinson KA, Goodman SN (2011). A systematic examination of the citation of prior research in reports of randomized, controlled trials. Ann Intern Med 154:50-55.

Rochon PA, Gurwitz JH, Simms RW, et al (1994). A study of manufacturer-supported trials of nonsteroidal anti-inflammatory drugs in the treatment of arthritis. Arch Intern Med 154:157-63.

Rothman KJ, Michels KB, Baum M (2000). Declaration of Helsinki should be strengthened. For and against. BMJ 321:442-5.

Sackett DL, Oxman AD (2003). HARLOT plc: an amalgamation of the world’s two oldest professions. BMJ 327:1442-1445.

Safer DJ (2002). Design and reporting modifications in industry-sponsored comparative psychopharmacology trials. J Nerv Ment Dis 190:583-92.

Silverman WA (2003). Personal reflections on lessons learned from randomized trials involving newborn infants, 1951 to 1967. JLL Bulletin: Commentaries on the history of treatment evaluation.

Sox HC (2010). Defining comparative effectiveness research: the importance of getting it right. Med Care 48 (6 Suppl):S7-8.

van Luijn JCF, van Loenen AC, Gribnau FWJ, Leufkens HGM (2008). Choice of comparator in active control trials of new drugs. Annals of Pharmacotherapy 42:1605-1612.