1.4 Trust based on the source of a claim alone can be misleading

Cite as: Oxman AD, Chalmers I, Dahlgren A (2022). Key Concepts for Informed Health Choices: 1.4 Trust based on the source of a claim alone can be misleading. James Lind Library (www.jameslindlibrary.org).

© Andy Oxman, Centre for Epidemic Interventions Research, Norwegian Institute of Public Health, Norway. Email: oxman@online.no

This is the fourth essay in this series explaining key concepts that can help you avoid being misled by claims that have an untrustworthy basis. The next four essays in the series will explain concepts that can help you recognise when evidence from comparisons (tests) of treatments is trustworthy and when it is not. In this essay, we will explain five assumptions that can make trust based on the source of a claim alone misleading. Such trust can mislead as a result of assuming that:

  • personal experiences alone are sufficient,
  • your beliefs are correct,
  • opinions alone are sufficient,
  • peer review and publication are sufficient, or
  • there are no competing interests.

The basis for these concepts is described elsewhere [Oxman 2022].

Do not assume that personal experiences alone are sufficient.

People can be led to believe that improvements in a health problem (for example, recovery from a disease) resulted from having received a treatment. Similarly, they might believe that an undesirable health outcome was due to having received a treatment. However, the fact that an individual recovered after receiving a treatment does not mean that the treatment caused the recovery, or that other people receiving the same treatment will also improve. The improvement (or the undesirable health outcome) might have occurred even without treatment.

One reason is that personal experiences – including a series of personal experiences – are sometimes misleading. This is because experiences, such as pain, fluctuate and tend to return to a more normal or average level. This is sometimes referred to as “regression to the mean”. For example, people often treat symptoms such as pain when they are at their worst, when the symptoms would have improved anyway without treatment. The same applies to a series of experiences. For example, if there is a spike in the number of traffic crashes at a particular location, traffic lights may be installed to reduce them. A subsequent reduction may give the impression that the traffic lights caused the change. However, it is possible that the number of crashes would have returned to a more normal level without the traffic lights.
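Regression to the mean can be illustrated with a minimal simulation. The sketch below assumes pain scores that fluctuate randomly around a stable average with no treatment effect at all; the 0–10 scale, the average of 5, and the “bad day” threshold of 8 are arbitrary choices for illustration, not data from any study:

```python
import random

random.seed(1)

# Daily pain scores (0-10 scale) fluctuating randomly around a stable
# average of 5: by construction, no treatment has any effect here.
days = [random.gauss(5, 2) for _ in range(10_000)]

# People tend to seek treatment on their worst days (score above 8).
# Pair each such day with the following day, on which nothing was done.
bad_days = [(today, days[i + 1])
            for i, today in enumerate(days[:-1]) if today > 8]

avg_bad = sum(t for t, _ in bad_days) / len(bad_days)
avg_next = sum(n for _, n in bad_days) / len(bad_days)

print(f"average pain on 'treatment' days: {avg_bad:.1f}")
print(f"average pain the next day:        {avg_next:.1f}")
# The next-day average falls back toward the overall mean even though
# nothing was done: an apparent "treatment effect" from selection alone.
```

Because days are selected precisely when pain is unusually bad, the following day is, on average, closer to normal without any intervention, which is why treating symptoms at their worst so easily creates a false impression of benefit.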

If you have a splinter that is causing pain and the pain goes away right after you pull out the splinter, you can be confident that pulling out the splinter (the treatment) caused the outcome (no more pain). This is because the pain was constant before the treatment, the outcome happened right after the treatment, and without the treatment the pain would very likely have continued [Glasziou 2007]. However, few conditions are constant (unchanging without treatment) and respond quickly to treatment. For example, it is impossible to know from personal experience alone whether having or not having a stroke or cancer when you are 70 was the result of your diet when you were younger.

Unless an outcome rarely, if ever, occurs without treatment, it is not possible to know based on personal experience whether the treatment caused the outcome, even if the outcome occurs shortly after the treatment. For example, tension-type headaches are very common. In adults who have frequent headaches, about 5%, 20%, and 44% are likely to be pain-free within one, two, and four hours respectively without taking paracetamol (acetaminophen) [Stephens 2016]. So, if an individual with frequent tension-type headaches took paracetamol and the headache went away, it would not be possible for that individual to know whether it was because of the medicine or if it would have gone away just as quickly without the medicine.

Do not assume that your beliefs are correct.

People often look for and use information to support their own beliefs, including beliefs about the effects of treatments. This is sometimes called ‘confirmation bias’. Confirmation bias can occur when people want a claim about treatment effects to be true. By focussing on evidence or arguments that support their existing beliefs and ignoring evidence or arguments that challenge them, people accept claims that confirm what they already believe or want to be true without thinking critically about the basis for those claims.

When looking for health information, many people search the Internet. However, the information they select, and their perception of that information, may be biased by their prior beliefs. For example, parents of young children are more likely to select information about vaccination that is consistent with their prior beliefs than information that is inconsistent with them, and they perceive information that is consistent with their prior beliefs as more credible, useful, and convincing [Meppelink 2019].

Do not assume that opinions alone are sufficient.

People – including doctors, researchers, and patients – often disagree about the effects of treatments. This may be because their opinions are not always based on systematic reviews of fair comparisons of treatments. Who makes a treatment claim, how likable they are, or how much experience and expertise they have does not provide a reliable basis for assessing the claim itself. This does not mean that conflicting opinions should be given equal weight – or that the existence of conflicting opinions means that no conclusion can be reached. How much weight to give an opinion should depend on the strength of the evidence supporting it.

Experts, just like everyone else, do not always base what they say on systematic reviews. For example, experts did not begin to recommend aspirin after a heart attack until years after there was strong evidence supporting its use [Antman 1992]. Conversely, experts continued to recommend medicines to reduce heart rhythm abnormalities years after there was strong evidence that they increased the risk of early death after a heart attack.

Do not assume that peer review and publication are sufficient.

Even though a comparison of treatments – whether in a single study or in a review of similar studies – has been published in a prestigious journal, it may not be a fair comparison and the results may not be reliable. Peer review (assessment of a study by others working in the same field) does not guarantee that published studies are reliable. Assessments vary and may not be systematic. Similarly, just because a study is widely publicised does not mean that it is trustworthy.

Sometimes, research that has been peer reviewed and published is so untrustworthy that it is retracted. About half of all retractions involve misconduct, including fabrication or falsification [Brainard 2018, Budd 2011]. Perhaps the best-known example of a widely publicised paper that was subsequently retracted is a small study published in The Lancet which suggested that measles, mumps and rubella vaccination might cause autism [Flaherty 2011]. Publication of that paper contributed to vaccine scepticism and led to a fall in vaccination rates among children, outbreaks of measles, serious illness, and at least four deaths that could have been prevented.

Although only a small proportion of published papers are retracted, many more are corrected or refuted by more reliable research [Oransky 2021]. Journals rely on peer review to ensure the quality of the research they publish. However, peer review is highly variable, inconsistent, and flawed [Smith 2006, Smith 2010]. For the most part it is done by volunteers. Few peer reviewers have formal training, and they commonly fail to detect major errors. For example, the British Medical Journal (BMJ) sent three papers, each with nine major methodological errors inserted, to about 600 peer reviewers [Schroter 2008]. On average, the peer reviewers detected about one third of the errors in each paper. Half of the peer reviewers were given brief training, which only slightly improved error detection.

Do not assume that there are no competing interests.

People with an interest in promoting a treatment (in addition to wanting to help people) – for example, to make money – may promote treatments by exaggerating benefits, ignoring potential harmful effects, cherry picking which information is used, or making false claims. Conversely, people may be opposed to a treatment for a range of reasons, such as cultural practices.

Tamiflu (oseltamivir) is an example of how financial conflicts of interest can result in misleading claims about the effects of a treatment [Doshi 2012, Loder 2014]. Tamiflu was approved for seasonal influenza by the U.S. Food and Drug Administration in 1999. Several randomized trials and systematic reviews emphasised the benefits and safety of Tamiflu. Most of them were funded by Roche, which also marketed and promoted Tamiflu. In 2005 and 2009, the fear of pandemic flu led to recommendations to stockpile Tamiflu and billions of dollars were spent on this. After battling with the company for over four years, a team of review authors finally accessed the complete data held by the company. After carefully reviewing all the documents, they found no compelling evidence to support claims that oseltamivir reduces the risk of complications of influenza, such as pneumonia and hospital admission, claims that had been used to justify international stockpiling of the drug [Jefferson 2014]. Tamiflu was found to slightly reduce the time to alleviation of flu symptoms in adults and to slightly reduce the risk of flu symptoms in people exposed to the flu. It was also found to have adverse effects that potentially outweighed the benefits. As a result of biased reporting of the research and misinformed recommendations and decisions, billions of dollars were wasted.

Implications

  • If an individual improved after receiving a treatment, this does not necessarily mean that the treatment caused the improvement, or that other people receiving the same treatment will also improve.
  • Do not be misled by your own beliefs or rely on them unless they are based on the results of systematic reviews of fair comparisons of treatments.
  • Do not rely on the opinions of experts or other authorities about the effects of treatments unless they have taken account of the results of systematic reviews of fair comparisons of treatments.
  • Always consider whether a published comparison of the effects of treatments is fair and whether the results are reliable. Peer review is a poor indicator of reliability.
  • Ask whether people claiming that a treatment is effective have competing interests. If they do, be careful not to be misled by their claims about the effects of treatments.


This James Lind Library Essay has been republished in the Journal of the Royal Society of Medicine 2022;115:479-481.

References

Antman EM, Lau J, Kupelnick B, Mosteller F, Chalmers TC. A comparison of results of meta-analyses of randomized control trials and recommendations of clinical experts. Treatments for myocardial infarction. JAMA. 1992;268(2):240-8. https://doi.org/10.1001/jama.1992.03490020088036

Brainard J, You J. What a massive database of retracted papers reveals about science publishing’s ‘death penalty’. Science. 2018;25(1):1-5. https://www.science.org/content/article/what-massive-database-retracted-papers-reveals-about-science-publishing-s-death-penalty

Budd JM, Coble ZC, Anderson KM. Retracted publications in biomedicine: cause for concern. Association of College & Research Libraries Conference Program; Philadelphia, 2011. https://www.ala.org/acrl/files/conferences/confsandpreconfs/national/2011/papers/retracted_publicatio.pdf

Doshi P, Jefferson T, Del Mar C. The imperative to share clinical study reports: recommendations from the Tamiflu experience. PLoS Med. 2012;9(4):e1001201. https://doi.org/10.1371/journal.pmed.1001201

Flaherty DK. The vaccine-autism connection: a public health crisis caused by unethical medical practices and fraudulent science. Ann Pharmacother. 2011;45(10):1302-4. https://doi.org/10.1345/aph.1q318

Glasziou P, Chalmers I, Rawlins M, McCulloch P. When are randomised trials unnecessary? Picking signal from noise. BMJ. 2007;334(7589):349-51. https://doi.org/10.1136/bmj.39070.527986.68

Jefferson T, Jones MA, Doshi P, Del Mar CB, Hama R, Thompson MJ, et al. Neuraminidase inhibitors for preventing and treating influenza in adults and children. Cochrane Database Syst Rev. 2014;2014(4):Cd008965. https://doi.org/10.1002/14651858.cd008965.pub4

Loder E, Tovey D, Godlee F. The Tamiflu trials. BMJ. 2014;348:g2630. https://doi.org/10.1136/bmj.g2630

Meppelink CS, Smit EG, Fransen ML, Diviani N. “I was right about vaccination”: confirmation bias and health literacy in online health information seeking. J Health Commun. 2019;24(2):129-40. https://doi.org/10.1080/10810730.2019.1583701

Oransky I, Fremes SE, Kurlansky P, Gaudino M. Retractions in medicine: the tip of the iceberg. Eur Heart J. 2021;42(41):4205-6. https://doi.org/10.1093/eurheartj/ehab398

Oxman AD, Chalmers I, Dahlgren A, Informed Health Choices Group. Key Concepts for Informed Health Choices: a framework for enabling people to think critically about health claims (Version 2022). IHC Working Paper. 2022. http://doi.org/10.5281/zenodo.6611932

Schroter S, Black N, Evans S, Godlee F, Osorio L, Smith R. What errors do peer reviewers detect, and does training improve their ability to detect them? J R Soc Med. 2008;101(10):507-14. https://doi.org/10.1258/jrsm.2008.080062

Smith R. Peer review: a flawed process at the heart of science and journals. J R Soc Med. 2006;99(4):178-82. https://doi.org/10.1258/jrsm.99.4.178

Smith R. Classical peer review: an empty gun. Breast Cancer Res. 2010;12 Suppl 4(Suppl 4):S13. https://doi.org/10.1186/bcr2742

Stephens G, Derry S, Moore RA. Paracetamol (acetaminophen) for acute treatment of episodic tension-type headache in adults. Cochrane Database Syst Rev. 2016;2016(6):Cd011889. https://doi.org/10.1002/14651858.cd011889.pub2