Haynes RB (2016). Improving reports of research by more informative abstracts: a personal reflection

© Brian Haynes, Department of Clinical Epidemiology & Biostatistics, McMaster University Medical Centre, 1280 Main Street West, Hamilton, Ontario L8S 4K1, Canada. Email: bhaynes@McMaster.CA


Cite as: Haynes RB (2016). Improving reports of research by more informative abstracts: a personal reflection. JLL Bulletin: Commentaries on the history of treatment evaluation (https://www.jameslindlibrary.org/articles/improving-reports-of-research-by-more-informative-abstracts-a-personal-reflection/)


Becoming sceptical

I have to credit Sigmund Freud for the impetus that led to the proposal for more informative abstracts. I went to medical school from 1967 to 1971, in the era when ‘spoon-feeding’ students the knowledge of the day was still in full force, and problem-based learning (learning by inquiry) had not yet been ‘invented’. In my second year of medical school, a psychiatrist lectured us on Freud’s theories. I was incredulous. When he asked for questions at the end of the lecture, I rose to ask whether there was any evidence to support Freud’s theories. The lecturer suddenly looked a bit sheepish and said that he did not think that there was any such evidence. He went on to explain that he had been asked to give the lecture by the chair of his department, who was a Freudian. This was an epiphany for me: I wondered how much of my medical education was not based on evidence.

To assuage my fears about the dearth of scientific underpinnings of medical practice, I tried asking the “what’s the evidence for that?” question in further encounters with my medical teachers. At the University of Alberta, my alma mater, most teachers responded with biological/physiological explanations for their interventions; when asked about trials of efficacy, they humbly indicated they didn’t know of any. I thought I might be able to get better answers at the ‘flagship’ of Canada’s medical school fleet and took my internship at the Toronto General Hospital (TGH). Contrary to my expectations, a frequent response to my evidence questions was anger that I was challenging the staff’s expertise and authority. I turned to trying to comprehend the medical literature myself, but was painfully aware that I had insufficient scientific training to judge the quality of the evidence there either. I decided to seek training in research methods, not to become a researcher so much as to be able to understand what evidence was available for medical practice. I signed up for a new program at the University of Toronto, the Diploma in Epidemiology and Community Health, intended to be attractive to clinicians who wanted to understand epidemiology. I thought I could use this as a base for finding, or even creating, good or better evidence for clinical care.

Dave Sackett to the rescue

Fortunately, fate interceded. Jack Laidlaw, a distinguished endocrinologist at TGH, and an ‘evidence hound’ himself, invited David Sackett to talk at U of T. I was on the endocrine service at the time, where a notice of the talk was posted. The topic, “Is Health Care Researchable?”, was exactly what I wanted to know. I was the only member of the house staff to attend (although the talk was well attended by the epidemiology faculty and students). I had never heard of Sackett until then, nor of the department he had recently founded, Clinical Epidemiology and Biostatistics, at the upstart Faculty of Health Sciences at McMaster University, which based its educational program on learning by inquiry.

Sackett’s presentation was riveting for me – I wasn’t alone in the world! I asked to meet with him in Hamilton, just down the road from Toronto, applied for the new graduate program in Design, Measurement, and Evaluation (now Health Research Methodology), and jumped ship from U of T’s epidemiology diploma. This evoked an angry letter from Harding LeRiche, its director, opining how short-sighted and unfortunate my defection was. I never heard from, or of, their program again.

I completed my master’s degree at McMaster working with Dave Sackett. A project I had designed led to a Canadian Medical Research Council grant for two randomized trials of interventions to help patients take antihypertensive medications as prescribed, so I stayed on for a PhD. Dave had suggested the topic for my master’s thesis, and it certainly fitted with my interests in health care being both evidence-based and effective: what would be the point of having evidence of benefits from medications, as is the case for antihypertensive therapy, if doctors didn’t prescribe them or patients didn’t follow their prescriptions? We learned a lot about both problems in these trials (Sackett et al. 1975; Haynes et al. 1976; Haynes et al. 1978).

Following the PhD program, I went back to TGH to continue my clinical training in internal medicine. Now I had some knowledge of research methods. This didn’t endear me to the attending (consultant) staff there, but we had some interesting discussions. For example, patients requiring a renal biopsy could have a nephrologist do a percutaneous needle biopsy, which was minimally traumatic but sometimes missed the mark, or could have a urologist do an open biopsy, more traumatic but somewhat more likely to be definitive. Patients did not have a choice, though, as closed biopsies were done one month and open biopsies the next. This seemed to me to be a perfect opportunity to do a trial to assess the relative merits of the two approaches. This notion, however, was not well received by the attending staff. Their reasons were not immediately clear to me, but I soon realized that the arrangement was simply a matter of ‘turf’, not science. Nephrologists and urologists were in competition for patients in the Canadian fee-for-service health care system, and had come to a ‘gentlemen’s agreement’ to split the proceeds from biopsies by providing one or the other on alternate months.

McMaster and helping people to learn how to read clinical articles

I joined the faculty at McMaster in 1977, jointly in the departments of clinical epidemiology and medicine. Dave Sackett was busy brewing plans for teaching clinicians how to read the medical literature, and quickly enlisted Peter Tugwell and me to the cause. We began teaching post-graduate doctors (residents or house staff) how to critically appraise articles on clinical disagreement, diagnosis, prognosis, prevention and treatment, and etiology. The residents’ new knowledge was perceived by the clinical faculty as interesting but somewhat threatening, so the faculty asked for training sessions of their own. To provide a curriculum and spread the word, we began publishing a series of articles in the Canadian Medical Association Journal, beginning in 1980, on how to read a clinical article. The series proved very popular, creating demand for talks and courses in many places around the globe, so we created an annual workshop to ‘teach the teachers’. More articles and handbooks ensued, and we were joined by Gordon Guyatt and Deborah Cook.

This all seemed to be catching fire, but I was worried that it would fizzle out and not amount to much more than an academic exercise: even if we taught many individuals and multiplied our efforts by teaching teachers, it was simply too much work for doctors in busy practice settings (including ourselves…) to retrieve and critically appraise studies on paper, find the very few sound and relevant articles for specific practice questions, and extract evidence-based practice changes from them. Many clinicians were also in settings where they had no or very limited access to full-text journal articles. A catalyst was needed.

The arrival of electronic literature searches

At this time, in the 1980s, MEDLINE had become searchable on compact disc (e.g., SilverPlatter) and then online, via front-end software such as Grateful Med (the National Library of Medicine’s Director was Donald Lindberg, a Grateful Dead fan) and, later, PubMed, and it included the abstracts of most research articles. The abstracts of the time were mostly summaries and conclusions of about 150 words, with few details of methods, and they often contained conclusions that strayed beyond the evidence included in the report. I thought that, potentially, the abstract could become the most important part of the paper, providing: open access to readers via MEDLINE and the emerging Internet (whether they had journal subscriptions or not); key details concerning the patients, setting, and methods of the study for critical appraisal; salient results for the primary outcome measures; and only conclusions directly supported by the evidence. My critical appraisal colleagues thought this would be a good idea, but wondered whether journal editors would buy in, especially as readers could then get such information free, without having to pay for a journal subscription.

Ed Huth and the Annals of Internal Medicine

Fate came to the rescue again. I attended the annual Sydenham Society dinner, a fringe assembly of clinical epidemiologists at the Tri-Council ‘clinical research’ meeting (it was mostly basic science) in Atlantic City (an intriguing venue with a long beachfront boardwalk, birch bark beer, and a diving horse platform over the ocean, but I digress). At the dinner, I was most fortunate to be seated beside Ed Huth, editor of Annals of Internal Medicine and one of the most respected and influential medical journal editors of the era. I raised the subject of more informative abstracts and was delighted to find that Dr Huth had an abiding interest in concise communication and improving the quality of abstracts. He alerted me to a 1969 paper by Ertl calling for extensive, highly structured abstracts in the form of several tables for medical research articles, but this seemed a more extensive change than was feasible or needed, and it undermined the imperative we felt for a clear, sharp and succinct focus on question, key methods, results and conclusions. He encouraged me to write a proposal, circulate it to colleagues for input and endorsement, and send it to him for consideration for publication. I did so, our first proposal being to write structured abstracts for articles in Annals that met the key criteria for critical appraisal we had been teaching. These abstracts would be written independently of the authors, would appear at the top of the article, and would make it easy for journal readers to identify the very few articles in each journal issue that merited clinical attention.

I presented this proposal to the Annals editorial board, where it was met with about equal measures of excitement and alarm. The board appeared to me to be split along age lines, with the younger members for the proposal and the older ones rallying against it for fear that no author would submit articles to Annals with the prospect of being either ‘dissected’ if judged adequate, or shamed if not deemed worthy of having an abstract prepared. As Ed Huth later described their decision (Huth 1987):

“Our Editorial Board and the editors discussed Dr. Haynes’ proposal at length. Various considerations, some tactical, some practical, led to our deciding not to put that proposal into effect. But Annals’ editors agreed fully on the importance of the basic intent of the Haynes’ proposal: making explicit the elements in papers critical for judging their validity and importance. Was there some other way to serve this intent?”

Back to the drawing board, we thought that we could (reluctantly!) work around the concern about naming the good articles and ignoring the others if journals required all authors of original clinical or health research articles (systematic reviews did not exist at the time) to prepare abstracts according to a set of instructions covering the details of question, methods, results, and conclusions. I circulated drafts of the proposal broadly for comments, advice and endorsements from what then became the Ad Hoc Working Group for Critical Appraisal of the Medical Literature. In all, 358 people from 18 countries contributed to and signed the proposal (in the pre-internet era!), which was subsequently published in Annals of Internal Medicine (Ad Hoc Working Group 1987), with an update in 1990 (Haynes et al. 1990).

The proposal was implemented by Annals, soon joined by JAMA and BMJ. Ed Huth and Stephen Lock, then editor of the BMJ, presented the proposal at the First International Congress on Peer Review of the Medical Literature, hosted by Drummond Rennie of JAMA. The editors in the audience were invited to join in implementing more informative abstracts in their journals. Most seemed to be in agreement, but Marcia Angell of the New England Journal of Medicine rose to proclaim that “over my dead body” would the NEJM provide structured abstracts. Far from having its intended effect, I think this proclamation convinced a lot of other editors to buy into the proposal. Indeed, NEJM eventually followed suit on structure, and Dr Angell lived on. The Lancet dragged its heels as well, until 1996 (Chalmers 1996). Unfortunately, the structure that both NEJM and The Lancet adopted was simply a variant of IMRaD (Introduction, Methods, Results and Discussion), namely Background, Methods, Results, and Conclusions, leaving it to due process or chance whether the key points for critical appraisal would be consistently included (for example, the exact question addressed, clinical setting, patient selection, study design, primary outcome measure, and conclusions directly supported by the evidence). This, I think, is a good reason for my dislike of the term “structured abstracts”, which shifts the emphasis from substance to mere format. In any event, many journals soon adopted both the spirit and the key details of the full proposal, although it took a number of years for some to come around.

In retrospect, an approach along the lines proposed by Ertl (1969) could have addressed deficiencies in scientific reporting in both the full text and the abstract of a journal article. However, for clinically relevant research, I think we were right to begin by trying to upgrade the conventional abstract with something of similar size, with structure but without tables. Although we do have tables in our ACP Journal Club versions (for example, Alberts 2016), I think the NLM at the time would have had difficulty handling them in MEDLINE/PubMed, and authors and editors would have had difficulty getting them right. Also, Ertl’s target was all scientific communication in journals, whereas our focus was much narrower, namely transferring information from researchers to clinicians. For clinicians, brevity in research communications is regarded as an essential virtue.

The article on more informative abstracts for original studies was soon followed by a proposal from Mulrow, Thacker, and Pugh for more informative abstracts for review articles (Mulrow et al. 1988). Further work on more informative abstracts has been done by Hayward et al. (1993) for clinical practice guidelines, by the CONSORT Group for journal and conference abstracts of randomized trials (Hopewell et al. 2008), and by the PRISMA Group for systematic review articles (Moher et al. 2009). For general science short reports and communications, Hortolà (2008) has proposed replacing the entire article with nested tables for title, objective, procedure, results (including tables), future work, references, and acknowledgement, all presented on one printed page, but this condensed format would be challenging to achieve for full reports.

Relevant comments by Donald Mainland and Austin Bradford Hill

Recently, I learned about the recommendations for more informative abstracts by Donald Mainland (Mainland 1938) and Austin Bradford Hill (Chalmers 2015). Here is a passage from Mainland:

“In writing summaries I have for several years tried to observe a plan which may perhaps lay a summary open to the criticism that it contains too much detail, but which, in my reading of other writers’ articles, has provided the most satisfactory type of summary. The maximum length of summary is adopted as one-thirtieth or 3 per cent of the length of the text of the article—the rule to be observed in preparing abstracts for the journal Biological Abstracts. The maximum number of words is thus estimated from this, and the object is to express intelligibly within these limits as much information as possible. The composition is often more difficult than that of the article itself, because it involves selection of information according to its importance from the reader’s point of view, and the selection and re-selection of words and phrases without descending to an abbreviated or “telegraphic” mode of expression. In many articles a lower maximum can be set, but the same technique can be applied.”

In an audiotaped conversation with William Silverman (Chalmers 2015), Bradford Hill relates this discussion with Hugh Clegg, then editor of the BMJ:

“I want to have a very long abstract to my paper…Many people will read nothing but that summary. They’re not going to look at all those statistics in the long article. A precis of everything of importance that’s in the paper has got to be in that summary. And you needn’t complain because I’ve been through the text of the paper. I’ve taken out every adjective and every adverb which is unnecessary. It’s not very difficult; [but] it is difficult. And I’ve put in short words where I had long ones. And he agreed.”

It is worth noting that both Mainland and Bradford Hill were excellent research methodologists and distinguished teachers (I learned a lot from the writings of both during my research training), but their purpose in these statements had more to do with ease of digestion than with highlighting the features of studies that allow appraisal of their scientific merit.

Evaluation

The US National Library of Medicine has supported the use of more informative abstracts from the beginning, and has facilitated their use by extending the permitted length of abstracts to accommodate both structured headings and more details of study methods. The NLM created a check tag and search filter (https://www.ncbi.nlm.nih.gov/pubmed/?term=hasstructuredabstract) to retrieve articles with more informative abstracts. The annual number of such abstracts in MEDLINE journals has grown steadily, from 156 published in 1987 to 312,504 in 2016 (a total of over 3.6 million as of July 2017), so the effect of the proposal is quantitatively large and growing.

Beyond that, evaluation of more informative abstracts has been limited. James Hartley, a UK psychologist, reviewed evaluation studies of structured abstracts across several disciplines and concluded that they usually, but not always, contain more information; are easier to read and search, and possibly easier to recall; may facilitate peer review for conference proceedings; and are generally welcomed by readers and authors. However, they usually take up more space and may be prone to the same distortions that occur in traditional abstracts (Hartley 2004). Indeed, deficiencies in the information provided in abstracts have been well documented (Froom and Froom 1993; Pitkin and Branagan 1998; Hartley 2000). As Hopewell and others have shown, the completeness of abstracts depends on editorial enforcement (Hopewell et al. 2012). However, no one has addressed the hard question of whether readers and their patients are better off. Nor should anyone expect such a finding, given that even clear, open, honest communication in the medical literature would be but one feature of the very complex process of knowledge translation and implementation.
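For readers who wish to check such counts for themselves, the hasstructuredabstract filter mentioned above can be queried programmatically. The following is a minimal illustrative sketch, not part of the original proposal, using the publicly documented NCBI E-utilities ‘esearch’ endpoint; counts retrieved today will differ somewhat from the figures quoted above, since indexing is ongoing.

```python
# Minimal sketch: count MEDLINE records matching NLM's
# 'hasstructuredabstract' filter for a given publication year,
# via the public NCBI E-utilities esearch endpoint.
import json
import urllib.parse
import urllib.request

EUTILS_ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def count_structured_abstracts(year: int) -> int:
    # [dp] restricts the search to the given date of publication.
    term = f"hasstructuredabstract AND {year}[dp]"
    query = urllib.parse.urlencode(
        {"db": "pubmed", "term": term, "retmode": "json"}
    )
    with urllib.request.urlopen(f"{EUTILS_ESEARCH}?{query}") as response:
        result = json.load(response)
    # esearch returns the total hit count as a string.
    return int(result["esearchresult"]["count"])

if __name__ == "__main__":
    for year in (1987, 2016):
        print(year, count_structured_abstracts(year))
```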

More informative abstracts and the evolution of Evidence-Based Health Care

For me, more informative abstracts were the first innovation beyond the didactic stage of evidence-based health care to aid clinicians in defining the current best evidence for clinical practice. Soon after, I sought more extensive changes in the “lines of communication” from medical journals to clinicians (Haynes 1990):

“Peer-reviewed clinical journals impede the dissemination of validated advances to practitioners by mixing a few rigorous studies (communications from scientists to practitioners) with many preliminary investigations (communications from scientists to scientists). Journals wishing to improve communication with practitioners should feature rigorous studies of the nature, cause, prognosis, diagnosis, prevention, and treatment of disease and should feature sound clinical review articles (communications from practitioners to practitioners). Additional strategies for improving communication between medical scientists and practitioners include improving publication standards for clinical journals, providing more informative abstracts for clinical articles, fostering the development of derivative literature services, and enhancing practitioners’ skills in critically appraising the medical literature.”

Many journals have tried to promote and implement at least some of these tactics, notably by supporting reporting guidelines for various types of studies and reviews (now under the umbrella of Enhancing the QUAlity and Transparency Of health Research (EQUATOR), http://www.equator-network.org/). However, the main strategy of creating or fashioning clinical journals that would publish only scientifically strong and clinically relevant studies and reviews, to facilitate scientist-to-practitioner communication, has, if anything, gone into reverse. Given the prestige, profits and mixed motivations of publishing, far too many journals are competing for the very few “practice confirming and changing” studies published each year, so virtually all journals are destined to have very dilute content for improving health care.

What followed from my research team in the Health Information Research Unit at McMaster University was the creation of processes to continuously define “current best evidence” across all leading clinical journals. We created a “health knowledge refinery” (HKR), McMaster PLUS (http://hiru.mcmaster.ca/hiru/HIRU_McMaster_PLUS_projects.aspx), that links centralized critical appraisal of studies and reviews by expert research staff with an international social network of practicing clinical reviewers who assess the relevance and importance for health care of newly published high quality research (second order peer review; Haynes et al. 2006). While Annals of Internal Medicine did not accept our proposal for independently written abstracts for its own high quality studies and reviews, its editors were keenly in favor of us writing such abstracts, for their readers, for the best articles in all clinical journals. This led in 1991 to ACP Journal Club (https://www.acponline.org/clinical-information/journals-publications/acp-journal-club), sponsored by the American College of Physicians, in which expertly prepared abstracts of “best clinical evidence” studies and reviews appear monthly in Annals, with the articles systematically selected from over 120 journals via McMaster PLUS. BMJ followed suit with Evidence-Based Medicine, Evidence-Based Nursing, and Evidence-Based Mental Health, all fed by our refinery. Remarkably, this process shows just how little of the world’s prolific production of health care literature merits clinical attention (McKibbon et al. 2004): the knowledge refinery team is hard pressed to find 144 studies per year to fill the pages that Annals has dedicated to this feature.

With the arrival of the Internet era, a myriad of derivative evidence-based processes and products ensued, notably personalized evidence-alerting services and databases (such as EvidenceUpdates, https://plus.mcmaster.ca/evidenceupdates, sponsored by the BMJ Evidence Centre) and, most importantly, the infusion of current best evidence into online textbooks and guidelines (Alper and Haynes 2016). Personally, I believe that these knock-on creations have far greater potential for impact than more informative abstracts alone, but the latter provided a sound launch for the mission to enhance the transfer of sound evidence from health research to health care.

This James Lind Library article has been republished in the Journal of the Royal Society of Medicine 2017;110:249-254.

References

Ad Hoc Working Group for Critical Appraisal of the Medical Literature (1987). A proposal for more informative abstracts of clinical articles. Ann Intern Med 106:598-604.

Alberts MJ (2016). Pooled RCTs: After ischemic stroke or TIA, aspirin for secondary prevention reduced early recurrence and severity. Ann Intern Med 165:JC27.

Alper BS, Haynes RB (2016). EBHC pyramid 5.0 for accessing pre-appraised evidence and guidance. Evid Based Med. doi: 10.1136/ebmed-2016-110447

Chalmers I (1996). Structured abstracts in The Lancet. Lancet 347:340.

Chalmers I (2015). Personal communication. 29 November.

Ertl N (1969). A way of documenting scientific data from medical publications. Karger Gaz 20:1-4.

Froom P, Froom J (1993). Deficiencies in structured medical abstracts. J Clin Epidemiol 46:591–594.

Hartley J (2000). Are structured abstracts more or less accurate than traditional ones? A study in the psychological literature. J Info Sci 26:273–277.

Hartley J (2004). Current findings from research on structured abstracts. J Med Libr Assoc 92:368-71.

Haynes RB (1990). Loose connections between peer reviewed clinical journals and clinical practice. Ann Intern Med 113:724-8.

Haynes RB, Sackett DL, Gibson ES, Taylor DW, Hackett BC, Roberts RS, Johnson AL (1976). Improvement of medication compliance in uncontrolled hypertension. Lancet 1:1265-1268.

Haynes RB, Sackett DL, Taylor DW, Gibson ES, Johnson AL (1978). Increased absenteeism from work after detection and labeling of hypertensive patients. N Engl J Med 299:741-744.

Haynes RB, Mulrow CD, Huth EJ, Altman DG, Gardner MJ (1990). More informative abstracts revisited: A progress report. Ann Intern Med 113:69-76.

Haynes RB, Cotoi C, Holland J, Walters L, Wilczynski NL, Jedraszewski D, McKinlay J, Parrish R, McKibbon A for the McMaster Premium Literature Service (PLUS) Project (2006). A second order of peer review: A system to provide peer review of the medical literature for clinical practitioners. JAMA 295:1801-1808.

Hayward RSA, Wilson MC, Tunis SR, Bass EB, Rubin HR, Haynes RB (1993). More informative abstracts of articles on clinical practice guidelines. Ann Intern Med 118:731-737.

Hopewell S, Clarke M, Moher D, Wager E, Middleton P, Altman DG, Schulz KF; CONSORT Group (2008). CONSORT for reporting randomised trials in journal and conference abstracts. Lancet 371:281-283.

Hopewell S, Ravaud P, Baron G, Boutron I (2012). Effect of editors’ implementation of CONSORT guidelines on the reporting of abstracts in high impact medical journals: interrupted time series analysis. BMJ 344:e4178. doi: 10.1136/bmj.e4178.

Hortolà P (2008). An ergonomic format for short reporting in scientific journals using nested tables and the Deming’s cycle. Journal of Information Science 34:207-212.

Huth E (1987). Structured abstracts for papers reporting clinical trials. Ann Intern Med 106:626-627.

Mainland D (1938). The treatment of clinical and laboratory data. Edinburgh: Oliver & Boyd, pp 288-90.

McKibbon KA, Wilczynski NL, Haynes RB (2004). What do evidence-based secondary journals tell us about the publication of clinically important articles in primary healthcare journals? BMC Medicine 2:33 http://www.biomedcentral.com/1741-7015/2/33

Moher D, Liberati A, Tetzlaff J, Altman DG; PRISMA Group (2009). Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. BMJ 339:b2535. doi: 10.1136/bmj.b2535.

Mulrow CD, Thacker SB, Pugh J (1988). A proposal for more informative abstracts of review articles. Ann Intern Med 108:613-615.

Pitkin RM, Branagan MA (1998). Can the accuracy of abstracts be improved by providing specific instructions? A randomized controlled trial. JAMA 280:267–269.

Sackett DL, Haynes RB, Gibson ES, Hackett BC, Taylor DW, Roberts RS, Johnson AL (1975). Randomized clinical trial of strategies for improving medication compliance in primary hypertension. Lancet 1:1205-1207.