Straus S, Eisinga A, Sackett D† (2015). What drove the Evidence Cart? Bringing the library to the bedside.

© Sharon Straus, Anne Eisinga and David Sackett. Contact author: Sharon Straus, Department of Medicine, University of Toronto, St Michael’s Hospital, Toronto, Ontario M5B 1W8, Canada. Email: sharon.straus@utoronto.ca


Cite as: Straus S, Eisinga A, Sackett D† (2015). What drove the Evidence Cart? Bringing the library to the bedside. JLL Bulletin: Commentaries on the history of treatment evaluation (https://www.jameslindlibrary.org/articles/what-drove-the-evidence-cart-bringing-the-library-to-the-bedside/)


The challenge

We saw an 87-year-old woman (Mrs. T) who had been transferred from a long-term care facility with delirium and a pelvic fracture resulting from a fall. She had a past medical history suggestive of moderate Alzheimer’s dementia, osteoporosis and type 2 diabetes. She was taking calcium, vitamin D, and metformin. On admission, we found that she had a urinary tract infection, which we felt was contributing to her delirium. When we reviewed her situation with our clinical team, a few questions were raised about her management plan, including:

1. In patients like Mrs. T with a urinary tract infection, what is the effectiveness and safety of a three-day course of antibiotics compared with a seven-day course?
2. In patients like Mrs. T with dementia and osteoporosis, what is the effect of treatment with a bisphosphonate compared with calcium/vitamin D to prevent fracture and avoid harms?

To answer these questions outside the opening hours of our hospital library in 1996, we needed to walk to our office (10 minutes or 3 floors away) and access the CDs for Best Evidence or the Cochrane Library that we had purchased. This wasn’t a practical solution during busy ‘clinical rounds’ – meetings of the medical team to review and discuss patients admitted to our service. Our clinical team was on call (or on take) every fourth day, requiring the team to assess patients seen in the Accident and Emergency department for possible admission to the medicine inpatient service. We met during the evening of the on-call period to review and discuss any patients who had been assessed by that time. These are called on-call or on-take rounds. Our medical team also met on the morning after the on-call period to see and discuss all patients who had been admitted. These are called post-call or post-take rounds.

In all of these circumstances we needed information! When we were the team responsible for admissions to the general internal medicine inpatient units, we admitted 20 to 30 patients like Mrs. T during each on-call period. Each day, our clinical team provided care for 40 patients on average. As clinicians providing care for patients with complex healthcare needs like Mrs. T, we were challenged by the need to find and apply evidence in our decision making.

Our clinical experiences were reflected in published reports of the experiences of others. Clinicians said they needed evidence twice a week (for example, about the accuracy of diagnostic tests, the predictive power of prognostic markers, or the comparative effectiveness and safety of interventions), and they tended to get it from textbooks, journals and colleagues (Covell et al. 1985). When clinicians were shadowed, researchers found that they really needed information! Clinical questions arose twice for every three outpatients and five times for every inpatient (Osheroff et al. 1991). Traditional sources for this information were inadequate: out-of-date textbooks (Antman et al. 1992), frequently wrong experts (Oxman and Guyatt 1993), ineffective didactic continuing medical education (Davis et al. 1999), or a plethora of journal articles too overwhelming in number and too variable in quality for practical clinical use (Haynes 1993).

This situation was further complicated by the limited time available to ‘read around’ our patients. In surveys that we conducted during more than 20 Grand Rounds in the UK, the median time spent reading around patients ranged from no reading for house officers, to 20 minutes/week among senior house officers, to 45 minutes/week for registrars and consultants. Given that attendees at Grand Rounds are likely to be keen learners (they showed up at rounds!), we worried that these self-reports might be overestimates for the majority of clinicians. As a result, it wasn’t surprising that we were getting less than a third of the evidence we needed, and that this resulted in gaps in care and in practice variation.

Educational prescriptions and a clinical librarian

Our first attempt to meet this challenge was on the clinical teaching unit at the John Radcliffe (JR) Hospital in Oxford. The discussion about the assessment and management of patients typically led to clinical questions posed by the medical team. If the answer to a clinical question was not known by team members, it was identified as a learning opportunity and a team member was given a paper with the question to be answered. This paper was called an ‘educational prescription’.

Even when libraries were in the same building, it took time to go from the medical ward to the library to complete a search, and libraries were not open in time for six am post-call rounds. To help the team answer these questions, the clinician members of the team invited a clinical librarian to join them. They were fortunate to recruit an enthusiastic librarian (AE) from the Cairns Library at the John Radcliffe Hospital. Her role as a clinical librarian was built on development work by others – Gertrude Lamb (1982) in particular – who had created and developed the role of the clinical medical librarian in the USA in the 1970s. The role had burgeoned into a range of programmes, initially documented in an overview by Cimpl (1985) and assessed more recently in systematic reviews (Winning and Beverley 2003; Wagner and Byrd 2004; Brettle et al. 2011). Over the past 40 years, the role of the clinical medical librarian evolved from identifying and, at the request of clinicians, attaching relevant papers to patients’ clinical records (the Washington Hospital Center’s 1967 LATCH [Literature Attached To the Chart] program) to including librarians on clinical rounds (Cimpl 1985).

The librarian whom we engaged (AE) had previously been involved with a randomized trial to assess the effects of a training session to help medical students in Oxford to formulate questions and search databases (Rosenberg et al. 1998). For our project, the librarian attended each of our on-call and post-call rounds and tracked our clinical questions (usually captured with educational prescriptions). She addressed these questions by searching the literature and brought the results to subsequent rounds (including student teaching rounds) for discussion by the team.

The first objective of the clinical librarian was observation – to identify the information-seeking behaviour of a busy clinical team, and to assess how best to support clinicians in patient-centred, self-directed, problem-based learning at the bedside. A second objective was to assist members of the team to practise evidence-based medicine while caring for patients.

The librarian found that the clinical team’s information needs were usually:

(i) immediate, requiring information to be given in an easily assimilated form, particularly on the on-call and post-call rounds, when rapid management decisions were required or multiple clinical problems needed to be identified accurately;
(ii) specific, centred on the patients admitted to the clinical service, usually to decide on appropriate diagnostic tests or therapeutic interventions; and
(iii) evaluative, to assess whether the evidence found for managing a particular patient might be applicable to others, and so become the basis of a clinical service team policy to promote appropriate care of future, similar patients.

To fulfil the needs for immediacy and specificity, the librarian’s interventions to support the clinical team ideally needed to be performed in the clinical setting. However, this proved challenging and not always sustainable (Lusher 1999). The intended support comprised:

(i) searching bibliographic databases (usually Medline) and secondary sources of critically appraised evidence (usually Best Evidence and the Cochrane Library) to address clinical questions arising from individual patient-centred problems;
(ii) advising how to retrieve specific, patient-related evidence using advanced subject search strategies, with search filters for studies of therapy, diagnosis, aetiology and prognosis, thus optimizing precision (minimizing the retrieval of irrelevant studies) without sacrificing sensitivity (minimizing the risk of missing relevant studies); and
(iii) providing rapid access to the full text of articles and other reference material identified as likely to be useful.

Despite a highly motivated team championing evidence-based practice, some of the clinical questions raised were left unanswered, particularly if the evidence sought could not be retrieved from the easily assimilated secondary literature sources (usually Critically Appraised Topics (CATs), summary data, or Best Evidence). Also, even if the librarian had at all times been able to understand the clinical issues raised in the questions generated (which was not always the case), and had conducted comprehensive searches of the primary literature on behalf of the team, the team often had little time to appraise the validity and applicability of the articles selected. Service demands on clinical teams allowed little reading time to assimilate information for clinical problem-solving. This finding suggested the need for a further extension of the clinical librarian’s role to include critical appraisal of the primary literature and presentation of the evidence in an accessible summary.

Participating in a clinical team and attending ward rounds was a challenge, a privilege, and richly rewarding for the librarian, especially when, on one notable occasion, it was not just the clinical team but the patient who asked a question (Is there a safe short-term alternative to the use of very uncomfortable compression stockings for reducing swelling?). The resulting literature search produced evidence (Diehm 1996; Diehm et al. 1996) that directly helped the patient and her clinical team to make a shared decision and an informed choice about her care. This ward-based experience helped to define the multiple skills which a clinical librarian ideally needed to acquire to provide an efficient bedside information service.

From experience derived from this pilot project, it became clear that, to be truly proactive in a clinical setting, a clinical librarian must be prepared to acquire and maintain sufficient clinical knowledge to gain an understanding of the clinical issues generated by the questions. This was needed to equip the librarian to conduct complex searches swiftly, drawing on a variety of resources, optimizing precision without sacrificing sensitivity, resulting in the retrieval of trustworthy evidence directly applicable to patient-centred problems. The clinical librarian role may also involve assisting clinical team members to enhance their searching skills to retrieve evidence efficiently and effectively for themselves. This can involve appraising the articles selected and presenting the evidence in readily assimilated summary form, thus helping them to integrate their clinical expertise with the best available research evidence. It is a complex role, requiring training in the tenets of evidence-based medicine, active listening, skilled communication of tailored information, courage, stamina, sensitive handling of multi-disciplinary team dynamics and a commitment to lifelong learning.

Bringing the library to the bedside

The results of this experience led us to brainstorm about how we could bring evidence to the point of clinical decision making more efficiently. Our approach was also stimulated by comments from Richard Smith, then editor of the BMJ (Smith 1996), who pointed out that ‘although most of the questions go unanswered, most of [them] can be answered, usually from electronic sources, but it is time-consuming to do so’. He concluded that the ‘ideal information source will be directly relevant, contain valid information, and be accessed with a minimum amount of work’ (Smith 1996). Mindful of his comments, our approach built on the work done by colleagues internationally, which included:

1. The development of strategies for efficient identification and appraisal of evidence (Rosenberg et al. 1998; Haynes et al. 1990; Marshall and Neufeld 1981; Langdorf et al. 1995);
2. The creation of systematic reviews of the effects of health care, such as those prepared and maintained by the Cochrane Collaboration (www.cochranelibrary.com); and
3. The creation of evidence-based journals of secondary publication for clinicians (Haynes 1993).

In 1996, we felt the long-term solution to our challenge was handheld computers ‘radio-linked’ to the evidence, but this technology was in its infancy. As a result, we wanted to see if an ‘Evidence Cart’ might provide a short-term solution (Sackett and Straus 1998). In particular, we were interested in assessing whether it was feasible to find and apply evidence using an Evidence Cart during clinical rounds. Based on our clinical experience and the previous literature, we felt it was important to include:

1. A laptop computer with a projector and pop-out screen to share the results of searches with the team and, potentially, with patients and caregivers.
2. Compact disks of Best Evidence (containing the cumulated contents of ACP Journal Club and Evidence-based Medicine, both journals of secondary publication), the Cochrane Library, Scientific American Medicine (1997), Radiologic Anatomy (1995), and MEDLINE (five-year clinical subset; Haynes et al. 1990).
3. A physical examination textbook (Scientific American 1997) and reprints from the JAMA series on the Rational Clinical Examination (Sackett and Rennie 1992).
4. An infra-red simulscope [Cardionics, Houston, Texas] with 12 receivers, allowing several members of the team to auscultate simultaneously.
5. A simple database with one- to three-page summaries of evidence previously appraised by our teams, in the form of:
(i) Entries in our ‘Redbook’, a binder filled with one- to three-page summaries of critically appraised evidence created by the specialist or junior staff and updated during the month before the team’s duty month for admissions. [It included 98 topics at the time and was maintained by DLS.]
(ii) Critically Appraised Topics (Sauvé et al. 1995) created by clinical faculty and trainees, and brief summaries of the evidence addressing the clinical examination, diagnostic tests, prognostic markers, or treatments, updated annually or more frequently if needed.

Given the requirements to include all of the above materials, we commissioned a carpenter to modify a trolley to house these resources (Figures 1 and 2).

In 1997, we conducted a before-and-after study to evaluate the feasibility of the use of the Evidence Cart on our inpatient medicine clinical teaching unit (Sackett and Straus 1998). The Cart was used in three types of rounds: post-take rounds, during which we would review the assessment and management of newly admitted patients; teaching rounds, in which we would review the management plan of each patient admitted by the team; and teaching rounds targeted at medical students. We brought the cart with us as we saw patients in the accident and emergency department of the hospital and on the various medical wards.

One of the key challenges in using the Cart became clear very quickly. Its mammoth size and the need for a power cable made its use on busy post-take rounds impractical. So, after a week, we limited its use to the ward conference room. Use of the Cart was initiated by a member of the clinical team when evidence about a decision already made was challenged or needed to be confirmed, or when it was judged that a patient might benefit from a change in, or an addition to, a current prescription.

Ninety-eight searches were conducted during the one-month period of the Cart’s use. We found that a mean of 3.1 evidence resources were used by the team during each round. Eighty-one per cent of searches were for evidence that could affect diagnostic and/or treatment decisions, and 90% of these searches were successful in finding useful evidence, as judged by the most junior member of the team posing the question. Of the successful searches, 48% led either to a new decision (23%) or to a change in an existing decision (25%). After removing the Cart, we completed a survey to see how many questions arose over a two-day period and whether answers to these questions were identified. The perceived need for evidence rose sharply, but a search for it was carried out for only 12% of the questions raised (five searches performed out of the 41 times evidence was needed). Ninety-two per cent of the respondents said the best thing about the Cart was the immediate access to relevant, up-to-date evidence, with instant print-outs. Eighty per cent of the respondents agreed that its worst feature was its bulk. The team suggested that the whole Cart could be brought to team rounds and student teaching rounds, but that the print-out version of the Redbook and Critically Appraised Topics (CATs) be used on post-take rounds.

We also recorded the time needed to access the various evidence resources available on the Cart in response to a clinical question. The access time was shortest for the Redbook and CATs (12 seconds to access the bottom line), and longest for the Cochrane Library (over two minutes to access a useful review). Indeed, we could complete 40 searches for pre-appraised evidence in the time that it took us to go to the hospital library and conduct a single search. On a busy clinical round, we found that unless searches took less than 30 seconds, they would frequently be abandoned because of more urgent clinical demands.

In 1998, we believed that access to evidence could be further improved by the use of handheld computers, and we subsequently completed a before-and-after study of these devices (Figure 3). The handheld computers used the hospital network to access the evidence databases we had used on the Cart, along with patient laboratory results linked to relevant CATs. The technology was too slow for routine use, but the handheld devices were the forerunners of today’s smartphones. In particular, the decision support provided by linking laboratory results to relevant evidence was perceived by the house staff to be the best part of this intervention, heralding the need for a computerized decision support system.

Unfinished business

Reflecting on this work from the late 1990s, we can see that huge strides have been made in providing high-quality evidence resources at the point of decision making. For example, the efforts of leaders such as Brian Haynes and his colleagues at McMaster University have had a huge impact on the way clinicians can seek and use evidence in practice. And smartphones are now routinely used at the bedside by clinical teams wishing to access relevant evidence and data from electronic health records. However, we are continuing to struggle with the challenge of integrating relevant evidence with clinical data in the electronic health record in a way that promotes optimal patient care. To meet this challenge, we will need to continue to look to the needs of our patients and their caregivers and to find feasible and cost-effective ways to promote evidence-based shared decision making across the care continuum.

This James Lind Library article has been republished in the Journal of the Royal Society of Medicine 2016;109:241-247.

Acknowledgements

We acknowledge Sir Muir Gray, who recruited DLS to Oxford, facilitating the creation of the Oxford Centre for Evidence-Based Medicine; and the members of the ‘original A-team’: David Laloo, Alain Townsend, Eric Valezquez, Clair Thomas, Chris Turner, George Ioannou, James Bursell, Hsien Chew, Margaret Findley, Andreas Fox, Sarah Green, Hari Jayaram, Steven Kane-Toddhall, Clair Lloyd, and Ash Cloke.

References

Antman EM, Lau J, Kupelnick B, Mosteller F, Chalmers TC (1992). A comparison of results of meta-analyses of randomised control trials and recommendations of clinical experts. Treatments for myocardial infarction. JAMA 268:240-248.

Brettle A, Maden-Jenkins M, Anderson L, McNally R, Pratchett T, Tancock J, Thornton D, Webb A (2011). Evaluating clinical librarian services: a systematic review. Health Information and Libraries Journal 28:3-22.

Cimpl K (1985). Clinical medical librarianship: a review of the literature. Bulletin of the Medical Library Association 73:21-28.

Covell DG, Uman GC, Manning PR (1985). Information needs in office practice: are they being met? Annals of Internal Medicine 103:596-599.

Davis D, O’Brien MA, Freemantle N, Wolf FM, Mazmanian P, Taylor-Vaisey A (1999). Impact of formal continuing medical education: do conferences, workshops, rounds, and other traditional continuing education activities change physician behaviour or health care outcomes? JAMA 282:867-874.

Diehm C (1996). The role of oedema protective drugs in the treatment of chronic venous insufficiency: a review of evidence based on placebo-controlled trials with regard to efficacy and tolerance. Phlebology 11:23-29.

Diehm C, Trampisch HJ, Lange S, Schmidt C (1996). Comparison of leg compression stocking and oral horse-chestnut seed extract therapy in patients with chronic venous insufficiency. Lancet 347:292-294.

Haynes RB (1993). Where’s the meat in clinical journals? ACP Journal Club 119:A-22-23.

Haynes RB, McKibbon KA, Walker CJ, Ryan N, Fitzgerald D, Ramsden MF (1990). Online access to MEDLINE in clinical settings. A study of use and usefulness. Annals of Internal Medicine 112:78-84.

Lamb G (1982). A decade of clinical librarianship. Clinical Librarian Quarterly 1:2-4.

Langdorf MI, Koenig KL, Brault A, Bradman S, Bearie BJ (1995). Computerized literature research in the ED: patient care impact. Academic Emergency Medicine 2:157-159.

Lusher A (1999). Getting evidence to the bedside: the role of the clinical librarian. In: Libraries without limits: changing roles – changing needs. Proceedings of the 6th European Conference of Medical and Health Libraries, 22-27 June 1998, Utrecht. Dordrecht: Kluwer Academic, p 67-70.

Marshall JG, Neufeld VR (1981). A randomised trial of librarian educational participation in clinical settings. Journal of Medical Education 56:409-416.

Osheroff JA, Forsythe DE, Buchanan BG, Bankowitz RA, Blumenfeld BH, Miller RA (1991). Physicians’ information needs: analysis of questions posed during clinical teaching. Annals of Internal Medicine 114:576-581.

Oxman AD, Guyatt GH (1993). The science of reviewing research. Annals of the New York Academy of Sciences 703:125-134.

Radiologic Anatomy (1995). [book on CD-ROM]. Gainesville, FL: Gold Standard Multimedia Inc.

Rosenberg W, Deeks J, Lusher A, Snowball R, Dooley G, Sackett D (1998). Improving searching skills and evidence retrieval. Journal of the Royal College of Physicians of London 32:557-563.

Sackett DL, Rennie D (1992). The science of the art of the clinical examination. JAMA 267:2650-2652.

Sackett DL, Straus SE (1998). Finding and applying evidence during clinical rounds: the ‘evidence cart’. JAMA 280:1336-1338.

Sauvé S, Lee HN, Meade MO, Lang JD, Farkouh M, Cook DJ, Sackett DL (1995). The Critically-Appraised Topic (CAT): a practical approach to learning critical appraisal. Annals of the Royal College of Physicians and Surgeons of Canada 28:396-398.

Scientific American Medicine (1997). [serial on CD-ROM]. New York, NY: Scientific American Inc.

Smith R (1996). What clinical information do doctors need? BMJ 313:1062-1068.

The Cochrane Library [Online]. [Accessed 18 April 2015]. Available from: http://www.cochranelibrary.com

Wagner K, Byrd G (2004). Evaluating the effectiveness of clinical medical librarian programs: a systematic review of the literature. Journal of the Medical Library Association 92:14-33.

Winning M, Beverley C (2003). Clinical librarianship: a systematic review of the literature. Health Information and Libraries Journal 20 (Suppl 1):10-21.