Lund H, Robinson KA (2026). History of Evidence-Based Research

© Hans Lund, Section for Evidence-Based Practice, Western Norway University of Applied Sciences, Bergen Campus, Norway and Karen A Robinson, Johns Hopkins University, Baltimore, USA.


Cite as: Lund H, Robinson KA (2026). History of Evidence-Based Research. JLL Bulletin: Commentaries on the history of treatment evaluation (https://www.jameslindlibrary.org/articles/history-of-evidence-based-research/)


More than two and a half centuries ago, James Lind’s controlled trial showed that citrus fruits were effective in treating scurvy (Tröhler 2005). Remarkably, Lind’s publication not only reported these findings but also systematically reviewed the existing knowledge on treatments for scurvy (Lind 1753), exemplifying the scientific principle of building on prior evidence. Unfortunately, many researchers since this landmark trial have failed to base plans for new studies on evidence syntheses of earlier, similar studies or to place new findings into the context of what is already known.

In 1992, Joseph Lau and colleagues published a seminal systematic review of interventions for the treatment of myocardial infarction (Lau et al 1992). By sequentially organizing trials by date and updating the pooled effect estimate with each successive study, their cumulative meta-analyses revealed that a substantial number of trials were conducted after the effectiveness of the intervention had become apparent. This finding raised ethical concerns about the justification for continued patient randomization and highlighted failures to incorporate prior evidence into trial design.
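The cumulative approach used by Lau and colleagues – re-pooling the effect estimate each time a trial, ordered by date, is added – can be sketched as follows. This is a minimal illustration of fixed-effect (inverse-variance) cumulative meta-analysis using made-up trial data; it is not the data, model, or code of Lau et al 1992.

```python
import math

# Hypothetical trial data: (year, effect estimate as log odds ratio, standard error).
# These numbers are illustrative only, not taken from Lau et al 1992.
trials = [
    (1972, -0.40, 0.45),
    (1979, -0.25, 0.30),
    (1984, -0.35, 0.20),
    (1988, -0.30, 0.12),
]

def pooled(effects_and_ses):
    """Fixed-effect (inverse-variance) pooled estimate and its standard error."""
    weights = [1.0 / se ** 2 for _, se in effects_and_ses]
    est = sum(w * e for w, (e, _) in zip(weights, effects_and_ses)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return est, se

# Cumulative meta-analysis: sort trials by year and re-pool after each new trial.
trials.sort(key=lambda t: t[0])
for i in range(1, len(trials) + 1):
    est, se = pooled([(e, s) for _, e, s in trials[:i]])
    lo, hi = est - 1.96 * se, est + 1.96 * se
    print(f"after {trials[i - 1][0]}: pooled log OR {est:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

With each added trial the confidence interval narrows; in Lau and colleagues' analyses this made visible the point at which the pooled estimate excluded no effect, after which further trials added little information.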

On July 19, 2001, all federally funded medical research involving humans was suspended at Johns Hopkins University, Baltimore, USA. The reason was the death of a healthy 24-year-old participant in a research study in which hexamethonium was given to participants to provoke a mild asthma attack, to help doctors discover the reflex that protects the lungs of healthy people against asthma attacks. In a paper published the following year, Julian Savulescu stated that “[t]he most disturbing feature of this case was that literature documenting the pulmonary toxicity of hexamethonium was available before the trial began” (Savulescu 2002) – literature which the investigator and the ethics review board apparently failed to consider.

In 2005, Dean Fergusson and colleagues conducted a systematic review of randomized trials of aprotinin for reducing bleeding in cardiac surgery (Fergusson et al 2005), which included a cumulative meta-analysis and an assessment of whether each subsequent trial cited the prior trials. Overall, the median number of prior trials cited was four, and the median percentage of prior trials cited per publication was 20%. The number cited did not change over time, even though some of the later studies could have cited more than 50 earlier studies. Fergusson and colleagues concluded “[t]his study demonstrates that investigators evaluating aprotinin were not adequately citing previous research, resulting in a large number of [randomized trials] being conducted to address efficacy questions that prior trials had already definitively answered” (Fergusson et al 2005). Iain Chalmers, in a commentary published with the paper, more boldly stated that the results “[were] the most recent evidence of an ongoing scandal in which research funders, academia, researchers, research ethics committees and scientific journals are all complicit. New research should not be designed or implemented without first assessing systematically what is known from existing research. The failure to conduct that assessment represents a lack of scientific self-discipline that results in an inexcusable waste of public resources” (Fergusson et al 2005).

What is Evidence-Based Research?

In the 2000s, the approach of explicitly considering existing research when justifying, planning and reporting new studies, illustrated in the examples above, was formally named. The term evidence-based research was coined independently, first by Karen Robinson in her PhD dissertation in 2009 and then by Hans Lund in an application for a PhD program at Høgskolen i Bergen in 2013. Robinson described evidence-based research as “one way to reduce waste in the production and reporting of trials, through the initiation of trials that are needed to address outstanding questions and through the design of new trials in a way that maximizes the information gained. Investigators need to identify and consider prior studies to provide the ethical and scientific justification for why they started a clinical trial, and to determine the most appropriate design and methodological characteristics of that trial. The synthesis of prior studies is also needed to provide an informative discussion of the results of a trial by placing its results within the context of existing evidence” (Robinson 2009, page 123).

Later, Lund wrote: “Current evidence shows that several randomized controlled trials have been unnecessarily performed because a clear answer to the research question had already been established (Antman et al 1992; Lau et al 1992; Ferguson et al 2005). Just as evidence-based practice has identified gaps between research findings and clinical practice, the studies referred to above indicate a worrying gap between the recommended and the actual practice among researchers” (Lund 2014).

The Evidence-Based Research Network (EBR Network, established in 2014, see below) adopted the definition of evidence-based research as “the use of prior research in a systematic and transparent way to inform a new study so that it is answering questions that matter in a valid, efficient, and accessible manner” (Robinson et al 2021).

History of Evidence-Based Research

The recognition of the need for research to identify, acknowledge and build upon earlier work is not new. David Wootton, in The Invention of Science, gives the example of William Gilbert, who published a book about magnets in 1600. In its first chapter, Gilbert presented a summary of all earlier books on the subject in order to convince the reader that he had made new discoveries about magnetism (Wootton 2015). This view of science as cumulative is also reflected in Sir Isaac Newton’s statement, in a letter to Robert Hooke in 1676, that “If I have seen farther it is by standing on the shoulders of giants”, and by James Lind, who presented all earlier attempts to cure scurvy in the first chapter of his book about his study of citrus fruits (Tröhler 2005).

Later, in the first issue of the New England Journal of Medicine in 1812, Warren wrote “[i]n our inquiries into any particular subject of medicine, our labours will generally be shortened and directed to their proper objects, by a knowledge of preceding discoveries” (Warren 1812). Further, at the 54th meeting of the British Association for the Advancement of Science in Montreal in 1884, Lord Rayleigh stated: “If, as is sometimes supposed, science consisted in nothing but the laborious accumulation of facts, it would soon come to a standstill, crushed, as it were, under its own weight… The work which deserves, but I am afraid, does not always receive, the most credit is that in which discovery and explanation go hand in hand, in which not only are new facts presented, but their relation to old ones is pointed out” (cited from Chalmers et al 2002).

The interesting aspect here is that another key feature of science – discovery – also depends on the use of prior work. These two apparently conflicting features – science as cumulative and science as discovery – are brought together in Lord Rayleigh’s comment and in Light and Pillemer’s introduction to Summing Up: The Science of Reviewing Research: “Why do scientists think that new research is better, or more insightful, or more powerful? The underlying assumption must be that new studies will incorporate and improve upon lessons learned from earlier work. Novelty in and of itself is shallow without links to the past. It is possible to evaluate an innovation only by comparisons with its predecessors. For science to be cumulative, an intermediate step between past and future research is necessary: synthesis of existing evidence” (Light and Pillemer 1984).

To convey the existing body of evidence, researchers must cite relevant prior studies. Yet the selection of citations in a paper reporting a study is more often strategic and preference-based than systematic and transparent (Amancio et al 2012; Thornley et al 2015). In 1965, Austin Bradford Hill laid out a structure for scientific articles that included the questions “why did you start?” and “what does it mean anyway?”. To provide scientific and ethical justification for new trials, researchers need to convey the current state of knowledge. In addition, to facilitate interpretation, researchers should indicate the contribution of their trial results to the existing evidence base. As Bradford Hill stated: “All scientific work is incomplete . . . [t]hat does not confer upon us a freedom to ignore the knowledge that we already have . . .” (Bradford Hill 1965).

Eugene Garfield, a pioneer in bibliometrics and founder of the company which developed the Science Citation Index, wrote about bibliographic negligence in 1991: “For a long time, scientists and others have expressed the need for a “science court”- a panel that would, among other things, sit in judgment concerning matters of fraud, misconduct, and other transgressions by researchers. If such a court is ever established, I hope that cases of bibliographic negligence are among the issues that come under consideration–and I hope that proven cases of such negligence will be dealt with firmly. As important as the need for meting out punishment to willful perpetrators in this regard, however, is the need to instruct young researchers, preventively, on the ethics and etiquette involved in proper and complete referencing. Acknowledging prior research and intellectual debts is of crucial ethical importance” (Garfield 1991). Around the same time, Thomas Chalmers and colleagues noted that failure to appropriately consider previous studies may limit the design and conduct of new studies – a cause of bias they called “sloth” (Chalmers et al 1990).

The development of EBR parallels the development of evidence-based medicine. The evidence-based medicine approach, developed in the 1980s and named in the early 1990s, promoted a new way to make clinical decisions, in which decisions are informed by a systematic search for, and synthesis of, existing research. In 1994, Cynthia D Mulrow, then Director of the San Antonio Cochrane Collaboration Center in the USA and an editor at Annals of Internal Medicine, wrote a paper called “Rationale for systematic reviews”. In her call for evidence-based medicine, Mulrow also expressed EBR concepts: “Researchers use the review to identify, justify, and refine hypotheses; recognise and avoid pitfalls of previous work; estimate sample sizes; and delineate important ancillary or adverse effects and covariates that warrant consideration in future studies” (Mulrow 1994).

Around this time, the first version of CONSORT (Consolidated Standards of Reporting Trials), published in 1996, included guidance that reports of randomised trials should “state general interpretation of the data in light of the totality of the available evidence” (Begg et al 1996). Later versions of CONSORT included recommendations to use a systematic review of earlier, similar studies to justify the new study and to interpret its results (Moher et al 2010). The 2025 version retained the recommendation to provide, ideally, a systematic review of prior relevant research (Hopewell et al 2025). The accompanying explanation cites work by members of the EBR Network, which found that, despite an increase in the proportion of trial reports citing an available systematic review, over a quarter still failed to do so (Jia et al 2023).

In 1996, inspired by a new book Systematic Reviews, Julian Savulescu and colleagues wrote an article in BMJ called “Are research ethics committees behaving unethically? Some suggestions for improving performance and accountability” (Savulescu et al 1996). They suggested that “[t]he performance and accountability of research ethics committees would be improved if they required those proposing research to present systematic reviews of relevant previous research in support of their applications” (Savulescu et al 1996).

Thus, the 1990s saw many calls to use systematic and transparent syntheses of earlier studies to inform both clinical decisions and the justification and interpretation of new studies. However, audits of randomized trials published in five high-impact journals (Annals of Internal Medicine, BMJ, JAMA, The Lancet, and the New England Journal of Medicine) between 1997 and 2022 showed that more was needed to change the way new research was justified and the way new results were reported and interpreted. Over those 25 years, Clarke and colleagues audited the randomized trials published in the month of May in 1997, 2001, 2005, 2009, 2012 and 2022 (Clarke et al 2024). For each published trial report, they read the introduction and discussion sections, assessing whether the introduction included a systematic review or references to prior trials, and whether the discussion made a systematic attempt to set the new results in the context of existing trials. They concluded that, while the numbers improved over the 25 years, reports of trials continued to consider existing trials inadequately, and stated “[u]ntil this deficiency has been addressed by the research community, people are likely to continue to suffer and sometimes to die unnecessarily because of an unreliable evidence base for health care and additional research” (Clarke et al 2024).

Building on the early audits by Clarke and colleagues, and on prior work such as the aprotinin example (Fergusson et al 2005), Robinson’s 2009 doctoral thesis asked a simple question: if clinicians should use earlier research to inform clinical decisions, why shouldn’t researchers do the same? More precisely, it should be standard practice that researchers planning a new study base its justification and design on a systematic and transparent synthesis of earlier similar studies. In her thesis, which focused on trials, Robinson suggested a term for this kind of thinking: “While the use of research synthesis to make evidence-informed decisions is now expected in health care, there is also a need for clinical trials to be conducted in a way that is evidence-based” (Robinson 2009).

Robinson’s doctoral thesis yielded worrisome results. She identified 227 systematic reviews of healthcare interventions published in 2004 that included at least one meta-analysis in which at least three included trials could have cited earlier studies. The dataset contained 1523 original trials (published from 1963 to 2004), each of which could, on average, have cited 9.7 earlier similar trials. However, trials cited on average only 1.9 prior relevant trials, regardless of how many prior trials were available to cite (Robinson 2009). Further, the proportion of prior trials cited was 21%, similar to the 20% found by Fergusson and colleagues (Fergusson et al 2005).

Evidence-based research is one way to reduce waste in the production and reporting of trials, through the initiation of trials that are needed to address outstanding questions and through the design of new trials in a way that maximizes the information gained (Robinson 2009; Chalmers and Glasziou 2009). In 2011, Robinson and colleagues published a framework outlining how systematic reviews of previous similar studies can be used to identify and characterize evidence gaps, thereby informing the questions, design and methodological choices for new research (Robinson et al 2011a). Earlier that year, the same authors used the term evidence-based research in a paper describing methods to identify research gaps from guidelines (Robinson et al 2011b).

Around this time, Lund was exploring the use of systematic reviews of basic science to guide the development of clinical research. Studies had suggested that, as the volume of research grew, an increasing share of reported observations risked being of little meaning. The goal was to use systematic reviews of biomechanical studies to generate new clinical research questions, particularly in the context of rehabilitating patients with chronic musculoskeletal conditions such as knee or hip osteoarthritis. While conducting systematic reviews of previous biomechanical studies and considering their application in formulating and prioritizing new clinical research questions, Lund encountered a presentation by Iain Chalmers titled “The Scandalous Failure of Scientists to Cumulate Scientifically” (presented at the Ninth World Congress on Health Information and Libraries, September 20, 2005). When Lund was appointed Guest Professor at Høgskolen i Bergen (HiB), later renamed the Western Norway University of Applied Sciences, in 2013, he proposed a new PhD program incorporating what he, inspired by that presentation, referred to as “Evidence-Based Research”. After HiB approved the inclusion of the EBR approach in the PhD program application, Lund conducted an extensive literature search on EBR. He quickly realized that Robinson had already defined the concept of EBR in precisely the way he and his colleagues intended to implement it in the PhD program.

On April 10, 2014, Robinson and Lund met at Johns Hopkins University and agreed to establish a network for evidence-based research.

Evidence-Based Research Network

Subsequently, a meeting in Bergen established a network to promote the development and use of EBR. The invited attendees were: Iain Chalmers (UK), Karen Robinson (USA), Donna Ciliska (Canada), Maureen Dobbins (Canada), Klara Brunnhuber (UK), Mona Nasser (UK), Matt Westmore (UK), Robin Christensen (Denmark), Carsten Juhl (Denmark), Gro Jamtvedt (Norway), Monica Nortvedt (Norway), Birgitte Espehaug (Norway), Hans Lund (Denmark/Norway), Kjetil Gundro Brurberg (Norway), Majbritt U. Johansen (Denmark), Mette Brandt Eriksen (Denmark), Thea Marie Drachen (Denmark) and Hanna Nykvist (Norway/Sweden). Paul Glasziou (Australia) and Malcolm Macleod (UK) participated virtually. Just before the meeting, Chalmers and Magne Nylenna published a commentary in the Lancet outlining the brief history of and rationale for EBR and announcing the network’s establishment (Chalmers and Nylenna 2014).

During the meeting, a Steering Group was established and two principles were formulated: (1) no new research studies without prior systematic reviews of existing evidence, and (2) efficient production, updating and accessibility of systematic reviews. The following day, HiB and the Norwegian “Kunnskapssenteret” arranged a public meeting to introduce and discuss the concept of EBR. Subsequently, the EBR Network published an introductory paper in the BMJ (Lund et al 2016), which has since been translated into nine languages. This paper, known as the Bergen Statement on EBR, outlined the need for EBR and set out the roles and responsibilities of researchers, funding agencies, research ethics committees, and journal editors and reviewers. It made explicit the ways in which EBR can address the factors set out in the Lancet series on increasing value and reducing waste (Al-Shahi Salman et al 2014; Chalmers et al 2014; Chan et al 2014; Glasziou et al 2014; Ioannidis et al 2014; Macleod et al 2014).

In the following years, the Steering Group met monthly, arranging the dissemination of EBR through publications and presentations and preparing funding applications. One such application was for a COST (European Cooperation in Science and Technology) Action grant, which supports network building in the European Union. With support from the Western Norway University of Applied Sciences, the EBR Network was awarded the grant in 2018.

For the following five years, the EBR Network focused on the new COST Action grant (CA17117), called “EVBRES” (Evidence-Based Research). More than 36 European countries joined, and EVBRES was launched on October 17, 2018. Subsequently, more than 60 active participants from across Europe met and prepared courses, a handbook and several publications related to the conduct of EBR and the efficient production, updating and dissemination of systematic reviews (see Appendix).

In 2021, the EBR Network published a three-paper series in the Journal of Clinical Epidemiology elaborating on the concept and application of EBR (Lund et al 2021a; Lund et al 2021b; Robinson et al 2021). The concept had become more clearly aligned with the concept of evidence-based medicine in considering multiple factors in addition to an explicit consideration of prior research. As shown in Figure 1, EBR promotes a transparent and systematic synthesis of earlier similar studies and of end users’ perspectives (Robinson et al 2021).

Figure 1. Elements of evidence-based research

The EBR Network outlines needed research within three domains: (1) meta-research on the current practice of EBR, and the consequences of not using an EBR approach, (2) development of methods to conduct EBR, and (3) methods to implement EBR and to evaluate its implementation. The Network established a research interest group, GROVR (Group on Research on Value of Research), and research by members of the EBR Network, GROVR and EVBRES has found that researchers rarely use an EBR approach when justifying and designing a new study, or when interpreting new results in the context of existing evidence (Andreasen et al 2022; Draborg et al 2022; Lund et al 2022; Norgaard et al 2022). As outlined in its research framework, the EBR Network endeavors to evolve from characterizing current research practices and their consequences to developing methods for applying EBR and, ultimately, assessing the impact of EBR itself.

Does EBR help to avoid research waste and increase the value of research?

In 2000, Emanuel and colleagues reviewed the basic philosophies underlying major ethics declarations relevant to research involving humans, such as the Declaration of Helsinki. They identified three key elements: the welfare of the participants, the validity of the study, and its value (Emanuel et al 2000; Grady 1998). Inspired by this work and by that of Benjamin Freedman (Freedman 1987), the EBR Network considers that EBR may increase the value of a study by informing its design choices (validity) and by ensuring that the new study addresses an evidence gap (scientific relevance) and/or a research need of those who will use or be affected by its results (end user relevance). The interplay of scientific and end user relevance is also illustrated in Figure 1 (Robinson et al 2021). Further, by avoiding unnecessary duplication of studies, EBR avoids putting study participants at unnecessary risk.

The first EBR Conference was held online (due to the COVID-19 pandemic) on November 16-17, 2020. The second (September 27-28, 2021) and third (October 6-7, 2022) were also online. In December 2023, as focus moved from EVBRES (which was ending) back to the EBR Network, an online seminar was held, re-introducing the concept of EBR and the network, discussing the continuation of EVBRES activities within the network, and introducing concepts of EBR in social science. The fourth Conference was held in person, adjacent to the 2nd Global Evidence Summit in Prague, Czech Republic, on September 9, 2024. During this conference, the 2nd General Assembly was held and a steering committee was elected. Also during the General Assembly, three key documents developed as part of the registration of the EBR Network as a not-for-profit organization were ratified: a Strategy, a Charter and a Business Plan. Details of the program, abstracts, videos and other materials are available online (http://ebrnetwork.org).

Future of Evidence-Based Research

During the fourth EBR Conference, two key priorities for the Network were discussed. First, EBR has primarily been developed within the health sciences, but research in other scientific fields would also benefit from this approach; the Network is therefore working to move beyond health. Second, the role of EBR in reducing waste and increasing the value of research may be even more critical where resources for research are more limited, such as in low- or middle-income countries. Further, when researchers and research groups across the globe collaborate more closely, they can more effectively address the challenges of conducting valuable research in different contexts. Thus, the EBR Network is also focused on efforts beyond the “Global North”. The future of the Network, in addition to expanding to other scientific disciplines and ensuring global engagement, involves progressing from identifying problems and developing methods to assessing the impact of EBR.

To undertake research without systematically considering what has been done before is unethical, unscientific and wasteful. The EBR Network continues to raise awareness, conduct research, and engage stakeholders to ensure that new research is valuable and to achieve the vision of a world in which the cultural norm and expectation is that decisions about research are based on transparent and systematic use of evidence.

Acknowledgments

The authors extend their gratitude to Mike Clarke, Sally Hopewell and Malcolm Macleod for their valuable feedback on the manuscript. We also thank Iain Chalmers and Mike Clarke for their encouragement and support throughout this process.

For more information on how authors are selective in what they cite in their research reports, see this entry in the Catalogue of Bias: Catalogue of Bias Collaboration, Spencer EA, Brassey J, Heneghan C. One-sided reference bias. In: Catalogue of Bias 2017.

References

Al-Shahi Salman R, Beller E, Kagan J, Hemminki E, Phillips RS, Savulescu J, et al (2014). Increasing value and reducing waste in biomedical research regulation and management. Lancet 383(9912):176-85.

Amancio DR, Nunes MGV, Oliveira ON, Costa LdF (2012). Using complex networks concepts to assess approaches for citations in scientific papers. Scientometrics 91:827-42.

Andreasen J, Norgaard B, Draborg E, Juhl CB, Yost J, Brunnhuber K, et al (2022). Justification of research using systematic reviews continues to be inconsistent in clinical health science-A systematic review and meta-analysis of meta-research studies. PLoS One 17(10):e0276955.

Antman EM, Lau J, Kupelnick B, Mosteller F, Chalmers TC (1992). A comparison of results of meta-analyses of randomized control trials and recommendations of clinical experts. Treatments for myocardial infarction. JAMA 268:240–8. (https://www.jameslindlibrary.org/antman-em-lau-j-kupelnick-b-mosteller-f-chalmers-tc-1992/)

Begg C, Cho M, Eastwood S, Horton R, Moher D, Olkin I, et al (1996). Improving the quality of reporting of randomized controlled trials. The CONSORT statement. JAMA 276(8):637-9. (https://www.jameslindlibrary.org/the-consort-group-1996/)

Bradford Hill A (1965). Report of Editors’ Conference: The Reasons for Writing. British Medical Journal 2(5466):870-2. (https://www.jameslindlibrary.org/hill-ab-1965/)

Chalmers I, Bracken MB, Djulbegovic B, Garattini S, Grant J, Gulmezoglu AM, et al (2014). How to increase value and reduce waste when research priorities are set. Lancet 383(9912):156-65.

Chalmers I, Glasziou P (2009). Avoidable waste in the production and reporting of research evidence. Lancet 374(9683):86-9.

Chalmers I, Hedges LV, Cooper H (2002). A brief history of research synthesis. Evaluation and the Health Professions 25(1):12-37.

Chalmers I, Nylenna M (2014). A new network to promote evidence-based research. Lancet 384(9958):1903-4.

Chalmers TC, Frank CS, Reitman D (1990). Minimizing the three stages of publication bias. JAMA 263(10):1392-5.

Chan AW, Song F, Vickers A, Jefferson T, Dickersin K, Gotzsche PC, et al (2014). Increasing value and reducing waste: addressing inaccessible research. Lancet 383(9913):257-66.

Clarke M, Chalmers I, Alderson P, Hopewell S (2024). Reports of randomised control trials (RCTs) should begin and conclude with up-to-date systematic reviews of other relevant trials: a 25-year audit of the quality of trial reports. Journal of the Royal Society of Medicine 117(6):212-6. (https://www.jameslindlibrary.org/articles/reports-of-randomised-control-trials-rcts-should-begin-and-conclude-with-up-to-date-systematic-reviews-of-other-relevant-trials-a-25-year-audit-of-the-quality-of-trial-reports/)

Draborg E, Andreasen J, Norgaard B, Juhl CB, Yost J, Brunnhuber K, et al (2022). Systematic reviews are rarely used to contextualise new results-a systematic review and meta-analysis of meta-research studies. Systematic Reviews 11(1):189.

Emanuel EJ, Wendler D, Grady C (2000). What makes clinical research ethical? JAMA 283(20):2701-11.

Fergusson D, Glass KC, Hutton B, Shapiro S (2005). Randomized controlled trials of aprotinin in cardiac surgery: could clinical equipoise have stopped the bleeding? Clinical Trials 2(3):218-29; discussion 229-32.

Freedman B (1987). Scientific Value and Validity as Ethical Requirements for Research: A Proposed Explication. IRB: Ethics & Human Research 9(6):7-10.

Garfield E (1991). Bibliographic Negligence: A Serious Transgression. The Scientist.

Glasziou P, Altman DG, Bossuyt P, Boutron I, Clarke M, Julious S, et al (2014). Reducing waste from incomplete or unusable reports of biomedical research. Lancet 383(9913):267-76.

Grady C (1998). Science in the service of healing. Hastings Center Report 28(6):34-8.

Hopewell S, Chan AW, Collins GS, Hrobjartsson A, Moher D, Schulz KF, et al (2025). CONSORT 2025 statement: updated guideline for reporting randomised trials. BMJ 389:e081123.

Ioannidis JP, Greenland S, Hlatky MA, Khoury MJ, Macleod MR, Moher D, et al (2014). Increasing value and reducing waste in research design, conduct, and analysis. Lancet 383(9912):166-75.

Jia Y, Li B, Yang Z, Li F, Zhao Z, Wei C, et al (2023). Trends of Randomized Clinical Trials Citing Prior Systematic Reviews, 2007-2021. JAMA Network Open 6(3):e234219.

Lau J, Antman EM, Jimenez-Silva J, Kupelnick B, Mosteller F, Chalmers TC (1992). Cumulative meta-analysis of therapeutic trials for myocardial infarction. New England Journal of Medicine 327(4):248-54.

Light RJ, Pillemer DB (1984). Summing up. The science of reviewing research. Boston: Harvard University Press. (https://www.jameslindlibrary.org/light-rj-pillemer-db-1984/)

Lind J (1753). A treatise of the scurvy. In three parts. Containing an inquiry into the nature, causes and cure, of that disease. Together with a critical and chronological view of what has been published on the subject. Edinburgh: Printed by Sands, Murray and Cochran for A Kincaid and A Donaldson. (https://www.jameslindlibrary.org/lind-j-1753/)

Lund H (2014). From evidence-based practice to evidence-based research-Reaching research-worthy problems by applying an evidence-based approach. European Journal of Physiotherapy 16(2):65-6.

Lund H, Brunnhuber K, Juhl C, Robinson K, Leenaars M, Dorch BF, et al (2016). Towards evidence based research. BMJ 355:i5440.

Lund H, Juhl CB, Norgaard B, Draborg E, Henriksen M, Andreasen J, et al (2021a). Evidence-Based Research Series-Paper 2: Using an Evidence-Based Research approach before a new study is conducted to ensure value. Journal of Clinical Epidemiology 129:158-66.

Lund H, Juhl CB, Norgaard B, Draborg E, Henriksen M, Andreasen J, et al (2021b). Evidence-Based Research Series-Paper 3: Using an Evidence-Based Research approach to place your results into context after the study is performed to ensure usefulness of the conclusion. Journal of Clinical Epidemiology 129:167-71.

Lund H, Robinson KA, Gjerland A, Nykvist H, Drachen TM, Christensen R, et al (2022). Meta-research evaluating redundancy and use of systematic reviews when planning new studies in health research: a scoping review. Systematic Reviews 11(1):241.

Macleod MR, Michie S, Roberts I, Dirnagl U, Chalmers I, Ioannidis JP, et al (2014). Biomedical research: increasing value, reducing waste. Lancet 383(9912):101-4.

Moher D, Hopewell S, Schulz KF, Montori V, Gøtzsche PC, Devereaux PJ, et al (2010). CONSORT 2010 Explanation and Elaboration: Updated guidelines for reporting parallel group randomised trials. Journal of Clinical Epidemiology 63(8):e1-37.

Mulrow CD (1994). Rationale for systematic reviews. BMJ 309(6954):597-9.

Norgaard B, Draborg E, Andreasen J, Juhl CB, Yost J, Brunnhuber K, et al (2022). Systematic reviews are rarely used to inform study design – a systematic review and meta-analysis. Journal of Clinical Epidemiology 145:1-13. (https://www.jameslindlibrary.org/norgaard-b-draborg-e-andreasen-j-juhl-cb-yost-j-brunnhuber-k-robinson-ka-lund-h-2022/)

Robinson KA (2009). Use of prior research in the justification and interpretation of clinical trials [dissertation]. Baltimore: Johns Hopkins University.

Robinson KA, Brunnhuber K, Ciliska D, Juhl CB, Christensen R, Lund H, et al (2021). Evidence-Based Research Series-Paper 1: What Evidence-Based Research is and why is it important? Journal of Clinical Epidemiology 129:151-7.

Robinson KA, Saldanha IJ, McKoy NA (2011a). Development of a framework to identify research gaps from systematic reviews. Journal of Clinical Epidemiology 64(12):1325-30.

Robinson KA, Saldanha IJ, McKoy NA (2011b). Identification of research gaps from evidence-based guidelines: a pilot study in cystic fibrosis. International Journal of Technology Assessment in Health Care 27(3):247-52.

Savulescu J (2002). Two deaths and two lessons: is it time to review the structure and function of research ethics committees? Journal of Medical Ethics 28(1):1-2.

Savulescu J, Chalmers I, Blunt J (1996). Are research ethics committees behaving unethically? Some suggestions for improving performance and accountability. BMJ 313(7069):1390-3.

Thornley C, Watkinson A, Nicholas D, Volentine R, Jamali HR, Herman E, et al (2015). The role of trust and authority in the citation behaviour of researchers. Information Research 20(3):677.

Tröhler U (2005). Lind and scurvy: 1747 to 1795. Journal of the Royal Society of Medicine 98:519-522. (https://www.jameslindlibrary.org/articles/james-lind-and-scurvy-1747-to-1795/)

Warren J (1812). Remarks on angina pectoris. New England Journal of Medicine 1(1):1-11. (https://www.jameslindlibrary.org/warren-j-1812/)

Wootton D (2015). The Invention of Science – A new history of the scientific revolution. New York: HarperCollins Publishers.

Appendix

Articles about Evidence-Based Research published or prepared during the COST Action (EVBRES) (October 2018 to April 2023)

Affengruber L, van der Maten MM, Spiero I, Nussbaumer-Streit B, Mahmić-Kaknjo M, Ellen ME, et al (2024). An exploration of available methods and tools to improve the efficiency of systematic review production: a scoping review. BMC Medical Research Methodology 24(1):210.

Andreasen J, Norgaard B, Draborg E, Juhl CB, Yost J, Brunnhuber K, et al (2022). Justification of research using systematic reviews continues to be inconsistent in clinical health science-A systematic review and meta-analysis of meta-research studies. PLoS One 17(10):e0276955.

Babic A, Poklepovic Pericic T, Pieper D, Puljak L (2020). How to decide whether a systematic review is stable and not in need of updating: Analysis of Cochrane reviews. Research Synthesis Methods 11(6):884-90.

Bala MM, Poklepovic Pericic T, Zajac J, Rohwer A, Klugarova J, Valimaki M, et al (2021). What are the effects of teaching Evidence-Based Health Care (EBHC) at different levels of health professions education? An updated overview of systematic reviews. PLoS One 16(7):e0254191.

Beller E, Clark J, Tsafnat G, Adams C, Diehl H, Lund H, et al (2018). Making progress with the automation of systematic reviews: principles of the International Collaboration for the Automation of Systematic Reviews (ICASR). Systematic Reviews 7(1):77.

Draborg E, Andreasen J, Norgaard B, Juhl CB, Yost J, Brunnhuber K, et al (2022). Systematic reviews are rarely used to contextualise new results-a systematic review and meta-analysis of meta-research studies. Systematic Reviews 11(1):189.

Juhl CB, Lund H (2018). Do we really need another systematic review? British Journal of Sports Medicine 52:1408-9.

Lund H, Bala M, Blaine C, Brunnhuber K, Robinson KA (2021). How to improve the study design of clinical trials in internal medicine: recent advances in the evidence‑based methodology. Polish Archives of Internal Medicine 131(9):848-53.

Lund H, Juhl C (2020). Doing meaningful systematic reviews is no gravy train. Lancet 395(10241):1905.

Lund H, Juhl CB, Norgaard B, Draborg E, Henriksen M, Andreasen J, et al (2021). Evidence-Based Research Series-Paper 2: Using an Evidence-Based Research approach before a new study is conducted to ensure value. Journal of Clinical Epidemiology 129:158-66.

Lund H, Juhl CB, Norgaard B, Draborg E, Henriksen M, Andreasen J, et al (2021). Evidence-Based Research Series-Paper 3: Using an Evidence-Based Research approach to place your results into context after the study is performed to ensure usefulness of the conclusion. Journal of Clinical Epidemiology 129:167-71.

Lund H, Robinson KA, Gjerland A, Nykvist H, Drachen TM, Christensen R, et al (2022). Meta-research evaluating redundancy and use of systematic reviews when planning new studies in health research: a scoping review. Systematic Reviews 11(1):241.

Mahmić-Kaknjo M, Tomić V, Ellen ME, Nussbaumer-Streit B, Sfetcu R, Baladia E, et al (2023). Delphi survey on the most promising areas and methods to improve systematic reviews’ production and updating. Systematic Reviews 12(1):56.

Nasser M, Peres N, Knight J, Haines A, Young C, Maranan D, et al (2020). Designing clinical trials for future space missions as a pathway to changing how clinical trials are conducted on Earth. Journal of Evidence-Based Medicine 13(2):153-60.

Norgaard B, Briel M, Chrysostomou S, Ristic Medic D, Buttigieg SC, Kiisk E, et al (2022). A systematic review of meta-research studies finds substantial methodological heterogeneity in citation analyses to monitor evidence-based research. Journal of Clinical Epidemiology 150:126-41.

Norgaard B, Draborg E, Andreasen J, Juhl CB, Yost J, Brunnhuber K, et al (2022). Systematic reviews are rarely used to inform study design – a systematic review and meta-analysis. Journal of Clinical Epidemiology 145:1-13. (https://www.jameslindlibrary.org/norgaard-b-draborg-e-andreasen-j-juhl-cb-yost-j-brunnhuber-k-robinson-ka-lund-h-2022/)

Nussbaumer-Streit B (2021). Resource use during systematic review production varies widely: a scoping review: authors’ reply. Journal of Clinical Epidemiology 142:321-2.

Nussbaumer-Streit B, Ellen M, Klerings I, Sfetcu R, Riva N, Mahmic-Kaknjo M, et al (2021). Resource use during systematic review production varies widely: a scoping review. Journal of Clinical Epidemiology 139:287-96.

O’Connor AM, Glasziou P, Taylor M, Thomas J, Spijker R, Wolfe MS (2020). A focus on cross-purpose tools, automated recognition of study design in multiple disciplines, and evaluation of automation tools: a summary of significant discussions at the fourth meeting of the International Collaboration for Automation of Systematic Reviews (ICASR). Systematic Reviews 9(1):100.

Puljak L, Bala MM, Zajac J, Mestrovic T, Buttigieg S, Yanakoulia M, et al (2024). Methods proposed for monitoring the implementation of evidence-based research: a cross-sectional study. Journal of Clinical Epidemiology 168:111247.

Puljak L, Lund H (2023). Definition, harms, and prevention of redundant systematic reviews. Systematic Reviews 12(1):63.

Puljak L, Makaric ZL, Buljan I, Pieper D (2020). What is a meta-epidemiological study? Analysis of published literature indicated heterogeneous study designs and definitions. Journal of Comparative Effectiveness Research 9(7):497-508.

Robinson KA, Brunnhuber K, Ciliska D, Juhl CB, Christensen R, Lund H, et al (2021). Evidence-Based Research Series-Paper 1: What Evidence-Based Research is and why is it important? Journal of Clinical Epidemiology 129:151-7.