About Evidence-Based Research (EBR)

Introduction
A number of studies show that researchers, research funders, regulators, sponsors and publishers of research fail to use earlier research when preparing to start, fund, regulate, sponsor or publish the results of new studies. To embark on research without systematically reviewing the evidence of what is already known, particularly when the research involves people or animals, is unethical, unscientific, and wasteful.

Well over a century ago, Lord Rayleigh realised that simply accumulating research results benefits no one; each new result needs to be interpreted in the context of earlier research:

“If, as is sometimes supposed, science consisted in nothing but the laborious accumulation of facts, it would soon come to a standstill, crushed, as it were, under its own weight …

The work which deserves, but I am afraid does not always receive, the most credit is that in which discovery and explanation go hand in hand, in which not only are new facts presented, but their relation to old ones is pointed out.”

Lord Rayleigh at the 54th meeting of the British Association for the Advancement of Science held in Montreal in 1884

In 1964, the same principle was endorsed by the World Medical Association:

“The Helsinki Declaration states that biomedical research involving people should be based on a thorough knowledge of the scientific literature. That is, it is unethical to expose human subjects unnecessarily to the risks of research. Ideally, the introduction should include a reference to a systematic review of previous similar trials or a note of the absence of such trials.”

 

The assumption
Many might, however, argue that they have never come across an article in a scientific journal that did not refer to earlier results. Are we not, then, already complying with the scientific ideal? A scientific committee recently put this argument into writing in its response to an application for a new PhD program proposing an evidence-based research approach for all PhD students:

“Thus, why introduce the new concept of evidence-based research and demand that researchers adhere to it – has research not always been evidence-based?”

 

The evidence
In its guidance on reporting randomised controlled trials (RCTs), the CONSORT group stated that “The introduction should include a reference to a systematic review of previous similar trials”. The main point of a systematic review is to avoid selection bias as far as possible: all relevant studies should be included, providing an exhaustive summary of the current literature on the research question. The reason is simple but crucial. Writers and readers of systematic reviews know just how widely clinical studies can differ in their results and conclusions, even when they examine exactly the same question. By picking and choosing which studies to cite in a study introduction, an author can easily make the case for a new study simply by selecting supporting references. Many scientists, trying to keep the introduction to a minimum, support each statement by citing the newest, the best or the largest study that fits. This approach cannot be scientific, because it is based on personal preference rather than on the totality of earlier research.

Using snowball sampling and reference checking, we have identified at least 20 relevant articles evaluating this question. Studies analysing how often scientific authors refer to the totality of earlier research found a general lack of any systematic approach. The main conclusion was summed up by one of the study authors:

“No matter how many randomized clinical trials have been done on a particular topic, about half the clinical trials cite none or only one of them. As cynical as I am about such things, I didn’t realize the situation was this bad”.

Dr. Steve Goodman, New York Times, 17th January 2011

Goodman was referring to a study he co-authored with Karen Robinson, published in 2011. They examined all systematic reviews of health care questions published in 2004 that included a meta-analysis combining four or more RCTs, thereby identifying trials whose authors could potentially have cited three or more earlier studies in the same area. Even though many of the included trials could have referred to 10 or more previous studies, the median number of earlier trials cited was consistently 2!

Graph showing median number of references to previous similar studies against number of studies available (Robinson and Goodman, 2011)

In 2005 Fergusson and colleagues published a cumulative meta-analysis showing that, by 1994, enough trials had accumulated to conclude that aprotinin reduces bleeding during cardiac surgery (a minimal sketch of how such a cumulative analysis pools trial results follows the figure below). Nevertheless, in the following decade more than 4,000 patients were enrolled in unnecessary RCTs comparing aprotinin with placebo. This means that at least 2,000 patients did not receive potentially life-saving medication, even though a systematic review of the evidence available in 1994 would already have demonstrated the beneficial effect.

Cumulative meta-analysis showing the proven benefit of aprotinin on the need for blood transfusion during cardiac surgery (Fergusson et al., 2005)
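For readers unfamiliar with the method: a cumulative meta-analysis simply re-runs a pooled analysis each time a new trial is published, in chronological order. A minimal sketch, using the standard inverse-variance fixed-effect model as an illustration (this shows the general technique, not necessarily the exact model Fergusson et al. applied):

\[
\hat{\theta}^{(k)} = \frac{\sum_{i=1}^{k} w_i\,\hat{\theta}_i}{\sum_{i=1}^{k} w_i},
\qquad
w_i = \frac{1}{\operatorname{Var}(\hat{\theta}_i)},
\qquad
\operatorname{SE}\!\left(\hat{\theta}^{(k)}\right) = \left(\sum_{i=1}^{k} w_i\right)^{-1/2}
\]

Here \(\hat{\theta}_i\) is the effect estimate from the \(i\)-th trial (for example, the log odds ratio of needing a blood transfusion), and \(\hat{\theta}^{(k)}\) is the pooled estimate after the first \(k\) trials. Plotting \(\hat{\theta}^{(k)}\) with its confidence interval after each new trial shows the point at which the accumulated evidence already excludes “no effect”; Fergusson’s argument is that this point had been reached long before many of the later placebo-controlled trials were started.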

In a series of studies, Clarke and Chalmers [1998, 2002, 2007 & 2010] repeatedly showed that reports of RCTs published in the month of May in the five highest-ranking medical journals (JAMA, BMJ, NEJM, The Lancet and Annals of Internal Medicine) almost never set their results in the context of a systematic review of similar earlier trials.

Thus, while many people assume that all research is evidence-based, the evidence clearly indicates that this is not the case!

Prof. Hans Lund on the “dangerous idea” of EBR

The Evidence-Based Research Network

To address the problem outlined above, a group of Norwegian and Danish researchers initiated an international network, the ‘Evidence-Based Research Network’ (EBRNetwork). The EBRNetwork was established in Bergen, Norway, in December 2014, with initial partners from Australia, Canada, Denmark, the Netherlands, Norway, the UK and the USA.

The aim of the EBRNetwork is to reduce waste in research by promoting:

  • No new studies without prior systematic review of existing evidence
  • Efficient production, updating and dissemination of systematic reviews

The EBRNetwork has suggested a new working definition of a systematic review:

“a systematic review is a structured and preplanned synthesis of original studies that consists of predefined research questions, inclusion criteria, search methods, selection procedures, quality assessment, data extraction, and data analysis. No original research study should be deliberately excluded without explanation, and the results from each study should justify the conclusion.” 

The EBRNetwork has numerous resources for people wanting to find out more about EBR, including presentations and an extensive bibliography. In 2016, members of the EBRNetwork published an analysis article in The BMJ, “Towards evidence based research”, discussing EBR and its role in preventing research waste. The article contained the EBR Statement, detailing the different stakeholders’ responsibilities in meeting the aims of EBR, and a flow chart for EBR.


Flow chart for evidence-based research, from “Towards evidence based research”. Lund H, et al. BMJ 2016 Oct 21;355:i5440
Prof. Hans Lund on EBR and why research before researching is still an issue