Correspondence to: Martin J. Tobin, MD, Division of Pulmonary and Critical Care Medicine, Loyola University of Chicago Stritch School of Medicine, Edward J. Hines, Jr, Veterans Administration Hospital, Fifth Ave and Roosevelt Road (111N), Hines, IL 60141; e-mail: firstname.lastname@example.org
I firmly believe clinical practice should be based on the best scientific evidence. But how do you define best evidence? Evidence-based medicine (EBM) founders say "identifying the best evidence means using epidemiologic and biostatistical ways of thinking."1 Table 1 lists five reasons why this approach is scientifically unsound.
A fundamental premise on which EBM is founded is the ability to grade the quality of research studies. The grading system (levels 1 to 5 evidence) was originally published in a CHEST Supplement (Table 2).2 EBM grading views randomization not merely as one important factor among many but as more important than every other component of research methodology. The same concept is rephrased by Sackett et al3: "If the study wasn't randomized, we'd suggest that you stop reading it and go on to the next article." EBM grading is based on neither empirical investigation nor rationalist theory. The original article2 is simply an opinion piece.
There are two reasons why EBM grading is flawed. One, the grading is detached from scientific theory.2,4 Homeopathy uses preparations so dilute that, on average, less than one molecule of active agent is present. Benefit from dilution beyond Avogadro's number contradicts pharmacologic theory. A metaanalysis5 of 89 placebo-controlled trials revealed a combined odds ratio of 2.45 in favor of homeopathy. EBM grades metaanalysis as level 1 evidence but completely ignores scientific theory.2 There is nothing necessarily wrong with this particular metaanalysis, but the example illustrates how a system that grades the findings of all metaanalyses as level 1 evidence2 is inherently flawed.6 A grading system that ranks homeopathy as sounder evidence than centuries of pharmacologic science is refuted by reductio ad absurdum.
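The "beyond Avogadro's number" point is simple arithmetic. A minimal sketch (assuming, for illustration, a 30C potency, ie, 30 successive 1:100 dilutions starting from one mole of active agent; the figures are illustrative and not taken from the cited metaanalysis):

```python
AVOGADRO = 6.022e23  # molecules of active agent in one mole

def molecules_remaining(c_dilutions, start=AVOGADRO):
    """Expected number of active-agent molecules left after a given
    number of successive 1:100 ("C") dilutions."""
    return start * (1 / 100) ** c_dilutions

# A common homeopathic potency is 30C: a 10**-60 dilution factor,
# vastly exceeding the 6 x 10**23 molecules initially present.
print(f"{molecules_remaining(30):.1e} molecules expected")
```

The expectation at 30C is on the order of 10⁻³⁷ molecules, ie, effectively zero: the preparation is pure diluent, which is why a claimed pharmacologic benefit contradicts pharmacologic theory.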
Two, attempts at grading of research in other disciplines have failed. The most famous attempt was by the logical positivists.7 This school contained some of the brightest minds of the early twentieth century. It dominated analytic philosophy of that period. Positivists developed a verifiability criterion, which demarcated "meaningful" from "meaningless" research statements. Popper8 and others pointed out two fundamental flaws of positivism; thereafter, positivism lost all supporters.7 EBM retains these two flaws: a dissociation of facts from scientific theory (homeopathy, above), and no empirical testing (see below).
EBM founders have repeatedly revised their grading system.9 They have, however, never explained why their system can overcome the problems that proved insurmountable to the logical positivists. Given the defeat of positivism, leading epistemologists have regarded all attempts to grade scientific research as fundamentally flawed.7,8,10 No field of inquiry other than clinical medicine attempts to grade science.
EBM thinking gets even more worrisome. EBM founders say evidence can be “pregraded for validity by people with expertise in research methods.”11 Wait. Surely “pregraded” is misstated. Can you grade an article before reading it? Apparently yes. That is the inevitable conclusion of an argument premised on the belief that a randomized controlled trial (RCT) always constitutes level 1 evidence (no matter how sloppy the research). This is equivalent to judging a book by its cover.
Table 3 lists eight examples of requirements for reliable research.10 It would be silly to rank these: if one is absent, the research is no longer reliable. Yet EBM pregrades a study as level 1 evidence if the researchers avoided assignment bias (through randomization) even if they ignored the other seven requirements.2 A grading system premised on the belief that randomization can cancel every other methodologic error is contrary to the most elementary understanding of science.
Clinicians have been lured into accepting EBM-based clinical practice guidelines in the belief that they place medicine on a more scientific basis.12 An example familiar to CHEST readers is the grade A recommendation made by an EBM Task Force for implementation of weaning protocols.13 The task force refers specifically to the study by Ely et al14 as sound evidence. But this study has flawed internal validity: intermittent mandatory ventilation was used in 76% of patients in the control arm, whereas T-tube or flow-by trials were used in 100% of patients in the intervention arm.
How could EBM founders base a grade A recommendation on a study with flawed internal validity? Because their criteria completely ignore breaches of internal validity.2 So what is a grade A recommendation based on? It is based on "precision of the estimated intervention effects … the narrower the confidence interval … the greater our ability to make strong recommendations."12 (I am not making this stuff up.) The width of a confidence interval is largely determined by sample size.6 This type of "precision" has nothing to do with "scientific precision," such as ensuring internal validity.6 The graders' emphasis on confidence interval confuses statistics with science. A small confidence interval is a trap for the nonthinker: statistical precision is misinterpreted as "scientific exactness."
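The confidence-interval point can be illustrated with a few lines of arithmetic. A minimal sketch (the 1.96 multiplier for a 95% interval is standard; the 30% effect size and sample sizes are illustrative assumptions, not values from any of the cited trials): the width of an approximate 95% CI for a proportion shrinks as 1/√n, so a large enough sample yields a "precise" estimate even when the underlying design is biased.

```python
import math

def ci_width_95(p_hat, n):
    """Width of an approximate (normal) 95% confidence interval
    for a proportion p_hat observed in a sample of size n.
    Nothing in this formula "knows" whether the study design was biased."""
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    return 2 * 1.96 * se

# The same observed effect (30%), measured in ever-larger samples:
for n in (50, 500, 5000, 50000):
    print(f"n = {n:>6}: CI width = {ci_width_95(0.30, n):.4f}")
```

The interval narrows roughly tenfold as n grows a hundredfold, yet no amount of narrowing can detect, say, unequal weaning techniques between study arms.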
You may think EBM does no harm. Not so. Clinical medicine requires thoughtful reflection about each individual patient, whereas graded guidelines encourage reflexive action. A double-blind RCT revealed that spironolactone decreased the mortality rate in patients with severe congestive heart failure (CHF) by 30%.15 The clinical practice guidelines of the American Heart Association subsequently recommended spironolactone for treatment of ventricular dysfunction.16 This was followed by a fourfold increase in spironolactone prescriptions and a sixfold increase in deaths from hyperkalemia.17 Reflex response to level 1 evidence, without reflection about underlying pathophysiology and individual context, can kill.
Guidelines based on level 1 evidence, which ignore non-RCT research, can also kill. Sinuff et al18 developed a guideline for use of noninvasive positive-pressure ventilation (NPPV) in acute respiratory failure. They judged RCT data to support use of NPPV in COPD and CHF but not in other conditions. Before the guideline was introduced, 35% of patients with conditions other than COPD and CHF were intubated. After the guideline came into force, 100% were intubated,18 and mortality increased from 21% to 34%. Hill19 pointed out that by classifying "patients as not meeting NPPV criteria, the authors could have unintentionally encouraged endotracheal intubation in this subgroup, possibly contributing to morbidity and mortality."
The fundamental assumption of EBM is that physicians who practice EBM provide superior care.4 But EBM founders have never undertaken an RCT of the effect of EBM on patient outcome.11 So EBM fails to satisfy the basic requirement it demands of everyone else. (Hypocrisy or what?)
They say an RCT of EBM is unnecessary because “outcomes researchers consistently document that patients who receive proven efficacious therapies have better outcomes than those who do not.”20 With this non sequitur, EBM advocates claim credit for all research done under the heading of clinical research. But EBM is not a product of research. It is an activity for ranking the products of research. EBM advocates conflate the two. They need to disentangle them.
EBM founders say an RCT of EBM would be "impossible to do."11 Not true. All that is needed is a matched comparison of institutions where physicians practice EBM vs institutions where physicians do not subscribe to the tenets of EBM.
EBM founders say clinical decisions should be based on empirical evidence, and that expert opinion is untrustworthy.2–4,11 But EBM founders have never subjected EBM to empirical testing. Instead, EBM (and its grading system) is based solely on expert opinion. Thus, if EBM's tenets are true, then EBM should not be trusted, quod erat demonstrandum.
A major attraction of EBM is that it offers a means of coping with uncertainty. Given a physician's responsibility—to make life-and-death decisions about another human—the wish for certainty is understandable, as is the wish to act as the wisest physician would when faced with a problematic patient. But these wishes are contrary to the reality of medicine.
A wise physician makes decisions on a background of scientific theory (universal principles) [Fig 1]. Clinical practice, however, involves primarily phronesis (practical wisdom): a customized decision for one particular patient. A wise clinician bases each customized decision on a sound knowledge of science. Many physicians have been seduced by marketing of the "EBM-grading construct," believing it makes clinical practice more scientific. These physicians, however, seem unaware that the EBM-grading construct is detached from science and poses a serious risk to patient safety.
Dr. Tobin is Professor of Medicine and Director, Division of Pulmonary and Critical Care Medicine, Loyola University of Chicago Stritch School of Medicine.
The author receives royalties for two books on critical care published by McGraw Hill. The author does not receive financial support for writing, advising, or consulting on evidence-based medicine or grading, or from pharmaceutical, biotechnology, or medical device companies.
Modified with permission from Cook et al.2