The Limited Value of Evidence-Based Care

by John Hardie BDS, MSc, PhD, FRCDC

INTRODUCTION

In September 2012, Oral Health published an article explaining why most current research findings are probably wrong and should be interpreted with a healthy dose of skepticism.2 A more damning and extensive indictment of the research industry appeared in The Economist in October 2013.3 Among its many startling findings are the following:

• American scientists readily acknowledge that they often get things wrong;

• There is evidence that many more dodgy results are published than are subsequently corrected or withdrawn;

• Statistical mistakes are widespread;

• Much scientific research is poorly thought through, or executed, or both;

• Many uncorrected flaws appear in peer reviewed articles;

• Prestigious medical and scientific journals fail to exert sufficient scrutiny of the results that they publish.3

The underlying concept of Evidence-Based Care (EBC) is that clinical practice is governed by rigorously derived scientific findings rather than by intuition, rituals or experience.4 Faith in the validity and efficacy of this concept has become so pervasive that the seal of approval for any intervention or clinical guideline is considered less than adequate unless it is evidence-based, whether in medicine, nursing, public health, social work or dentistry.5 For example, the Network for Canadian Oral Health Research implies the superiority of its resources by emphasizing that they are evidence-based.6

It seems obvious and intuitively correct that treatment options should be driven by the evidence derived from clinical investigations. To have any value, however, those investigations must be faultlessly designed and conducted. Since this is unlikely to occur, there is justification in questioning what reliance, if any, should be placed on clinical recommendations or guidelines that are deemed to be evidence-based.

HISTORY OF EVIDENCE-BASED CARE
At the Massachusetts General Hospital in 1835 the Boston Society of Medical Observation was formed. This marked the beginning of relying less on the opinions of “learned physicians” and more on documenting the relationships between diagnoses, treatments and outcomes.7 However, it was not until the advent of double-blind, randomized controlled clinical trials in 1952 that researchers seriously challenged the assumption that a strongly held opinion was necessarily an established fact.7 In the mid-1950s the introduction of randomization to reduce bias, the integration of separate studies through meta-analysis and the application of epidemiological principles brought a degree of sophistication to the usefulness and relevance of clinical trials.7 In the early 1990s, Sackett et al at McMaster University, Hamilton, Ontario, established the rules and procedures that govern the present usage of EBC.7

EVOLVING DEFINITIONS
In 1992 the Evidence-Based Medicine Working Group insisted that EBC should be governed solely by the evidence from systematic research.8 Sackett and colleagues believed that the clinician should play an equal role in the decision-making process. This is reflected in their later definition of EBC as, “The conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients. The practice of evidence-based medicine means integrating individual clinical expertise with the best available external clinical evidence from systematic research.”7 In 2000 the definition was expanded again by recognizing that the values and preferences of practitioners and their patients play as important a role in decision-making as research evidence and clinical expertise.9

Traditionally, dentists have made decisions on care based on personal intuition, experience, training and an understanding of pathophysiologic principles, combined with knowledge of their patients’ values, circumstances, beliefs and attitudes. As the above definitions illustrate, all that EBC adds to this equation is the inclusion of systematic research into the decision-making process. Accordingly, the value of EBC in dentistry depends on the incontestability of the research, and on the willingness of dentists to incorporate such research-based evidence into establishing a diagnosis or treatment plan.

RESEARCH AND EVIDENCE
Increasingly, errors are appearing in peer and non-peer reviewed scientific publications.2,3 The mistakes are due to poorly designed and conducted investigations combined with misinterpretation of results and widespread statistical errors.2,3 The root cause of this decline in research standards is thought to be the creed of “publish or perish” that has pervaded academia.10 This belief is supported by the publication of approximately 1.4 million papers annually in scholarly journals.3 It is not surprising that supporters of EBC admit that there is considerable variation in the quality of research reports and in the accuracy and relevance of their conclusions.9,11

The optimal patient treatment envisaged by EBC demands use of the highest quality scientific evidence.9 Therefore, dentists wishing to practice EBC should have the skills and knowledge to critically appraise research findings, allowing them to identify the relatively rare evidence that is of sufficient validity to influence clinical practice while, at the same time, avoiding the inevitable excess of weaker studies.9 This requires an understanding of how an investigation’s quality is influenced by sample size, effect size, design, researcher bias, randomization and statistics2,3 (a rough illustration of the first two factors follows below). Physicians receive little training in these areas, with 75% of general medical practitioners unable to understand the statistical aspects of journal articles.4 Accordingly, it is a reasonable assumption that the average dentist, with perhaps 30 minutes of free time per week, has neither the time nor the training to critically assess what is valid and useful research.9 This role has been assumed by personnel charged with developing evidence-based clinical practice guidelines.
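
As a rough, hypothetical illustration of how sample size and effect size combine to determine a trial’s statistical power, the sketch below applies a standard normal approximation to a two-arm comparison of means. It is not taken from any of the studies cited in this article, the numbers are invented for illustration only, and a Python environment with SciPy is assumed.

```python
# A minimal, hypothetical sketch (not from any study cited above):
# approximate power of a two-arm trial comparing means, using a
# standard normal approximation.
from scipy.stats import norm

def approx_power(effect_size, n_per_group, alpha=0.05):
    """Approximate power of a two-sample comparison of means."""
    z_crit = norm.ppf(1 - alpha / 2)                  # two-sided critical value
    noncentrality = effect_size * (n_per_group / 2) ** 0.5
    return norm.cdf(noncentrality - z_crit)           # upper-tail approximation

# A "moderate" effect (d = 0.5) studied with only 30 patients per arm:
print(round(approx_power(0.5, 30), 2))   # ~0.49 -- little better than a coin flip
# The same effect with 64 patients per arm:
print(round(approx_power(0.5, 64), 2))   # ~0.81 -- above the conventional 80% target
```

The point is simply that a trial half the conventional size detects even a moderate effect no better than chance, which is one reason small studies contribute so little usable evidence.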

EVIDENCE-BASED CLINICAL PRACTICE GUIDELINES
Traditional clinical guidelines are an expression of the opinions, experiences and biases of teachers, specialists, professional associations and regulatory agencies.11 Often it is difficult to determine to what degree such guidelines are based on research of acceptable quality.

Ideally, well-developed evidence-based clinical practice guidelines (EB-CPGs) are built on five separate but inter-related processes. The first is the identification of the specific clinical question that the evidence should address. The second is a formal, reproducible and transparent method of gathering and evaluating the available evidence. The third is using the evidence to produce guidelines that also incorporate the preferences of patients and practitioners. The fourth is the modification of the guidelines following their review by potential users, including their patients. The fifth is the distribution of the guidelines to all appropriate practitioners.11

This process strives to incorporate the best evidence while respecting the necessary input from clinicians and their patients. However, it is a time-consuming and expensive procedure. One study suggests that the development of a single EB-CPG takes about 1.5 years and costs about $50,000.12 This might be a reason why the Canadian Collaboration on Clinical Practice Guidelines in Dentistry, charged in 1999 with developing “home grown” guidelines, did not survive beyond the early 2000s.

Apart from the time and financial constraints, most EB-CPGs suffer from a lack of large, well-designed, statistically powerful, double-blinded randomized trials.11,13,14 These “gold standard” studies are uncommon in medicine, exceptionally rare in dentistry and tend to occur only in industry-sponsored drug trials.14 The lack of high-quality clinical research means that developers of EB-CPGs have to rely on “best evidence” from small clinical trials with minimal statistical power, from investigations with non-randomized controls or no controls, and sometimes from the results of consensus meetings and focus groups.14

The inherent weaknesses of such studies produce evidence that is of questionable quality, and of minimal practical use for the development of EB-CPGs. For example, a systematic review of the evidence to support a six-month recall found, “a lack of high-quality evidence in favor of routine dental examinations at 6-month intervals; however, clear clinical evidence of harm resulting from this practice could also not be identified.”15 An evidence-based study attempting to determine the benefits of flossing was marred by the nature of the randomized controlled trials, causing the investigators to conclude that, “Trials were of poor quality and conclusions must be viewed as unreliable.”2 Another review, on the effects of scaling and polishing at different time periods to improve oral health, identified eight pertinent studies, all of which were of insufficient quality to reach any conclusions.15 A published EB-CPG on the emergency management of acute apical periodontitis in adults was based on an analysis of 15 randomized controlled trials, of which only one was considered by the investigators to be of high quality.16 A detailed analysis in the medical literature demonstrated that there are reasons to question the degree to which EB-CPGs are truly evidence-based.17

The stark reality is that “evidence-based” does not imply evidence of uniformly high quality. Often the evidence is poor or, at best, mediocre because of the failings of the research on which it is based.2 Accordingly, it would be prudent for dentists to cast a critical and skeptical eye on all “evidence-based” guidelines, protocols or recommendations before implementing them.

GUIDELINE ACCEPTANCE
A general dentist with the appropriate training, time and interest should be capable of searching the literature for the “best evidence” with which to address a particular clinical situation. However, as discussed above, if EBC dentistry is being practised, it will most likely be by means of EB-CPGs.

Unfortunately, it appears that guidelines have a minimal effect on clinical behaviour or patient outcomes.12-14,18 A major analysis conducted by the Cochrane Centre on Effective and Organizational Practice found that most guidelines have a limited impact (less than 10 percent) on the provision of care.12 The usual passive introduction of guidelines in professional publications is of minimal value.14 Dutch studies show that guidelines are unlikely to effect change in decision-making unless their introduction is accompanied by intensive, continuing, comprehensive, multifaceted and expensive educational programmes.12 Until sponsoring organizations are prepared to mount such vigorous implementation procedures, the cost-effectiveness of EB-CPGs will remain highly questionable.

The clinical trials that are the basis of EB-CPGs are designed to answer a specific clinical question. Randomization and often complex statistical manipulations are needed to compensate for the inevitable variables among the trial participants, so that the only common characteristic they share is the clinical condition for which evidence is being sought.19-21 This construct produces “sterilized” evidence applicable to the artificially controlled trial subjects but not readily transferable to real patients. For example, a randomized trial establishing the ideal flossing technique for healthy adults is of minimal significance to an elderly arthritic patient with drug-induced xerostomia.

Deciding if an EB-CPG will produce a favourable outcome depends not only on the clinical expertise of the dentist but also on a detailed and specific knowledge of the patient’s values, preferences, medical and dental histories, socio-economic status and cultural background.22 The trials on which an EB-CPG is based are unlikely to have addressed these factors, making the guideline somewhat irrelevant to the decisions that need to be made by the dentist and patient. Accordingly, it is not surprising that EB-CPGs have a minimal influence on clinical practice or patient outcomes.

CONCLUSIONS
The goal of evidence-based care cannot be questioned. Quite simply, it is to provide the best care for a patient with a specific diagnosis. To accomplish this, its underlying research must be faultless, it must be distributed to clinicians in a cost-effective manner and it must be proven to have a positive influence on clinical practices and treatment results. This article has demonstrated that there are numerous reasons to doubt whether any of these criteria are being met, or are likely to be achieved in the near future.

A major fault of evidence-based care is that it cannot control for all the variables that are present in clinical practice. Accordingly, evidence-based guidelines and recommendations are of only limited value. Ultimately, the decision on care will be determined by the practitioner’s judgement tempered by knowledge of the patient’s needs. It was ever thus. OH


Dr. Hardie’s initial involvement with Evidence-Based Care was in 1996 while assisting the RCDSO in the development of Infection Control Recommendations. Subsequent experiences have tempered his enthusiasm for the concept.

Oral Health welcomes this original article.

REFERENCES

1. Sutcliffe S, Court J. Evidence-Based Policymaking: What is It? How Does It Work? What Relevance for Developing Countries? Overseas Development Institute 2005. http://www.odi.org.uk/resources/download/2804pdf

2. Hardie J. Why Research Findings Are Usually Wrong. Oral Health; Sept 2012: 40-51.

3. Unreliable research. Trouble at the lab. The Economist; October 19th 2013.

4. Taylor DK, Buterakos J. Evidence-Based Medicine: Not as Simple as It Seems. Academic Medicine 1998; 73(12): 1221-1222.

5. Yamey G, Feachem R. Evidence-based policymaking in global health – the payoffs and pitfalls. Evid Based Med 2011; 16: 97-99.

6. Network for Canadian Oral Health Research Launches New Website. J Can Dent Assoc 2014; 80(1): 27.

7. Nierenberg AA. Promises, Pitfalls, and Pleasures of Practicing Evidence Based Psychiatry and Neurology. CNS Spectr 2009; 14(12): 665-667.

8. Horwitz RI. The dark side of evidence-based medicine. Cleveland Clinic Journal of Medicine 1996; 63(6): 320-323.

9. Dollaghan C. Evidence-Based Practice: Myths and Realities. The ASHA Leader; April 13th, 2004.

10. Problems with scientific research. How science goes wrong. The Economist; October 19th 2013.

11. Sutherland SE. The Building Blocks of Evidence-based Dentistry. J Can Dent Assoc 2000; 66:241-244.

12. Grol R. Between evidence-based practice and total quality management: the implementation of cost-effective care. International Journal for Quality in Health Care 2000; 12(4): 297-304.

13. Haynes B, Haines A. Barriers and bridges to evidence based clinical practice. BMJ 1998; 317(7153): 273-276.

14. Timmermans S, Mauck A. The Promises and Pitfalls Of Evidence-Based Medicine. Health Affairs 2005; 24(1): 18-28.

15. Is the Six-Month Recall Interval Evidence-Based? The Oral Care Report 2006; 16(3): 1-3.

16. Sutherland S, Matthews DC. Emergency Management of Acute Apical Periodontitis in the Permanent Dentition: A Systematic Review of the Literature. J Can Dent Assoc 2003; 69(3): 160.

17. McAlister FA, et al. How Evidence-Based Are the Recommendations in Evidence-Based Guidelines? PLoS Med 2007; 4(8): e250. doi:10.1371/journal.pmed.0040250

18. Grol R. Successes and Failures in the Implementation of Evidence-Based Guidelines for Clinical Practice. Medical Care 2001; 39(8): 46-54.

19. Loewy EH. Ethics and Evidence-Based Medicine: Is There a Conflict? Med Gen Med 2007; 9(3): 30-40.

20. Roberts MS. HCLF: The powers and pitfalls of evidence-based medicine. http://www.clinical-innovation.com/topics/clinical-practice/hclf-powers-and-pitfalls-evidence-based-medicne

21. Annis D. The limits of evidence-based medicine. http://www.dbskeptic.com/2008/08/17/the-limits-of-evidence-based-medicine/

22. Tonelli MR. The Philosophical Limits of Evidence-based Medicine. Academic Medicine 1998; 73(12): 1234-1240.
