EyeWorld | October 2013

EyeWorld is the official news magazine of the American Society of Cataract & Refractive Surgery.

Issue link: https://digital.eyeworld.org/i/194331


Tort reform and adherence to evidence-based practice paradigms

by John Banja, PhD

The August 2012 issue of EyeWorld featured an interview with Brock Bakewell, MD, chair of the ASCRS Government Relations Committee, on legislative priorities for ASCRS over the coming year. Among the medical liability (or tort reform) initiatives that Dr. Bakewell discussed, one comment really caught my attention. He said: "[I]f a physician follows an evidence-based diagnostic and treatment paradigm, then that physician should not be liable for malpractice. This provision would go a long way in reducing defensive medicine costs." (p. 19)

I have always liked this idea because it makes excellent legal and ethical sense. The primary ethical—and, I'd say, legal—obligation of any licensed or certified professional is to provide services that reasonably accommodate that profession's standards of care. That's all that potential clients or consumers of professional services can reasonably demand (in addition to not being unnecessarily harmed, of course). Patients cannot reasonably demand that their physicians unfailingly deliver superb therapeutic outcomes, because there are always variables beyond a physician's control that can cause a suboptimal result. But what patients can demand—indeed, what they have every right to demand—is that a licensed physician will provide treatment or services that accord with the practice standards his or her discipline has adopted and promulgated. Consequently, in a profession like medicine that looks to science to validate and confirm those practice standards, Dr. Bakewell's argument that a physician should not be sued for malpractice if he or she follows an evidence-based diagnostic or treatment paradigm seems eminently sound.

So I was more than a bit upset to recently come across a study by the well-known legal scholar Maxwell Mehlman that deflates this idea entirely.1 Mehlman notes not only that the idea of predicating legal immunity on a physician's practicing evidence-based medicine is at least two decades old, but also that a number of states and medical groups have tried to advance it legislatively over the years and failed miserably. Virtually all of the attempts over the last 20 years to use evidence-based practice guidelines as "safe harbors" against malpractice claims have failed, for reasons that Mehlman discusses and that I found important to know. In no particular order of importance, here are some of the main ones.

For starters, many physicians just don't like the idea of practice "standards," regardless of how much or how little evidence exists to support them. These physicians rebel at the notion of practicing according to any kind of explicitly codified rules. They are not only afraid that such rule adherence robs them of their clinical creativity and imagination, but they also worry about being sued in treatment situations wherein they clearly didn't practice according to such (evidence-based) rules. This is particularly pertinent to physicians who might claim to practice according to a "locality" or "community" standard that differs from a national standard and is often "less" evidence-based because it is more parochial or anecdotal. Still, if a physician-defendant could persuasively show that his or her treatment complied with a slam-dunk evidence-based practice, shouldn't that showing preclude a malpractice action against him or her? I would certainly vote yes.
If a physician adequately complied with a relevant evidence-based standard, such a case should never proceed to trial, because there is no malpractice if there is no negligence—and there is no negligence if the physician reasonably complied with the standard(s).

But here's the problem: What kind of evidence, and how much of it, does a physician-defendant need to establish that his or her decision or treatment was indeed "evidence-based"? Suppose an ophthalmologist who is being sued argues, "I've been practicing ophthalmology for 27 years, and that's the way I've always done it and I've never had a problem," but no other ophthalmologist does it that way. Hundreds or thousands of personal cases certainly count as at least anecdotal evidence, but would that physician's "evidence-based" argument hold up in court?

Much more likely in malpractice cases are instances where a physician-defendant will secure peer or evidence-based support (in the form of expert witnesses) for his or her actions, but where plaintiffs will nevertheless counter with their own witnesses who provide their own "evidence-based" arguments. In other words, plaintiffs will argue that the evidence-based practice claims the physician-defendant is making are outdated, idiosyncratic, or nonexistent, or that they were misapplied to the particular patient. Malpractice cases often proceed because the "evidence" either side is calling upon is flimsy, controversial, or contradicted by other evidence-based studies. What people like Dr. Bakewell and me would like to see is a world where evidence-based practices are robustly validated, consensually agreed upon, universally distributed, and applied uncontroversially to individual cases. What actually occurs in real-life malpractice cases, however, is that the evidence supporting a practice or treatment approach is either not scientifically robust or the physician-defendant stands accused of misapplying it (or bungling it) in a particular patient, i.e., the plaintiff.

Mehlman also discusses numerous problems that plague the validity of the studies that inform evidence-based practices. A not uncommon one is that the developers of the standards might have conflicts of interest. Sometimes the studies that are widely touted as providing the "evidence" for a practice standard are financially supported by a drug or device manufacturer with a vested interest in the study's outcome. Sometimes the investigators involved in such studies are paid consultants of, or equity owners in, a company that has a serious stake in the study's outcome.

Mehlman also points out that any evidence-based practice recommendation makes explicitly value-based assumptions about the gravity and probability of risks, costs, side effects, and complication rates, and about what counts as an "acceptable" outcome. But these assumptions are hardly based in scientific evidence. Just because the evidence shows that the complication rate for Treatment A is 5% lower than for Treatment B doesn't necessarily mean we should universally adopt Treatment A if it costs 10X more than B and managing Treatment B's associated complications is neither burdensome nor expensive.

But probably the most nagging problem with using evidence-based practice for legal immunity purposes involves the obscurity of scientific evidence itself. I recently read a great article by Dr. Brian Hurwitz, a primary care provider in London.2 Dr. Hurwitz was an author of three meta-analyses of the efficacy of using antibiotic eye drops for acute infective conjunctivitis.
Now, meta-analyses are the best evidence we have because they analyze all the relevant trials of a drug or device that have been conducted. Dr. Hurwitz related that for more than 30 years, he has been treating infective conjunctivitis with antibiotic eye drops. But when he completed

continued on page 22
