This story is pretty powerful. A doctor receives a misdiagnosis, which he later catches and corrects. He contacts the hospital where the misdiagnosis occurred, offering to help them improve. But he's rebuffed. https://www.washingtonpost.com/health/hospital-misdiagnosis-mistakes-ignored/2020/10/02/7bac2d10-f851-11ea-be57-d00bb9bc632d_story.html
Brian R. Jackson, MD, MS
Assoc. Professor of Pathology (Clinical), University of Utah
Medical Director of Support Svcs, IT and Business Development, ARUP Laboratories
The whole thing is disturbingly sad. A couple of questions for the risk/malpractice experts on this listserv.
Could you explain the claim that you "will not be covered by the malpractice insurer" if you talk? How does the "obfuscate, don't talk, deny, and defer to patient-experience staff and administrators" approach align with the apology-and-disclosure movement that risk management and malpractice carriers seem to support?
This article written by a #neurologist whose #emergency care was inept and inadequate illustrates the potential problems that every #patient faces when they engage with the #healthcare system. Had this patient been like the rest of us, and not a top neurologist, he would currently be a quadriplegic or dead. The disappointing response that he received from the hospital system makes us want to throw up our hands in despair. But we can't. As patients, we need to protect ourselves from potential harm by being educated and engaged in our own healthcare. Please read my Dx IQ columns about how to do that at https://www.improvediagnosis.org/dxiq-column/ before your next appointment. #PatientEducation #PatientEmpowerment #PatientEngagement #PatientSafety
October 5, 2020
The medical field is no different from other science-related careers; no individual can possess all the working knowledge they will need when facing a crisis with a patient. As many of you have pointed out, this requires the clinician to exercise clear thinking and decision making. One of the choices is always to question one's own judgment and seek supportive resources.
All of the world-renowned experts I have known in my lifetime, every last one of them, continually questioned their own thinking.
The "system" often drives toward quick answers and readily available treatment, but an expert is never satisfied with this shortcut. So, in the case of the Washington Post article, the patient was the "expert" who pursued the necessary solution. In the many cases where the patient is not capable of going the distance, the care team needs to carry this burden.
Despite all of the system issues that throw up roadblocks, the end result should be the care team's dedication to meeting the patient's expectation of appropriate treatment. That is the definition of quality in medical services.
Although it is usually an individual who carries the blame when a situation goes bad, the education system (which teaches students to know the "right" answer rather than a thinking process), facility expectations, and institutional culture share responsibility.
Critical thinking is both a skill and an art. Help your colleagues and their patients by encouraging them to improve their toolset, with this as the number one priority.
If one takes the situativity approach seriously (see the most recent issue of Diagnosis), the phrase "the most significant diagnostic failures result from error in clinical judgment, i.e., how clinicians think and not what they know" does not adequately address the problem. In fact, much of the failure in the example cited could be attributed to deficiencies in skills, before the examiners even got to the knowing and thinking parts!
Once we get the skills problem solved (if we can), the correct statement IMHO is "the most significant diagnostic failures result from errors in clinical judgment, i.e., how clinicians think in the settings in which the clinical/diagnostic decisions are made." We focus too much on fixing the individual's thinking defects while ignoring the conditions in which medicine is practiced and our diagnoses are made. The NAM report made that point 5 years ago, as did To Err Is Human 20 years ago. We might be better off trying to figure out how to make it easier for clinicians to do the right thing rather than the wrong one, and fix that. "It's (mostly) the systems!"
Agree, David. Knowledge and expertise of course play a significant role. There is no way a generalist can be expected to memorize and match the knowledge and ability of the expert. Research shows that in the US, 65% of skin presentations are seen by non-dermatologists such as emergency physicians, primary care, and urgent care. In the UK, there are roughly 500 dermatologists for 60 million people. In some countries in Africa there is roughly 1 dermatologist per million people. Non-dermatologists therefore must diagnose and care for skin presentations, yet they have profound knowledge gaps in recognizing the diagnostic clues found in the physical exam of the skin. Dermatologists see these mistakes made by generalists daily, but the lack of feedback loops means that primary care and emergency clinicians frequently do not hear what they missed. We need to support primary care and all generalists with tools, as they have not seen enough cases and lack the knowledge necessary to recognize the patterns. In this NYT story, we contributed images to highlight how gaps in diagnosing disease in skin of color further compound the problem: https://www.nytimes.com/2020/08/30/health/skin-diseases-black-hispanic.html I would argue that most generalists would not be able to recognize these patterns in skin of color.
Associate Professor of Dermatology and Medical Informatics
University of Rochester
David Newman-Toker:
There are any number of ways to parse the WaPo article, many of which have been articulated here. I have one framing to add and then a comment about whether clinical judgment failures are disproportionately about cognitive or affective biases (which I don't think the available evidence necessarily supports), as opposed to failures of expertise (by which I mean elaborated symptom/disease knowledge and skills that can be applied efficiently and effectively in clinical context, using the parlance of educators; or well-honed, well-calibrated, highly accurate System 1 reasoning, using the parlance of dual-process theory):
It is difficult to accept that the available evidence doesn't support the impact of biases on clinical decision making. Biases are now shown to exert impact in pretty much every human endeavour that involves decision making. It would be truly remarkable if medical decision makers were not affected in the same way as other people – is there anything about us that makes us immune? Thousands of papers have been published on this, including in every discipline of medicine.
In the parlance of educators, highly accurate System 1 reasoning doesn't exist. Reasoning is a deliberate cognitive process, whereas System 1 is only capable of autonomous, reflexive responses. As cognitive scientists have shown, System 1 can be highly trained to deliver good decision making, but not reasoning. Reasoning is a deliberate cognitive process that occurs in System 2.
1. THE BIG THREE: Rather than focusing on causes, you could also look at this as a near miss for a disease we routinely miss, spinal abscess: one of the top 5 causes of harm from infection (PMID: 31535832), despite being a rare disorder, because it is missed 65% of the time (PMID: 32412440). If we set up protocols that reduced errors by 30% for just 15 high-risk diseases, we could prevent about 100,000 patients from suffering death or disability each year in the US. Why are we arguing about the causes instead of fixing the problems in the disease contexts and clinical settings where we know they occur, leading to harm?
The reason for focusing on causes is that it provides a general tool we can all use to monitor our decision making; trying to avoid consideration of cause is counter-scientific. It is difficult to imagine a world where we say "some people think they understand why we make these errors, but we are not interested in their explanations." As ethicists have pointed out, in this area in particular, if we know the cause of something there is an ethical imperative to understand it and use it.
Agreed that we can (and should) develop strategies to avoid top causes of diagnostic error wherever they are known, but protocols are largely useless if you don't recognize when they need to be used; e.g., pulmonary embolus is missed about 50% of the time on initial presentation. It is not because we don't know the pathophysiology of thromboembolism in excruciating detail; we just don't think of it. Shouldn't we be trying to understand why we don't think of something, e.g., what are the reasons for getting anchored on some other possibility? In a recent study of clinical cases we found anchoring to be the most prevalent cognitive bias. Shouldn't we try to understand why decision-makers anchor and how they might avoid it when it is potentially harmful? Understanding the basis of anchoring helps us in all diagnoses, not just selected ones.
It should be possible to study the common biases in every discipline and provide appropriate training. It seems likely that the reduced numbers of diagnostic failures seen in the pattern-recognition specialties are due to exposure to fewer biases.
2. KNOWLEDGE, EXPERTISE, and JUDGMENT: I think the jury is still out on how often diagnostic errors are due to knowledge gaps. Perhaps, as suggested by Pat Croskerry earlier, relatively few diagnostic errors can be attributed solely to simple book knowledge deficits (in the sense of MCQ exams), though others probably disagree (PMID: 25176155), and there should be some interesting new data on this in 2021. Certainly for diagnosing dizziness, knowledge gaps are rampant and demonstrable even using epidemiologic health outcomes data: CT scans are nearly useless to "rule out" ischemic stroke in the ED (PMID: 17258669), yet many ED physicians rely on them to do so (PMID: 17976351), such that discharge after a negative CT scan in the ED among dizzy patients is actually a 2.3-fold risk factor for suffering a stroke hospitalization within a matter of days (PMID: 25477217). In other words, the ED docs correctly risk-stratified the patients with respect to stroke to get the CT in the first place, but were falsely reassured by the negative result (PMID: 26231272). But setting book knowledge aside, there is a lot to commend the theory that the final common pathway for "clinical judgment failures" is not bias, per se, but lack of expertise (PMID: 26980778), which, in turn, is due to a lack of "deliberate practice" in diagnosis in the formal sense of the term, as used by Ericsson (PMID: 18778378), which itself (in addition to sustained attention to self-improvement and the right training materials) requires prompt and accurate feedback, which we rarely get in routine clinical practice (PMID: 30386846).
Certainly in my experience as a neurologist, where I have seen huge numbers of diagnostic errors made in frontline care settings with neurological diseases, almost every one could be attributed to failures of clinical expertise. In my opinion, cognitive bias is what thrives in the void of expertise: when System 1 remains miscalibrated, we rely on whatever faulty heuristics live nearby.
I cannot agree that the jury is out on the role of knowledge gaps in diagnostic error. Certainly, they do occur in select cases and it is not difficult to pick a few examples, but in the overall scheme knowledge deficits are few. Several studies have shown this. As I noted earlier in this thread, in our recent book The Cognitive Autopsy, which reviewed over 40 clinical cases in detail, we found a very wide range of clinical diagnoses (42 in all) and probably fewer than 6 demonstrated knowledge deficits. Further, none of these proved consequential to the diagnosis. In contrast, cognitive biases, defined according to standard descriptions in the cognitive science literature, were found in 230 instances, i.e., outnumbering knowledge deficits approximately 40 to 1. Again, this should put the focus on how we think and not so much on what we know.
Cognitive bias thrives everywhere – it can be considered a normal part of brain function. System 1 can certainly be calibrated better by experience (Hogarth made this point clearly in Educating Intuition) but experts can be just as vulnerable to bias if they don't respect its power and prevalence.
It would be a brave person these days to claim that biases are not exquisitely influential in human decision making and not exemplified in every human endeavor. If anyone remains unconvinced take a look at two other recent books in the medical literature: the neurologist Jonathan Howard's Cognitive Errors and Diagnostic Mistakes, and the general practitioner Cym Ryle's book Risk and Reasoning in Clinical Diagnosis.