Discussion Board


A Comprehensive Review of Clinical Reasoning Research in Medical Education

  • 1.  A Comprehensive Review of Clinical Reasoning Research in Medical Education

    Posted 14 days ago
    Hi All,
    Just passing along a paper that might be of interest to folks:

    Koufidis C, Manninen K, Nieminen J, Wohlin M, Silén C. Unravelling the polyphony in clinical reasoning research in medical education. J Eval Clin Pract. 2020; 1–13. https://doi.org/10.1111/jep.13432

    Cheers,

    David


  • 2.  RE: A Comprehensive Review of Clinical Reasoning Research in Medical Education

    Posted 12 days ago

    David, thanks for posting this wonderful, comprehensive paper about clinical reasoning, which is primarily about diagnostic reasoning. Here are my thoughts on the subject.
    1. Diagnostic accuracy in general practice is 85 to 90 percent, which means a correct diagnosis is made in the great majority of patients (Berner, Graber. Am J Med 2008; 121: S2–S23).
    2. The method employed for diagnosis in practice is best described as consisting of hypothesis generation and hypothesis verification. This is seen most clearly in all published diagnostic exercises in real patients, such as CPCs and clinical problem-solving exercises (Jain, Diagnosis 2016; 3: 61–64 and Diagnosis 2016; 3: 95–97).
    3. Thus there are two distinct steps in diagnostic reasoning: hypothesis generation and hypothesis verification.
    4. I shall consider the unqualified term 'diagnosis' to refer to a disease which has completed both steps, hypothesis generation as well as hypothesis verification; that is, a disease whose presence has been confirmed by testing.
    5. I emphasize #4 because many areas of research mentioned in this paper, such as pattern recognition, exemplars, prototypes, semantic networks and illness scripts, semantic qualifiers, and inference to the best explanation, are only about how diagnostic hypotheses are generated, which is merely the first step of diagnosis in practice.
    6. There is no discussion in this paper, or in the literature in general, about how the second step, verification of a diagnostic hypothesis, is performed.
    7. If we examine how this verification is performed in practice, such as in diagnostic exercises in real patients, we find that a diagnostic hypothesis is verified to be correct if, after testing, a highly informative test result is observed. A test result with a likelihood ratio (LR) greater than 10 is conventionally considered highly informative.
    8. For example, the diagnostic hypothesis of acute MI is verified to be correct if acute ST-elevation EKG changes (LR 13) are observed; pulmonary embolism is verified by a positive chest CT angiogram (LR 20), COVID-19 by a positive COVID-19 test (LR 14), and deep vein thrombosis by a positive venous ultrasound (LR 16).
    9. This verification occurs in any patient regardless of the prior probability of disease, indicating that the method of verification is not Bayesian (probabilistic).
    10. The method of verification of a diagnostic hypothesis in practice, we suggest, is frequentist, which is one of the two major methods of statistical inference (the other being the Bayesian method).
    11. In the frequentist method, developed by Fisher and Neyman in the first half of the 20th century, the entity to be inferred is formulated as a hypothesis from available data, and it is inferred from a procedure with a high probability of leading to an accurate inference. This procedure consists of performing a test and interpreting a test result with a high frequency of accurate inferences as strong evidence from which the entity is inferred (Mayo, Statistical Inference as Severe Testing, Cambridge University Press, 2018). Thus the hypothesis of acute MI is inferred to be correct from acute ST-elevation EKG changes, because this test result leads to an accurate inference of this disease in 85 percent of patients. (A short numerical sketch of the likelihood-ratio arithmetic follows at the end of this post.)
    12. The suitability of the frequentist method for inferring a disease accurately in different patients with varying prior probabilities is obvious.
    13. We note that the method of hypothesis generation and verification employed for diagnosis in practice is identical to the scientific method (Jain, Diagnosis 2017; 4: 17–19), which is known to be the most reliable and powerful method of reasoning in any field. The power of the scientific method arises from the fact that proving or verifying a scientific hypothesis, usually by experiment but sometimes by observation, is an essential feature of this method. We suggest that a test functions like an experiment in diagnosis.
    14. A few words now about the relevance of dual process theory (DPT), with its System 1 and System 2 reasoning, to diagnostic reasoning. As I see it, DPT was developed in cognitive psychology to describe the day-to-day reasoning of the man in the street. Reasoning in DPT is unscientific, as there is no hypothesis generation or verification by testing in it.
    15. Diagnostic reasoning, on the other hand, is scientific reasoning, characterized by hypothesis generation and its verification by testing, employed by a trained professional. Therefore, we believe, DPT has no relevance to diagnostic reasoning.
    16. In brief, I believe diagnostic reasoning in practice is essentially scientific, and research on it from this perspective is likely to be highly productive.
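
    For readers who want to see the likelihood-ratio arithmetic behind points 7–12, here is a minimal sketch in Python. The odds-form Bayes update is standard; the LR of 13 comes from point 8 above, while the example priors are illustrative numbers, not data from this thread:

        # Minimal sketch: how a single likelihood ratio (LR) moves a prior
        # probability to a posterior, using the odds form of Bayes' rule:
        # posterior_odds = prior_odds * LR.

        def posterior_probability(prior: float, lr: float) -> float:
            """Update a prior probability with a likelihood ratio (odds form)."""
            prior_odds = prior / (1.0 - prior)
            post_odds = prior_odds * lr
            return post_odds / (1.0 + post_odds)

        lr = 13  # e.g., acute ST-elevation EKG changes for acute MI (point 8)
        for prior in (0.01, 0.07, 0.25, 0.50, 0.90):  # illustrative priors
            print(f"prior {prior:4.0%} -> posterior {posterior_probability(prior, lr):5.1%}")

    Note that the same LR of 13 yields quite different posteriors at different priors (about 12 percent at a 1 percent prior, about 49 percent at a 7 percent prior), which is exactly the point at issue between the frequentist and Bayesian readings of such a result.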
    Bimal

    Bimal Jain, Northshore Medical Center, Salem, MA 01970





  • 3.  RE: A Comprehensive Review of Clinical Reasoning Research in Medical Education

    Posted 12 days ago

    Thanks, Bimal, and thanks David!  This is a very nice summary!


    I think that #12 of your excellent formulation is actually worth highlighting: it may seem obvious, as you say, but the "pragmatism" that is inherent in the process of Bayesian reasoning must be fully acknowledged and appreciated in this context. This is where the "scientific" approach to diagnostic reasoning and hypothesis testing can actually diverge from the pure scientific method handed down by Sir Francis Bacon.


    With diagnosis, you don't have quite as open a mind as with other types of pure scientific inquiry--you are constrained by the pretest (prior) probability of disease.


    Bayes is our daily bread in Radiology.


    All the best,


    Mike



      

    Michael A. Bruno, M.D., M.S., F.A.C.R.   

    Professor of Radiology & Medicine

    Vice Chair for Quality & Patient Safety

    Chief, Division of Emergency Radiology

    Penn State Milton S. Hershey Medical Center
    Phone: (717) 531-8703  |  Fax: (717) 531-5737

    Email: mbruno@pennstatehealth.psu.edu






  • 4.  RE: A Comprehensive Review of Clinical Reasoning Research in Medical Education

    Posted 12 days ago
    I'd also add to Mike's comments that the notion of diagnosis as a scientific endeavor is tied to the notion of clinical documentation as a scientific endeavor.
    The latter is increasingly being challenged by proponents of narrative medicine, and the former has challengers from the clinical art of medicine going back generations (my favorite quote coming from the 1890s by the then Regius Professor of Physic at Cambridge, Sir Thomas Clifford Allbutt).
    One thing I do think is forgotten in this mix (from both cognitive and statistical perspectives) is that statistical inference and mechanics do not equate to human inference and mechanics, and the two can operate simultaneously.
    I'd refer to Andy Clark's "whatever next" (Clark, Andy. "Whatever next? Predictive brains, situated agents, and the future of cognitive science." Behavioral and brain sciences 36, no. 3 (2013): 181-204.) and the folkloristic adage that "the patient is an N of 1" in suggesting that ultimately our reasoning can be framed as an amalgam of approaches, both Fisherian and Bayesian.
    The clinical analogy would be to suggest that a diagnosis is both indicative of the population observed and predicted by the patient's prior information.
    I'd be curious how such a notion can be better informed by abstraction (be it Dual Process Theory or another) in the face of uncertainty, as it is still unclear whether a disease with an uncertain classification can be effectively "examined scientifically" at the bedside (in that instant).
    Cheers,

    David





  • 5.  RE: A Comprehensive Review of Clinical Reasoning Research in Medical Education

    Posted 11 days ago
    Thank you, Mike and David, for your comments. I believe it is instructive to look at how diagnosis is actually performed in practice in a real patient before deciding whether the method employed is Bayesian, frequentist, or scientific.
    Let us examine the process of diagnosis in a real patient discussed in a clinical problem-solving exercise (NEJM 1992; 326: 688–91). The patient is a 40-year-old healthy woman who presents with highly uncharacteristic chest pain, in whom acute MI is suspected and an EKG performed, which reveals acute ST-elevation EKG changes with a likelihood ratio (LR) of 13.
    The prior probability of acute MI is estimated to be 7 percent in this patient from its prevalence. Combined with the LR of 13, this yields a posterior probability of 50 percent.
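    As a quick check of that arithmetic, here is a short sketch in Python using only the figures quoted above (odds-form Bayes' rule):

        prior = 0.07                              # prior probability of acute MI
        lr = 13                                   # LR of acute ST-elevation EKG changes
        prior_odds = prior / (1 - prior)          # ~0.075
        post_odds = prior_odds * lr               # ~0.98
        posterior = post_odds / (1 + post_odds)   # ~0.49, i.e. roughly 50 percent
        print(f"posterior probability of acute MI: {posterior:.0%}")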
    The discussing physician in this exercise conclusively and accurately infers acute MI in this patient from the acute EKG changes alone, which he interprets as strong evidence.
    This inference is clearly not Bayesian; a Bayesian inference from the posterior probability of 50 percent would be that acute MI is indeterminate.
    This inference, I suggest, is frequentist, as it is based on the known performance of the test result, acute EKG changes, in inferring acute MI accurately in 85 percent of patients with varying prior probabilities.
    The presentation in this patient functions as a clue that leads the physician to suspect acute MI, which is formulated as a diagnostic hypothesis and then verified to be correct by the observation of acute EKG changes, which indicate acute myocardial injury, a key feature of acute MI.
    This method of hypothesis formulation and verification employed during diagnosis in this patient is, I believe, the scientific method.
    The Bayesian method is not employed, because it is likely to lead to diagnostic errors at several points in the diagnostic process in this patient. First of all, the Bayesian interpretation of the very low prior probability of 7 percent as very strong prior evidence against acute MI may lead to this disease not being suspected or tested, leading to a diagnostic error. And then the Bayesian inference of acute MI being indeterminate from the posterior probability of 50 percent is erroneous, as pointed out above.
    The great advantage of the frequentist, scientific method employed for diagnosis in this patient is that it leads to a highly accurate diagnosis of acute MI in any patient regardless of presentation (prior probability).
    One of the key qualities of highly experienced physicians appears to be that a presentation is looked upon merely as a clue from which a disease is suspected, and not as prior evidence for a disease. This is clearly seen in all published diagnostic exercises in real patients, such as CPCs and clinical problem-solving exercises. This leads to a suspected disease being formulated as a diagnostic hypothesis without any prior evidence for or against it. This is a major factor in the accurate diagnosis of rare diseases, or those with highly atypical presentations (low prior probabilities), in these exercises.
    Could you please give examples of real patients in whom the Bayesian method has been crucial in leading to an accurate diagnosis? Thanks.

    Bimal





  • 6.  RE: A Comprehensive Review of Clinical Reasoning Research in Medical Education

    Posted 11 days ago

    Hi Bimal,


    I don't have a specific example to share at the moment, but I can tell you that in radiology there is a huge amount of overlap in how things can look--for example, infectious, inflammatory and ischemic disease of bowel can appear absolutely identical on a CT scan. So for us radiologists, knowing the pretest (prior) probability is everything--it makes the diagnostic impression for us from an otherwise ambiguous test result.


    We live and breathe Bayesian reasoning in Radiology, as I said before.


    Mike








  • 7.  RE: A Comprehensive Review of Clinical Reasoning Research in Medical Education

    Posted 11 days ago
    Hi Michael,

    Are there certain words drawn from the patient's history that would help? Could the radiological interpretation be helped by the history? Would the anatomical region and certain words help?

    Could the above question stimulate other questions where the history might help interpretation still further, where difficulty exists?

    Perhaps there are such lists that have been evaluated already?

    Perhaps the problem is relying on the history for accuracy? Are there legal complications?

    Rob Bell, M.D.




  • 8.  RE: A Comprehensive Review of Clinical Reasoning Research in Medical Education

    Posted 11 days ago
    There is a huge literature base on the effect of receipt of clinical information on the accuracy of the radiology reading. (This was drummed into my head in the early 80s by some excellent radiologists.)
    The reader can access these data as easily as I can point out papers.
    I find, in the day of Electronic Billing Records, that I am blocked from telling the radiologists the clinical picture. My young colleagues think this is normal and accept the "blinded radiology" reading as gospel.

    tom benzoni





  • 9.  RE: A Comprehensive Review of Clinical Reasoning Research in Medical Education

    Posted 10 days ago
    Hi Mike,
    In the example you give, we can look upon infectious, inflammatory and ischemic diseases of the bowel as diagnostic hypotheses which are evaluated by the test, a CT scan. As this test yields an identical result in all these diseases, it is non-informative, with a likelihood ratio of 1. Therefore the diagnostic impression in Radiology based on the pretest probabilities of these diseases is correct, but it does not bring us any nearer to knowing what the patient actually has. We would need to perform another test capable of yielding a highly informative result (likelihood ratio greater than 10) to infer the disease which is present.
    An analogous example is that of a patient with chest pain in whom we suspect acute MI and pulmonary embolism. We perform an EKG which reveals non-specific T wave changes (likelihood ratio of 1), which do not differentiate between these two diseases. Our diagnostic impression after this test would be based on the pretest probabilities of these diseases, but it is of no help in knowing what the patient actually has, for which we would need to perform other tests, such as a chest CT angiogram.
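    In odds form this point is immediate: a likelihood ratio of 1 leaves the pretest probability untouched. A tiny sketch in Python (the 30 percent pretest probability is purely illustrative):

        prior = 0.30                      # any pretest probability
        lr = 1                            # non-informative test result
        odds = prior / (1 - prior) * lr   # odds unchanged by LR = 1
        posterior = odds / (1 + odds)     # equals the prior
        print(round(posterior, 2))        # -> 0.3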
    The point I am making is that our goal in diagnosis is to accurately determine the disease causing illness in a patient with symptoms, which is achieved by a process of hypothesis generation and verification. A hypothesis is generated from a presentation as a clue. In this process, a pretest probability represents the chance of a disease, and not prior evidence for a disease in a patient. A hypothesis is verified, and a disease inferred with a high degree of accuracy, from a highly informative test result (likelihood ratio greater than 10) alone.
    The Bayesian method, in which a prior probability is interpreted as prior evidence and a disease is inferred from a posterior probability generated by combining a prior probability and a likelihood ratio, is not employed for diagnosis in practice, as it is likely to lead to diagnostic errors.
    Thanks for engaging in this interesting and important discussion about diagnosis.

    Bimal





  • 10.  RE: A Comprehensive Review of Clinical Reasoning Research in Medical Education

    Posted 10 days ago

    This is a great discussion and I appreciate hearing everyone's point of view.  


    To answer Rob's question first - yes, I do think that the radiological interpretation is definitely better if the provided history is more detailed and accurate, especially if anatomical landmarks and references are used to assist in correlating the physical findings to the patient's symptoms. Information like that can focus the radiologist's attention, alert them to a higher (Bayesian) pretest probability of disease, and help to avoid false negatives due to the area of actual concern not getting the amount of scrutiny it deserves. The downside, however, is that a detailed history can also serve to implant biases (e.g., anchoring) in the mind of the radiologist, which could potentially lead the radiologist astray. As Tom points out, there is indeed a large literature on this, and people have actually tried to test whether the risk of misleading the radiologist was larger or smaller than the benefit. Short answer: the benefit is greater than the risk. While there are no specific words that always help us, in general the more detail we are given from the H&P to frame the question that the CT is trying to answer, the better.


    But Tom is also correct in saying that the EMR is designed to prevent this type of information sharing. As Dr. "Sam Shem" points out in his excellent new book, Man's 4th Best Ho$pital, the purpose of the EMR is primarily billing and revenue enhancement--and it is deliberately set up not to capture extraneous data or information which is not useful for that primary purpose. Only we doctors naively think that the EMR has anything to do with facilitating patient care! The companies who design and provide EMRs, such as EPIC, require their customers (including all of their company's employees) to sign a gag order forbidding them to ever say anything bad about EPIC. They don't do that without reason. Since we don't use EPIC at Penn State, I can tell you about how bad it really is, but the EPIC users out there can neither confirm nor deny it! The EMR vendors also generally prevent inter-site interoperability of their systems between centers, since it is more lucrative to sell multiple site licenses that way, each with varying bells and whistles; you want more bells? That will cost you. And these companies take their cut of the hospital's future profits largely up-front, as these systems cost hospitals--literally--billions of dollars.


    It is a separate problem that young doctors naively believe that radiology is some sort of black-box "truth machine" that spits out the correct diagnosis every time and requires no customized input. We older docs hopefully know better. I remember the daily "radiology rounds," where we used to have excellent, two-way conversations (in person!) about every case. That rich conversation was, of course, much better than anything that would be written into a comment field on an EMR, or even written by hand onto a paper requisition in the pre-EMR age. Both the clinician and the radiologist deepened their understanding of the case from these vital conversations, and the patient benefited tremendously. That was the key communication that took place between radiologists and clinicians, and the dictated radiology report was really just for long-term documentation. So the real problem, perhaps, with all of this in terms of diagnostic accuracy is that we are all now out of the habit of talking with one another? Of course, if you need to make back the $1.9B your institution spent on EPIC, you are going to need to move the patients through the clinic (and hospital) at blinding speed, so there is no time for frivolities like talking to one another in order to improve the diagnosis.


    Finally, to Bimal's point. You are right, of course--and we are looking at this from different levels. You are referring to the final, "gold-standard" diagnosis, which requires some sort of definitive test with a very high likelihood/predictive ratio, and I'm referring to interpreting a range of imaging tests where there is a high degree of uncertainty present, even when those tests are positive. These are tests with a lower likelihood ratio, as you say. A current analogy is the chest X-ray vs. the RNA test for COVID-19. On the CXR I can see multifocal rounded, patchy and confluent areas of ground-glass and airspace opacities in both lungs--with no pleural effusion--in a patient with new-onset SOB, fever and hypoxia. The clinician provides the clinical history "PUI for COVID-19." In such a case, I use Bayesian reasoning to reach the diagnosis of "atypical/viral pneumonia, consistent with COVID-19." At that point, I am done. I would argue that my likelihood ratio is >1 at that point. But the clinician still does not have a final diagnosis yet. The CXR is insufficient to establish the diagnosis. He or she has a strong diagnostic hypothesis now, supported by considerable evidence, but not a final answer. Only after the RNA test is completed and is positive (likelihood ratio greater than 10) is the diagnosis finally established. In cases where there are no lung findings on the CXR, my negative test result is even LESS helpful, since we know that most COVID-19 patients do not develop the atypical/viral pneumonia. In that case, the CXR has contributed nothing, and the clinician is really still at square one.
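    This two-step account can be made concrete in odds form: a suggestive CXR (LR > 1) raises the probability into "strong hypothesis" territory, and the RNA test (LR > 10) finishes the job. A sketch in Python; the 30 percent pretest probability and the CXR LR of 4 are assumed purely for illustration, while the LR of 14 for the RNA test is the figure quoted earlier in the thread:

        def update(prior: float, lr: float) -> float:
            """Odds-form Bayes update: posterior odds = prior odds * LR."""
            odds = prior / (1 - prior) * lr
            return odds / (1 + odds)

        p = 0.30                # assumed pretest probability for a symptomatic PUI
        p = update(p, lr=4)     # assumed LR for a suggestive CXR
        print(f"after CXR:      {p:.0%}")   # ~63%: a strong hypothesis, not a final answer
        p = update(p, lr=14)    # RNA test, LR 14 as quoted earlier in the thread
        print(f"after RNA test: {p:.0%}")   # ~96%: diagnosis established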


    Science, unlike medicine, requires a very high standard of proof. Even medical science looks for a p-value of 0.05 or better, which is to say a confidence level of 95% or more. In the physical sciences, they would laugh at such a low level of confidence--at CERN the mass of the Higgs boson was determined to 5-sigma! But in medicine, with a sick patient in front of us, we often must act based on a lower standard of proof, i.e., while there is still a great deal of diagnostic uncertainty left. Some of what we call "diagnostic error," therefore, is, to my way of thinking, merely a manifestation of that fundamental uncertainty--the diagnostic gray area that still remains even after a diagnostic test is performed and we have the result, paired with a level of urgency that does not allow us the luxury of trying to reach a higher level of certainty before some action plan, decision and clinical intervention must be carried out on behalf of the patient.
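    To put numbers on that comparison of standards of proof, here is a one-liner sketch converting z (sigma) thresholds to one-sided tail probabilities with scipy (the 5-sigma figure is the particle-physics discovery convention mentioned above):

        from scipy.stats import norm

        for sigma in (1.96, 5.0):
            print(f"{sigma}-sigma -> one-sided p = {norm.sf(sigma):.2e}")
        # 1.96-sigma -> p ~ 2.5e-02 (the familiar two-sided p = 0.05 of medical research)
        # 5.0-sigma  -> p ~ 2.9e-07 (CERN's discovery standard)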


    I personally believe that the fundamental role of radiology is to decrease that uncertainty to the point where a clinician can have enough confidence to act (or withhold an action).  This is a much lower standard than making a definitive, final diagnosis in most cases. 


    I've attached a few PDFs of older published papers on this topic, including two from our own SIDM journal Diagnosis, in case anyone is interested and might want to read and ponder this idea further from the standpoint of understanding the role of diagnostic radiology in diagnosis.  There is also a nice paper from the same issue of Diagnosis by Dr. Kevin Johnson on how Bayesian reasoning works in radiology--I don't have that PDF handy--but the full reference is Johnson, K.M., "Using Bayes' rule in diagnostic testing: a graphical explanation." Diagnosis 2017;4:149-157.  


    In short: radiology adds tremendous value to the diagnostic process, but we are generally NOT the final answer. That is why I am a Bayesian, and Bimal is not!


    All the best,


    Mike



    Michael A. Bruno, M.D., M.S., F.A.C.R.  
    Professor of Radiology & Medicine

    Vice Chair for Quality & Patient Safety

    Chief, Division of Emergency Radiology

    Penn State Milton S. Hershey Medical Center
    Phone: (717) 531-8703  |  Fax: (717) 531-5737

    Email: mbruno@pennstatehealth.psu.edu






  • 11.  RE: A Comprehensive Review of Clinical Reasoning Research in Medical Education

    Posted 9 days ago
    Mike, congratulations on your great post. I am a pulmonary physician and work closely with radiologists, reviewing X-rays and CT scans with them all the time. I am greatly appreciative of their help and could not practice without it.
    If we focus on what we actually do in practice in taking care of patients and forget about labeling it as Bayesian or non-Bayesian, I agree practically with everything that Mike writes about.
    The biggest problem in diagnosis which makes it challenging is the fact that the same disease occurs in different patients with varying presentations and therefore with varying prior probabilities. I illustrate this challenge with a recent patient seen by me.
    A 40-year-old woman, a non-smoker with a history of asthma, was found to have a highly irregular 4 cm lung mass on her chest X-ray and chest CT scan. On reviewing these images with a radiologist, both of us agreed that this mass was highly suspicious for cancer, which was confirmed by a subsequent needle biopsy.
    The very low prior probability of lung cancer did not appear to play any role in radiologic interpretation in this patient.
    If a 65-year-old man with a history of heavy smoking had an identical chest CT finding, I believe the radiologic interpretation would be the same as in the 40-year-old woman, despite the very high prior probability of lung cancer in this patient.
    I believe the similar radiologic interpretation in both these patients is driven by the knowledge that the highly informative finding (high likelihood ratio) of an irregular lung mass represents lung cancer in most or nearly all patients, regardless of the prior probability of lung cancer.
    My point is that we need to look carefully at how we actually interpret data during diagnosis in practice and then decide how this process is best described (Bayesian or non-Bayesian), instead of deciding beforehand that this process is Bayesian and interpreting it in a Bayesian manner.
    What matters most in diagnosis, I believe, is diagnostic accuracy; therefore the method of interpretation and diagnosis which is best in practice is the one which leads to the greatest diagnostic accuracy.
    Bimal





  • 12.  RE: A Comprehensive Review of Clinical Reasoning Research in Medical Education

    Posted 9 days ago

    Thanks, Bimal.  You are correct that some CT findings have a high predictive value, while others less so; thus Bayesian analysis would not be uniformly applied in every case.


    All the best,


    Mike




    Michael A. Bruno, M.D., M.S., F.A.C.R.  
    Professor of Radiology & Medicine

    Vice Chair for Quality & Patient Safety

    Chief, Division of Emergency Radiology

    Penn State Milton S. Hershey Medical Center
    Phone: (717) 531-8703  |  Fax: (717) 531-5737

    Email: mbruno@pennstatehealth.psu.edu
