Discussion Board


Accuracy of Practitioner Estimates of Probability of Diagnosis Before and After Testing | Breast Cancer | JAMA Internal Medicine | JAMA Network

  • 1.  Accuracy of Practitioner Estimates of Probability of Diagnosis Before and After Testing | Breast Cancer | JAMA Internal Medicine | JAMA Network

    Posted 04-16-2021 17:03
    Next steps, educators?



    David L Meyers, MD MBE FACEP

  • 2.  RE: Accuracy of Practitioner Estimates of Probability of Diagnosis Before and After Testing | Breast Cancer | JAMA Internal Medicine | JAMA Network

    Posted 30 days ago
    There are several comments I would like to make on this paper on Accuracy of Probability Estimate.
    1. The authors find that physicians make errors in estimating the prior probability of a disease and its posterior probability after testing, and they conclude that these erroneous estimates lead to diagnostic errors.
    2. In reaching this conclusion, they assume that a disease is diagnosed, that is, inferred to be present or absent in a patient, from its posterior probability. But when we examine the process of diagnosis in real patients in practice, we find that probability does not play any role in it.
    3. We do not find the prior probability of a disease being estimated, or a disease being diagnosed (inferred) from a posterior probability, in any of the hundreds of published CPCs or clinical problem-solving exercises.
    4. Instead, what is done in these diagnostic exercises is to suspect a disease from a presentation and formulate it as a hypothesis without attaching any prior probability to it, so that there is no prior degree of belief for or against it. The hypothesis is evaluated by a test, and the disease is diagnosed (inferred) with a high degree of accuracy if a highly informative result (likelihood ratio [LR] greater than 10) is observed. This method enables a disease with a typical presentation (high prior probability), as well as one with an atypical presentation, to be diagnosed with a high degree of accuracy.
    5. In the above method of suspecting and testing, probability does not play any significant role. I agree with the physician mentioned in the Discussion who comments "estimating probability of a disease isn't how you do medicine".
    6. The method of suspecting and testing employed during diagnosis in practice is identical to the frequentist confidence method of statistical inference, which is the other major method of statistical inference (other than the Bayesian method). I describe and discuss the confidence method in detail in the attached paper on diagnosis.
    7. I believe the confidence method is employed for diagnosis in practice generally. For example, acute MI is diagnosed (inferred) from acute ST-elevation EKG changes (LR 13) in any patient, regardless of prior probability, with a diagnostic accuracy of around 85 percent. Pulmonary embolism is diagnosed in a similar manner from a positive chest CT angiogram (LR 20), and deep vein thrombosis from a positive venous ultrasound study (LR 16).
    8. The reason a probability-based approach is not employed in practice is that practically any given disease occurs in different patients with varying presentations and therefore with varying prior probabilities. Estimating a prior probability and interpreting it as a prior degree of belief for or against a disease does not, I believe, help in any way in diagnosing a disease accurately. Instead, it may lead to a diagnostic error: a disease with an atypical presentation may not be suspected or tested for because its low prior probability is interpreted as a prior degree of belief against it.
    9. What is important in teaching diagnosis to achieve high diagnostic accuracy, I believe, is not estimation of prior and posterior probabilities but teaching about the wide variation in presentations (prior probabilities) of a given disease in different patients, and emphasizing the importance of suspecting a disease and formulating it as a hypothesis regardless of its prior probability. It is also important to teach about the informative content of a diagnostic test, in terms of its likelihood ratio, and its accuracy in diagnosing a disease in patients with varying prior probabilities.
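    For readers who want to check the likelihood-ratio arithmetic referenced in points 4 and 7, a minimal sketch using the odds form of Bayes' theorem (the LR of 13 is the figure cited above for acute ST-elevation EKG changes; the pretest probabilities are illustrative assumptions, not clinical estimates):

```python
def posterior_from_lr(prior: float, lr: float) -> float:
    """Convert a pretest probability and a likelihood ratio into a
    post-test probability via the odds form of Bayes' theorem."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * lr
    return post_odds / (1 + post_odds)

# A positive result with LR = 13 (the figure cited for acute ST-elevation
# EKG changes) applied to a few illustrative pretest probabilities:
for prior in (0.05, 0.30, 0.50):
    print(f"pretest {prior:.2f} -> post-test {posterior_from_lr(prior, 13):.2f}")
```

    The same arithmetic also shows why the choice of pretest probability matters to the debate: a highly informative result moves every prior substantially, but the resulting post-test probability still differs across priors.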

    Bimal Jain MD
    Salem Hospital
    Mass General Brigham
    Salem MA 01970.

  • 3.  RE: Accuracy of Practitioner Estimates of Probability of Diagnosis Before and After Testing | Breast Cancer | JAMA Internal Medicine | JAMA Network

    Posted 30 days ago

    Thanks, Dr. Jain... all good points :)

    The fact that most clinicians in the article were unable to accurately calculate either a PPV or NPV is not surprising. That simple observation, combined with the wide spectrum of post-test probabilities shown in the graphs (after both a positive test and a negative test), is a sobering summary of the state of dx accuracy in our country. But these sobering concerns are as much about the test characteristics of the lab/imaging tests that we use to diagnose disease as about the clinical reasoning skills of the clinicians.

    IF the surveyors had instead asked "what is the PPV of a test in the setting of a disease prevalence of 10%, a sensitivity of 90%, and a specificity of 95%?",
    I would hazard a guess that the respondents would ALSO say "high" in that setting, and their intuition would be correct, as the correct answer works out to 67%. So HOW you ask the question is as important as the knowledge you are trying to assess... their use of a very low prevalence of 1 in 1000 leads the intuition of the respondents astray.
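    The 67% figure can be checked with a few lines of arithmetic. The second call below assumes, hypothetically, the same sensitivity and specificity at the article's 1-in-1000 prevalence, to show where intuition goes astray:

```python
def ppv(prev: float, sens: float, spec: float) -> float:
    """Positive predictive value from prevalence, sensitivity, specificity."""
    true_pos = sens * prev                # diseased patients who test positive
    false_pos = (1 - spec) * (1 - prev)   # healthy patients who test positive
    return true_pos / (true_pos + false_pos)

# The scenario posed above: prevalence 10%, sens 90%, spec 95%
print(round(ppv(0.10, 0.90, 0.95), 2))   # 0.67

# Same (assumed) test characteristics at a 1-in-1000 prevalence
print(round(ppv(0.001, 0.90, 0.95), 3))  # 0.018
```

    At 10% prevalence the intuitive "high" answer is roughly right; at 1 in 1000, the same test yields a PPV under 2%.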

    To Dr. Jain's comments: I think we all get a little confused by the semantics of the dx process per se. Both the frequentist method and the Bayesian method use math and deductive reasoning to get from point A (pt presentation) to point B (a working dx / differential dx), and, in point of fact, further observation of the pt over time and their response (or lack thereof) to any initial rx then leads to the "final" dx (which can of course be modified based on later observations).

    A better way to describe the dx journey, in my mind, is to use the scientific method as an analogous journey: you generate a hypothesis / preliminary / working dx based on initial data and observations, and then you accept or reject that hypothesis based on subsequent data and observations until you reach your final dx. This analogy emphasizes the dynamic nature of the dx journey, and that we can NOT have dx error per se until our journey is complete.
    (Of course, one could argue that a certain amount of data should lead to a correct dx in a certain time frame, and we can debate endlessly about how quickly the dx journey should be completed. To be perfectly blunt, the AMOUNT of time that it takes to complete the dx journey is REALLY what this whole debate about dx error circles around, in my opinion, and is also what leads to pt harm and litigation.)

    I have two main comments/thoughts for Dr Jain and for the authors

    1: One of the biggest issues with Bayesian analysis, in my opinion, is that we really do NOT know what the "true" pretest prob is for any given pt until we take a thorough and complete history. E.g., take a young woman, age 30, who presents to the ER with pleuritic CP and mild dyspnea on exertion.

    Her pretest prob for pulmonary embolism (high on the differential) is modified by her BMI, birth control usage, recent surgery, recent pregnancy, recent travel, recent lifestyle, exposure to ill family, job exposure, etc. So the TRUE pretest probability can only be calculated AFTER a thorough evaluation of the pertinent positives and negatives in her history, and THUS her calculated pretest prob is mostly a function of the adequacy and completeness of the history taking!!! (And I'm not even going to touch adequacy of the physical exam...)
    Nowhere do the authors talk about this aspect of hx taking and its relevance to accurate calculation of pretest prob :(

    In reality, there is NOT one pretest prob for this pt; there is a host of pretest probs that are higher or lower depending on the adequacy of assessment of the presence or absence of the relevant associated features, as well as the adequacy of the symptom complex / description of symptoms over time!!!

    2: The accuracy of any proposed dx depends on the test characteristics of the dx test used to establish said dx, as Dr. Jain astutely pointed out. A highly specific test such as CTA for PE or duplex US for DVT will generally lead to a high PPV no matter the pretest prob, given its very high specificity (aka "pathognomonic"); a less specific test such as a portable CXR can never achieve the same clarity of dx. So we need to be mindful of how we characterize dx accuracy depending on the test characteristics of the individual dx test that we are using to establish any given dx.
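    Dr. Jain's LR framing makes this point compactly: post-test odds = pretest odds × LR, so a test with LR+ around 20 (the CTA figure cited upthread) is far more conclusive than a weakly specific test at any pretest prob. A short sketch; the LR+ of 2 for the weak test is an assumed stand-in, not a published figure for portable CXR:

```python
def post_test(prior: float, lr: float) -> float:
    """Post-test probability via the odds form of Bayes' theorem."""
    odds = prior / (1 - prior) * lr
    return odds / (1 + odds)

# LR+ 20 (highly specific test, e.g. the CTA figure cited upthread)
# versus an assumed LR+ 2 (stand-in for a weakly specific test):
for prior in (0.05, 0.20, 0.50):
    print(f"pretest {prior:.2f}: strong test {post_test(prior, 20):.2f}, "
          f"weak test {post_test(prior, 2):.2f}")
```

    The strong test pushes even modest pretest probs toward diagnostic certainty; the weak test never does.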

    I don't believe the authors took these factors into adequate consideration.

    I personally think it would be worthwhile for SIDM to focus on moving the concept of TIME into the dx error arena; i.e., focus on distinguishing between working/prelim/admitting dx versus final dx. (Of course, this would make it harder to code diagnoses in EMRs, but it would be worthwhile in my mind and would lead to more accurate assessments of where, when, how, and why dx errors occur.)

    Thank you
    Tom Westover MD

  • 4.  RE: Accuracy of Practitioner Estimates of Probability of Diagnosis Before and After Testing | Breast Cancer | JAMA Internal Medicine | JAMA Network

    Posted 29 days ago
    Thank you, Dr. Westover for your detailed comments.
    There are a few other points about my post and the attached paper I would like to make.
    1. My account of diagnosis is based on a careful examination and analysis of how experienced physicians diagnose in practice. This account can therefore be looked upon as descriptive.
    2. The greatest challenge in achieving high diagnostic accuracy in practice, which is the primary goal of all physicians, is the variation in presentations, and therefore in prior probabilities, across different patients.
    3. This challenge is met by employing a method of diagnosis in which a disease is suspected from a presentation and formulated as a hypothesis without any prior probability attached to it, so that it does not have any prior degree of belief for or against it. The disease hypothesis is evaluated by performing a test and diagnosed (inferred) conclusively with a high degree of accuracy from a highly informative test result in every patient regardless of its prior probability.
    4. This method is identical, I have found, to the frequentist confidence method, the other major method of statistical inference, associated with the names of the famous statisticians Sir R. A. Fisher and Jerzy Neyman. I find it remarkable that physicians have developed on their own a method, identical to one of the two major methods of statistical inference, to meet the challenge of achieving high diagnostic accuracy in patients with varying prior probabilities. This is in sharp contrast to the Bayesian method, which has been prescribed for its coherence, and not for its diagnostic accuracy, without a careful analysis of the goal and process of diagnosis in practice.
    5. A major problem with the Bayesian method, among many others, is that its diagnostic accuracy is unknown. For example, we do not know the diagnostic accuracy of a Bayesian diagnosis of acute MI in a hundred patients with acute ST-elevation EKG changes. By contrast, the diagnostic accuracy of this diagnosis by the confidence method is known to be around 85 percent. In my view, prescribing the Bayesian method for diagnosis without knowing its diagnostic accuracy is like prescribing a treatment for a disease without knowing its therapeutic efficacy.
    6. I believe empirical studies of how various diseases are actually diagnosed in practice are likely to be more fruitful than studies about the Bayesian method, such as studies of the estimation of prior and posterior probabilities, when the Bayesian method does not appear to be employed for diagnosis in practice.