Hi David, Hi Bimal,
As always, this is a really fun discussion! I have to agree that there aren't many examples where the theory that proved correct was created de novo, without any experimental or experiential data first. The best example is the Standard Model of particle physics, which led Higgs to predict the existence of a new heavy boson, a prediction with no basis in experiment at the time. Only much later, after building the largest experimental apparatus ever constructed, at CERN, was the existence of this particle verified. So... it can happen.
I think a lot of emphasis is placed on the final step in establishing a diagnosis, i.e., the final diagnosis, which glosses over intermediate steps of high importance to the final product: the development of a differential diagnosis, advancement to a provisional diagnosis, and later a systematic evaluation of provisional diagnoses, which often takes the form of a therapeutic trial. Some of those intermediate steps are clearly Bayesian in practice as well as in theory. The best example I can think of from my own experience is radiologists' interpretation of imaging studies, which often informs the final diagnosis substantially. The reasoning behind those interpretations is largely Bayesian, given the very high level of uncertainty involved.
So I still believe that Bayesian reasoning has a place in the diagnostic process, even if it is not the final step.
All the best,
Michael A. Bruno, M.D., M.S., F.A.C.R. Professor of Radiology & Medicine
Vice Chair for Quality & Patient Safety
Chief, Division of Emergency Radiology
Penn State Milton S. Hershey Medical Center | (717) 531-8703 | (717) 531-5737
firstname.lastname@example.org
The discussion reminds me of Arthur Elstein's work, where he sought to understand diagnosis by having physicians work through case scenarios and explain their reasoning. This seems to be the kind of observational approach that Bimal is suggesting. And Arthur's conclusion was that a great deal of diagnosis was indeed pattern recognition.
It is also possible that more than one "answer" is correct. Perhaps certain specialists (e.g., radiologists, like Michael Bruno) might be more "Bayesian" in their approach than front-line clinicians seeing undifferentiated cases. Or certain problems may be more amenable to a Bayesian approach (e.g., pulmonary embolism) than undifferentiated problems seen in primary care. For the diagnosis of pulmonary embolism, there is a ton of data on pre-test probabilities and on the power of the relevant diagnostic tests, with an abundance of tools to calculate the likelihood of PE in a Bayesian manner.
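The Bayesian arithmetic behind such tools can be sketched in a few lines. This is a minimal illustration only; the pre-test probability and likelihood ratio below are made-up numbers, not values from Wells, PERC, or any validated instrument:

```python
def posttest_probability(pretest_p, likelihood_ratio):
    """Bayes' rule in odds form: post-test odds = pre-test odds * LR."""
    pretest_odds = pretest_p / (1 - pretest_p)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1 + posttest_odds)

# Illustrative (not validated) numbers: a 15% pre-test probability of PE
# and a negative test result with a likelihood ratio of about 0.1.
p = posttest_probability(0.15, 0.1)
print(f"Post-test probability: {p:.1%}")  # prints "Post-test probability: 1.7%"
```

The odds form makes the update a single multiplication, which is why likelihood ratios are the usual currency in these calculators.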
Mark L Graber, MD FACP
Founder and President Emeritus, Society to Improve Diagnosis in Medicine
Professor Emeritus, Stony Brook University
A quick note. "Diagnosis" is a decision, and so posterior probabilities alone would not account for behavior; one must also take into account thresholds, which are functions of the benefit to the patient of treating the disease and the cost (financial and safety) to the patient of not treating (assuming we take the patient's perspective in diagnosis).
So, "diagnosing with near certainty" may simply mean that the posterior is so far away from threshold that it is doubtful that more information would change your mind. If my threshold for performing an LP is 1/1,000 (which residents have reported to me over 20 years' of asking), then a posterior of 10% is way over threshold, and I would have not doubt that the patient "needs" an LP. Whether I should be equally certain about the patient having meningitis---and therefore hospitalizing and treating, with a new set of costs---depends on a different threshold.
That's the decision theory behind diagnosing. I don't know that we have a lot of empirical data on thresholds, but if one were to assemble a research program on diagnosing, that's the place for theory to start from. (Doesn't Benjamin Djulbegovic's work come into play here?)
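The threshold logic above can be sketched numerically. The harm/benefit ratio below is an assumption chosen purely to reproduce the 1/1,000 testing threshold mentioned; the formula is the classic Pauker-Kassirer form, in which one acts when the probability of disease exceeds harm divided by harm plus benefit:

```python
def action_threshold(harm_if_absent, benefit_if_present):
    """Pauker-Kassirer style threshold: act when P(disease) >= harm / (harm + benefit)."""
    return harm_if_absent / (harm_if_absent + benefit_if_present)

# Assumed ratio: if the harm of an unnecessary LP is tiny relative to the
# benefit of catching meningitis (say 1 : 999), the testing threshold
# works out to 1/1,000 -- the figure residents reported.
lp_threshold = action_threshold(1, 999)

posterior = 0.10  # the 10% post-test probability from the example
print(posterior >= lp_threshold)  # prints "True": far over threshold, so test
```

The same function with a different (and likely much larger) harm term would give the separate, higher threshold for hospitalizing and treating.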
I also think we have to be careful in taking physicians' assessments of certainty as a gold standard, since there is plenty of data on wrong diagnoses (with respect to autopsies) and on the lack of correlation between such certainty and accuracy.
Harold Lehmann MD PhD
Section on Biomedical Informatics and Data Sciences
Division of General Internal Medicine
Department of Medicine
Johns Hopkins School of Medicine