Thanks, Bimal, and thanks David! This is a very nice summary!
I think that #12 of your excellent formulation is actually worth highlighting: it may seem obvious, as you say, but the "pragmatism" that is inherent in the process of Bayesian reasoning must be fully acknowledged and appreciated in this context. This is where the "scientific" approach to diagnostic reasoning & hypothesis testing can actually diverge from the pure scientific method that was handed down by Sir Francis Bacon.
With diagnosis, you don't have quite as open a mind as with other types of pure scientific inquiry--you are constrained by the pretest (prior) probability of disease.
Bayes is our daily bread in Radiology.
All the best,
Michael A. Bruno, M.D., M.S., F.A.C.R.
Professor of Radiology & Medicine
Vice Chair for Quality & Patient Safety
Chief, Division of Emergency Radiology
Penn State Milton S. Hershey Medical Center | Phone: (717) 531-8703 | Fax: (717) 531-5737
I don't have a specific example to share at the moment, but I can tell you that in radiology there is a huge amount of overlap in how things can look--for example, infectious, inflammatory, and ischemic disease of the bowel can appear absolutely identical on a CT scan. So for us radiologists, knowing the pretest (prior) probability is everything--it makes the diagnostic impression for us from an otherwise ambiguous test result.
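The point above--that the same ambiguous finding yields very different conclusions depending on the pretest probability--can be sketched with Bayes' rule. The sensitivity and specificity figures below are hypothetical placeholders for an imaging finding, chosen only for illustration:

```python
# Minimal sketch: how pretest (prior) probability drives the post-test
# probability for one and the same positive imaging finding.
# The sensitivity/specificity values are hypothetical, for illustration only.

def posttest_probability(pretest, sensitivity, specificity):
    """Bayes' rule for a positive test result."""
    true_pos = sensitivity * pretest
    false_pos = (1 - specificity) * (1 - pretest)
    return true_pos / (true_pos + false_pos)

# Same CT finding, three different clinical contexts (priors):
for pretest in (0.05, 0.30, 0.70):
    p = posttest_probability(pretest, sensitivity=0.80, specificity=0.85)
    print(f"pretest {pretest:.0%} -> posttest {p:.0%}")
```

With a low prior the identical finding leaves the diagnosis unlikely (~22%), while with a high prior it becomes near-certain (~93%)--which is why the history "makes" the diagnostic impression.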
We live and breathe Bayesian reasoning in Radiology, as I said before.
This is a great discussion and I appreciate hearing everyone's point of view.
To answer Rob's question first--yes, I do think that the radiological interpretation is definitely better if the provided history is more detailed and accurate, especially if anatomical landmarks and references are used to assist in correlating the physical findings to the patient's symptoms. Information like that can focus the radiologist's attention and alert them to a higher (Bayesian) pretest probability of disease, helping to avoid false negatives that occur when the area of actual concern does not get the scrutiny it deserves. There is a downside, however: a detailed history can also implant biases (e.g., anchoring) in the mind of the radiologist, which could potentially lead the radiologist astray. As Tom points out, there is indeed a large literature on this, and people have actually tried to test whether the risk of misleading the radiologist was larger or smaller than the benefit. Short answer: the benefit is greater than the risk. While there are no specific words that always help us, in general the more detail we are given from the H&P to frame the question that the CT is trying to answer, the better.
But Tom is also correct in saying that the EMR is designed to prevent this type of information sharing. As Dr. "Sam Shem" points out in his excellent new book, Man's 4th Best Ho$pital, the purpose of the EMR is primarily billing and revenue enhancement--and it is deliberately set up not to capture extraneous data or information that is not useful for that primary purpose. Only we doctors naively think that the EMR has anything to do with facilitating patient care! The companies who design and provide EMRs, such as EPIC, require their customers (including all of their company's employees) to sign a gag order forbidding them to ever say anything bad about EPIC. They don't do that without reason. Since we don't use EPIC at Penn State, I can tell you about how bad it really is, but the EPIC users out there can neither confirm nor deny it! The EMR vendors also generally prevent inter-site interoperability of their systems between centers, since it is more lucrative to sell multiple site licenses that way, each with varying bells and whistles; you want more bells? That will cost you. And these companies take their cut of the hospital's future profits largely up-front, as these systems cost hospitals--literally--billions of dollars.
It is a separate problem that young doctors naively believe that radiology is some sort of black-box "truth machine" that spits out the correct diagnosis every time and requires no customized input. We older docs hopefully know better. I remember the daily "radiology rounds," where we used to have excellent, two-way conversations (in person!) about every case. That rich conversation was, of course, much better than anything that would be written into a comment field in an EMR or even written by hand onto a paper requisition in the pre-EMR age. Both the clinician and the radiologist deepened their understanding of the case from these vital conversations, and the patient benefited tremendously. That was the key communication that took place between radiologists and clinicians, and the dictated radiology report was really just for long-term documentation. So the real problem, perhaps, with all of this in terms of diagnostic accuracy is that we are all now out of the habit of talking with one another? Of course, if you need to make back the $1.9B your institution spent on EPIC, you are going to need to move the patients through the clinic (and hospital) at blinding speed, so there is no time for frivolities like talking to one another in order to improve the diagnosis.
Finally, to Bimal's point. You are right, of course--and we are looking at this from different levels. You are referring to the final, "gold-standard" diagnosis, which requires some sort of definitive test with a very high likelihood ratio (and predictive value), and I'm referring to interpreting a range of imaging tests where a high degree of uncertainty remains even when those tests are positive. These are tests with a lower likelihood ratio, as you say. A current analogy is the chest X-ray vs. the RNA test for COVID-19. On the CXR I can see multifocal rounded, patchy, and confluent areas of ground-glass and airspace opacities in both lungs--with no pleural effusion--in a patient with new-onset SOB, fever, and hypoxia. The clinician provides the clinical history "PUI for COVID-19." In such a case, I use Bayesian reasoning to reach the diagnosis of "atypical/viral pneumonia, consistent with COVID-19." At that point, I am done. I would argue that my likelihood ratio is >1 at that point. But the clinician still does not have a final diagnosis yet. The CXR is insufficient to establish the diagnosis. He or she has a strong diagnostic hypothesis now, supported by considerable evidence, but not a final answer. Only after the RNA test is completed and is positive (likelihood ratio greater than 10) is the diagnosis finally established. In cases where there are no lung findings on the CXR, my negative test result is even LESS helpful, since we know that most COVID-19 patients do not develop the atypical/viral pneumonia. In that case, the CXR has contributed nothing, and the clinician is really still at square one.
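The two-stage update described above--a suggestive CXR with a modest likelihood ratio, followed by a definitive RNA test--can be written out in the odds form of Bayes' rule. The pretest probability and the specific LR values here are hypothetical placeholders, not measured figures:

```python
# Sketch of sequential Bayesian updating in the odds form:
# posttest odds = pretest odds * likelihood ratio.
# All numbers below are hypothetical, chosen only to illustrate the idea.

def update(pretest_prob, lr):
    """Convert probability to odds, apply the LR, convert back."""
    odds = pretest_prob / (1 - pretest_prob)
    post_odds = odds * lr
    return post_odds / (1 + post_odds)

pretest = 0.40                         # assumed prior for a symptomatic PUI
after_cxr = update(pretest, lr=3)      # suggestive CXR: modest LR > 1
after_rna = update(after_cxr, lr=10)   # positive RNA test: LR > 10
print(f"after CXR: {after_cxr:.0%}, after RNA: {after_rna:.0%}")
```

The CXR raises a strong hypothesis (here ~67%), but only the high-LR confirmatory test pushes the probability high enough (~95%) to call the diagnosis established.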
Science, unlike medicine, requires a very high standard of proof. Even medical science looks for a p-value of 0.05 or better, which is to say a confidence level of 95% or more. In the physical sciences, they would laugh at such a low level of confidence--at CERN the mass of the Higgs boson was determined to 5 sigma! But in medicine, with a sick patient in front of us, we often must act on a lower standard of proof, i.e., while a great deal of diagnostic uncertainty remains. Some of what we call "diagnostic error," therefore, is, to my way of thinking, merely a manifestation of that fundamental uncertainty--the diagnostic gray area that still remains even after a diagnostic test is performed and we have the result--paired with a level of urgency that does not allow us the luxury of trying to reach a higher level of certainty before some action plan, decision, and clinical intervention must be undertaken on behalf of the patient.
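To make the gap between these two evidentiary bars concrete, the sigma thresholds can be converted to tail probabilities with the Gaussian complementary error function (a standard calculation, no special library needed):

```python
import math

def one_sided_p(z):
    """One-sided Gaussian tail probability for a z-score of `z` sigma."""
    return 0.5 * math.erfc(z / math.sqrt(2))

# Medicine's usual bar, p < 0.05, corresponds to about 1.64 sigma (one-sided);
# particle physics' discovery bar is 5 sigma.
print(f"1.64 sigma -> p = {one_sided_p(1.64):.3g}")
print(f"5.00 sigma -> p = {one_sided_p(5.00):.3g}")
```

The 5-sigma threshold corresponds to a p-value of roughly 3 in 10 million--about five orders of magnitude stricter than the p < 0.05 standard that medicine, by necessity, acts on.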
I personally believe that the fundamental role of radiology is to decrease that uncertainty to the point where a clinician can have enough confidence to act (or withhold an action). This is a much lower standard than making a definitive, final diagnosis in most cases.
I've attached a few PDFs of older published papers on this topic, including two from our own SIDM journal Diagnosis, in case anyone is interested and might want to read and ponder this idea further from the standpoint of understanding the role of diagnostic radiology in diagnosis. There is also a nice paper from the same issue of Diagnosis by Dr. Kevin Johnson on how Bayesian reasoning works in radiology--I don't have that PDF handy--but the full reference is Johnson, K.M., "Using Bayes' rule in diagnostic testing: a graphical explanation." Diagnosis 2017;4:149-157.
In short: radiology adds tremendous value to the diagnostic process, but we are generally NOT the final answer. That is why I am a Bayesian, and Bimal is not!
Michael A. Bruno, M.D., M.S., F.A.C.R. Professor of Radiology & Medicine
Thanks, Bimal. You are correct that some CT findings have a high predictive value, while others have less; thus Bayesian analysis would not be uniformly applied in every case.