Discussion Board


Artificial Intelligence (AI)

  • 1.  Artificial Intelligence (AI)

    Posted 02-09-2020 15:12
AI is raising its head in many ways. Already we have breast cancer radiographs and prostate cancer slides being read by machines with some success. This raises a number of questions for the future.


1. Will the current biases in diagnosis, so widely discussed on this list, be replaced by paradigms that have their own biases regarding efficiency, productivity, finances, and so on?

2. It seems that AI paradigms will in part rely on the accuracy of the supporting tests undertaken in medicine. Since we seem unable, at this time, to show how well we are doing with diagnostic accuracy, would it be wise in the meantime to focus on the things that can be made more accurate? Blood pressure measurement comes to mind, as does the accuracy of the stethoscope across different levels of experience and hearing loss, along with many, many more issues worth looking at.

3. As AI is likely to lead to one of the biggest revolutions in medicine, it could be asked how well prepared we are for what is coming.

4. Do the AMA, SIDM, the SIDM Coalition, and others have watching briefs on what is developing and coming out of the research labs? Are reports being issued?

5. Do we have any beginning ideas on what the AI movement is going to do to medical education, ethics, medical employment, salaries, specialty organizations, litigation, diagnostic advances, patient satisfaction, and so on?

    A few thoughts.

    Robert Bell M.D.


  • 2.  RE: Artificial Intelligence (AI)

    Posted 02-09-2020 16:21

    Hi Bob,


    Great questions!  


    There has been one published article to my knowledge, from JAMA, which came out last year (2019) directly addressing the medico-legal ramifications of AI for physicians--and it is not good.  Basically, the authors, at least one of whom was a lawyer, warned that we doctors are going to be much more likely to be sued in the age of AI, and that the very existence of AI will be used to extract large jury verdicts and settlements against physicians, whether or not AI is even used.  


The authors of that paper suggested that, in the future, docs who use AI in their practice will be sued under two main (and common) scenarios: either when the doctor goes along with the AI's diagnosis and it later turns out to have been wrong, or when the doctor "fails" to go along with the AI's diagnosis and it is later shown, in retrospect, to have been correct. Any doc who does not use AI at all in their practice will be sued for not having had the good judgment to use it! So pretty much whenever there is a bad outcome, the plaintiffs' bar will insist that the doctor would not have made the critical mistake, and the patient would have done better, if only the doctor had used AI, or alternatively, if only the doctor had been appropriately skeptical of the AI, depending.


So AI is potentially a two-headed coin for the plaintiffs' bar: they can use it to win at every toss.


As we reflect on the cognitive biases, the JAMA article suggested that the courts will use AI in this way to double down on their tendency toward outcome bias, wherein any bad patient outcome constitutes de facto proof of antecedent physician malpractice.  So the very existence of medical AI, the authors concluded, will be a real money-maker for the plaintiffs' lawyers in the future.  It will always be in their hands as a blunt weapon with which to more effectively club the heads of doctors, the doctors' defense lawyers and especially the doctors' malpractice insurers (Dana?) like so many helpless baby seals lying on the beach, each of us just waiting for our turn to be bludgeoned.  AI may thus potentially usher in another great malpractice crisis, like the one we had in the 1970s.


I will try to find that article again and forward it to the group.  The attorneys among us will undoubtedly conclude that the authors of the article (and my summary) are horribly biased!  And I can't deny it...  But hopefully the future reality of AI in legal medicine won't be quite as grim as the article predicts.


With regard to the AI algorithms themselves, as we have discussed so often before, it is clear that they are plagued by biases, and that those biases are often woven into the fabric of the thing in such a way that they are very difficult to eradicate.  You can refer to one of my prior posts for a more detailed discussion. So there is that.


Many of us have been watching the development of AI with some interest.  In Radiology, there was a new publication in The Lancet just last week showing that a novel AI algorithm significantly helped radiologists interpret mammograms, enhancing accuracy well beyond the performance of the human reader alone.  I don't think that study tested the performance of the AI alone; it just compared "human alone" to "AI + human," and the combination was significantly better.  I think this is the greatest potential benefit of AI: to improve human accuracy and performance (much more so than its potential to replace humans like me).  Mammography is a particularly fertile field for AI in this regard, since the answers are pretty much binary, i.e., the mammogram either shows breast cancer or no breast cancer.  AI is not quite as good at making more nuanced diagnoses using other types of imaging.
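
    For anyone curious what that comparison looks like mechanically, here is a minimal sketch of how a paired "reader alone" vs. "reader + AI" study is commonly analyzed (McNemar's test on the discordant reads). All numbers here are invented for illustration; none of this is from the Lancet paper, and for brevity the two arms are simulated independently rather than truly paired:

        # Toy sketch: comparing "reader alone" vs "reader + AI" on the same
        # cases with McNemar's exact test. Invented numbers, NOT study data.
        import numpy as np
        from scipy.stats import binomtest

        rng = np.random.default_rng(0)
        n_cases = 500

        # Simulated per-case correctness of each reading arm
        reader_alone   = rng.random(n_cases) < 0.85   # ~85% correct
        reader_plus_ai = rng.random(n_cases) < 0.91   # ~91% correct

        # McNemar's test looks only at discordant pairs
        b = int(np.sum(reader_alone & ~reader_plus_ai))  # alone right, combo wrong
        c = int(np.sum(~reader_alone & reader_plus_ai))  # combo right, alone wrong
        p = binomtest(b, b + c, 0.5).pvalue              # exact two-sided McNemar
        print(f"alone-only correct: {b}, combo-only correct: {c}, p = {p:.4f}")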


    I don't have that Lancet paper handy, but someone on the list might be able to look it up and post the PDF for us.


    All the best,


    Mike










  • 3.  RE: Artificial Intelligence (AI)

    Posted 02-09-2020 16:27
    Mike,
    It should be Open Access to those inclined.
    Cheers,

    David





  • 4.  RE: Artificial Intelligence (AI)

    Posted 02-09-2020 17:01
    Here is a pdf of the JAMA article summarizing medicolegal consequences of using AI in clinical settings, cited today by Michael Bruno:


    Peter Rudd, MD, FACP
    Professor of Medicine, emeritus
    Stanford University School of Medicine



    Attachment(s)

Price WN JAMA 2019.pdf (162K)


  • 5.  RE: Artificial Intelligence (AI)

    Posted 02-09-2020 20:23
    Likely Lancet article.
    tom



    Attachment(s)

Lancet AI.pdf (102K)


  • 6.  RE: Artificial Intelligence (AI)

    Posted 02-09-2020 18:55
    Dear Michael and others,

    Thanks for the comments and links.

    One thought that I had is, could we be better prepared?

I grew up in the UK during WW2, and I think I am correct in saying that the Spitfire and radar were developed in England BEFORE the war.

Those two things were mainly responsible for Germany not invading the UK during the war.

My father was in radar with the RAF, stationed not too far from us in the South of England. I recall on one occasion he called my mother at home for us to take cover, as the Messerschmitts were on their way, heading towards us. Family protection at its best!

I would also like to see us focus on what does not work well and could be improved (BP and the stethoscope! etc., etc.). I realize that would aid AI, but if we are part of the story we may be able to see that we, too, are protected in some way. Can we collaborate with the research people? Also, even if we could not specifically show a reduction in deaths and injuries, we should be able to sleep better knowing that we are probably improving things.

    Kind wishes.

    Rob Bell




  • 7.  RE: Artificial Intelligence (AI)

    Posted 02-09-2020 20:07

    Bob--what a terrific story--thanks for sharing!  Special thanks also to David and Peter for digging up the PDFs of the two articles I mentioned (JAMA and Lancet).  


    Would love to hear if anyone reached any different conclusions than I did after reading them. 


    Best wishes to all,


    Mike



    Michael A. Bruno, M.D., M.S., F.A.C.R.   
    Professor of Radiology & Medicine

    Vice Chair for Quality & Patient Safety

    Chief, Division of Emergency Radiology

    Penn State Milton S. Hershey Medical Center
Phone: (717) 531-8703  |  Fax: (717) 531-5737

Email: mbruno@pennstatehealth.psu.edu


     






  • 8.  RE: Artificial Intelligence (AI)

    Posted 02-09-2020 20:41
    Very interesting articles Mike,
    I think that one question to be answered is how decision support will be managed when it comes to malpractice (or indeed even more broadly how it pertains to forensic reconstruction).
    As it stands, in the context of tort law, their analysis sounds reasonable at face value.
That said, I am unsure as to what will happen in a non-civil context (i.e., in the forensic sense).
In this context, I think we should be looking to our neighbors to the North and their Leader educational competency to "Use health informatics to improve the quality of patient care and optimize patient safety." There, the clinician's burden falls on the effective use of information as it pertains to quality and safety (in the American educational context, this would map onto clinical informatics).
If that's the standard by which we should judge AI's operationalization, I think the liability will become one of outcome rather than one of process.
All of this said, I'm no lawyer, so I'd be curious to hear what folks more aligned with the legal profession think.
    Thanks,

    David





  • 9.  RE: Artificial Intelligence (AI)

    Posted 02-09-2020 20:37

     

     


     

As you all know, Aerospace is very advanced in using computers to perform tasks that need high speed or advanced determination of conditions.  These are all preprogrammed based on a set of specific conditions.  However, AI has not gained much of a place in the flying of an airplane.  The reason is simple: AI cannot be guaranteed to do the same thing twice under a given set of conditions.  Continuous learning by the decision algorithms almost guarantees a more refined (or altered) outcome for the second occurrence.
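
    A toy sketch of that point, for the curious. This is purely illustrative (standard scikit-learn calls, invented data, a made-up "borderline case"), but it shows how an online-learning model can answer the same input differently after it has seen more data:

        # An online-learning classifier answering the SAME input twice,
        # before and after a further round of training. Illustrative only.
        import numpy as np
        from sklearn.linear_model import SGDClassifier

        rng = np.random.default_rng(1)
        X1 = rng.normal(0, 1, (200, 2))
        y1 = (X1[:, 0] + X1[:, 1] > 0).astype(int)

        model = SGDClassifier(loss="log_loss", random_state=1)
        model.partial_fit(X1, y1, classes=[0, 1])

        case = np.array([[0.05, -0.02]])               # a borderline case
        print("before update:", model.predict(case))

        # A new batch of data arrives; the decision boundary shifts
        X2 = rng.normal(0.3, 1, (200, 2))
        y2 = (X2[:, 0] - X2[:, 1] > 0).astype(int)
        model.partial_fit(X2, y2)
        print("after update: ", model.predict(case))   # may now differ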

     

So as AI progresses in the medical workplace, it will likely be another "tool" to help the clinician and patient make determinations and decisions.  If physicians allow it to make the "selected" determination, they must confirm it.  Otherwise, your defense team cannot point to precisely how a determination was made, other than to point and say it was "the machine."

     

All tools are there to help make better determinations and decisions for treatment.  The physician who fails to confirm the situation is derelict in responsibility.  If you look at commercial airplane accidents, you see a predominance of "pilot error."  This is always the case when the pilot fails to be aware of the situation and think before acting.

     

       Nelson Toussaint

     

    TAMARAC LLC

    860-844-0199

    ntoussaint@tamarac.com

     

     






  • 10.  RE: Artificial Intelligence (AI)

    Posted 02-10-2020 07:46

    Mike: good points and good discussion.

    The concern that AI will inevitably have its own biases, either the ones we give the systems during the design process, or others that emerge from deep learning processes that we don't yet understand, is probably real.

The trap of outcome bias seems inevitable. It seems any output from AI systems is going to need to be examined for inherent biases.

If AI is meant to simulate human decision making, then it seems likely it will be biased, as bias is a normal operating characteristic of the brain.

    If it is meant to simulate human intelligence, we can probably take little comfort either as the cognitive sciences folk tell us that susceptibility to bias is the main challenge to rationality and rationality may not equate with intelligence.

    The good news might be that AI will be more amenable to debiasing strategies than we are.

     

    An immediate goal might be to find out more about how biases work collectively in human decision making. Mostly they have been studied in isolation but interactivity is very likely and might be important.

Can we say, for example, that whenever we see framing bias, ascertainment bias (seeing what you expect to see) inevitably follows? What is the likelihood of confirmation bias once anchoring has occurred? What about the more recently described snowball and cascade biases, described by Itiel Dror? If we knew a little more about interactivity, we might be able to advise AI developers better.

    Pat

     






  • 11.  RE: Artificial Intelligence (AI)

    Posted 02-10-2020 09:52
    Consideration of the use of AI in diagnosis, and medicine in general, should also take account of the “black box” nature of much modern AI. In the early days of biomedical informatics we expected an “expert system” to provide an audit trail of logical steps for its decisions. The best ones had a “why” function.

    With machine learning-based AI, the reasoning is in general not available, so the algorithm does not provide any reason a physician can offer in her/his defense. A nice conundrum for the profession and for lawyers lies in the new development of Explainable AI (XAI). In some cases at least, what is on offer is a second black box that “explains” the conclusion—not to say decision—of the first black box. Would this wash in a court of law?
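
    To make the "second black box" point concrete, here is a minimal sketch of one common XAI pattern, the post-hoc surrogate: a simple, readable model is fitted to the predictions of the opaque one, and its rules are offered as the "explanation." (Illustrative only; tools such as LIME and SHAP are more sophisticated variations on this theme.)

        # Post-hoc surrogate "explanation": fit a shallow tree to the
        # black box's PREDICTIONS and present its rules as the explanation.
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.tree import DecisionTreeClassifier, export_text

        X, y = make_classification(n_samples=1000, n_features=6, random_state=0)

        black_box = RandomForestClassifier(random_state=0).fit(X, y)
        y_bb = black_box.predict(X)                    # what we try to "explain"

        surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y_bb)
        fidelity = (surrogate.predict(X) == y_bb).mean()
        print(f"surrogate agrees with the black box on {fidelity:.1%} of cases")
        print(export_text(surrogate))                  # the offered "explanation"

    Note that the "explanation" is faithful only to the extent of that agreement figure, which is exactly the conundrum: the explanation is itself a model of a model.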

    Tony

    Anthony Solomonides PhD MSc(Math) MSc(AI) FAMIA
    Program Director, Outcomes Research and Biomedical Informatics
    Research Institute
    NorthShore University HealthSystem
    1001 University Place, Evanston, IL 60201

    224-364-7497













  • 12.  RE: Artificial Intelligence (AI)

    Posted 02-11-2020 20:12
    Could the learning process be "manipulated" to reduce the number of CT scans recommended to be undertaken?





  • 13.  RE: Artificial Intelligence (AI)

    Posted 02-11-2020 20:22
There are plenty of adversarial approaches to defeating the training of an AI, whether by input bias or other methods.
My favorite paper on the topic is "Deep neural networks are easily fooled: High confidence predictions for unrecognizable images" from IEEE CVPR 2015 <http://www.evolvingai.org/fooling>, and its accompanying YouTube video: <https://www.youtube.com/watch?v=M2IebCN9Ht4>.
The question becomes how interpretable these introduced errors are (and how manipulable the systems become in using the images).
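
For anyone who wants to see the mechanics, here is a minimal sketch of the simplest such attack: a gradient-sign perturbation against a plain linear classifier. This is far simpler than the deep nets in that paper, and the data are invented, but the principle is the same:

    # Adversarial perturbation (FGSM-style) against a linear model:
    # a tiny, targeted change to the input flips the prediction.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=500, n_features=20, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X, y)

    x = X[:1]                                  # one example to attack
    w = clf.coef_[0]
    margin = clf.decision_function(x)[0]

    # For a linear model the loss gradient wrt the input points along w,
    # so the worst-case step (per unit of max per-feature change) is
    # along sign(w). Step just far enough to cross the boundary.
    eps = 1.1 * abs(margin) / np.abs(w).sum()
    x_adv = x - np.sign(margin) * eps * np.sign(w)

    print("prediction before:", clf.predict(x)[0])
    print("prediction after: ", clf.predict(x_adv)[0])
    print("max change to any one feature:", round(eps, 4))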
    Cheers,

    David





  • 14.  RE: Artificial Intelligence (AI)

    Posted 02-12-2020 14:42
I think it could go either way. To manipulate it, you'd have to give it an incomplete data set.
E.g., in a group whose stats I've seen, anywhere from 8.5% to 50% of patients get CTs, with no difference in immediate medical outcomes. However, as the 50% person makes more money for the hospital and sees more patients, I think it perfectly logical that AI would indicate that more CTs be done; certainly more than 8.5%.
As I read AI, it is input/output agnostic, just creating various weights for a near-infinite number of variables, requiring only that the output = 1 (all internally generated variables accounted for).
I think one of the most pervasive errors promulgated about AI is that we can't query the algorithm.
We can query it; we just can't hold that many variables in our heads or make that many computations in a reasonable lifetime.
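
A trivial sketch of that point (an invented toy network, standard scikit-learn): every weight is right there to be queried; there are simply far too many of them to reason about unaided.

    # Every parameter of a trained network is inspectable on demand;
    # the problem is scale, not secrecy. Illustrative toy model.
    from sklearn.datasets import make_classification
    from sklearn.neural_network import MLPClassifier

    X, y = make_classification(n_samples=500, n_features=30, random_state=0)
    net = MLPClassifier(hidden_layer_sizes=(100, 100), max_iter=500,
                        random_state=0).fit(X, y)

    n_params = (sum(w.size for w in net.coefs_)
                + sum(b.size for b in net.intercepts_))
    print("trainable parameters:", n_params)       # ~13,000 even for this toy
    print("any single weight, fully queryable:", net.coefs_[0][0, 0])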

    tom benzoni





  • 15.  RE: Artificial Intelligence (AI)

    Posted 02-12-2020 16:06
    Good overview of AI in current JAMA: 323(6):509-510
    Ed





  • 16.  RE: Artificial Intelligence (AI)

    Posted 02-12-2020 16:10
    Thanks Tom,

Does that suggest that medical oversight is called for early in the development stages?

    Who will approve any AI diagnostic package?

    Will a body similar to the current FDA be needed to provide control and approval?

    Are there any government developmental programs being discussed?

    Are there any SIDM committees now dedicated to keeping an eye on AI in diagnosis?

    I suspect we already need to be pretty active. 

    Rob Bell





  • 17.  RE: Artificial Intelligence (AI)

    Posted 02-10-2020 10:15

    Thanks, Pat!

     

I appreciate your detailed and insightful discussion of these biases, and I think your point is extremely well taken: the AI designers will almost certainly be "imparting" their own human-brain biases into any AI system designed by humankind (the AI is being designed to mimic human thinking, after all), and so the "imported" biases will become a normal operating characteristic of the AI, just as they are for the brains that designed it.  But, at least in theory, as you say, de-biasing might be more possible with the machine than with the man.  There are examples to the contrary, though.  Amazon, not long ago, designed a hiring algorithm to use AI to screen resumes.  They were horrified to learn that it rapidly became biased in favor of white male applicants.  No matter what they did to "tweak" it, the algorithm could not be de-biased!  They were forced to scrap it, or else they would have had to scrap their commitment to diversity in hiring.

     

Getting to Anthony's point, much of what happens inside the AI is a black box... as if it does all of its thinking in Vegas!  Explainable AI (XAI) is an exciting concept, but as Anthony points out, it's just another black box that tries to explain the first one!  So it's very limited, and trustworthiness is a huge issue.

     

I remain skeptical about AI applications in (very conservative) medicine, but Google, Amazon and others are rushing headlong into our business, and they will disrupt us in some way; count on it.  The JAMA article and my earlier post were mostly about how an over-hyped AI approach could rapidly be twisted for nefarious purposes, and it is easy to imagine how AI could be effectively weaponized by the plaintiffs' bar.  For them, it could become the perfect all-purpose tool: adaptable to any situation and always useful for supporting their claims... like a two-headed coin.  They always win, and doctors/hospitals always lose, with every toss of this rigged coin.

     

In that way, AI will potentially introduce even MORE bias (especially more outcome bias) into the legal system than already exists there.

     

    All the best,

     


    Michael A. Bruno, M.D., M.S., F.A.C.R.  
    Professor of Radiology & Medicine

    Vice Chair for Quality & Patient Safety

    Chief, Division of Emergency Radiology

    Penn State Milton S. Hershey Medical Center
Phone: (717) 531-8703  |  Fax: (717) 531-5737

Email: mbruno@pennstatehealth.psu.edu

     


     

     






  • 18.  RE: Artificial Intelligence (AI)

    Posted 02-10-2020 10:41

Humans are black boxes as well; we just have a lot more experience as a society with addressing the consequences of biased human decision-making, including a better comfort level with the boundaries around it.

     

Looking at the chess world: once Deep Blue beat the best human players, it became less interesting to create better and better stand-alone chess-playing algorithms.  Instead, attention turned to human-computer team-based tournaments (https://en.wikipedia.org/wiki/Advanced_chess), which turned out to work pretty well.

     

I think the next logical step for medical AI will be collaborative human-computer teams, where each party acts as a check on the other's idiosyncrasies.

     

    --Brian

     






  • 19.  RE: Artificial Intelligence (AI)

    Posted 02-10-2020 10:47
    Brian,

    I agree with your observations, except your opening sentence! There is no way humans are black boxes in the sense that machine learning AI is a black box, unless you believe that all human learning is based on nothing but conditioning.

    Best,

    Tony




    Anthony Solomonides PhD MSc(Math) MSc(AI) FAMIA
    Program Director, Outcomes Research and Biomedical Informatics
    Research Institute
    NorthShore University HealthSystem
    1001 University Place, Evanston, IL 60201

    224-364-7497













  • 20.  RE: Artificial Intelligence (AI)

    Posted 02-10-2020 10:59

Humans are a very different sort of inscrutable device, to be certain.  Maybe I should have said, a dark gray box?

     






  • 21.  RE: Artificial Intelligence (AI)

    Posted 02-10-2020 12:14
    Thanks Pat,

    I was thinking that the majority of the current biases we have would disappear, but a few would be transferred.

However, other biases would surface, with incomplete disease-frequency data being important (keeping that pertinent globally would be immensely difficult).

    And would efficiency, labor, and cost considerations become more important, providing the main new biases?

    Rob Bell




  • 22.  RE: Artificial Intelligence (AI)

    Posted 02-11-2020 10:03
    Hi everyone,

    Very interesting discussion. I think it's important that we're all on the same page about what AI means and how it is "made", because it has important implications for this discussion.

In modern computing, machine learning algorithms are not intelligent in the way people use the term, and they are not programmed to make specific decisions. They are simply a set of algorithms that allow a machine to train its own function (usually a type of discriminative function). Rather than program the function, the machine is provided with data and "learns," refining the function over time. As the function is refined with more data, it becomes better and better.

This means that humans cannot impart their own biased thinking because, again, they are not programming the function. There are really two sources of bias that can be introduced in such algorithms, and both are the result of the training data provided:
1. For so-called supervised algorithms (essentially all of the kinds we are talking about), we could provide the wrong diagnosis in the input-output pairs of the training data. If there are no errors in the final diagnoses of the training data, this source of bias would not be present.
2. For both supervised and unsupervised algorithms, the machine could be given biased training data. In other words, if it is only given certain variables that we think are important, rather than all the data the clinician uses, then, due to an incomplete picture, the machine learning algorithm may reach the wrong conclusion because of omitted information.

In both cases, the "bias" of the machine learning algorithm is really a failure to provide the machine with objectively complete data or with the correct diagnosis. In the absence of these errors by the humans training (not programming!) the algorithm, the biases used by humans in the diagnostic process cannot be imparted to it, because humans can't directly affect the computation or "thinking". Humans only affect the input and output data it uses to program itself!
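
Here is a toy sketch of the first failure mode (all names and numbers invented): the same learning algorithm is fitted twice, once on clean labels and once on labels systematically corrupted for a subgroup. Only the data differ, yet the second model under-diagnoses that subgroup.

    # Bias enters through the training data, not the code: identical
    # algorithm, different labels. Invented data for illustration.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000
    features = rng.normal(size=(n, 4))
    group = (rng.random(n) < 0.2).astype(float)    # a 20% minority subgroup
    X = np.column_stack([features, group])         # membership is a feature
    y = (features[:, 0] + features[:, 1] > 0).astype(int)  # true diagnosis

    # Wrong diagnoses in the input-output pairs: the subgroup's positive
    # cases are often recorded as negative in the training labels.
    y_noisy = y.copy()
    flip = (group == 1) & (y == 1) & (rng.random(n) < 0.4)
    y_noisy[flip] = 0

    clean  = LogisticRegression(max_iter=1000).fit(X, y)
    biased = LogisticRegression(max_iter=1000).fit(X, y_noisy)

    mask = (group == 1) & (y == 1)                 # subgroup's true positives
    print("clean-label model, subgroup sensitivity:",
          round((clean.predict(X[mask]) == 1).mean(), 2))
    print("noisy-label model, subgroup sensitivity:",
          round((biased.predict(X[mask]) == 1).mean(), 2))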

    I hope this helps advance the very interesting discussion.

    Steve

    ------------------------------
    Steven Roy
    ------------------------------



  • 23.  RE: Artificial Intelligence (AI)

    Posted 02-11-2020 10:22
    Folks, 

Very helpful discussions, as I am working on algorithms for the diagnosis of HLH and its mimics, as well as assessing clinician behavior when a hierarchy of intelligence and expertise is perceived: thank you.

For any of you in the Columbus, Ohio (OSU) area later this week (2/14 and 2/15), I highly recommend a play about AI and relative intellectual disability, The Shadow Whose Prey the Hunter Becomes: https://backtobacktheatre.com/

    Beth Martin-Kool






  • 24.  RE: Artificial Intelligence (AI)

    Posted 02-11-2020 10:29

    Wow, that play sounds very interesting.  You will need to write a review for our list-serve after you see it!   How are your algorithms going to diagnose HLH?

     

    Mike

     

     

     






  • 25.  RE: Artificial Intelligence (AI)

    Posted 02-11-2020 11:44

There's an important category of AI bias related to spectrum that I think is worth pointing out:  algorithms will almost always be less accurate at classifying subsets that are under-represented in the training set.  For example, you may have heard news coverage of facial recognition software that is much worse at recognizing black faces than white faces.  In most of these cases, it is because the algorithms were trained on data sets that contained mainly white (or at least non-black) faces.  Most AI algorithms are designed to optimize the average performance across the full training set, rather than optimizing within individual subgroups (though this is technically possible as well).  Thus, the particular combination of weights that optimizes the identification of light-colored faces could be quite different from the combination that would optimize the identification of dark-colored faces.  And which combination wins out in that scenario? It comes down to majority rule: are there more light or dark faces in the training set?
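
Here's a toy sketch of that "majority rule" effect (invented data): one model trained to maximize average accuracy, on a population where the minority subgroup's true decision boundary differs from the majority's.

    # One average-optimizing model, two subgroups with different true
    # boundaries: the minority's accuracy suffers. Illustrative only.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def make_group(n, w):
        # each group's true label uses a DIFFERENT weighting of features
        X = rng.normal(size=(n, 2))
        y = (X @ w + 0.3 * rng.normal(size=n) > 0).astype(int)
        return X, y

    X_maj, y_maj = make_group(1900, np.array([1.0,  1.0]))  # 95% of data
    X_min, y_min = make_group(100,  np.array([1.0, -1.0]))  # 5% minority

    model = LogisticRegression().fit(np.vstack([X_maj, X_min]),
                                     np.concatenate([y_maj, y_min]))

    for name, Xg, yg in [("majority", X_maj, y_maj),
                         ("minority", X_min, y_min)]:
        print(f"{name} accuracy: {(model.predict(Xg) == yg).mean():.2f}")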

     

In a medical context, there's a famous AI example from researchers at the University of Pittsburgh, who developed algorithms for classifying pneumonia patients into high-risk and low-risk.  The algorithms were quite accurate on average, but were inaccurate for patients who had both pneumonia and asthma, a subset that turns out to be extremely important clinically but numerically in the minority.  Now, there was more going on with the pneumonia data set beyond simply spectrum issues, but this story does illustrate how AI can badly misfire on numerically small subsets without it being obvious to the algorithm developers.

     

    --Brian

     






  • 26.  RE: Artificial Intelligence (AI)

    Posted 02-11-2020 11:53

    Hi Brian,
I totally agree. The training data determine the quality of the trained function. This is why Google and Facebook are doing so well in machine learning: not so much because they have better algorithms, but because they have access to such enormous training sets that the algorithms get well trained even on low-probability events!

    Steve



    ------------------------------
    --
    Steven Roy
    ------------------------------



  • 27.  RE: Artificial Intelligence (AI)

    Posted 02-10-2020 10:31
    Hello everyone, 
    I am a new SIDM member but have been active for many years in the health informatics community and the Society for Participatory Medicine.

This discussion reminded me of an issue that was discussed in the HI community about 10 years ago: vendor "hold harmless" clauses.  These clauses protect the vendor if an error in the software causes patient harm. At that time the concern was clinical decision support algorithms, and although this is not my specialty area, AI would appear to raise similar EHR-related issues, as the foundation of both is algorithms and machine learning. I have attached an article that may be of interest.

Koppel, R., & Kreda, D. (2009). Health care information technology vendors' "hold harmless" clause: Implications for patients and clinicians. JAMA, 301(12), 1276-1278.

    Regards, 
    Marge

    Marge Benham-Hutchins, PhD, RN
    Associate Professor
    Chair Department of Biobehavioral Health Science
    Texas A&M University, Corpus Christi

    ------------------------------
    Marge Benham-Hutchins
    Texas A&M University Corpus Christi
    ------------------------------

    Attachment(s)

Hold Harmless.pdf (122K)