As AI capabilities advance into the complex medical scenarios doctors face every day, the technology remains controversial in medical communities.

  • theluddite@lemmy.ml

    Ya, that’s a fundamental misunderstanding of percentages. For an analogous situation we’re all more intuitively familiar with: a self-driving car that is 99.9% accurate at detecting obstacles still crashes into one in every thousand people and/or things it encounters. That sucks.
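
    Rough arithmetic to make that concrete - a toy sketch with made-up numbers, assuming every obstacle encounter is independent:

    ```python
    # Toy illustration (made-up numbers, not real self-driving data):
    # expected missed obstacles at a given per-encounter detection accuracy,
    # assuming every encounter is independent.
    accuracy = 0.999        # hypothetical per-obstacle detection accuracy
    encounters = 1_000_000  # hypothetical number of obstacles encountered
    expected_misses = encounters * (1 - accuracy)
    print(round(expected_misses))  # 1000 -> roughly one miss per thousand encounters
    ```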

    Also, most importantly, LLMs are incapable of collaboration, something very important in any complex human endeavor but difficult to measure, and therefore undervalued by our inane, metrics-driven business culture. ChatGPT won’t develop meaningful, mutually beneficial relationships with colleagues the way people do, who can ask each other for their thoughts when they don’t understand something. It’ll just spout bullshit when it’s wrong, not because it doesn’t know, but because it has no concept of knowing at all.

    • deweydecibel@lemmy.world

      It really needs to be pinned to the top of every single discussion around ChatGPT:

      It does not give answers because it knows. It gives answers because they look right.

      Remember back in school when you didn’t study for a test and went through picking answers that “looked right” because you vaguely remember hearing the words in Answer B during class at some point?

      It will never have wisdom and intuition from experience, and that’s critically important for doctors.

      • Puzzle_Sluts_4Ever@lemmy.world

        How do you think medicine works?

        Tests are expensive, invasive, and often outright destructive. Medicine is very much about treating symptoms and trying to determine what diagnosis “looks right”. If we had perfect knowledge then this would actually be trivial for AI because it would literally just be looking up symptoms in a textbook.

        I was going to make a joke about not watching so much House but… actually watch more House. Ignore all the medical malpractice, complete lack of ethics, etc. Instead, focus on just how often they get down to two or three probable diagnoses based on the known data (… and ignore that it is always the obscure one). And then they get a new symptom and a new set of probable diagnoses.

        • ourob@discuss.tchncs.de

          “Looks right” in a human context means the answer that matches a person’s actual experience and intuition. “Looks right” in an LLM context means the series of words has appeared together often in the training data (as I understand it, anyway - I am not an expert).
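
          A crude sketch of what that “seen together often” idea amounts to - a toy word-frequency model, nothing like a real LLM’s architecture, just to illustrate the point:

          ```python
          # Toy sketch (nothing like a real LLM): pick the next word purely by how
          # often it followed the previous word in the "training" text. There is
          # no notion of truth or knowing - only "seen together often".
          from collections import Counter, defaultdict

          corpus = "the patient has a fever the patient has a rash".split()

          follows = defaultdict(Counter)
          for prev, nxt in zip(corpus, corpus[1:]):
              follows[prev][nxt] += 1

          def next_word(word):
              # The most frequent continuation wins, correct or not.
              return follows[word].most_common(1)[0][0]

          print(next_word("patient"))  # 'has' - it just "looks right" statistically
          ```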

          Doctors are most certainly not choosing treatment based on what words they’ve seen together.

          • Puzzle_Sluts_4Ever@lemmy.world

            What are experience and intuition other than past data and knowledge of likely outcomes?

            If my doctor is “using their gut” to decide what treatment I need and not basing it on data, I am gonna ask for another doctor. Also, if/when I die, they are completely screwed if there is any form of investigation.

            Doctors are most certainly not choosing treatment based on what words they’ve seen together.

            They literally (in the truest sense of the word) are. Words have meaning and are how we convey information related to collected data. Collected data is compared against past data and experience and slotted into likely diagnoses. This is likely an incredibly bad example, but: stomach ache, fever, constipation, and nose bleeds? You might have stomach cancer. Let’s do a biopsy.

            Doctors read charts. That is how they know what data has been collected. There is a LOT of value in the doctor also speaking to the patient, but that speaks more to patients withholding valuable information from staff because “they are just a nurse” or whatever other stupidity.

            But, at the end of the day, it is data aggregation followed by pattern matching, which AI is actually incredibly good at. It is just the data collection that is a problem, and that is a problem in medicine as a whole. Just ask any woman or person of color who was ignored by their doctor until they got a white dude to talk for them.
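
            To make “pattern matching” concrete, a deliberately naive sketch - the conditions and symptom sets below are placeholders for illustration, not medical advice, and real clinical decision support is far more involved:

            ```python
            # Deliberately naive sketch of matching a symptom list against candidate
            # diagnoses. Conditions and symptom sets are made up for illustration.
            KNOWN = {
                "stomach cancer": {"stomach ache", "constipation", "nose bleeds"},
                "gastritis":      {"stomach ache", "fever"},
                "appendicitis":   {"stomach ache", "fever", "constipation"},
            }

            def rank(symptoms):
                observed = set(symptoms)
                # Score each candidate by how many of its known symptoms were observed.
                scores = {dx: len(observed & sx) for dx, sx in KNOWN.items()}
                return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

            print(rank(["stomach ache", "fever", "constipation", "nose bleeds"]))
            # The top-scoring candidates are the "probable diagnoses" to actually test for.
            ```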

            This is very much a situation where a bit of a philosophy background helps a lot, because the questions of “what is consciousness” and “what is thought” long predate even the idea of artificial intelligence. And as you start breaking down cognitive functions and considering what they actually mean…

            Just to be clear, I am not at all saying LLMs will take over for doctors. But as a preliminary diagnosis and a tool used throughout patient care? We have a ways to go, but this will be incredibly valuable. Likely in the same way that good doctors learn to listen to and even consult the nurses who spend the most time with those patients.

            Which is what we already see in computer programming and (allegedly) other fields like legal writing and the like. You still need a trained professional to check the output. But reading through a dozen unit tests and utility functions or some boilerplate text is a lot faster than writing it. And then they can spend the majority of their “brain juice” on the more interesting/difficult problems.

            Same here. Analyzing blood tests after a physical, or the vast majority of stuff at an urgent care? Let the AI do the first pass and the doctors can check that it makes sense. Which lets them focus on the emergencies and the “chronic pain” situations.
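
            As a hedged sketch of that “first pass” idea - the analytes and reference ranges below are placeholder numbers for illustration, not clinical values:

            ```python
            # Sketch of a "first pass" over blood test results: flag anything outside
            # a reference range for a clinician to review. Analytes and ranges are
            # placeholder values for illustration, not clinical guidance.
            REFERENCE_RANGES = {            # analyte: (low, high)
                "hemoglobin_g_dl": (12.0, 17.5),
                "wbc_10e9_l":      (4.0, 11.0),
                "glucose_mg_dl":   (70.0, 140.0),
            }

            def first_pass(results):
                """Return the values a human should look at first."""
                flagged = {}
                for analyte, value in results.items():
                    low, high = REFERENCE_RANGES[analyte]
                    if not (low <= value <= high):
                        flagged[analyte] = value
                return flagged

            print(first_pass({"hemoglobin_g_dl": 10.2,
                              "wbc_10e9_l": 6.1,
                              "glucose_mg_dl": 155.0}))
            # -> {'hemoglobin_g_dl': 10.2, 'glucose_mg_dl': 155.0} for the doctor to check
            ```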