Google rolled out AI Overviews across the United States this month, exposing its flagship product to the hallucinations of large language models.

  • yukijoou@lemmy.blahaj.zone · 4 months ago

    for it to “hallucinate” things, it would have to believe in what it’s saying. ai is unable to think - so it cannot hallucinate

    • Jrockwar@feddit.uk · 4 months ago

      Hallucination is a technical term. It has nothing to do with thinking. The scientific community could have chosen another term to describe the issue, but hallucination captures really well what’s happening.

      • yukijoou@lemmy.blahaj.zone · 4 months ago

        huh, i kinda assumed it was a term mostly made up or co-opted by journalists. are there actual research papers that use it?

        • TheBlackLounge@lemm.ee · 4 months ago

          It used to mean all generated output, though. Calling only the mistakes hallucinations is new, and that’s definitely because of the hype.