• paddirn@lemmy.world · 4 months ago

    It seems like such a weird thing to marry up with internet searching: a method where the algorithms can & will “hallucinate” and just make shit up, versus finding the very specific information a person is searching for. Why ever trust these LLMs with facts? These things should’ve only ever been marketed for creative writing and art, not shit like writing legal briefs and school papers and such.

    • Blóðbók@slrpnk.net · edited · 4 months ago

      Maybe I can share some insight into why one might want to.

      I hate searching the internet. It’s a massive mental drain for me to try to figure out how to put my problem into the words that others with the same problem will have used before me - it’s my mental processing power wasted on purely linguistic overhead instead of on trying to understand and learn about the problem.

      I hate the (dis-/mis-)informational assault I open myself to by skimming through the results, because the majority of them will be so laughably irrelevant, if not actively malicious, that I become a slightly worse person every time I expose myself.

      And I hate visiting websites. Not only because of all the reasons modern websites suck, but because even when they are a delight in UX, they distract me from what I really want, which is (most of the time) information, not to experience someone’s idiosyncratic, artistic ideas for how to organise and present data, or how to keep me ‘engaged’.

      So yes, I prefer a stupid language model that will lie about facts half the time and bastardise half my prompts if it means I can glean a bit of what the internet has to say about something, because I can more easily spot plausible bullshit and discard it or quickly check its veracity than I can magic my vague problem into a suitable query only to sift through more ignorance, hostility, and implausible bullshit conjured by internet randos.

      And yes, LLMs really do suck even in their domain of speciality (language - because language serves a purpose, and they do not understand it), and they are all kinds of harmful, dangerous, and misused. Given how genuinely ignorant people are of what an LLM really is and what it is really doing, I think it’s irresponsible to embed one the way Google has.

      I think it’s probably best to… uhh… sort of gatekeep this tech so that it’s mostly utilised by people who understand the risks. But capitalism is incompatible with niches and bespoke products, so every piece of tech has to be made with absolutely everyone as a target audience.