Genuinely curious.

Why do you like LLMs? What hopes do you have for AI & AGI in our near and distant future?

  • nottheengineer@feddit.de · 9 upvotes · 1 year ago

    Right now an LLM is basically a two-year-old that knows every language in the world and has the entire knowledge of humanity squeezed into its little head.

    They’re also fun to work with. Error messages are boring when you can instead try to figure out where an LLM got the idea to say what it did.

    • Blaed@lemmy.world (OP, mod) · 2 upvotes · 1 year ago

      What I find particularly exciting is that we’re seeing this evolution in real-time.

      Can you imagine what these models might look like in 2 years? 5? 10?

      There is a remarkable future on the horizon. I hope everyone gets an equal chance to be a part of it.

        • nottheengineer@feddit.de · 1 upvote · 1 year ago

        They will get better and might actually become a threat to software engineers, but I don’t think LLMs in their current form will get us much closer to AGI.

        We need to do reinforcement learning in the real world to get there. And that will be hard, because right now we have the internet as an essence of human knowledge, mostly in text form, so it’s super easy to work with. It’s basically easy mode in the context of AGI (not to discredit the people working on SOTA LLMs, I just think the way ahead of us will be even harder).