• PeepinGoodArgs@reddthat.com · 4 months ago

    Anecdotally, this was my experience as a student when I tried to use AI to summarize and outline textbook content. The result was almost always incomplete, such that I’d have to have already read the chapter to fill in what the model missed.

    • just another dev@lemmy.my-box.dev · 4 months ago

      I’m not sure how long ago that was, but LLM context sizes have grown exponentially in the past year, from 4k tokens to over a hundred k. That doesn’t necessarily improve the quality of the output, but you can’t expect a model to summarize what it can’t hold in memory.
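      The point about the context window is mechanical: any text beyond the window is simply never seen by the model, so a long chapter has to be split into window-sized chunks and summarized piece by piece. A minimal sketch of that chunking step, using a naive whitespace word count as a stand-in for a real tokenizer (actual tokenizers like BPE count differently):

```python
def chunk_text(text: str, max_tokens: int = 4000) -> list[str]:
    """Split text into pieces of at most max_tokens 'tokens',
    approximating tokens as whitespace-separated words."""
    words = text.split()
    return [
        " ".join(words[i : i + max_tokens])
        for i in range(0, len(words), max_tokens)
    ]

# A 10,000-word "chapter" against a 4k window needs 3 chunks;
# with a single pass, the last 6,000 words would be silently dropped.
chapter = ("word " * 10_000).strip()
chunks = chunk_text(chapter, max_tokens=4000)
print(len(chunks))  # 3
```

      Each chunk would then be summarized separately and the partial summaries merged, which is exactly where omissions creep in: the model never sees the chapter as a whole.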