Things are still moving fast. It’s mid/late July now and I’ve spent some time outside, enjoying the summer. It’s been a few weeks since things exploded back in May this year. Have you people settled down in the meantime?

I’ve since moved away from Reddit, and I miss the LocalLlama community over there, which was (and still is) buzzing with activity, AI news and discussions every day.

What are you people up to? Have you gotten tired of your AI waifus? Or finished indexing all of your data into some vector database? Have you discovered new applications for AI? Or still toying around and evaluating all the latest fine-tuned variations in constant pursuit of the best llama?

  • bia@lemmy.ml
    1 year ago

    I used it quite a lot at the start of the year, for software architecture and development. But the number of areas where it was useful was small, and running it locally (which I do for privacy reasons) is quite slow.

    I noticed that much of what was generated needed to be double-checked and was sometimes just wrong, so I’ve basically stopped using it.

    Now I’m hopeful for better code-generation models, and I will spend the fall building a framework around a local model, to see if that helps in guiding the model’s generation.

    • zephyrvs@lemmy.ml
      1 year ago

      I’m pumped for Llama 2, which was released yesterday. Early tests show some big improvements. Can’t wait for Wizard/Vicuna/Uncensored versions of it.

      • Toxuin@lemmy.ca
        1 year ago

        It’s marginally better than the original but WAY more censored, and the filtering is pretty intrusive. It refused to write a bash script to kill a process by regexp 🤦
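
        For the record, the refused task is a few lines with pgrep/pkill; a rough sketch (the signal choice and error handling here are just one way to do it):

```shell
#!/usr/bin/env bash
# killre: send SIGTERM to every process whose full command line matches a regexp.
killre() {
  local pattern="${1:?usage: killre <regexp>}"
  # pgrep -f matches against the full command line; check for hits first
  # so a bad pattern fails loudly instead of silently killing nothing.
  if pgrep -f "$pattern" > /dev/null; then
    pkill -f "$pattern"
  else
    echo "killre: no process matches '$pattern'" >&2
    return 1
  fi
}
```

        `pkill -9 -f` would force-kill instead, but SIGTERM gives processes a chance to clean up.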

        • zephyrvs@lemmy.ml
          1 year ago

          The first uncensored variants are already on Hugging Face though; look for TheBloke. :)

  • zephyrvs@lemmy.ml
    1 year ago

    I’m building an assistant for Jungian shadow work with persistent storage, but I’m a terrible programmer so it’s taking longer than expected.

    Since shadow work is very intimate and personal, I wouldn’t trust a ChatGPT integration and I’d never be fully open in conversations.
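
    (For what it’s worth, the persistent-storage part can be as simple as a local SQLite file, which keeps every conversation on your own disk. A minimal, hypothetical sketch; the schema and names are made up for illustration, not the actual project’s code:)

```python
import sqlite3

def open_store(path: str = "shadow_work.db") -> sqlite3.Connection:
    """Open (or create) a local conversation store; nothing leaves the machine."""
    con = sqlite3.connect(path)
    con.execute(
        "CREATE TABLE IF NOT EXISTS messages ("
        " id INTEGER PRIMARY KEY,"
        " role TEXT NOT NULL,"      # 'user' or 'assistant'
        " content TEXT NOT NULL,"
        " ts TEXT DEFAULT CURRENT_TIMESTAMP)"
    )
    return con

def save_message(con: sqlite3.Connection, role: str, content: str) -> None:
    con.execute("INSERT INTO messages (role, content) VALUES (?, ?)",
                (role, content))
    con.commit()

def recent_history(con: sqlite3.Connection, limit: int = 20):
    rows = con.execute(
        "SELECT role, content FROM messages ORDER BY id DESC LIMIT ?",
        (limit,),
    ).fetchall()
    return rows[::-1]  # oldest first, ready to prepend to the next prompt
```
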

    • rufus@discuss.tchncs.de (OP)
      1 year ago

      Wow. I’m always amazed by the (previously unknown to me) stuff people do. I had to look that one up. Is this some kind of leisure activity? Self-improvement or self-therapy? Or are you just pushing the boundaries of psychology?

      • zephyrvs@lemmy.ml
        1 year ago

        I was fascinated by Jung’s work after tripping on shrooms and becoming obsessed with understanding consciousness. I had already stumbled upon llama.cpp and started playing around with LLMs, so I decided to build a prototype for myself, since I’ve been doing shadow work for self-therapy reasons anyway.

        It’s not really that useful yet, and making it into a product is unlikely, because most people who wouldn’t trust ChatGPT won’t trust an open-source model on my machine(s) either. Also, shipping a product glued together from multiple open-source components with rather strict GPU requirements seems like a terrible experience for potential customers, and I don’t think I could handle the effort of supporting others in setting it up properly. Dunno, we’ll see. :D