Trying something new, going to pin this thread as a place for beginners to ask what may or may not be stupid questions, to encourage both the asking and answering.

Depending on activity level I’ll either make a new one once in a while or I’ll just leave this one up forever to be a place to learn and ask.

When asking a question, try to make it clear what your current knowledge level is and where you may have gaps — that should help people provide more useful, concise answers!

  • librecat@lemmy.basedcount.com · 1 year ago

    Knowledge level: Enthusiastic spectator. I don’t make or fine-tune LLMs, but I do watch AI news, try out local LLMs, and use things like GitHub Copilot and ChatGPT.

    Question: Is it better to use Code Llama 34B or Llama 2 13B for a non-coding-related task?

    Context: I’m able to run either model locally, but I can’t run the larger 70B model. So I was wondering if running the 34B Code Llama would be better since it is larger. I heard that models with better coding abilities are better at other types of tasks too, and that they are better with logic (I don’t know if this is true, I just heard it somewhere).

    • noneabove1182@sh.itjust.works (OP) · 1 year ago

      I feel like for non-coding tasks you’re sadly better off using a 13B model; codellama lost a lot of knowledge/chattiness from its coding fine-tuning

      THAT SAID, it actually kind of depends on what you’re trying to do: if you’re aiming for RP, don’t bother, but if you’re thinking about summarization or logic tasks or RAG, codellama may do totally fine, so more info may help

      If you have 24 GB of VRAM (my assumption if you can load 34B) you could also play around with 70B at 2.4 bpw using exllamav2 (if that made no sense, lemme know if it interests you and I’ll elaborate), but it’ll probably be slower
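      For anyone wondering why 2.4 bpw is the magic number here, a rough back-of-the-envelope sketch (bits-per-weight figure from the comment above; the overhead allowance is my own loose assumption — actual usage also depends on context length and KV cache):

      ```python
      # Napkin math: does a quantized model's weights fit in VRAM?
      # bpw = bits per weight (2.4 is the quantization level mentioned above).

      def model_vram_gb(n_params_billion: float, bpw: float) -> float:
          """Approximate VRAM needed for the weights alone, in GB."""
          return n_params_billion * 1e9 * bpw / 8 / 1e9  # bits -> bytes -> GB

      weights = model_vram_gb(70, 2.4)
      print(f"70B @ 2.4 bpw: ~{weights:.1f} GB of weights")  # ~21.0 GB

      # A full fp16 copy would be wildly out of reach for a single 24 GB card:
      print(f"70B @ fp16:    ~{model_vram_gb(70, 16):.0f} GB")  # ~140 GB
      ```

      So at 2.4 bpw the weights land around 21 GB, leaving only a few GB of a 24 GB card for the KV cache and overhead — it fits, but just barely, which is part of why it runs slower.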