For example, does a 13B parameter model at Q2_K quantization perform worse than a 7B parameter model at 8-bit or 16-bit?
https://github.com/ggerganov/llama.cpp#quantization
https://github.com/ggerganov/llama.cpp/pull/1684
Regarding your question: 13B at Q2_K seems to be roughly on par with 7B at 16-bit and 8-bit; there isn't much difference between them. (Compare the perplexity values. Lower is better.) The second link has a nice graph.
Most people don’t go as low as 2-bit, though. It’s considerably worse than 4-bit.
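If you want to check this yourself, a minimal sketch along these lines should work, assuming you've built llama.cpp with its quantize and perplexity tools and have a wikitext test file; the model paths are just placeholders:

```sh
# Quantize an f16 model down to Q2_K (paths are illustrative)
./quantize ./models/13B/ggml-model-f16.bin ./models/13B/ggml-model-q2_k.bin Q2_K

# Measure perplexity on a test set (lower is better);
# run the same command against the 7B 16-bit/8-bit files and compare
./perplexity -m ./models/13B/ggml-model-q2_k.bin -f wiki.test.raw
```

Fair warning: the perplexity pass over the full test set is slow; the README has rough timing numbers.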
Anyone else seeing a comment count of 11 on the post but only 2 actual comments…?
Yes
Well, a few of those extra numbers are my fault. I edited my answer a few times, and Lemmy reportedly counts each edit as an additional comment (when the user and the community are on different instances). I hope they fix that soon.