Something disgusting about the recent updates to Mistral:
Would it be possible to fork the Mistral code and remove the ethical-guideline parts? If so, we should create a community whose members might donate their hardware resources for training such a model or models… I am strongly against any censorship/ethical alignment in models.
I am wondering if I can run Mistral/Mixtral on my server. It doesn't have a video card, but the amount of RAM is almost unlimited: I have ~100 GB free, can top it up to 1 TB if needed, and can give it 20–25 vCPU cores (the rest of the CPU cores are already in use).
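A back-of-envelope sketch (my own arithmetic, not from the thread) of the RAM needed just to hold Mixtral 8x7B's weights at different quantization bit-widths, assuming its roughly 46.7B total parameters:

```python
def weight_ram_gb(params_billions: float, bits_per_weight: float) -> float:
    """Approximate GB needed for the raw weights alone (ignores KV cache and runtime overhead)."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

# Mixtral 8x7B has roughly 46.7B total parameters.
for bits in (16, 8, 4):
    print(f"{bits}-bit: ~{weight_ram_gb(46.7, bits):.0f} GB")
# 16-bit: ~93 GB, 8-bit: ~47 GB, 4-bit: ~23 GB
```

So ~100 GB of RAM comfortably fits even an 8-bit quantization of the full model, though CPU-only generation speed is a separate question.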
Do cloud services see everything, i.e. the text/image data used for training, and the finished trained model? If so, runpod.io etc. are no solution.
As for me, I found only one group about LLMs here via search. You could make Lemmy better by sharing other groups for similar topics, since the search feature doesn't work well yet.
How do I download it for MLCChat (Android)?
Is it the censored one?
Is the training code open too?