And no, I do not have the privilege of running a local model. I have heard of an AI called Maple and tried it out, but it was limited enough to be a deal breaker (a 25-messages-per-week cap). I would like to know about more services.
The Qwen models that you can run with llama.cpp (Jan's main backend) are quite brilliant for their size.
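For anyone curious what running one of those Qwen models actually looks like, here is a minimal sketch using the llama-cpp-python bindings, which wrap the same llama.cpp backend Jan uses. The model filename and the settings are assumptions (use whatever GGUF build you download):

```python
# Rough sketch: loading a small Qwen GGUF with llama-cpp-python and asking it one question.
# The model_path is a placeholder; point it at a GGUF file you have actually downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="qwen2.5-3b-instruct-q4_k_m.gguf",  # hypothetical local file name
    n_ctx=4096,        # context window size
    n_gpu_layers=0,    # CPU-only; raise this if you have a GPU to offload layers to
)

reply = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize why small local models are useful."}]
)
print(reply["choices"][0]["message"]["content"])
```

Jan handles all of this through its UI, so the code is only to show how little is going on under the hood.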