

13 hours ago
I successfully ran local Llama with llama.cpp and an old AMD GPU. I’m not sure why you think there’s no other option.




Nice try, Nintendo, I will not buy your wares.
Llama.cpp now supports Vulkan, so it doesn’t matter what card you’re using.
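For anyone who wants to try this, a rough sketch of building llama.cpp with its Vulkan backend looks like the following. The `GGML_VULKAN` CMake flag and the `llama-cli` binary name match recent versions of the project, but check the repo's build docs against your checkout; the model path is a placeholder.

```shell
# Sketch: build llama.cpp with the Vulkan backend (works on AMD, Intel, and NVIDIA cards
# with Vulkan drivers installed). Flag/binary names per recent llama.cpp; verify locally.
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
cmake -B build -DGGML_VULKAN=ON
cmake --build build --config Release

# Run inference, offloading layers to the GPU with -ngl; /path/to/model.gguf is a placeholder.
./build/bin/llama-cli -m /path/to/model.gguf -ngl 99 -p "Hello"
```

On the AMD side this only needs a working Vulkan driver (e.g. RADV on Linux), not ROCm, which is why it works on older cards.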