Yes, I’ve heard it a million times by now. AMD drivers on GNU/Linux are great for anything that’s related to display or gaming. But what about compute? How is the compute experience for AMD GPUs?

Will it be like Nvidia GPUs where, no matter what I do, as long as programmes support hardware acceleration, they support my GPU? Or is there some sort of configuration or trickery I need to do for programmes to recognise my GPU?

For example, how well is machine learning supported on AMD GPUs, like LLMs or image generation?

I know from past benchmarks that, for example, Blender’s performance has always been worse on AMD GPUs because the software quality just wasn’t there.

I use my GPU mostly for production tasks: Blender, image editing, and some machine learning inference (text generation, image generation, etc.). And lastly, video games.

With this use case in mind, does it make sense to switch to AMD for a future production-first, video-games-second PC? Or, with that use case, should I just stick with Nvidia’s price gouging and VRAM gimping?

  • hendrik@palaver.p3x.de · 2 days ago

    Didn’t they just release their Ryzen AI Software as a preview for Linux? I think that was a few days ago. I don’t know about the benchmarks as of today, but it seems they’ve been working on drivers, power reporting, and the toolkit, and have been mainlining stuff into the kernel, so the situation is improving.

    I think CUDA (Nvidia) still dominates the AI projects out there. The more widespread, actively used projects sometimes have backends for several ecosystems, so they’ll run on Nvidia, AMD, Intel, or a CPU. Same for the libraries that build the foundation. But not all of them. And most brand-new tech demos I see are written for Nvidia’s CUDA. I’ll have to jump through some hoops to make them work on different hardware; sometimes that works well, and sometimes the code isn’t optimized for anything but Nvidia hardware.
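    To illustrate why the cross-vendor projects tend to “just work”: PyTorch’s ROCm builds deliberately reuse the `torch.cuda` API, so a project that only ever asks for a `cuda` device will also pick up an AMD card on a ROCm build without any code changes. A minimal sketch of that pattern (the `pick_device` helper name is my own, and the import guard is there so the sketch also degrades to CPU when PyTorch isn’t installed):

    ```python
    def pick_device() -> str:
        """Pick a compute device string for a PyTorch-style workload.

        On PyTorch ROCm builds, torch.cuda.is_available() returns True
        for AMD GPUs as well, so one check covers both vendors.
        """
        try:
            import torch
        except ImportError:
            # No PyTorch at all: fall back to CPU.
            return "cpu"
        if torch.cuda.is_available():  # True on CUDA *and* ROCm builds
            return "cuda"
        return "cpu"

    print(pick_device())
    ```

    Projects that instead hard-code CUDA-only kernels or check for Nvidia device names explicitly are the ones where the hoop-jumping starts.
    
    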