Yes, I’ve heard it a million times by now. AMD drivers on GNU/Linux are great for anything that’s related to display or gaming. But what about compute? How is the compute experience for AMD GPUs?

Will it be like Nvidia GPUs where, no matter what I do, as long as a programme supports hardware acceleration, it supports my GPU? Or is there some sort of configuration or trickery I need to do for programmes to recognise my GPU?

For example, how well are machine-learning workloads supported on AMD GPUs, such as LLM inference or image generation?

I know from past benchmarks that, for example, Blender’s performance has always been worse on AMD GPUs because the software quality just wasn’t there.

I use my GPU mostly for production tasks: Blender, image editing, and some machine-learning inference (text generation, image generation, etc.). And lastly, video games.

With this use case in mind, does it make sense to switch to AMD for a future production-first, video-games-second PC? Or, with that use case, should I just stick with Nvidia’s price gouging and VRAM gimping?

  • utopiah@lemmy.ml · 2 days ago

    A friend of mine is a researcher working on large-scale compute (>200 GPUs) who is perfectly aware of ROCm, and sadly, last month he said “not yet”.

    So I’m sure it’s not infeasible, but if it’s a real use case for you (not just testing a model here and there, but running workloads frequently), you might unfortunately have to consider alternatives, or be patient.