Hey everyone, I’m building a new server to run Jellyfin (plus a few other services like Pi-hole) and I’m stuck on whether to go with GPU or CPU transcoding.
My main concern is smooth 4K HDR transcoding for a single stream. I’ve been reading mixed advice online: some people say a strong CPU with good single-core performance can handle it, while others recommend a dedicated GPU.
Should I focus my budget (~$1000AUD/$658USD) on a good CPU, or spend some of it on a dedicated GPU?
One of my mini PCs is just a little N95, and its iGPU (Quick Sync) can easily transcode 4K HDR to 1080p (HDR or tonemapped SDR) for a couple of clients, with excellent image quality. You could build a nice little server with a modern i3 and 16GB of RAM and it would smash through 4 or 5 high-bitrate 4K HDR transcodes just fine.
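For the curious, a full-hardware Quick Sync transcode boils down to something like this (a rough sketch only; Jellyfin builds a much longer command line, and the filenames and bitrate here are made up):

```
# Decode, scale to 1080p, and re-encode entirely on the iGPU;
# audio is passed through untouched.
ffmpeg -hwaccel qsv -hwaccel_output_format qsv \
       -i input-4k-hdr.mkv \
       -vf "scale_qsv=w=1920:h=1080" \
       -c:v hevc_qsv -b:v 10M \
       -c:a copy \
       output-1080p.mkv
```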
Is that one transcoding client local to you, or are you trying to stream over the web? If it’s local, maybe put some of the budget toward a new player for that screen instead?
I tried that with a cheap mini PC I bought and it was CPU limited. The GPU was fine; it was the overhead that killed me.
Was it an N100? They have a severely limited power budget of 6W, compared to the N95’s 15W.
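If you want to see what yours is actually allowed to draw, Linux exposes the RAPL limits (the path can vary by kernel and board, so treat this as a sketch):

```
# Sustained (PL1) package power limit, reported in microwatts.
cat /sys/class/powercap/intel-rapl:0/constraint_0_power_limit_uw
```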
I’m running Jellyfin on top of Ubuntu desktop while also playing retro games. That all sits in a Proxmox VM with other services running alongside it. It’s perfectly snappy.
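If anyone wants to replicate it, the first thing I’d check is that the iGPU actually made it into the VM and the VAAPI driver loads (assuming the usual single-GPU device paths):

```
ls -l /dev/dri/          # expect card0 and renderD128
vainfo --display drm --device /dev/dri/renderD128   # should list decode/encode profiles
```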
I believe it’s an N9505, if I remember correctly. It’s also possible I didn’t give it enough cores.
N5095? Lots of reports of that one not supporting everything it should based on other Jasper Lake chips, e.g. the CPU getting hit for decode when it shouldn’t. Also, HDR to SDR can’t be accelerated with VPP on that one as far as I know, so the CPU gets smashed. I think you can do it with OpenCL though.
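For reference, the OpenCL route looks roughly like this with stock ffmpeg on Linux via VAAPI (a sketch only; Jellyfin ships its own patched ffmpeg and builds a longer command, and the device path, tone-mapping curve, and bitrate here are placeholders):

```
# Decode on the iGPU, map the frames to OpenCL for tone mapping,
# map them back, then scale and encode on the iGPU.
ffmpeg -init_hw_device vaapi=va:/dev/dri/renderD128 \
       -init_hw_device opencl=ocl@va -filter_hw_device ocl \
       -hwaccel vaapi -hwaccel_output_format vaapi \
       -i input-4k-hdr.mkv \
       -vf "hwmap,tonemap_opencl=tonemap=hable:format=nv12,hwmap=derive_device=vaapi:reverse=1,scale_vaapi=w=1920:h=1080" \
       -c:v h264_vaapi -b:v 8M output-1080p-sdr.mkv
```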
It is an N5095A.