Very typical for people who have never even run or hosted their own server, whether in a data center or on a cloud service.
Live streaming is worse in bandwidth consumption compared to YouTube at the same resolution from input to output. YouTube can do whatever they like to keep the outgoing bitrate low, even if you encode according to spec. But with live streaming’s demand of a 4~6s delay, the second pass that tries to lower the output bitrate is just not going to be as good as YouTube’s. That’s why Twitch still doesn’t have 4K streams; they have new beta programs thanks to newer codecs on newer GPUs, because otherwise their data centers would get crushed hard.
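To put rough numbers on that (the bitrates and viewer counts below are made up for illustration, not Twitch’s or YouTube’s actual figures): a live 1080p stream often goes out at close to its ~6 Mbit ingest bitrate because there’s no time to optimize it, while a VOD service with hours of encode time might squeeze the same content down to ~3 Mbit. Multiply by viewers and the egress gap gets huge:

```python
# Back-of-envelope egress of live vs. VOD delivery.
# All bitrates and viewer counts are illustrative assumptions.

def egress_gb(bitrate_mbps: float, viewers: int, hours: float) -> float:
    """Total egress in GB: bitrate * viewers * duration."""
    seconds = hours * 3600
    bits = bitrate_mbps * 1e6 * viewers * seconds
    return bits / 8 / 1e9  # bits -> bytes -> GB

viewers, hours = 10_000, 2.0
live = egress_gb(6.0, viewers, hours)  # near source bitrate, no time to optimize
vod  = egress_gb(3.0, viewers, hours)  # heavily optimized slow re-encode

print(f"live: {live:,.0f} GB, vod: {vod:,.0f} GB, extra: {live - vod:,.0f} GB")
# live: 54,000 GB, vod: 27,000 GB, extra: 27,000 GB
```

Same content, same viewers, double the egress, and that cost repeats for every concurrent stream on the platform.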
They’re also probably relying on AWS, right? I’m assuming the pipeline for serving up Prime Video would be similar, but it’s hard to tell how much that service “makes”. I feel like anything they use their own GPUs for loses quite a bit of money compared to what they could charge their cloud compute customers for them.
If Twitch shuts down in a few months, I won’t be surprised.
Since they were bought by Amazon, I think any service that wasn’t on AWS would have been moved to AWS. Basically, on-demand video streaming services (Netflix, YouTube, etc.) have finer control over how they want to re-encode, and they have bitrate throttling on the server/client side so you don’t see too much buffering if your internet connection is acting up. This means they can throttle you down to 360p (like YouTube’s auto quality) if their data center isn’t fast enough to fetch the high-bitrate version yet, then feed you the higher quality one once they have it (or downgrade if your connection goes bad).

But a Twitch stream is like: I have a 10 Mbit stream incoming, and I have to copy it, run a second pass on the fly for each resolution, duplicate it to the outgoing servers, and send it to users, all under a 4~6s delay. I’m no expert on the backend side and only have some experience dealing with streaming around 2016~2018, so to me that’s an incredible feat, but the short timespan means they can’t crunch the output bitrate down even if the video is pretty static. Compare that to YouTube: when I uploaded a 20~30 minute video, about 12GB on disk, it took them about 3~5 hours to re-encode, even though the source was already encoded with AV1. (I am not a partner, so I join the queue like any normal pleb on the internet.)
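For a rough idea of what that looks like at the encoder level, here’s a sketch driving ffmpeg from Python. The ffmpeg flags are standard, but the ingest URL, rendition ladder, and bitrates are all hypothetical; this is just the general shape of live vs. VOD encoding, not Twitch’s actual pipeline:

```python
# Sketch: low-latency live transcode ladder vs. slow 2-pass VOD encode.
# ffmpeg flags are real; the endpoint, ladder, and bitrates are made up.
import subprocess

INGEST = "rtmp://localhost/live/stream"  # hypothetical ingest URL

# Live ladder: one fast single-pass encode per rendition.
LADDER = [("1080p", "1920x1080", "6000k"),
          ("720p",  "1280x720",  "3000k"),
          ("480p",  "854x480",   "1200k")]

def live_rendition(name: str, size: str, bitrate: str) -> list[str]:
    return ["ffmpeg", "-i", INGEST,
            "-c:v", "libx264",
            "-preset", "veryfast",   # speed over compression efficiency
            "-tune", "zerolatency",  # no lookahead buffering
            "-s", size,
            "-b:v", bitrate, "-maxrate", bitrate, "-bufsize", bitrate,
            "-g", "120",             # frequent keyframes so clients can join fast
            "-c:a", "aac", "-b:a", "128k",
            "-f", "hls", "-hls_time", "2", f"{name}.m3u8"]

def vod_two_pass(src: str, out: str) -> list[list[str]]:
    # VOD: analyze the whole file first, then spend as long as needed
    # squeezing bits out -- impossible under a 4~6s live delay budget.
    return [["ffmpeg", "-y", "-i", src, "-c:v", "libx264", "-preset", "slow",
             "-b:v", "3000k", "-pass", "1", "-an", "-f", "null", "/dev/null"],
            ["ffmpeg", "-i", src, "-c:v", "libx264", "-preset", "slow",
             "-b:v", "3000k", "-pass", "2", "-c:a", "aac", out]]

if __name__ == "__main__":
    # Live: every rendition transcodes concurrently, racing the delay budget.
    procs = [subprocess.Popen(live_rendition(*r)) for r in LADDER]
    for p in procs:
        p.wait()
```

The point is the live side runs every rendition in real time with zerolatency tuning, while the VOD side gets to read the whole file twice before committing to a single bit.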
Edit: forgot to respond to the cloud GPU thing. I think AWS charges Twitch the same way as any other company, so AWS isn’t really “losing” money if Twitch chooses to use cloud instances with GPUs (which would be kind of dumb). They need high throughput for the data in/out, so the CPU ingest part I mentioned above is just there to break the stream down and feed it to users as quickly as possible. They’re not going to waste any time giving you a better quality stream at a lower bandwidth cost; they just feed you whatever fits into their bandwidth budget, basically.
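That “whatever fits into their bandwidth budget” part is basically what any adaptive-bitrate player does. A toy version of the selection logic (the rendition ladder and safety margin are my assumptions, not Twitch’s player):

```python
# Toy adaptive-bitrate selection: pick the highest rendition that fits
# the measured throughput, with a safety margin. Numbers are illustrative.

RENDITIONS = {  # name -> bitrate in kbit/s (hypothetical ladder)
    "1080p": 6000,
    "720p": 3000,
    "480p": 1200,
    "360p": 600,
}

def pick_rendition(measured_kbps: float, margin: float = 0.8) -> str:
    """Highest rendition whose bitrate fits within margin * throughput."""
    budget = measured_kbps * margin
    fitting = [(kbps, name) for name, kbps in RENDITIONS.items() if kbps <= budget]
    # Fall back to the lowest rung if even 360p doesn't fit.
    return max(fitting)[1] if fitting else "360p"

print(pick_rendition(8000))  # -> 1080p
print(pick_rendition(2500))  # -> 480p (720p would exceed 0.8 * 2500 = 2000)
print(pick_rendition(400))   # -> 360p
```

The real players re-measure throughput every few segments and switch rungs on the fly, but the budget-fitting idea is the same.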