an n150 mini pc - largely considered a very efficient package for home servers - consumes ~15w max without the gpu, and ~9w idle
a raspberry pi consumes 3-4w idle
none of that is supporting more than a couple of people streaming 4k like we’re talking about in the case of netflix
and a single hard drive isn’t even close to what we’re talking about… you’re looking at ~30w at least for the disks alone
as for internet cost, it’s likely tiny… my 24 port gigabit switch from 15 years ago sips < 6w… i can only imagine that’s pretty inefficient compared to today’s standards (and 24 port is pretty tiny for a DC, and port power consumption doesn’t scale linearly)
data centres are just straight up way more efficient per unit of processing than your home anything; it pretty much doesn’t matter how efficient your home gear is, or what the workload is unless you switch it off most of the time - which doesn’t happen in a DC
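to put rough yearly numbers on the always-on part (a quick sketch using only the figures above; drive power and electricity price left out):

```python
# rough annual energy for an always-on home box, using the figures quoted above
HOURS_PER_YEAR = 24 * 365

def annual_kwh(watts: float) -> float:
    """Constant draw in watts -> kWh per year."""
    return watts * HOURS_PER_YEAR / 1000

for name, watts in [("n150 idle", 9), ("n150 flat out", 15), ("raspberry pi idle", 4)]:
    print(f"{name}: ~{annual_kwh(watts):.0f} kWh/year")
# n150 idle: ~79 kWh/year, n150 flat out: ~131 kWh/year, pi idle: ~35 kWh/year
# and that's before any disks, and it's only ever serving a handful of people
```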
Idk where you’re getting your numbers from.
Here is an article that talks about HDD read power usage being less than 10w:
https://www.solved.scality.com/high-density-power-consumption-hdd-vs-qlc-flash/
Even with 30w, it’s still lower than the 75w you mentioned.
Also, that hard drive can serve multiple purposes whereas Netflix is only for streaming movies and tv shows (not music, so you’ve got to add Spotify usage to be fully fair).
my numbers are coming from the fact that anyone who’s replacing all their streaming likely isn’t using a single disk… WD red drives (as in NAS drives) according to their datasheet use between 6 and 6.9w when in use (3.6-3.9w at idle)… a standard home NAS has 4-6 bays, and i’m also assuming that in a typical NAS setup they’re in some kind of RAID configuration, which likely means some level of striping so all disks are utilised at once… again, i think all of these are decent assumptions for home users using off the shelf hardware
i’m ignoring sleep here, because sleep for NAS drives leads to premature failure… this is why, if you buy WD green drives for your NAS for example and you’re on linux, you use hdparm to turn off spindown and aggressive head parking - constantly parking and unparking the heads significantly reduces drive life (afaik many NAS products do this automatically, or otherwise manage it)
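for what it’s worth, a minimal sketch of doing that on linux (assuming hdparm is installed and you’re root; the /dev/sdX paths are placeholders, and not every drive accepts the APM setting):

```python
# sketch: stop NAS drives spinning down / aggressively parking heads, via hdparm
# assumes linux, root, hdparm installed; adjust the device list for your array
import subprocess

DRIVES = ["/dev/sda", "/dev/sdb", "/dev/sdc", "/dev/sdd"]  # example paths only

for dev in DRIVES:
    # -S 0: disable the standby (spindown) timer entirely
    subprocess.run(["hdparm", "-S", "0", dev], check=True)
    # -B 255: disable APM so the drive stops aggressive head parking
    # (some drives reject this, hence check=False)
    subprocess.run(["hdparm", "-B", "255", dev], check=False)
```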
the top end of that estimate for drives (6 drives) is 41.4w, and the low end (4 drives) is 24w… granted, not everyone will have even those 4 drives, so perhaps my estimate is a little off, but i don’t think 30w for drives is an unreasonable assumption
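spelling that arithmetic out (datasheet figures from above; the 4-6 bay range is just my assumption about typical home NAS boxes):

```python
# drive power envelope for a typical home NAS, from the WD Red datasheet figures above
ACTIVE_W = (6.0, 6.9)  # per-drive read/write draw
IDLE_W = (3.6, 3.9)    # per-drive idle draw

for bays in (4, 6):
    active = (bays * ACTIVE_W[0], bays * ACTIVE_W[1])
    idle = (bays * IDLE_W[0], bays * IDLE_W[1])
    print(f"{bays} bays: {active[0]:.1f}-{active[1]:.1f} w active, {idle[0]:.1f}-{idle[1]:.1f} w idle")
# 4 bays: 24.0-27.6 w active, 14.4-15.6 w idle
# 6 bays: 36.0-41.4 w active, 21.6-23.4 w idle  -> ~30 w sits comfortably in that range
```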
again, here’s where data centres just do better: their utilisation is spread much more evenly… the idle power of drives is not hugely less than their full speed read/write, so it’s better to have constant access over fewer drives, which is exactly what happens with DCs because they have fewer traffic spikes (and can legitimately manage drive power off for hours at a time because their load is both predictable, and smoother due just to their scale)
also, as someone else in the thread mentioned: my numbers for servers were WAY off for a couple of reasons, but basically

Back of the envelope math says that’s around 0.075 watts per individual stream for a 150w 2U server serving 2000 clients, which looks pretty realistic to my eyes as a Sysadmin.
that also sounds realistic to me, having realised i fucked up my server numbers by an order of magnitude for BOTH power use, and users served
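running the same style of estimate both ways makes the gap obvious (the home-side figures are just the assumptions from above, and 2 concurrent streams per household is my guess, not a measurement):

```python
# watts per concurrent stream: the quoted DC figure vs a home setup built from the numbers above
dc_server_w, dc_streams = 150, 2000  # the 2U figure quoted above
home_w, home_streams = 15 + 30, 2    # n150 flat out + ~30 w of drives; 2 streams assumed

print(f"data centre: {dc_server_w / dc_streams:.3f} w per stream")  # 0.075 w
print(f"home server: {home_w / home_streams:.1f} w per stream")     # 22.5 w
# a factor of ~300, i.e. two to three orders of magnitude
```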
servers and data centres are just in a class of their own in terms of energy efficiency
here for example: https://www.supermicro.com/en/products/system/storage/4u/ssg-542b-e1cr90
this is an off the shelf server with 90 bays that has a 2600w power supply (which even then is way overkill: that’s ~29w per drive)… with 22tb drives (off the top of my head because that’s what i use, as it is/was the best $/byte size) that’s almost 2pb of storage… that’s gonna cover a LOT of people with that 2600w, and imo 2600w is far beyond what they’re actually going to be pulling
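as a sanity check on those numbers (the ~7 w active per drive is just reused from the WD Red figure above, so treat it as a ballpark):

```python
# sanity check on the 90-bay box: PSU ceiling vs a plausible actual draw
bays, psu_w, drive_tb = 90, 2600, 22

print(f"psu budget per drive: {psu_w / bays:.1f} w")     # ~28.9 w, far more than any hdd draws
print(f"raw capacity: {bays * drive_tb / 1000:.2f} pb")  # ~1.98 pb
print(f"all 90 disks flat out: ~{bays * 7} w")           # ~630 w at ~7 w/drive, heaps of headroom
```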