Morgan Stanley estimates that data centres currently account for 5 per cent (1,050 MW) of demand on Australia’s power grid, and expects that to grow to 8 per cent (2,500 MW) by 2030.
Some estimates even suggest they could require up to 15 per cent of the power on the grid by then.
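Worth sanity-checking those figures. Here's a quick back-of-envelope in Python; the pairing of each MW figure with its percentage, and applying the 15 per cent scenario to the 2030 grid, are my assumptions, since the article doesn't spell out the underlying grid size:

```python
# Back-of-envelope check on the Morgan Stanley figures.
# Assumption: each (MW, percentage) pair refers to total grid demand
# at the same point in time, so the implied grid size can be backed out.

dc_now_mw, dc_now_share = 1_050, 0.05      # today: 5% of the grid
dc_2030_mw, dc_2030_share = 2_500, 0.08    # 2030 forecast: 8%

grid_now_mw = dc_now_mw / dc_now_share     # implied grid demand today
grid_2030_mw = dc_2030_mw / dc_2030_share  # implied grid demand in 2030

# The "up to 15 per cent" scenario, applied to the implied 2030 grid:
dc_high_mw = 0.15 * grid_2030_mw

print(f"Implied grid today: {grid_now_mw:,.0f} MW")   # ~21,000 MW
print(f"Implied grid 2030:  {grid_2030_mw:,.0f} MW")  # ~31,250 MW
print(f"15% scenario:       {dc_high_mw:,.0f} MW")    # ~4,700 MW
```

Notably, if the figures are meant to be internally consistent, the two pairs only reconcile if total grid demand itself grows by roughly 50 per cent over the same period, so the headline percentages are doing a lot of work.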
I have a hard time taking this seriously. Data centres are far more power-efficient than what we had previously (every medium-sized business running its own server room), and servers in general are far more power-efficient than they were even a decade ago. We can fit so much more on a single server than we ever could before.
Not that you would do it in 2024, but I could happily run a large business of 1,000+ users, including their public web presence, out of about 10RU, which translates to roughly a quarter of a standard 42U server rack.
But why would you do that? Then you need to worry about cooling, power, redundant power, multiple Internet links, UPS and generator backup. Suddenly 10RU is a whole rack and more. Now you need hardware contracts, people on call 24x7 and all sorts of overheads. Then there’s stress testing on that infrastructure (e.g. simulating a power outage and making sure your UPS and generator carry the load). And I haven’t even touched on multiple sites for redundancy.
It’s far more efficient to host that in a data centre and let them worry about all that stuff.
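To put a rough number on that efficiency argument: the usual yardstick is PUE, total facility power divided by IT equipment power. A minimal sketch with illustrative values follows; the ~2.0 for a small server room and ~1.3 for a modern facility are my assumptions of typical figures, not measurements, and the 3 kW IT load is hypothetical:

```python
# Illustrative comparison: total power for the same IT load when hosted
# in a small on-prem server room versus a modern data centre.
# PUE = total facility power / IT power, so total = IT load * PUE.

it_load_kw = 3.0        # hypothetical draw of the ~10RU footprint above
pue_server_room = 2.0   # assumed typical small server room
pue_data_centre = 1.3   # assumed typical modern facility

for label, pue in [("server room", pue_server_room),
                   ("data centre", pue_data_centre)]:
    total_kw = it_load_kw * pue
    overhead_kw = total_kw - it_load_kw  # cooling, UPS losses, etc.
    print(f"{label}: {total_kw:.1f} kW total, "
          f"{overhead_kw:.1f} kW of that on overhead")
```

Same workload, roughly a third less total power in the data centre, which is the point: consolidation moves the load somewhere more efficient rather than multiplying it.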
I think they’re more efficient than they were years ago, yes, but some chips are consuming more power overall too. What took up 40RU a while ago can now fit in 10RU, but data centres aren’t shrinking; they’ll just fill that 40RU with more hardware.
They need to fill that 40RU with more hardware to keep up with ever-growing data creation. Think how much data your home produced a decade ago compared to now; everything has some sort of sensor in it recording data.