Resilient data centres: Water-cooled versus air-cooled?
Resilient data centres, technology time lapse.
I had the opportunity to visit several data centres in Europe this week. They were all pure colocation plays, primarily selling conditioned real estate (data hall space, power and cooling) to large enterprise, public sector and hyper-scale cloud providers. They had all achieved varying degrees of success that saw them fit out and scale out their data centres as more customers came on board. Their progressive build-outs created a time lapse of how data centre technology has evolved over the last 10 years.
While there are always constraints on what can be done, I was impressed by the degree of focus on driving power usage effectiveness (PUE) down. Constant tuning, constant measuring, and even some trial and error along the way were clearly evident. One operator had even created a 'PUE Scientist' role, and while that job title initially appears out of place in resilient data centres, it's exactly what is needed if one is going to squeeze every piece of efficiency out of the millions of dollars operators invest in their facilities. When I reflect on what we are doing at Intellicentre 2, our largest Sydney data centre, based in Macquarie Park, we are so on this page. It requires a special mind to combine the analytics with the facilities understanding, and to maintain the rage and passion against a backdrop of all the other things that need to happen in a resilient data centre, especially when the facility is growing strongly.
Aidan Tudehope, Managing Director at Macquarie Technology Group, believes air-cooled resilient data centres are better than water-cooled data centres.
Doubling power prices have a silver lining.
The industry, and Australia at large, has seen electricity prices step up and up over the last 24 months. This rapid change is a real issue for Australian industry, but as a significant input into a data centre's running costs, it is going to have an impact here as well.
There are 2 silver linings in this otherwise bad news. Firstly, the impact is even greater for a customer who runs their own in-house data centre, where a lack of scale drives a poor PUE (inefficiencies typically grow below 2 MW of total load, and improve little above that scale). Secondly, there is an increased focus on PUE (better efficiency) to reduce the total power consumed. This second item parallels what happens as petrol prices increase: an immediate short-term focus, but also a longer underlying change as cars become more efficient and buyers look to buy from the more efficient providers.
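To make that concrete, here is a rough sketch of how PUE and the electricity price multiply into an annual power bill. The loads, PUE values and the A$150/MWh price below are illustrative assumptions of mine, not figures from this article.

```python
# Rough sketch: how PUE and electricity price drive a facility's power bill.
# All figures are illustrative assumptions, not any operator's actual numbers.

HOURS_PER_YEAR = 8760

def annual_power_cost(it_load_mw: float, pue: float, price_per_mwh: float) -> float:
    """Total facility cost = IT load x PUE x hours x price per MWh."""
    return it_load_mw * pue * HOURS_PER_YEAR * price_per_mwh

# A small in-house facility at PUE 2.0 vs a colocation hall at PUE 1.4,
# both carrying a 2 MW IT load, at an assumed A$150/MWh.
in_house = annual_power_cost(2, 2.0, 150)
colo = annual_power_cost(2, 1.4, 150)
print(f"In-house:   A${in_house:,.0f}/yr")   # → A$5,256,000/yr
print(f"Colocation: A${colo:,.0f}/yr")       # → A$3,679,200/yr
```

The same arithmetic also shows why rising prices sharpen the focus on PUE: every point of PUE improvement scales the whole bill, not just the cooling line item.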
It’s not just about power, or should it be?
PUE is purely a measure of power efficiency – hence the ‘P’ in PUE. It doesn’t measure the other often hidden form of natural resource consumption – water.
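For readers new to the metric, PUE is simply total facility power divided by the power delivered to the IT equipment, so a perfectly efficient facility would score 1.0. A minimal sketch:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """PUE = total facility power / IT equipment power (ideal = 1.0).

    Everything above 1.0 is overhead: cooling, power distribution
    losses, lighting and so on.
    """
    return total_facility_kw / it_equipment_kw

# A hall drawing 1,400 kW overall to power 1,000 kW of IT load:
print(pue(1400, 1000))  # → 1.4
```

Note what the ratio leaves out: a litre of water consumed by a cooling tower appears nowhere in either term.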
Depending on how a data centre operator chooses to cool their data halls, it will use either no water or millions of litres. That may sound like an exaggeration, but let me explain with an example. A 10MW facility that uses cooling towers (as opposed to air handling units or free air) to extract the heat will use up to 1,000,000L (yes, 1 million litres of water) every 2 days. That's 347 litres every minute. This is staggering. While we might not be in drought now, the reality is that we should expect there will be a time when we are once again. Not if, but when that happens, and we are told not to use the hose, not to wash the car and so on, just think of the local water-cooled data centre consuming 347L of water every minute.
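The arithmetic behind that per-minute figure is easy to verify from the 2-day total:

```python
# Sanity-checking the figure: 1,000,000 L consumed every 2 days.
litres = 1_000_000
minutes = 2 * 24 * 60          # 2,880 minutes in two days

rate = litres / minutes
print(f"{rate:.0f} L per minute")  # → 347 L per minute
```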
I should call out that there is a trade-off to zero water use, and that is a moderately higher PUE. Technology changes are reducing this impact, and hybrid versions continue to help further.
Can a 10MW water-cooled data centre really be resilient?
But there is another catch for water-cooled data centres. Because the cooling relies on water, you need N+1 (backup) should the mains water fail. That normally involves large on-site water tanks, but with the need for 1ML every 2 days, those tanks must be massive to be meaningful (1ML = 1,000 cubic metres). Some operators have tanks (not all), but even those that do have the challenge of what happens when they run dry. Some suggest water tankers. That might work for a facility of a few MW, but at 10MW+ it is close to impossible to run a cycle of tankers that can deliver and discharge 1ML on site every 2 days.
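A back-of-envelope sketch of what that storage and tanker cycle means in practice. The 30,000L tanker capacity below is my assumption for illustration, not a figure from this article:

```python
# Back-of-envelope tank and tanker numbers for a 10 MW water-cooled site.
# The 30,000 L tanker capacity is an assumption for illustration only.

ML_LITRES = 1_000_000

# 1 L = 0.001 cubic metres, so 1 ML needs 1,000 m3 of tank volume.
tank_volume_m3 = ML_LITRES / 1_000
print(f"Tank volume for 1 ML: {tank_volume_m3:.0f} m3")  # → 1000 m3

# Replacing 1 ML every 2 days by road tanker:
TANKER_LITRES = 30_000
tankers_per_day = (ML_LITRES / 2) / TANKER_LITRES
print(f"Tankers needed per day: {tankers_per_day:.1f}")  # → 16.7
```

Sustaining roughly a tanker every 90 minutes, around the clock, for the duration of a mains outage is the logistics problem the article is pointing at.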
While there is a degree of religion in water vs air cooling for resilient data centres, it’s calculations like these that saw us build Intellicentre 2 without the need for water. Zero reliance on that crucial Australian resource. It may cost more, it may take up more ground space, but in our view, you get a better outcome.
Ask your data centre provider these 2 questions:
1. How much water do they consume to cool their data centre?
2. Is it truly resilient?