An ASHRAE podcast recently delved into a critical evolution within data centres: the increasing necessity of liquid cooling. Host Justin Seter guided a panel of industry experts – David Quirk, Dustin Demetriou and Tom Davidson – through the intricacies of this technology, driven by the insatiable demands of artificial intelligence (AI) and high-performance Graphics Processing Unit (GPU) applications. This is Part 7 of a nine-part series.

Complexity is precisely why ASHRAE is actively publishing guidance to drive industry consistency. Image by Vecstock/Freepik.com
Demetriou emphasised that as densities climb, cooling all components becomes increasingly challenging. He pointed out that while energy efficiency was the primary driver of cooling discussions two decades ago, the focus has shifted towards managing the sheer heat generated by these high-power-density systems, where operating with even relatively warm facility supply water (the W40-W45 classes) is becoming difficult. The physical density of AI interconnects further exacerbates the cooling challenge, as performance demands necessitate tightly packed systems. Demetriou also cautioned against designing data centres solely for extreme AI workloads: the infrastructure required for 400-500 kilowatt racks, such as lower water temperatures, may be inefficient or unsuitable for the more common 15-60 kilowatt deployments. He stressed the importance of considering diverse use cases to avoid energy penalties or performance limitations.
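To illustrate the thermodynamics behind that tension, here is a minimal sketch using a simple sensible-heat balance; the rack powers and temperature rise are illustrative assumptions, not figures quoted in the podcast:

```python
# Minimal heat-balance sketch: coolant flow needed to remove a rack's heat load.
# Q = m_dot * c_p * dT  ->  m_dot = Q / (c_p * dT)
# All numbers below are illustrative assumptions, not values from the podcast.

RHO_WATER = 997.0   # kg/m^3, water near 25 degC
CP_WATER = 4186.0   # J/(kg*K), specific heat of water

def required_flow_lpm(rack_power_kw: float, delta_t_k: float) -> float:
    """Litres per minute of water needed to absorb rack_power_kw with a delta_t_k rise."""
    mass_flow = (rack_power_kw * 1000.0) / (CP_WATER * delta_t_k)   # kg/s
    volumetric_flow = mass_flow / RHO_WATER                         # m^3/s
    return volumetric_flow * 1000.0 * 60.0                          # L/min

# A conventional 30 kW rack vs. a hypothetical 450 kW AI rack, both with a 10 K rise:
for power_kw in (30, 450):
    print(f"{power_kw} kW rack, 10 K rise: {required_flow_lpm(power_kw, 10):.0f} L/min")
```

Under these assumptions the 450 kW rack needs roughly fifteen times the water flow of the 30 kW rack for the same temperature rise, which is one reason extreme-density designs push towards colder supply water or larger temperature rises than a general-purpose facility would choose.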
The conversation then transitioned to the significant impact of these new densities on data centre design. Seter prompted the panel to discuss the implications for use cases and the differences between hyperscalers building internally and colocation providers catering to a diverse clientele.
Quirk highlighted that the increased complexity stems from the growing number of stakeholders and the blurred ‘control boundaries’. Unlike the clear separation in air-cooled environments, liquid cooling creates a physical link between the IT equipment and the facility infrastructure. This necessitates a new level of collaboration and shared responsibility in design, operation, testing and commissioning. The ownership of key components like coolant distribution units (CDUs) and technology cooling system (TCS) piping can vary significantly between projects, especially in colocation facilities serving diverse tenants, creating challenges in defining responsibilities and liabilities.
Demetriou emphasised the added complexity in colocation environments, where providers must support equipment from various vendors with potentially different water quality and material compatibility requirements. This contrasts with the more controlled ecosystem of hyperscale deployments. The need for a standardised approach at the facility level becomes paramount in heterogeneous colocation sites.
Seter underscored that this complexity is precisely why ASHRAE is actively publishing guidance to drive industry consistency, such as the S-class temperature standards. However, he noted the slow adoption of these relatively new standards. Furthermore, the multi-stakeholder environment often leads to the introduction of significant safety factors driven by contractual obligations between manufacturers, IT operators and colocation providers. Each party adds its own layer of precaution, and the stacked margins result in overall energy inefficiency. The panel agreed that achieving true standardisation and bridging the gaps between stakeholders through research and clear guidelines is crucial for improving the efficiency and reliability of liquid-cooled data centres.
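As a purely illustrative example of how those layered precautions compound, consider three parties each adding a modest allowance to the required cooling capacity; the margins below are hypothetical, not figures cited in the podcast:

```python
# Illustrative sketch of stacked safety factors across stakeholders.
# The margins are hypothetical, chosen only to show the compounding effect.

margins = {
    "IT equipment manufacturer": 0.10,   # 10% allowance on stated heat load
    "colocation provider": 0.15,         # 15% allowance on facility capacity
    "design engineer": 0.10,             # 10% allowance on plant sizing
}

combined = 1.0
for party, margin in margins.items():
    combined *= 1.0 + margin
    print(f"after {party}: {combined:.2f}x the nominal requirement")

# Three individually reasonable margins compound to roughly 1.39x the nominal
# requirement, i.e. nearly 40% over-provisioned capacity.
```

Plant that is sized this way ends up running well below its design point, which is one route to the energy inefficiency the panel described.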