An ASHRAE podcast recently delved into a critical evolution within data centres: the increasing necessity of liquid cooling. Host Justin Seter guided a panel of industry experts – David Quirk, Dustin Demetriou and Tom Davidson – through the intricacies of this technology, driven by the insatiable demands of artificial intelligence (AI) and high-performance Graphics Processing Unit (GPU) applications. This is Part 3 of a ten-part series.

Work within ASHRAE includes updates to ASHRAE Standard 37, the method of test for air conditioning units. Image by Macrovector/Freepik.com
Demetriou affirmed ASHRAE’s active involvement, noting that guidance on liquid cooling has been available since the publication of the first liquid cooling book around 2014. This early guidance, driven by the needs of national laboratories and supercomputing centres, included the Facility Water System (W) classes, aimed at standardising temperatures within facility water loops. More recently, the latest edition of the liquid cooling book and the new data centre encyclopedia have expanded on this, introducing additional guidance on the Technology Cooling System (TCS) and even new ‘S’ classes to further standardise the temperatures required by IT equipment. These efforts aim to ensure a consistent approach in designing both the facility infrastructure and the IT equipment.
Beyond TC 9.9’s direct efforts, Demetriou pointed to related work within ASHRAE, such as updates to ASHRAE Standard 37, the method of test for air conditioning units, which is being expanded to include testing methodologies for liquid cooling equipment like coolant distribution units (CDUs). This aims to provide standardised ways to compare different technologies and inform sound design decisions.
Quirk added that ASHRAE TC 9.9 had recently published a technical bulletin in September 2024, a concise four-page document offering critical resiliency guidance specifically for cold plate liquid cooling applications and their deployment. He also highlighted a related article in the December ASHRAE Journal that further elaborated on these technical guidelines.
Davidson then emphasised ASHRAE’s long-standing commitment to research, highlighting a specific project, WS 1972, focused on data centre direct liquid cooling resiliency, failure modes, throttling impacts, and liquid cooling energy use metrics and modelling. He explained that this research, while a multi-stage process, is actively progressing through ASHRAE’s research administration channels. Davidson also touched upon ASHRAE Standard 90.4, the energy standard for data centres, noting the surprising absence of a definition for liquid cooling in the currently published version. He highlighted the research project team’s proactive engagement with the 90.4 committee to understand the calculations behind the existing energy metrics (MLC and ELC), with the goal of developing comparable energy efficiency metrics for liquid cooling solutions.
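To make the metrics concrete, the sketch below shows simplified ratios inspired by 90.4’s Mechanical Load Component (MLC) and Electrical Loss Component (ELC) — broadly, cooling energy and electrical distribution losses normalised against IT energy. The standard itself prescribes specific calculation procedures and compliance paths; the formulas and the facility figures here are illustrative assumptions, not the standard’s method.

```python
# Illustrative sketch only: simplified ratios inspired by ASHRAE 90.4's
# MLC and ELC metrics. The actual standard defines detailed calculation
# procedures; these one-line formulas are assumptions for illustration.

def mechanical_load_component(annual_mech_kwh: float, annual_it_kwh: float) -> float:
    """Ratio of annual mechanical (cooling) energy to annual IT energy."""
    return annual_mech_kwh / annual_it_kwh

def electrical_loss_component(annual_loss_kwh: float, annual_it_kwh: float) -> float:
    """Ratio of annual electrical distribution losses to annual IT energy."""
    return annual_loss_kwh / annual_it_kwh

# Hypothetical facility: 10 GWh IT load, 2.5 GWh cooling, 0.8 GWh losses.
mlc = mechanical_load_component(2_500_000, 10_000_000)
elc = electrical_loss_component(800_000, 10_000_000)
print(f"MLC = {mlc:.2f}, ELC = {elc:.2f}")
```

The open question the research team is exploring is how a coolant distribution unit’s pumps and heat exchangers should be apportioned within such ratios, since liquid cooling shifts energy use out of the categories the air-cooled metrics were built around.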
Seter then steered the conversation towards the critical aspects of efficiency and resiliency, acknowledging their paramount importance for the podcast’s audience. Focusing initially on resiliency, he inquired about the anticipated outcomes of the ongoing research project, such as standard inlet conditions, rate-of-rise tolerances, and maximum case temperatures.
Davidson elaborated on the core motivation behind the research project: the need for better-defined guidance at the critical interface between the cooling infrastructure and the IT hardware. He explained that in air-cooled data centres, the air itself acts as a buffer with a relatively slow thermal time constant, providing a greater margin of safety in the event of infrastructure anomalies. Direct liquid-to-chip cooling, by contrast, exhibits a much faster heat transfer rate and a significantly smaller thermal time constant, drastically reducing that safety margin.
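The time-constant contrast can be sketched with a first-order lumped-capacitance model: after a cooling interruption, temperature rises at roughly dT/dt = P/C, where C is the thermal mass buffering the load. The capacitance and power figures below are hypothetical, chosen only to show the order-of-magnitude difference between a room full of air and equipment and the small coolant volume at a cold plate.

```python
# Illustrative sketch, not from the podcast: first-order (lumped
# capacitance) estimate of time to reach a temperature-rise limit after
# cooling is lost. All numbers are hypothetical, for contrast only.

def seconds_to_rise(delta_t_k: float, power_w: float, capacitance_j_per_k: float) -> float:
    """Time for temperature to rise delta_t_k kelvin at constant power P."""
    return delta_t_k * capacitance_j_per_k / power_w

# Air-cooled room: large air and structural mass buffers the IT load.
room_s = seconds_to_rise(delta_t_k=10.0, power_w=500_000.0,
                         capacitance_j_per_k=5.0e7)

# Cold plate loop: only a small mass of coolant and metal near the die.
plate_s = seconds_to_rise(delta_t_k=10.0, power_w=1_000.0,
                          capacitance_j_per_k=500.0)

print(f"Air-cooled buffer: ~{room_s:.0f} s; cold plate: ~{plate_s:.0f} s")
```

Under these assumed figures the room rides through minutes of interruption while the cold plate exhausts its margin in seconds, which is why resiliency guidance for cold plate deployments has become a research priority.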