By Eamonn Ryan
This article is derived from a presentation delivered by Michael Young, Application Engineer – Thermal Management at Vertiv and a regular contributor to RACA Journal, at FRIGAIR 2025. This is Part 1 of a two-part series.

Michael Young, Application Engineer – Thermal Management at Vertiv, at FRIGAIR 2025. ©RACA Journal
Young’s presentation, ‘Design of data centres for optimal energy efficiency’, explored the impact of artificial intelligence on data centre operations and the evolving strategies for effective thermal management.
The pervasive integration of Artificial Intelligence (AI) into our daily lives, from Gmail’s smart filters to YouTube’s recommendations and the transformative capabilities of ChatGPT, is driving an unprecedented surge in data centre energy consumption. This exponential growth presents a significant challenge for thermal management, demanding innovative cooling solutions and a re-evaluation of traditional data centre design.
AI-driven applications require immense computing power, directly translating into higher energy consumption. Projections indicate a staggering 5.3-fold increase in data centre energy consumption by 2032, with a threefold increase expected by 2030 alone. To put this into perspective, data centres are now being designed with capacities exceeding 40 MW, rivalling the power demands of entire industrial sectors.
This escalation is evident at the individual server level. Historically, a data centre rack consumed as little as 3 kW. With the advent of AI, high-density computing is pushing this figure dramatically higher, with racks now demanding 50 to 150 kW. This substantial increase in power density creates an equally formidable challenge in dissipating the generated heat, as the electrical energy consumed by IT equipment is almost entirely converted into heat within the data centre environment.
The shift to liquid cooling: managing high-density heat
The sheer heat generated by AI-intensive racks makes traditional air cooling insufficient. Air simply lacks the volumetric heat capacity to cool such high-density environments within practical space and airflow constraints. This necessitates a shift towards liquid cooling solutions.
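The airflow problem can be made concrete with some back-of-the-envelope arithmetic (illustrative only, not figures from the presentation): the airflow needed to absorb a rack’s heat load follows from the sensible-heat relation Q = ṁ·cp·ΔT, assuming a typical 10 K air temperature rise across the rack.

```python
# Illustrative sketch: volumetric airflow needed to remove a rack's heat
# load at an assumed 10 K air temperature rise. Property values are
# round-number approximations for air at room conditions.
CP_AIR = 1.005   # specific heat of air, kJ/(kg*K)
RHO_AIR = 1.2    # density of air, kg/m^3

def air_volume_flow(heat_kw, delta_t_k=10.0):
    """Volumetric airflow (m^3/s) to absorb heat_kw at a delta_t_k rise."""
    mass_flow = heat_kw / (CP_AIR * delta_t_k)  # kg/s, from Q = m*cp*dT
    return mass_flow / RHO_AIR                  # convert to m^3/s

print(air_volume_flow(3))    # legacy 3 kW rack: ~0.25 m^3/s
print(air_volume_flow(150))  # 150 kW AI rack: ~12.4 m^3/s
```

Roughly 12 cubic metres of air per second through a single rack is far beyond what conventional raised-floor delivery and server fans can practically move, which is the quantitative core of the air-cooling limit described above.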
Liquid cooling works by directly transferring heat from high-processing chips, such as GPUs (Graphics Processing Units) – the backbone of AI computations – to a circulating cold fluid. This fluid flows through a cold plate mounted on the back of the chip, efficiently absorbing the heat and carrying it away from the server.
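The same sensible-heat relation shows why liquid is so much more effective. The sketch below (illustrative assumptions, not figures from the presentation) uses water as the coolant with an assumed 10 K temperature rise; real cold-plate loops often use water-glycol mixes or dielectric fluids with broadly similar properties.

```python
# Illustrative sketch: coolant mass flow for a cold-plate loop, assuming
# water at an assumed 10 K temperature rise across the loop.
CP_WATER = 4.186    # specific heat of water, kJ/(kg*K)
CP_AIR = 1.005      # specific heat of air, kJ/(kg*K)
RHO_WATER = 1000.0  # density of water, kg/m^3
RHO_AIR = 1.2       # density of air, kg/m^3

def water_mass_flow(heat_kw, delta_t_k=10.0):
    """Water mass flow (kg/s, ~litres/s) to absorb heat_kw at delta_t_k rise."""
    return heat_kw / (CP_WATER * delta_t_k)

print(water_mass_flow(150))  # 150 kW rack: ~3.6 kg/s (~3.6 L/s)

# Heat carried per cubic metre per kelvin: water vs air
ratio = (RHO_WATER * CP_WATER) / (RHO_AIR * CP_AIR)
print(ratio)  # water moves roughly 3 500x more heat per unit volume
```

A few litres per second of water can do the job that would take many cubic metres per second of air, because water carries on the order of three and a half thousand times more heat per unit volume per degree.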
