By Eamonn Ryan

At its recent Global Virtual Press Broadcast on Accelerated Innovation – Liquid Cooling Solutions, executives and industry experts emphasised that cooling technology has become the decisive factor in enabling AI at scale. This is the first instalment of a two-part series.

Liquid cooling is up to 3 000 times more effective than air at removing heat. Bedneyimages | Freepik.com

Artificial intelligence (AI) is rewriting the rules of data centre design. High-performance AI servers, often housing up to 16 GPUs, consume more than 20 times the power of standard cloud servers and generate an equivalent heat load. With cooling now accounting for as much as 40% of a data centre’s total energy budget, traditional air-based systems are no longer sufficient.

Schneider Electric, through its recent integration with Motivair, has positioned itself as a leading provider of liquid cooling solutions designed to meet the unprecedented demands of AI-driven workloads.


Why liquid cooling is essential for AI workloads

  • The rapid evolution of GPUs and AI hardware is pushing rack densities far beyond the limits of air cooling
  • Traditional rack density: ~20kW (manageable with air cooling)
  • Current AI rack density: >100kW
  • Next-generation projections: 240kW per rack within a year
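To see why those densities outrun air cooling, a back-of-envelope heat balance is instructive. The sketch below is illustrative only: the air properties and the 10 K supply-to-return temperature rise are textbook assumptions, not figures from the broadcast.

```python
# Rough airflow needed to remove a rack's heat load with air alone,
# from Q = density * flow * cp * delta_T solved for flow.
# Assumptions (illustrative): air density 1.2 kg/m^3, specific heat
# 1005 J/(kg*K), and a 10 K allowable air temperature rise.
AIR_DENSITY = 1.2   # kg/m^3
AIR_CP = 1005.0     # J/(kg*K)
DELTA_T = 10.0      # K, supply-to-return temperature rise

def airflow_m3_per_s(rack_kw: float) -> float:
    """Volumetric airflow (m^3/s) to absorb rack_kw at a DELTA_T rise."""
    return rack_kw * 1000.0 / (AIR_DENSITY * AIR_CP * DELTA_T)

for rack_kw in (20, 100, 240):  # the densities cited above
    flow = airflow_m3_per_s(rack_kw)
    cfm = flow * 2118.88  # 1 m^3/s is roughly 2118.88 CFM
    print(f"{rack_kw:>3} kW rack: {flow:5.1f} m^3/s (~{cfm:,.0f} CFM)")
```

A 20kW rack needs under 2 m³/s of air; a 240kW rack needs roughly ten times that, which is far beyond what a single rack footprint can practically move.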

Liquid cooling is up to 3 000 times more effective than air at removing heat, as it captures thermal load directly at the chip level. Single-phase direct-to-chip liquid cooling is currently the industry standard, while two-phase systems and immersion cooling are gaining attention as AI chips grow larger and hotter.
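The "3 000 times" figure is consistent with the volumetric heat capacities of the two fluids. The check below uses standard textbook property values, not data from the broadcast.

```python
# Volumetric heat capacity = density * specific heat: the energy a
# cubic metre of fluid absorbs per degree of temperature rise.
WATER = 1000.0 * 4186.0  # kg/m^3 * J/(kg*K) -> J/(m^3*K)
AIR = 1.2 * 1005.0       # J/(m^3*K)

ratio = WATER / AIR
print(f"Water absorbs ~{ratio:,.0f}x more heat per unit volume than air")
# Same order of magnitude as the ~3 000x effectiveness cited above.
```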

Key industry takeaways include:

  • AI adoption is only at its beginning, and demand will accelerate rapidly
  • Data centre operators must invest urgently in liquid cooling to prepare for next-generation AI infrastructure
  • Smaller enterprises will need support from local partners and managed service providers to deploy AI effectively


Core components of the liquid cooling ecosystem

Coolant Distribution Units (CDUs)

  • Handle intense AI processor thermal loads by targeting heat directly at the source
  • Scale from 105kW to 2.5MW, future-proofed against silicon roadmaps
  • Certified for Nvidia’s latest hardware and already proven in exascale environments

Heat Dissipation Units (HDUs)

  • Enable cooling without central chilled water infrastructure
  • Compact design supports 100–132kW per rack, engineered specifically for Nvidia’s NVL144 platform

Rear door heat exchangers

  • Capture the 10–30% residual heat from components not yet liquid-cooled
  • Seamlessly blend liquid and air strategies for hybrid environments
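The hybrid split above is simple to illustrate. Only the 10–30% residual fraction comes from the article; the 100kW rack and the 30% worst case are hypothetical numbers.

```python
def split_heat_load(rack_kw: float, residual_fraction: float):
    """Split a rack's heat between the direct-to-chip liquid loop and
    the rear door heat exchanger that captures the residual air heat."""
    air_kw = rack_kw * residual_fraction
    liquid_kw = rack_kw - air_kw
    return liquid_kw, air_kw

# A hypothetical 100 kW rack with 30% of components still air-cooled:
liquid, air = split_heat_load(100.0, 0.30)
print(f"liquid loop: {liquid:.0f} kW, rear door exchanger: {air:.0f} kW")
```

Even in the worst case, the rear door exchanger handles an air load comparable to a full traditional rack, which is why it remains a sized component rather than an afterthought.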

Closed-loop air-cooled chillers

  • Replace evaporative cooling with waterside free cooling
  • Reduce water consumption, lower maintenance, and use low-GWP refrigerants

Technology Cooling System (TCS) loops

  • Deliver clean, filtered coolant to GPUs at precise flow and pressure levels
  • Enhance performance by up to 20% through optimised engineering
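The flow side of the TCS bullet can be sketched with the same heat balance, solved for coolant flow. The water properties and the 10 K coolant temperature rise are assumptions for illustration; neither figure is from the article.

```python
# Water flow needed so Q = m_dot * cp * delta_T balances the GPU load.
WATER_CP = 4186.0    # J/(kg*K)
WATER_RHO = 1000.0   # kg/m^3

def coolant_flow_lpm(load_kw: float, delta_t: float = 10.0) -> float:
    """Water flow in litres per minute to absorb load_kw at a delta_t rise."""
    mass_flow = load_kw * 1000.0 / (WATER_CP * delta_t)  # kg/s
    return mass_flow / WATER_RHO * 1000.0 * 60.0          # L/min

print(f"100 kW rack: ~{coolant_flow_lpm(100):.0f} L/min of water")
```

Under these assumptions a 100kW rack needs on the order of 140 L/min, a flow modest enough for compact manifolds, which is the practical advantage the TCS loop exploits.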

Prefabricated modular pods

  • Provide rapid, scalable deployment of high-density AI racks
  • Integrated with cooling, power, cabling, and containment
  • Used by partners such as Compass Data Centers for hybrid cooling deployments

Continued in part two.