By Eamonn Ryan

The rapid expansion of digital infrastructure presents complex challenges, particularly concerning the demands of cooling.

Polar walkthrough IT 0257. Supplied by Vertiv

Polar and Vertiv drive innovation with a modular AI solution for the DRA01 data centre

Supplied by Vertiv

Polar is a European high-performance data centre operator headquartered in the United Kingdom with a clear ambition to deliver next-generation data centres. The company develops, owns and operates data centres optimised for artificial intelligence (AI) and high-performance computing (HPC) workloads. Polar’s pillars are:

  • Polar uses 100% renewable energy, particularly hydroelectric power, to operate its data centres, minimising environmental impact.
  • Combining Vertiv’s modular approach and Polar’s agile development methodology, Polar is able to quickly respond to market demand and evolving technologies.
  • Polar is accelerating infrastructure to support its customers in developing the future of AI, with data centres designed to handle high-density, advanced technology workloads.

As a leader in AI-driven innovation, Polar focuses on scaling compute infrastructure to meet the rising demands of cutting-edge AI workloads.

The project design phase was rapidly completed in Q4 2024 in a close collaboration between Polar and Vertiv, leveraging seasoned design consultants and Vertiv’s expertise in AI infrastructure. The first modules were shipped and installed in Q1 2025 and commissioning will be completed later in the year.

Challenge

Polar places great emphasis on partnerships when developing new facilities, and engages providers early in the project.

For its DRA01 data centre, it needed a trusted adviser to transform conceptual designs into state-of-the-art solutions.

Polar required a flexible data hall design to accommodate diverse liquid cooling technologies and future expansion, while enabling continuous operation. The solution needed to integrate seamlessly with its existing infrastructure while providing robust power and cooling for mission-critical AI applications.

Vertiv was selected thanks to its expertise in AI infrastructure and unique portfolio ecosystem, particularly its ability to factory-test technologies and provide advanced prefabricated modular (PFM) solutions for HPC applications.

Solution

In close collaboration with Polar, Vertiv delivered a comprehensive, AI-ready modular data centre. Vertiv’s prefabricated modular solutions are fully-fledged critical digital infrastructure units, seamlessly integrated with advanced power and cooling technologies to enable rapid and efficient AI deployment. Data centre components are thoroughly tested and assembled directly in the factory, significantly reducing on-site installation time and enhancing overall project reliability and quality.

The innovative installation process prioritises critical building blocks, enabling parallel progress on-site while completing remaining components in the factory – a clear advantage over traditional construction methods in terms of efficiency and speed.

The scalable setup supports high-density racks of 110 kW for a total of 12 MW, which can be expanded to support future growth. The IT infrastructure is powered via modules that utilise an N+1 electrical topology and dual busbar power distribution for complete redundancy. The system is equipped with a Vertiv EXL S1 UPS (uninterruptible power supply), a highly efficient and grid-interactive system, providing backup power through reliable Li-ion battery technology.
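
To make the redundancy concrete: under N+1, the installation carries one more module than the load strictly requires, so any single module can fail or be taken out for maintenance without dropping the load. A minimal sketch in Python, using hypothetical module ratings rather than the actual DRA01 figures:

    import math

    def modules_for_n_plus_1(it_load_kw: float, module_rating_kw: float) -> int:
        """Modules needed for N+1 redundancy: enough to carry the load (N),
        plus one spare (+1) so any single failure leaves full capacity."""
        n = math.ceil(it_load_kw / module_rating_kw)
        return n + 1

    # Hypothetical example: 12 000 kW of IT load on 1 200 kW modules
    # -> N = 10 modules to carry the load, 11 installed for N+1.
    print(modules_for_n_plus_1(12_000, 1_200))  # 11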

The solution also includes a concurrently maintainable chilled water network with thermal energy storage tanks, providing continuous cooling even during power loss scenarios. The Vertiv Liebert AFC glycol-free, free-cooling chillers, with exceptionally low-GWP (global warming potential) refrigerant in an N+1 configuration, are designed to significantly reduce carbon emissions while delivering substantial energy savings. Advanced algorithms, combined with the unit’s design, allow it to maximise free-cooling efficiency and reduce annual energy consumption compared to conventional systems. Vertiv’s liquid cooling solutions incorporate state-of-the-art redundancy and filtration features to effectively support the latest GPU technologies.
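
The ride-through that thermal storage tanks provide follows from a simple energy balance: the stored cooling energy is the water mass times its specific heat times the usable temperature difference, and dividing by the heat load gives the time bought. A sketch with assumed values, not the DRA01 design figures:

    WATER_DENSITY = 1000.0  # kg/m^3
    WATER_CP = 4.186        # kJ/(kg*K), specific heat of water

    def ride_through_minutes(tank_volume_m3: float, usable_delta_t_k: float,
                             heat_load_kw: float) -> float:
        """Minutes of cooling a chilled-water tank can supply with chillers off.
        Stored energy (kJ) = mass * cp * delta-T; 1 kW = 1 kJ/s."""
        mass_kg = tank_volume_m3 * WATER_DENSITY
        energy_kj = mass_kg * WATER_CP * usable_delta_t_k
        return energy_kj / heat_load_kw / 60.0

    # Hypothetical: a 200 m^3 tank with a 6 K usable delta-T against a 2 MW load
    print(f"{ride_through_minutes(200, 6, 2000):.0f} min")  # ~42 min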

The durable solution features C3M corrosion protection and an EI60 fire rating. The double-roof design preserves structural integrity under Norway’s challenging weather conditions, maintaining safety in extreme climates.

The data centre is equipped with advanced Building Management System (BMS) software for comprehensive monitoring and support. It also includes access control, CCTV coverage, a fire detection and suppression system, lightning protection and advanced chilled water controls.

These components are integrated as a system by Vertiv and factory tested. They provide a reliable, efficient and secure environment to support the data centre’s operational needs.

 

Outcome

The project is set to achieve significant total cost of ownership (TCO) savings compared to a traditional bricks-and-mortar approach, while providing fast deployment and easy reconfiguration and scalability.

The AI-ready modular solution is set to achieve high energy efficiencies and a PUE as low as 1.15, optimised by the cold climate and Vertiv cooling solutions. The site will provide 12 MW of HPC and AI-ready IT capacity, with the option to expand up to 50 MW.
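
PUE is simply total facility power divided by IT power, so the headline figure translates directly into overhead. A quick check using the article’s own numbers:

    def overhead_mw(it_load_mw: float, pue: float) -> float:
        """PUE = total facility power / IT power,
        so non-IT overhead = IT load * (PUE - 1)."""
        return it_load_mw * (pue - 1.0)

    # At 12 MW of IT load and a PUE of 1.15, total draw is about 13.8 MW,
    # of which only ~1.8 MW goes to cooling, power conversion and ancillaries.
    print(round(overhead_mw(12.0, 1.15), 1))  # 1.8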

“We are excited to collaborate with Polar on this groundbreaking project. Our prefabricated modular solutions, designed to withstand harsh conditions and optimise performance, are a perfect fit for Polar’s innovative approach to AI and HPC data centres.” — Viktor Petik, global vice president, infrastructure solutions, Vertiv.

“Partnering with Vertiv allows us to push the boundaries of what is possible in AI and HPC data centre deployment. Its expertise and advanced modular solutions enable us to achieve our goals of sustainability, technological innovation and rapid business expansion. We are confident that this collaboration will set new benchmarks in the industry and provide our customers with state-of-the-art infrastructure that meets their evolving needs.” — Tom Chubb, chief operating officer, Polar.

Evolving cooling technologies and the rise of liquid cooling

The relentless increase in compute power, especially with platforms designed for high-performance AI workloads, has profound implications for data centre cooling. Traditional air cooling methods are increasingly insufficient to manage the immense heat generated by such dense compute environments, driving innovation in advanced cooling solutions.

One notable example is Nvidia’s NVL576 600 kW platform. This system leverages direct-to-chip liquid cooling and closed-loop systems, with a claimed 300x improvement in cooling efficiency. This level of advancement is crucial for managing the heat produced by next-generation AI processing.

Another significant innovation comes from Google’s fifth-generation cooling distribution unit (CDU), known as Project Deschutes. It is deployed alongside a sidecar power rack design that supports up to 1 MW per rack, utilising high-voltage DC power distribution to enhance both efficiency and cooling capacity. These developments underscore the industry’s shift towards more specialised and powerful cooling infrastructures.

Energy-efficient cooling technologies are paramount for the sustainable operation of data centres. Some of the most promising technologies include:

  • Direct-to-chip liquid cooling circulates liquid coolant through cold plates mounted directly on the processors. This provides superior heat dissipation, enabling servers to operate at optimal performance levels and significantly reducing the energy required for cooling.
  • Hybrid rear-door heat exchangers combine the benefits of both air and liquid cooling. They are particularly effective in high-density rack environments, managing heat more efficiently by integrating heat exchangers directly into the rear doors of server racks.

 

Panel discussion at the 2025 Pan-African Data Centres exhibition and conference

A recent panel discussion at the 2025 Pan-African Data Centres exhibition and conference featured:

  • Lee Perrin, data centre lead, MEA: GBRE
  • Michael Byrne, head of data centre design & engineering EMEA at Eaton
  • Georges Dubien, MEA sales director – data centres: Boreas and Exagate
  • Willem Weber, data centre environmental engineer at Master Power Technologies (MPT)

Perrin commenced the discussion by emphasising a shift in industry engagement. “The focus must be on deep collaboration with clients from the earliest inception stages right through to handover, ensuring technical governance throughout the entire lifecycle. This proactive approach is crucial in a landscape driven by increasingly complex demands.”

Weber recounted MPT’s contributions to South Africa’s data centre landscape, including the construction and operation of 14 facilities, retrofitting 32 others, and pioneering the country’s first trigeneration plant, which converts methane gas into power and harvests waste heat for cooling. His team was also the first to deploy a 5.5m Kyoto cooling wheel in Centurion and achieved Tier III certification. These innovations continue at its new flagship facility, which is transitioning from traditional cooling to free cooling and hot aisle containment.

Dubien highlighted his companies’ expertise in white space solutions, manufacturing everything from CRAC/CRAH units and fan wall units to rear door cooling for AI, alongside intelligent environmental monitoring systems. With 13 years in the data centre business across the Middle East and Turkey, Dubien noted the critical importance of redundancy and precision in a region where cooling is a necessity for human survival, let alone for server rooms. His first visit to South Africa revealed similar challenges and opportunities, especially with the surge of AI driving conversations around high-density computing.

(l-r) Georges Dubien, MEA sales director – data centres: Boreas and Exagate; Willem Weber, data centre environmental engineer at Master Power Technologies (MPT); Michael Byrne, head of data centre design & engineering EMEA at Eaton; and Lee Perrin, data centre lead, MEA

Navigating market drivers and regulatory realities

When asked about the current primary drivers for clients seeking data centre equipment, Perrin cautioned against immediately jumping to AI. He observed: “There appears to be a pause, or step back, in what was a ‘gold rush’ for hyperscalers, suggesting a period of market recalibration in recent months. Regional differences are also significant; while the Middle East faces harsh, high temperatures, South Africa grapples predominantly with grid constraints.” Perrin expressed concern that the over-regulation seen in Europe, driven by ambitious net-zero and global warming policies, is proving “toxic” for the industry, hindering growth, and he hoped this wouldn’t spread to Africa, given its vast land mass and resources.

Weber concurred that South Africa, despite its own regulatory complexities, has a unique opportunity. The panel emphasised the need to “Africanise everything”, rather than slavishly imitate Europe or blindly adopt approaches from mature markets regarding sustainability trends. “South Africa’s innovation is noteworthy, with local achievements in Tier III and Tier IV certifications. Compliance is non-negotiable for attracting foreign investment. Importantly, most compliance standards, such as the Code of Conduct for data centres, actively encourage energy savings and proper operational diligence, benefiting both the bottom line and the environment.”

However, the discussion also touched on significant local challenges. Perrin acknowledged South Africa’s water constraints, despite what he perceived as a better water planning and storage system compared to Europe, where water is often “let go down the grid” due to abundance. He suggested that the industry needs to look more broadly, borrowing solutions from other sectors like pharmaceuticals, which utilise glycol-based, closed-loop or waterless cooling systems in their critical and constrained environments. This cross-industry collaboration is essential for finding new solutions.

Weber reinforced the severity of the water issue, stating that South African power stations consume 500-600 millilitres of water per kilowatt-hour generated. “This makes considering alternatives like drilling boreholes for well water not just an option but a necessity to relieve pressure on municipal pipe systems. Beyond water, major urban centres like Johannesburg face critical shortages in power and even sewage infrastructure, further underscoring the need for diligence in resource consumption and on-site generation.”
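
Taken at face value, that consumption rate scales quickly. A back-of-envelope sketch applying the quoted range to a hypothetical 12 MW load running around the clock (illustrative figures only):

    def daily_water_litres(it_load_mw: float, litres_per_kwh: float) -> float:
        """Water consumed per day at the power station to feed a given load."""
        kwh_per_day = it_load_mw * 1000 * 24  # MW -> kW, over 24 hours
        return kwh_per_day * litres_per_kwh

    # At 0.5-0.6 L/kWh, a 12 MW load implies roughly 144 000-173 000 litres
    # of upstream water consumption every day.
    print(f"{daily_water_litres(12, 0.55):,.0f} L/day")  # 158,400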

 

The density dilemma: AI, shrinking servers and cooling evolution

Georges Dubien, MEA sales director – data centres: Boreas and Exagate. All images by © RACA Journal

Dubien highlighted that while servers are shrinking in physical size, the heat they emit is enormous.

He stressed that simple air cooling will not suffice for the new era of computing, with cabinets now reaching up to 100 kW, and potentially 600 kW in the future, compared to typical 5-10 kW racks. Dubien detailed the evolving cooling solutions, and a quick airflow calculation after the list below shows why air reaches its limits:

  • Air cooling: Still the most common, but increasingly insufficient for high densities
  • Liquid cooling: The next step, often involving direct-to-chip solutions
  • Immersion cooling: Where servers are submerged in non-conductive fluids, allowing for extreme heat dissipation. He noted a recent design he saw pushing 100 kW per cabinet using immersion cooling, requiring a completely new cabinet design for such high heat loads
  • Fan wall units: These are becoming massive, with units capable of cooling 500 kW (and R&D targeting 1 MW per unit), essentially forming a complete wall of fans to dissipate heat from hot aisle containments
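
A simple sensible-heat calculation shows why air runs out of road at these densities. Using standard air properties and an assumed 12 K temperature rise across the rack (illustrative values, not any vendor’s specification):

    AIR_CP = 1.005     # kJ/(kg*K)
    AIR_DENSITY = 1.2  # kg/m^3, roughly sea-level conditions

    def airflow_m3_per_h(heat_kw: float, delta_t_k: float) -> float:
        """Volumetric airflow needed to remove sensible heat:
        mass flow = Q / (cp * delta-T), then convert to m^3/h."""
        mass_flow_kg_s = heat_kw / (AIR_CP * delta_t_k)
        return mass_flow_kg_s / AIR_DENSITY * 3600

    print(f"{airflow_m3_per_h(10, 12):,.0f} m^3/h")   # ~2,500 for a 10 kW rack
    print(f"{airflow_m3_per_h(100, 12):,.0f} m^3/h")  # ~25,000 for a 100 kW rack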

This shift means new data centres need to be inherently resilient, scalable and AI-ready from the ground up, with construction times potentially extending to two years or more for two-megawatt facilities, a significant increase from the traditional six months.

Weber supported the notion that while chip cooling handles the primary heat source, significant ‘residual’ heat still emanates from components like power supplies. This necessitates a dual cooling approach, managing both the direct-to-chip cooling and the remaining ambient heat. He underscored the need for a comprehensive design methodology to make systems scalable, potentially involving chillers, cooling towers and plate heat exchangers for precise temperature and flow regulation.
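
The water side of that dual approach follows the same energy balance, with water’s far higher heat capacity doing the work. A sketch with assumed figures (the 80/20 split between chip and residual heat is illustrative, not a measured ratio):

    WATER_CP = 4.186  # kJ/(kg*K); 1 kg of water is close to 1 litre

    def water_flow_l_per_s(heat_kw: float, delta_t_k: float) -> float:
        """Chilled-water flow needed to carry a heat load at a given delta-T."""
        return heat_kw / (WATER_CP * delta_t_k)

    # Hypothetical 100 kW rack: ~80 kW captured at the chip by cold plates,
    # ~20 kW of residual heat from power supplies etc. left to the air system.
    print(f"{water_flow_l_per_s(80, 10):.1f} L/s to the cold plates")  # ~1.9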

Perrin acknowledged the “brain-breaking” challenge of retrofitting legacy sites with these high-density requirements. “While new builds offer the flexibility to design for such demands, existing infrastructure poses significant hurdles. The industry is effectively chasing the exponential demands from chip manufacturers and hyperscalers, leading to a constant state of design uncertainty. We’re not being told… there’s a little bit of guessing – a situation that places data centre designers and providers in a precarious position, with designs risking obsolescence almost as soon as they are completed.”

The impact on real estate is also profound. Perrin questioned how the fixed footprint of existing data centres and their surrounding infrastructure (for generators and cooling units) will cope with vastly increased IT load per square metre in the white space. This could lead to “big, empty white spaces with a few heavy blocks” as facilities hit external real estate constraints. Dubien countered that this would necessitate a complete reimagining of the cabinet itself, moving towards fully immersion-cooled designs that manage 100 kW or more within a smaller physical footprint.

Skills, collaboration and the future workforce

Michael Byrne, head of data centre design & engineering EMEA at Eaton.

The rapid evolution of cooling technologies and power demands inevitably impacts the required in-house skills. Byrne stated that Eaton’s global innovation team is in continuous ‘workshop’ mode, collaborating daily with partners on the cooling side. “While power optimisation has reached a high level, the frontier of innovation now lies primarily in cooling, exploring solutions like two-phase immersion cooling. Companies are increasingly joining forces to find these complex solutions, recognising that without addressing cooling, they cannot meet client demands for power management either.”

Weber emphasised the critical need for open-minded individuals in the workforce, as human nature often resists change. He noted a particular challenge within traditional electrical and mechanical engineering disciplines, urging practitioners to embrace these new concepts. A skills shortage exists for those who can think ‘out of the box’ and provide innovative designs. Weber also lamented South Africa’s reliance on importing components like chillers and air conditioning units, advocating for a return of manufacturing capabilities to the country to foster local expertise.

A long-term view, spanning five years or more, is crucial for developing the specialised individuals needed to adapt to this continuously evolving environment.

Dubien echoed the sentiment that the technology world is in a constant state of flux, driven by global competition. He stressed the importance of continuous personal effort to stay updated, with his team constantly in contact with their R&D departments to remain current on new technologies in this rapidly growing market.

Dubien emphasised the critical need for thermal management monitoring within data centres. “If we don’t monitor it, we cannot manage it,” he stated, stressing the importance of predictive maintenance and ongoing team awareness to future-proof data centre designs for five to six years ahead.

 

The quest for standardisation in a rapidly evolving field

Willem Weber, data centre environmental engineer at Master Power Technologies (MPT).

A question from the audience highlighted a significant industry challenge: the lack of standardisation in direct-to-chip and immersion cooling solutions, particularly concerning fittings, CDUs (Coolant Distribution Units), and overall system interoperability.

Weber recalled a past era when organisations like Eurovent set rigorous standards, not just for unit aesthetics but for certified performance so that a 100 kW unit was guaranteed to deliver 100 kW. This standard, though costly, provided clear benchmarks. Its decline in popularity has led to a market where consumers struggle to verify supplier claims, occasionally receiving products that underperform. Weber emphasised the need for data centre owners to actively push standardisation bodies for renewed efforts in this area, suggesting that formal frameworks for connections (flanged, threaded, etc.) would greatly simplify planning.

Dubien offered a pragmatic perspective, stating that new direct-to-chip and immersion cooling solutions are inherently “completely customised” and not “off-the-shelf.” Given that these technologies often originate from different regions such as the US and China, he believes a common standard is unlikely in the near future. However, he anticipated that new direct-to-chip solutions would eventually conform to a standard, providing some level of interoperability.

Perrin agreed that the current market, especially with the rapid introduction of AI, does not yet allow for complete standardisation. He foresees a “stormy phase” as a period of competitive innovation driven by diverse perspectives and designs from East and West. “While a ‘cookie-cutter’ approach would simplify things, technology is simply moving too fast. Yet, I want to point out that core components within CDUs, like stainless steel pipes, quick couplers and heat exchangers, are already standardised. I support the idea of data centre owners pushing suppliers for clear interface standards for their equipment to streamline integration.”

Ultimately, Perrin expressed optimism that as the market matures and rises to meet the original expectations for AI, it might create opportunities for localised manufacturing and a more standardised approach. The current reliance on imports from regions focused on their own local requirements could eventually give way to a more integrated and harmonised global supply chain for advanced cooling technologies.

The need for standardisation in cooling: a challenge and aspiration for the data centre industry

Supplied by BAC

As data centres evolve to meet the demands of high-density computing, the cooling industry faces a challenge: the lack of standardised approaches for emerging technologies like direct-to-chip and immersion cooling.

While these advanced methods promise unprecedented energy efficiency and scalability, their widespread adoption is hindered by fragmented standards, proprietary systems and integration complexities. The question remains—can the industry align on common standards, and is it even possible?

The cooling conundrum in high-density data centres

The exponential growth of data and AI workloads has driven the need for more powerful servers, which in turn generate significantly more heat. Traditional air-based cooling systems are reaching their limits, prompting a shift toward liquid cooling technologies such as direct-to-chip and immersion cooling. These methods offer superior thermal performance and energy efficiency, but they also introduce new variables—fluid compatibility, material standards, safety protocols and system interoperability.

At BAC, we’ve seen firsthand how these challenges manifest in real-world deployments. Our COBALT Immersion Cooling Systems, paired with a full suite of outdoor heat rejection technologies—including evaporative, adiabatic, hybrid and dry coolers—are designed to offer scalable, sustainable solutions.

While immersion cooling systems—such as those using BAC’s patented Cortex technology—are inherently simpler and more self-contained than direct-to-chip solutions, some degree of custom engineering is still required. This is often due to the reliance on legacy infrastructure and the absence of universal standards for integration. Even with our industry-leading PUE performance (<1.05), aligning with broader data centre systems can present challenges without standardised guidelines for components and interfaces.

 

Why standardisation matters

Standardisation is not just a technical issue—it’s a business imperative. Without common guidelines, operators face increased costs, longer deployment times and limited vendor interoperability. For example:

  • Direct-to-chip cooling lacks uniformity in cold plate design, fluid connectors and manifold configurations
  • Immersion cooling varies widely in tank design, dielectric fluid properties and server compatibility
  • Coolant Distribution Units (CDUs) differ significantly in terms of flow rate capacities, control logic and interface protocols, complicating integration with both facility and IT systems
  • Monitoring and control systems often operate in silos, complicating data centre management

These inconsistencies slow innovation and create barriers for smaller players who cannot afford bespoke solutions.

 

The path forward: collaboration and innovation

Despite the hurdles, there is a growing movement toward standardisation. Industry consortia such as the Open Compute Project (OCP) and ASHRAE are working to define guidelines for liquid cooling technologies. BAC actively supports these efforts, contributing our decades of experience in thermal management and system integration.

Our approach emphasises modularity and flexibility. By offering a range of cooling technologies that can be tailored to site-specific goals—whether optimising for water usage, energy efficiency or footprint—we help bridge the gap between innovation and standardisation. Our systems are designed to be future-ready, supporting increased server rack densities and evolving cooling requirements.

 

Is standardisation possible?

The short answer is yes—but it requires a collective commitment from manufacturers, operators and regulators. Standardisation does not mean one-size-fits-all; rather, it means establishing a common language and framework that enables interoperability, safety and performance benchmarking.

At BAC, we believe that the future of data centre cooling lies in open collaboration and shared innovation. As the industry’s cooling partner since 1938, we are committed to leading this transformation—developing technologies that not only meet today’s demands but also pave the way for a more sustainable, standardised tomorrow.

 

BAC’s role in shaping the future

BAC is actively contributing to this transformation. Our systems are designed with modularity and flexibility, enabling data centres to:

  • Scale server rack density without redesign
  • Optimise for energy or water usage
  • Integrate with existing infrastructure

We believe that open collaboration is the key to unlocking the full potential of liquid cooling. By aligning with industry groups and sharing our engineering insights, we aim to help shape the standards of tomorrow.

At BAC, we’re not just building cooling systems—we’re building the foundation for a more efficient, resilient and standardised data centre ecosystem.

Evaporative cooling: a sustainable solution for data centres

Supplied by Humidair/Condair

The data centre landscape is rapidly evolving in Africa. With a vast increase in demand, significant investments are being made to ensure that Africa is at the forefront of the digital industry.

There is a serious challenge though – balancing cooling requirements with sustainability while also trying to minimise costs. This is no easy feat!

Having worked with the largest data centre organisations around the world, Condair has developed deep knowledge of data centre-focused solutions. Evaporative cooling is an eco-friendly alternative to more traditional types of cooling, and can also be used as part of more complex arrangements such as hybrid cooling or even pre-cooling.

 

How does evaporative cooling work?

Evaporative cooling uses water to absorb heat and cool the air: as water evaporates into the airstream, it draws latent heat from the surrounding air, lowering its temperature.
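
For a direct evaporative cooler, the leaving air temperature can be approximated from the dry-bulb temperature, the wet-bulb temperature and the media’s saturation effectiveness. A minimal sketch with assumed values, not Condair performance data:

    def supply_temp_c(dry_bulb_c: float, wet_bulb_c: float,
                      effectiveness: float) -> float:
        """Leaving air temperature of a direct evaporative cooler.
        Effectiveness is the fraction of the wet-bulb depression achieved;
        0.85-0.95 is typical of good media (assumed, not vendor data)."""
        return dry_bulb_c - effectiveness * (dry_bulb_c - wet_bulb_c)

    # A 35 degC day with an 18 degC wet-bulb and 90% effective media:
    print(f"{supply_temp_c(35.0, 18.0, 0.90):.1f} degC")  # 19.7 degC supply air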

 

Energy efficiency: lowering PUE and operating costs

One of the most compelling advantages of evaporative cooling is its dramatic reduction in energy consumption. Unlike traditional systems that rely on energy-intensive compressors, evaporative cooling uses the natural process of water evaporation to absorb heat from the air.

Evaporative cooling can reduce cooling-related energy use by up to 75%, depending on the system design and climate.

In regions where humidity levels are relatively low for much of the year, evaporative cooling systems operate at peak efficiency. This translates into lower Power Usage Effectiveness (PUE) – a key metric for data centre sustainability – as well as significant cost savings over time.
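
To see how a cut in cooling energy feeds through to PUE, consider a hypothetical 1 MW site (the load split below is illustrative, not a Condair reference design):

    def pue(it_kw: float, cooling_kw: float, other_kw: float) -> float:
        """PUE = total facility power / IT power."""
        return (it_kw + cooling_kw + other_kw) / it_kw

    # 1 MW of IT, 350 kW of mechanical cooling, 100 kW of other losses.
    before = pue(1000, 350, 100)        # 1.45
    after = pue(1000, 350 * 0.25, 100)  # cooling energy reduced by 75%
    print(f"{before:.2f} -> {after:.2f}")  # 1.45 -> 1.19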

 

Replacing pre-existing evaporative media in a data centre

Condair realises the importance of reducing long-term expenditure and developing solutions that are future-proof.

Condair now produces a ‘plug and play’ evaporative media solution that is a direct replacement for any pre-existing evaporative media you may have. This high-performance, high-efficiency media has been designed to enhance any evaporative technology you may have been using before.

Using this replacement technology can help prolong the life of any pre-existing systems.

Its flexible, compressible design means that install times (and costs!) are kept as low as possible.

 

Your ultimate data centre cooling solution

Choosing the right cooling solution is essential for the longevity and performance of your data centre. While traditional mechanical systems have their place in some environments, evaporative cooling stands out as one of the most efficient, cost-effective, and environmentally sustainable options available today for data centres.

With the right systems in place, your data centre can run more efficiently and reliably, helping you to lower operating costs and operate more sustainably.

Make sure you speak to an evaporative cooling expert when looking at improving the efficiency and sustainability of a data centre. 
