By Benjamin Brits | All photos: Creative Commons

With the expansion of data centres worldwide, and particularly in South Africa, driven by the increased demand for cloud-based and digital services, comes a critical need for cooling in these facilities.

From humble beginnings when the first hard disk drive was invented in 1956, technological advancement has driven the growth of data and, with it, data storage needs. From that first commercial hard drive, which could store a mere 5 megabytes (MB) of information, 2020 will see global data storage capacity exceed 6.8 zettabytes (ZB).

If, like me, you think this sounds like a made-up word, it is in fact the term for a trillion gigabytes. Further still, what is currently termed ‘theoretical capacity in storage’ is a yottabyte (YB), equal to 1000 ZB, which will undoubtedly find its way into ‘actual existence’ not too far into the future. So, you can learn how to spell and pronounce these words in the meantime as they work their way into our everyday lives, and certainly the younger generation’s.

Interestingly, global storage capacity will grow by more than 15% between 2019 and 2020, and this trend is expected to continue at up to 20% year on year until 2025. All of the added storage capacity held by various data centres (DCs) needs cooling, and with the heat generated by the componentry, coupled with the global drive to reduce energy consumption, efficiency will be more important than ever.

Simply put, the objective here is to control a sensitive environment using as little energy as possible. Data centres often serve critical applications and therefore require highly effective systems for total reliability. Cooling systems ensure the correct functioning of a DC and contribute significantly to the overall energy consumption of these facilities, bearing in mind that they operate 24/7/365.

Gaining insight from HVAC suppliers to the data centre industry, consulting engineers and a solutions architect at a specialist information technology service organisation, you quickly become aware that data centres have become the digital nervous system connecting so many elements of the current era, and fulfil many links in society that you wouldn’t normally even think about, like swiping your debit or credit card to complete a purchase. Their importance further reveals the nuances in secrecy, security and performance needs of every operation.

“Use of any of the methods mentioned is primarily driven by the building and what you have to work with.”

“DCs have become high-security facilities – some of the most secure sites around, comparable to a bank vault. A lot of these centres work according to a co-location model – simply meaning that various companies store their data at a particular facility – and you don’t necessarily want just anyone to know where your data is being stored. If these systems are compromised or sabotaged in any way, you could lose an entire business or a significant portion of a business. Information and data today are extremely valuable. Not adequately managing cooling, humidity and filtration affects processing performance, and this is one of the major functions of a DC, so protection of the investment and optimal operating conditions are the priority while also striving for best efficiency,” says Andrew Koeslag, managing director at AIAC Airconditioning SA.

Considerations for data centre designs

When it comes to data centres, there are generally three primary points of guidance:

  • The Tier Topology developed by the Uptime Institute.
  • Power usage effectiveness (PUE), which has become the primary metric used to measure the effectiveness of overall energy consumption in data centres (a short worked example follows this list).
  • For cooling, the recommendations of the American Society of Heating, Refrigerating and Air-Conditioning Engineers’ (ASHRAE) Technical Committee 9.9 – Mission Critical Facilities, Data Centres, Technology Spaces and Electronic Equipment, which defines environmental classes A1 to A4.
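
As a simple illustration of the PUE metric (total facility power divided by IT equipment power), the short sketch below works through a hypothetical facility; the figures and function name are illustrative only, not a prescribed method.

    def pue(total_facility_kw: float, it_load_kw: float) -> float:
        # Power usage effectiveness: total facility power divided by IT equipment power.
        return total_facility_kw / it_load_kw

    # Hypothetical facility: 1 000kW of IT load plus 400kW of cooling, lighting
    # and other overheads gives a PUE of 1.4 (closer to 1.0 is more efficient).
    print(pue(total_facility_kw=1400, it_load_kw=1000))  # 1.4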

These three points are typically the starting position for any facility. The design criteria that follow are based on the kilowatts of IT componentry to be cooled or the cooling per square metre, the utility load, whether the facility is a new development, revamp or building retrofit, facility density and rack population rate, maximum load at capacity, air flow design and, importantly, the available capital of the client.

Depending on the tier rating, it is also recommended that DCs be designed and built incorporating two electricity supplies, for example one from the city where the data centre is located as well as a direct connection with Eskom. These facilities, according to their Uptime Institute tier, will be designed with reference to N, N+1 or 2N redundancy.

This refers primarily to redundant capacity: in an N+1 installation with a capacity of 100kW, you would see 3 x 50kW cooling systems, two operating to meet the 100kW cooling need and one on standby in case of any failure. 2N installations duplicate the entire site and its components, including the building management system, generators, battery backup, the entire cooling system, and so on. For bigger players in the DC space, the entire 2N centre could even be duplicated at an alternate site location, depending on the nature of the data stored.
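
As a minimal sketch of that redundancy arithmetic, assuming equally sized cooling units (the function and values are illustrative, not a design tool):

    import math

    def cooling_units_required(load_kw: float, unit_capacity_kw: float, redundancy: str = "N+1") -> int:
        # Units needed to carry the full load on their own.
        n = math.ceil(load_kw / unit_capacity_kw)
        if redundancy == "N":
            return n
        if redundancy == "N+1":
            return n + 1      # one standby unit
        if redundancy == "2N":
            return 2 * n      # full duplication of the cooling plant
        raise ValueError(f"Unknown redundancy scheme: {redundancy}")

    # The example from the text: a 100kW load served by 50kW units under N+1 needs 3 units.
    print(cooling_units_required(100, 50, "N+1"))  # 3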

Cooling configurations and design factors

“IT density and power per rack are some of the minimum factors to ensure adequate cooling in a DC. As an industry design default, a COP of 2.5 is best practice. For every 1kW of IT load, this equates to about 0.4kW of power required for cooling in an average system today. The reality is that most data centres’ cooling is designed for full capacity and redundancy, with no design diversity for capacity, and in the absence of cooling storage or free cooling you get over-designed infrastructure with a COP closer to 1.3. Across small to large DCs, and with over 22 years of building these facilities and looking at best practices, given the various available solutions in cooling, the ever-increasing appetite for saving energy requires the inclusion of some element of free cooling, whether through direct or indirect adiabatic cooling or hybrid models that bring naturally cool fresh air into the DC during cooler periods of the year,” says Raymond Glaus, solutions architect at Dimension Data.
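
Reading the figures in the quote above, the relationship is simply that the electrical power drawn by the cooling plant is roughly the IT heat load divided by the COP. The sketch below reproduces the 2.5 and 1.3 cases; the function is illustrative and ignores fan, pump and ancillary loads.

    def cooling_power_kw(it_load_kw: float, cop: float) -> float:
        # Electrical power drawn by the cooling plant for a given IT heat load and COP.
        return it_load_kw / cop

    print(round(cooling_power_kw(1.0, 2.5), 2))  # 0.4kW per 1kW of IT at the best-practice COP of 2.5
    print(round(cooling_power_kw(1.0, 1.3), 2))  # ~0.77kW per 1kW of IT for an over-designed plant at COP 1.3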

In any DC environment, temperature and humidity control must be very precise. Most equipment manufacturers to the sector offer closed-control technology that already includes some type of air filtration for dust and particle control as well as humidity and dew point control. However, because you also need to comply with the legislation of the country you are operating in, you must consider the need for people to enter the white space and carry out maintenance, even if only for a few minutes a week.

You therefore need to ensure that all the necessary building regulations are complied with, including the introduction of a quantity of fresh air into the space. This requirement will determine the filtration system you put in place, because you cannot simply draw outside air into the space: any particulate matter may damage the components. This function is typically managed through the design of the air handling unit(s) (AHU).

“Part of the ASHRAE TC 9.9 recommendations are classes A1 to A4. These define the temperature and humidity ranges to which you design your cooling system. Class A1 ranges between 18°C and 27°C, while the relative humidity can vary between 20% and 80%. These ranges apply to any type of data centre. The two most popular types of cooling systems currently are direct expansion (DX) and chilled water (CW). Further, depending on the location, you can also add cooling towers and evaporative cooling to the list of solutions for DCs,” says Olu Soluade, managing director at AOS Consulting Engineers.
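
As a hedged illustration of the Class A1 ranges quoted above, a simple envelope check might look like the sketch below; the exact limits should always be confirmed against the current ASHRAE TC 9.9 publication, and the function itself is hypothetical.

    def within_class_a1(temp_c: float, rh_percent: float) -> bool:
        # Check supply conditions against the Class A1 ranges quoted in the article.
        return 18.0 <= temp_c <= 27.0 and 20.0 <= rh_percent <= 80.0

    print(within_class_a1(22.0, 45.0))  # True: comfortably inside the envelope
    print(within_class_a1(29.0, 45.0))  # False: too warm for Class A1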

In most cases the cooling configuration and systems will be determined by three of the considerations mentioned above, these being:

  • the type of project (new/revamp/retrofit) and its location
  • the required Tier level or rating
  • the IT/cooling capacity required

These particular aspects are usually the starting point because a new development offers more or less a blank canvas to design to, while a building revamp or retrofit may limit the design parameters; for example, a raised floor may be impossible to install. The Tier rating of the DC in turn dictates the redundancy requirements. The kilowatts (kW) of cooling required, coupled with redundancy, then determine the size and choice of cooling system.

“There are also many configurations that revolve around the IT rack itself. These are your down-blow or underfloor cooling configurations, in-rack cooling or pod types, and you also get a flooding solution, which means you flood the entire data centre or hall with serviced air. Flooding is more common in hyperscale, or very large-scale, applications. You will also find rear-door cooling configurations, which provide minimum impact with respect to footprint. In the past the tight humidity range had a lot to do with tape drives and other legacy equipment with its associated issues in relation to humidity, but as you can see from the allowable ranges in humidity today, this is much less of a factor,” adds Mikhail Poonsamy, sales engineer at AIAC Airconditioning SA.

Although each installation has its own parameters, for a space with a load of up to 500kW a chiller system is generally going to be disproportionately costly for the client, whereas above 500kW a CW system is the better solution because of the flexibility it allows in adding more capacity at a later stage. The drawback of CW systems is the initial capital cost compared to DX, so if the client has a tight budget the design and choice of cooling can accommodate this. Hybrid technology using both DX and CW mediums can offer a mix of cost-effectiveness and efficiency while still meeting a high tier rating.
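
That rule of thumb can be captured in a small, purely illustrative selection sketch; real selections weigh budget, tier rating and site constraints as described throughout this article, so the threshold and return values below are assumptions drawn from the paragraph above.

    def suggest_cooling_medium(load_kw: float, tight_budget: bool = False) -> str:
        # Rough rule-of-thumb choice between DX, chilled water (CW) and a hybrid of the two.
        if load_kw <= 500:
            return "DX"              # a chiller plant is disproportionately costly at this scale
        if tight_budget:
            return "Hybrid DX/CW"    # trade capital cost against efficiency and tier rating
        return "CW"                  # flexibility to add capacity later

    print(suggest_cooling_medium(300))        # DX
    print(suggest_cooling_medium(800))        # CW
    print(suggest_cooling_medium(800, True))  # Hybrid DX/CW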

“Further to what has already been mentioned, configurations could also be designed according to hot/cold aisle principles or contained areas, where cooling is focused on a particular area in the space rather than cooling an entire room unnecessarily, as well as in-row cooling, air handlers and ducted solutions. Use of any of the methods mentioned is primarily driven by the building and what you have to work with – this can be an existing building or new building, open spaces or confined spaces. Suppliers then have similar equipment commonly referred to as CRAC units – computer room air conditioning – or CRAH – computer room air handling. Some installations incorporate combination solutions as well. You can have a water-cooled CRAC unit with an economiser coil so that when the water is cold enough it works like a chilled water unit, but if it’s not cold enough it uses a compressor to pull down. You may also get a unit that is an air-cooled DX type with a chilled water coil,” adds Frans Jooste, director at Intramech.

Jooste continues, “With the many different configurations, what we find in particular is that the system selection is also often heavily weighted and driven by the client – and this ties into the secrecy in data centre designs. Having a unique combination or setup that can save thousands of kilowatt-hours in cooling energy monthly makes the client more competitive on cost to the market. This can equate to millions of rands annually across a couple of sites, so keeping the methods applied under wraps naturally maintains certain advantages.”

While many designs are based solely on the required amount of net sensible cooling and the inlet rack temperature, air leakage rates through a raised floor, the rack capture index, correct airflow distribution, supply airflow from the cooling unit, the correct cooling unit control philosophy and a rack population strategy must also be included in the design.


An example of a contained rack with raised floor configuration, allowing cool air to be pushed through from the floor and circulated upwards.


“Rack population strategy is so important as a DC is gradually populated over time, meaning the maximum load within the data centre is not present on the first day of operation. The correct type of cooling strategy to deal with low part-load operation must therefore be selected and implemented to ensure the cooling units are still able to provide an energy-efficient solution while operating without any critical alarms,” notes Michael Young, engineer and trainer in data centre cooling at My HVAC Coach.

Since energy consumption is such an important aspect of data centre cooling, CW systems also employ hybrid free cooling and hybrid adiabatic free cooling principles. DX systems employ compressor and fan speed modulation to account for varying loads, but more advanced systems now use water-cooled DX units coupled to dry coolers, which can also provide an indirect type of cooling during low ambient conditions.

“As Frans Jooste alluded to, all suppliers utilise the same thermodynamic principles and cooling technology within their cooling systems, but the key differentiators between suppliers are the energy consumption, quality, price and redundancy of the cooling unit. Energy consumption between each brand of cooling system is influenced by the complex process of constantly matching the required cooling load of the DC with the cooling operations of the system,” adds Young.

Because of the heat load in a DC and the concentration of heat, the correct amount of air is required to move the heat out quickly enough to maintain the targeted temperature range. Further to this, the Uptime Institute specifies that when there is a power or generator failure, the temperature rise within 15 minutes of the failure must not be greater than 5°C.

“You may think 15 minutes is a lot and 5°C is little, but once you exit the environmental conditions, capturing the generated heat in that time and cooling it down again is going to take quite a long time. It is therefore imperative for designers to take the temperature rise factor into account. In addition, designers must consider the changeover when a power failure does occur, and that the cooling equipment will not reach capacity again instantaneously, including allowances for that inertia in a temperature rise situation in their planning,” notes Soluade.
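
To see why the 15-minute, 5°C window is tighter than it sounds, a back-of-envelope estimate can be made from the heat load and the thermal mass of the air in the hall. The figures below are entirely hypothetical and deliberately ignore the thermal mass of the equipment, raised floor and building fabric, which in practice buy additional time; the point is only the order of magnitude.

    # Back-of-envelope: time for the room air alone to rise 5°C with no cooling running.
    AIR_DENSITY = 1.2   # kg/m^3, approximate
    AIR_CP = 1.005      # kJ/(kg*K), specific heat of air

    def minutes_to_rise(heat_load_kw: float, room_volume_m3: float, delta_t: float = 5.0) -> float:
        air_mass = AIR_DENSITY * room_volume_m3      # kg of air in the hall
        energy_kj = air_mass * AIR_CP * delta_t      # kJ needed for the temperature rise
        return energy_kj / heat_load_kw / 60.0       # kW = kJ/s, so divide by 60 for minutes

    # A hypothetical 100kW hall of 1 000 m^3: the air alone warms 5°C in roughly a minute,
    # which is why thermal inertia and ride-through provisions matter so much.
    print(round(minutes_to_rise(100, 1000), 1))  # ~1.0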

Filtration, humidity and air quality factors

“Air is essentially what needs to be controlled and managed in data centre operation, as dust can be sucked into servers and cause various malfunctions. Humidity control is also a critical design requirement. Should there not be sufficient humidity, the direct impact is static electricity, which can result in electronic failure or data packet losses, and the air quality affects the volume of air required per appliance. Humidity and dew point controls are required to avoid moisture build-up on the IT appliances. Static transients together with dust on IT electronics also create a high fire risk. With static build-up and high humidity in equilibrium, rust is also a reality, which combined with poor air quality can accelerate corrosion,” adds Glaus.

“Gases or fumes generally don’t affect the operation of the DC equipment, but some gases may be a hazard or cause the incorrect dumping of fire suppression gas within the DC. Also, DCs that use lead-acid batteries need to carefully monitor the hydrogen levels released when the batteries are charging. These hydrogen levels are often controlled by dilution through the fresh air units. Data centres that use lithium-ion batteries do not experience this problem and fresh air units are not required,” says Young.

In some instances, leaked R410A refrigerant gas has caused some fire suppression systems to incorrectly release fire suppression gas into the data centre. Therefore, it is recommended that any R410A bleed-off valves be connected and piped to an area that is outside of the white space to prevent incorrect fire suppression operations.

Technology developments

“A DC cooling system’s lifespan is dependent on certain servicing criteria and should be between 15 and 20 years. However, our experience over the years has been that after about 10 years technology has developed to the point where it becomes a ‘no-brainer’ to replace the system, primarily due to efficiency improvements, but also due to refrigerant changes, updated design methods, and the need to remove uncertainty, because the older the equipment gets, the higher the risk of failure,” notes Koeslag.

Data centre development has boomed due to the increased demand for digital services.

Jooste adds, “Other factors that drive change in technology, particularly in this field, are that companies realise they can operate DCs at slightly warmer conditions through simple layout changes that keep the space cold only where the cooling is needed, so it doesn’t matter where the hot air is in the space, and that computers themselves are now far more robust and efficient than they have been in the past. These factors immediately change the way cooling works and the specifications to work towards in future designs.”

“The use of various free cooling methods is another element incorporated into systems, as mentioned, and allows the compressor and refrigerant system to either modulate or switch off according to the ambient air conditions. This automatically results in substantial energy savings and reduces the total cost of ownership of the DC. Research is currently under way to implement a liquid-cooling type of system that submerges server components in a dielectric fluid. This concept may completely replace air-cooled systems in future,” says Young.

Glaus adds, “Another important advancement in technology, especially when working with dynamic environments, is the control mechanisms that allow ramping up and down as cooling is required or not. IoT sensor-based control also continuously ensures that the correct airflows, volumes and humidity levels are provided, rather than the traditional overcooling.”

Statements on locally manufactured vs imported products

“Quite simply, the primary benefit of local supply is that you are supporting much-needed job opportunities for South Africans and there is investment within the country. From a client’s perspective, some prefer local goods because of the availability of parts and the instant technical support. At AIAC our unique policy is that if something fails on site for whatever reason, we get the person who worked on the unit to site to evaluate and correct any errors. Who built it can fix it – and because all of our technical staff have gone through all the different stations in the factory before they go out into the field, they know each unit in the greatest detail. Understanding the units and the process inside and out is another major advantage for us,” says Koeslag.

Poonsamy adds, “We have the option of pulling equipment from Airedale in Leeds, UK, however our niche offering is to assemble locally, reducing logistics. Supplying into fast-track projects is a growing requirement, taking into consideration the high growth we are seeing in the sector. Deploying equipment into these environments as quickly as possible has a knock-on effect on achieving operational functionality, hence meeting a business case with less delay. Fast-tracking enables the DC to achieve a quicker path to market with its corresponding ROI and revenue. Beyond speed of response, there is a huge value proposition in engaging with localised partners, namely flexibility.”

“For Intramech, we supply imported equipment that is globally recognised because there is the advantage of scale of manufacture, where sheer volumes allow exceptional quality control, and we can also take advantage of the continual research and development that the principals offer, as well as access to the best and latest technology available. We can also land equipment from Italy at rates competitive with any local manufacturer,” notes Jooste.

Conclusion

Although this is just a snapshot of a topic that could fill an entire publication, DCs are a complex beast to create, manage and keep running efficiently. But what else could be expected when you consider the importance of data in our day-to-day lives? You just have to ask yourself: how much do we rely on DCs without even knowing it, and how much does downtime cost any company?

