Data centres – tips for professionals

By Ilana Koegelenberg

We sit down with two engineering professionals to pick their brains about the dos and don’ts of data centre cooling design — are you doing it right?

As the capacity of data centres increases, so does their need for cooling.
Image credit: CCL

The data centre market is growing rapidly across the world and even in South Africa, the industry is evolving at an incredible pace. Data centres are popping up left, right, and centre as the Internet of Things (IoT) becomes more and more part of our daily lives.

Always required but rarely thought of, HVAC is mandatory in every data centre or server room. No data centre can survive without a reliable HVAC system. And with energy efficiency a key driver of sustainability, it is essential that these systems are designed (and maintained) effectively.

HVAC is very important as it is the ‘fuel’ that keeps a data centre in operation.

In such a high-risk environment, there is very little (if any) room for error. How do you design the best possible system to suit the client’s needs?

To get the answer to this question, we asked two industry experts to share their experience on the matter. One is from an engineer’s point of view, and the other the end user’s.

Michael Young (MY) is a trainer, coach, and engineer in the HVAC industry. He graduated from the University of the Witwatersrand in the field of mechanical engineering (BSc Mechanical Engineering) in 2008 and qualified as a Professional Engineer (Pr Eng) in 2013. Young is passionate about promoting knowledge and helping other young engineers grow within the industry through his training workshops, monthly publications in the RACA Journal, and coaching sessions.

HVAC plays a crucial role in data centre cooling. Here is an airflow model of a data centre with fresh air recirculation.
Image credit: Autodesk

Tanja Gutgesell (TG) also graduated from the University of the Witwatersrand with a BSc Mechanical Engineering degree and currently works at FNB. She is responsible for the mechanical infrastructure (mainly HVAC) of FNB’s data centres and some of its offices. She is passionate about thermal management in a data centre and seeks to drive projects to perfection.

1. Why is HVAC important when designing a data centre?

MY: During the operation of a data centre, the servers generate heat by converting electrical energy into thermal energy. This causes the air temperature within the data centre to rise, and passing hot air over the main IT processor can cause permanent damage to the equipment. Therefore, as a safety mechanism, servers are set to shut down when they exceed a certain operating condition. HVAC is responsible for cooling the ambient air within a data centre so that the servers always operate within an acceptable temperature range. HVAC is very important as it is the ‘fuel’ that keeps a data centre in operation.

TG: The function of a data centre is to house IT server equipment. Servers produce a lot of heat that needs to be managed, otherwise they will shut down due to overheating. HVAC is important as it prevents servers from shutting down by supplying air at the required flow rate and temperature, as well as removing heat from the IT space.
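To make the airflow and temperature relationship concrete, the short sketch below estimates the airflow needed to carry away a given heat load using the sensible heat equation. It is a minimal illustration only; the rack load, temperature rise, and air properties are assumed values, not figures from the interview.

# A rough sketch of the sensible heat balance that links heat load, airflow, and
# temperature rise. All numbers are illustrative assumptions, not figures from the article.

RHO_AIR = 1.2    # kg/m^3, approximate air density
CP_AIR = 1.006   # kJ/(kg.K), specific heat of air

def required_airflow_m3_s(heat_load_kw: float, delta_t_k: float) -> float:
    """Volume flow (m^3/s) needed to absorb heat_load_kw with a delta_t_k air temperature rise."""
    return heat_load_kw / (RHO_AIR * CP_AIR * delta_t_k)

if __name__ == "__main__":
    rack_load_kw = 10.0   # assumed heat load of one rack
    delta_t = 12.0        # assumed rise from cold aisle to hot aisle
    flow = required_airflow_m3_s(rack_load_kw, delta_t)
    print(f"~{flow:.2f} m^3/s (~{flow * 3600:.0f} m^3/h) per {rack_load_kw:.0f} kW rack "
          f"at a {delta_t:.0f} K rise")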

It is important to remember that there is no ‘best’ cooling technology within the market. There is only the most feasible solution that meets the client’s requirements. 

2. What HVAC options are available when designing a data centre and how do they compare?

MY: When designing a data centre, direct expansion (DX), chilled water, or evaporative cooling options are available. Each of these cooling technologies has its advantages and disadvantages depending on the location of the data centre and the available resources. It is important to remember that there is no ‘best’ cooling technology within the market. There is only the most feasible solution that meets the client’s requirements.

TG: There are many HVAC options available. Currently, the heat exchange between the IT equipment and the HVAC system is usually controlled and provided by computer room air conditioning (CRAC) units. These CRACs cool the air with either chilled water or refrigerant, and the cooled air is then supplied to the data centre. Neither cooling medium is right or wrong; it depends on the infrastructure.

DX CRACs run with refrigerant and the thermodynamic process cycles between compressor and expansion valve (indoor unit) and condenser (outdoor unit). DX CRACs usually have a limit on their refrigerant pipe runs. Chilled water (CHW) CRACs require cooling towers or chillers to cool their medium. If the decision is based purely on capex, the typical recommendation is DX units for small heat loads and CHW CRACs for larger loads.

The HVAC options are greater for colder environments (that is, Northern Hemisphere), where outside air from the environment can be used to cool the data centre. This solution is more efficient, but care must be taken to filter the air and provide proper humidity levels.
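As a toy illustration of TG’s capex rule of thumb above (DX for small heat loads, chilled water for larger ones), the sketch below picks a CRAC type from the heat load alone. The 200kW threshold is an assumed placeholder; a real decision would rest on a full feasibility study, as MY notes.

# Toy selection sketch mirroring the capex rule of thumb above: DX for small heat
# loads, chilled water for larger ones. The 200 kW threshold is an assumed
# placeholder, not a figure from the interview.

DX_CAPEX_THRESHOLD_KW = 200.0   # assumption for illustration only

def suggest_crac_type(total_heat_load_kw: float) -> str:
    """Very rough capex-only suggestion; a real choice needs a full feasibility study."""
    if total_heat_load_kw <= DX_CAPEX_THRESHOLD_KW:
        return "DX CRAC units (refrigerant, limited pipe runs)"
    return "Chilled water CRAC units (served by chillers or cooling towers)"

print(suggest_crac_type(80.0))    # small server room
print(suggest_crac_type(650.0))   # larger data hall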

3. Which factors should be taken into account when designing the cooling solution for a data centre?

MY: The most important factors to consider are the operating conditions of the data centre, available space on site, ease of maintenance, availability of spares, and the overall requirements of the client. These factors will often indicate the most feasible type of cooling technology that can be used as a viable solution.

The most important factors to consider are the operating conditions of the data centre, available space on site, ease of maintenance, availability of spares, and the overall requirements of the client.

TG: It is important to realise that data centre cooling does not equal comfort cooling; it requires a precision cooling approach. Humidity, temperature, and airflow are very important for any data centre. The first step is to understand the heat load (not only the IT equipment, but also people, lighting, heat leakage, fans, motors, and the like). Thereafter, it is vital to understand the infrastructure, such as footprint, raised floor height (if any), hot or cold aisle containment, moisture and air leakage into or out of the room, the dimensions of the room, and the outdoor space for chillers, condensers, and so on. Air and chilled water quality are further factors that are important when designing the cooling solution.
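A minimal sketch of that first step, totalling the heat load from more sources than just the IT equipment, might look like the following. Every line item and value is an assumed placeholder for illustration only.

# Sketch of a simple heat load tally for a small server room.
# Every value is an assumed placeholder; real designs use measured or rated figures.

heat_sources_kw = {
    "it_equipment": 80.0,   # rated IT load
    "ups_losses": 4.0,      # conversion losses inside the room
    "lighting": 1.5,
    "people": 0.3,          # e.g. two technicians at roughly 150 W each
    "fan_motor_heat": 3.0,  # CRAC fan heat rejected into the airstream
    "envelope_gain": 2.0,   # heat leakage through walls, floor, and ceiling
}

total_kw = sum(heat_sources_kw.values())
design_margin = 1.1         # assumed 10% allowance for growth and uncertainty

print(f"Estimated heat load: {total_kw:.1f} kW")
print(f"Suggested design load with margin: {total_kw * design_margin:.1f} kW")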

4. How important is energy efficiency and why?

MY: The world has evolved to where communication, banking, and even shopping have migrated to the digital world. The infrastructure that supports all these different applications originates in the data centre. So, if you asked how much power a data centre consumes, you may be surprised to receive an answer that runs into the megawatts. This is why energy efficiency is so important, especially when it comes to thermal management: a study showed that cooling consumes an average of 35–40% of the electrical energy supplied to a data centre. The new drive within the data centre industry is therefore to operate data centres in the most energy-efficient manner.

TG: Of a data centre’s supporting infrastructure, the cooling solution consumes the most electrical power, and this consumption translates into a monthly cost to the end user. An energy-efficient cooling solution reduces the opex and is more environmentally friendly.
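As a back-of-envelope illustration of why this matters, the sketch below turns the 35–40% cooling share quoted above into an annual energy and cost figure. The facility size and electricity tariff are assumptions, not figures from the interview.

# Back-of-envelope sketch of the cost of cooling energy. The 35-40% cooling share
# is quoted in the interview; the facility size and tariff are assumptions.

facility_power_kw = 1000.0     # assumed 1 MW total facility draw
cooling_shares = (0.35, 0.40)  # share of total electrical energy used by cooling
tariff_per_kwh = 1.50          # assumed electricity tariff in ZAR/kWh

hours_per_year = 24 * 365
for share in cooling_shares:
    annual_kwh = facility_power_kw * share * hours_per_year
    print(f"At a {share:.0%} cooling share: {annual_kwh / 1e6:.2f} GWh/year, "
          f"roughly R{annual_kwh * tariff_per_kwh / 1e6:.1f} million/year")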

A raised floor can allow for greater flexibility for cooling data centre hot spots.
Image credit: Stulz

5. What are common mistakes engineers make when designing HVAC systems for data centres?

MY: The most common mistake engineers make is specifying an operating condition that is not realistic or possible. One example is specifying that a CRAC unit operate with a return air temperature of 37°C without installing any containment. Another is specifying that a CRAC unit operate with a return air temperature of 24°C dry bulb (db) / 50% relative humidity (RH) and supply air at 17°C db with a sensible heat ratio (SHR) of 1, which is not possible thermodynamically.

TG: Some engineers are too desktop-driven and overlook obstacles that the actual environment imposes. Others rely too much on computer programs rather than going back to thermodynamic basics. Some consider only their own field (mechanical, electrical, IT, civil) as important and disregard the other fields of engineering. This isolated engineering approach mostly leads to a chaotic design and can result in failure when the system is integrated.

Sometimes the reliability and availability of the HVAC system are disregarded, and the design falls short. The most overlooked element in the data centre is the airflow requirement: it must always be ensured that the volume of air supplied equals what the IT equipment requires.

6. Where are we going in terms of technology? What are some local trends?

MY: The trend is slow, but South Africa is beginning to integrate energy-saving technology into its designs. It is a small step, but data centres are now beginning to implement hot and cold aisle containment and to integrate cooling systems with modulating capabilities. The DX system with modulating capabilities is still a popular type of cooling technology, but as energy costs rise, alternative technologies are being considered, with the final decision based on the outcome of a cooling technology feasibility report.

TG: Using outside air for cooling is becoming more popular, especially for winter and night-time use. Technologies are becoming more efficient; an example is the use of magnetic levitation chillers instead of screw compressors. Another trend in data centre cooling is storing chilled water produced at night (when low ambient temperatures mean less energy is required) and using it during the day when ambient temperatures are high.

7. Where is the rest of the world headed? What are the international trends?

MY: The African continent still strongly favours the DX solution due to the shortage of skills and available resources. The European continent uses a mixture of cooling technologies, as the climate and available resources make it favourable to use evaporative cooling and chilled water systems. The trend varies from region to region, but the ultimate goal is to implement energy-saving technology that reduces the total cost of ownership while ensuring data centre uptime.

TG: IT equipment is becoming denser every year, so cooling equipment must keep up with ever higher cooling demands. Liquid-based cooling, especially for high-density IT equipment, is one of these trends, as liquid is usually more effective at heat transfer than air. Examples of liquid-based cooling are direct liquid contact with the heat sink and immersion cooling in a non-conductive liquid.

Server racks deployed in a data centre feature different heat loads according to the application.
Image credit: Stulz

8. Some tips for designing data centre cooling systems …

MY:

  • Do not only consider capital costs when making the final decision on a cooling system for a data centre. Total cost of ownership, availability of spares, available support, and uptime of the data centre are more important than capital costs.
  • Ensure the requirements of the cooling unit are thermodynamically possible. If the laws of physics and thermodynamics are not obeyed, the cooling unit will never perform.
  • Conduct a feasibility study to determine which is the best type of cooling technology to meet the client’s requirements.

TG:

  1. Understand the heat load.
  2. Understand data centre trends (high-density racks).
  3. Understand the tier level of the data centre (reliability and availability).
  4. Understand the location.
  5. Understand efficiency (power and water).
  6. Understand the infrastructure (raised floor, IT containment, space, power, and so on).
  7. Understand the psychrometric properties of the cooling design (temperature, humidity, and airflow).
  8. Communicate to other fields of engineering.
  9. Understand the building management system (BMS) logic, especially for HVAC.

A supplier’s point of view

Design is but one aspect of data centre cooling, though. Installation and maintenance are key to ensuring your data centre’s HVAC system runs as it should. Having a good product is crucial, but so is making sure it is installed and kept running correctly.

More and more, suppliers are getting involved in the installation and commissioning of HVAC systems for data centres. We sit down with Andrew Koeslag (general manager) and Mikhail Poonsamy (technical manager) of AIAC Air Conditioning, as well as William Myer (sales manager) at Stulz, who share their experience of the local market in particular.

Challenges

One of the biggest problems in data centres is the sizing of equipment, explains AIAC. It is generally the smaller data centres that are problematic. Often, equipment is oversized because the client or end user provides inaccurate information on their heat loads, which is very inefficient. If a unit is undersized, controlling the temperature becomes a challenge.

Another challenge is that many clients do not want chilled water in their data centres, as they fear the risk of water among so much electronic equipment. They want the chilled water system without the chilled water, but it is in fact a low-risk set-up, as all the connections are above the floor.

The Office 365 data centre in the UK. Cooling is vital as companies cannot afford any downtime caused by machines overheating.
Image credit: Novosco

There is also an issue with maintenance across the board. Often, products are installed incorrectly and the brand then gets a bad reputation. To counteract this, AIAC started offering commissioning as part of its services, doing this itself to ensure the product is installed correctly. This has brought its warranty claims down significantly over the years.

There also seems to be a gap in the market in terms of DX solutions, another challenge, explains AIAC. Generally, the industry is used to installing either small splits for small data centres or chilled water for bigger projects. But what about the medium-sized projects that require large split systems? Even though companies like AIAC have this solution available, the challenge is that contracting companies are not equipped to handle such installations, for instance lacking copper pipe brazing equipment for the larger pipe sizes. There is an awkward in-between for the larger DX units, which is yet another reason AIAC has decided to get involved in the commissioning of its units.

What has changed?

AIAC started doing data centre installations in South Africa as far back as 1986. Since then, a lot has changed in terms of the equipment; mainly, the efficiencies of equipment have improved greatly.

Also, the loads on the racks have increased significantly as the computing power of the machines has advanced. The computing power is more concentrated, increasing the footprint of the HVAC units as they try to keep up with the heat load. To counter the ever-increasing footprint of the HVAC system, temperatures have been raised and better control is exercised to ensure you do not end up with an HVAC installation that takes up more space than the actual computer racks. This means companies such as AIAC have had to look at re-engineering products with smaller footprints to keep the kW cooling per square metre viable.
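As a rough illustration of the kW-cooling-per-square-metre argument, the sketch below compares an older, larger CRAC footprint with a re-engineered, smaller one for the same load. All capacities and footprints are assumed figures, not AIAC product data.

# Toy kW-cooling-per-square-metre comparison; all capacities and footprints are
# assumed figures, not product data.

it_load_kw = 300.0

old_crac = {"capacity_kw": 60.0, "footprint_m2": 3.0}  # assumed older, larger unit
new_crac = {"capacity_kw": 60.0, "footprint_m2": 1.8}  # assumed re-engineered unit

for name, unit in (("older unit", old_crac), ("re-engineered unit", new_crac)):
    units_needed = -(-it_load_kw // unit["capacity_kw"])   # ceiling division
    floor_used = units_needed * unit["footprint_m2"]
    print(f"{name}: {units_needed:.0f} units, {floor_used:.1f} m^2 of floor, "
          f"{it_load_kw / floor_used:.0f} kW cooling per m^2 of CRAC floor space")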

Reliability

When it comes to data centres, the end user is far more involved than with other commercial projects. They tend to first pick a product for the HVAC system and then appoint the team around that product, unlike conventional projects where the team has more freedom in product selection. For data centre installations, reliability becomes more important than simply the price of equipment. Considering the high risk of equipment downtime and the loss of revenue this would cause, end users would rather spend the money to ensure they have the best possible, most dependable product. Redundancy is key.

For this reason, many data centre companies do not like a building management system (BMS), explains the AIAC team. They see it as an additional thing that can go wrong. Keep it simple, keep it running — that’s the rule. That is why they tend to go for individual unit monitoring instead of BMS solutions.

Trends

As mentioned, footprints play a large role these days and free cooling is also becoming more popular.

Another big change is the role of humidity in design. Humidity is no longer a major factor, as there is no actual static effect on the data centre equipment. Instead of focusing on humidity, designs now look at power usage effectiveness (PUE), says AIAC.

Floor void pressure has also become important, more so than humidity. Pressure control is critical: your floor becomes a plenum. The pressure difference feeds the computer fans, so it is important that the computers have just enough positive pressure for the fans to work optimally. There must be a positive pressure so that you get the correct airflow and the fans pull correctly. It is a very fine line: if you fail the fan, you fail the server, and you fail the data centre.
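A minimal, hypothetical sketch of the kind of plenum pressure control described above might modulate CRAC fan speed to hold a small positive pressure in the floor void. The setpoint, gain, and limits below are illustrative assumptions; real units use the manufacturer’s own controls and safeties.

# Minimal proportional control sketch for floor-void (plenum) pressure: ramp CRAC
# fan speed up when pressure sags, down when it climbs. The setpoint, gain, and
# limits are assumed values; real units use the manufacturer's controls and safeties.

PLENUM_SETPOINT_PA = 12.0   # assumed slight positive pressure relative to the room
KP = 2.0                    # assumed proportional gain, % fan speed per Pa of error
MIN_FAN_PCT, MAX_FAN_PCT = 30.0, 100.0

def next_fan_speed(current_pct: float, measured_pa: float) -> float:
    """Nudge fan speed up when plenum pressure is below setpoint, down when above."""
    error = PLENUM_SETPOINT_PA - measured_pa
    return max(MIN_FAN_PCT, min(MAX_FAN_PCT, current_pct + KP * error))

print(next_fan_speed(current_pct=60.0, measured_pa=8.0))   # tiles opened, fans ramp up -> 68.0
print(next_fan_speed(current_pct=60.0, measured_pa=15.0))  # pressure too high, fans back off -> 54.0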

Often, there are problems with oversizing equipment as a result of a client / end user providing inaccurate information on their heat loads. This is very inefficient.

Another major change is the switch to supply temperature control instead of the return air control used in conventional HVAC systems. All the data centre cares about is how much pressure you are supplying at what temperature; what is coming back does not matter, except that it should be as hot as possible to increase efficiencies and widen the window for free cooling. This is easy on chilled water, but on DX you really need to know what you are doing to provide supply air control and pressure control coupled with free cooling. It is software intensive, and it all comes down to the controls and the safeties installed. The software and how you manage the equipment are crucial.
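A simplified sketch of that control philosophy, holding a supply-air setpoint and opening the free-cooling window only when outside air is cold enough, could look like the following. The setpoints and thresholds are assumptions for illustration, not vendor logic.

# Simplified sketch of supply-air control with a free-cooling window: use filtered
# outside air when it is cold enough, otherwise fall back to mechanical cooling.
# The setpoints and thresholds are illustrative assumptions, not vendor logic.

SUPPLY_SETPOINT_C = 22.0      # assumed supply (cold aisle) temperature target
FREE_COOLING_MARGIN_K = 4.0   # assumed margin: outside air must be this much colder

def select_cooling_mode(outdoor_c: float, supply_c: float) -> str:
    """Pick a cooling mode from outdoor and measured supply temperatures (toy logic only)."""
    if outdoor_c <= SUPPLY_SETPOINT_C - FREE_COOLING_MARGIN_K:
        return "free cooling (filtered outside air)"
    if supply_c > SUPPLY_SETPOINT_C:
        return "mechanical cooling (DX or chilled water)"
    return "hold / modulate down"

print(select_cooling_mode(outdoor_c=14.0, supply_c=22.5))  # cold night: free cooling
print(select_cooling_mode(outdoor_c=28.0, supply_c=23.1))  # hot day: mechanical cooling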

Some companies are also using clever solutions for containing their aisles, for example butcher curtains — an inexpensive yet effective option.

Internationally, there is a move to on-chip and on-rack cooling. This is expensive but super-efficient, and due to its price, it is yet to gain popularity in South Africa.

 

