The following two-part article is written by Wojtek Piorko, managing director for Africa at Vertiv. This is Part 1.
There is no doubt, across every industry, that artificial intelligence (AI) is here, and it is here to stay. The use cases for AI are virtually limitless, from breakthroughs in medicine and enhanced farming techniques to high-accuracy fraud prevention and personalised education.
It is heartening to see the opportunity this presents for development within Africa. In fact, a paper published in late 2023 by Access Partnership stated that AI is already being used to significant effect in Africa, helping to address challenges such as predicting natural disasters like floods and earthquakes, protecting endangered species on the continent, improving food security, and improving maternal health outcomes.
The paper notes that a preliminary assessment by Access Partnership estimates that AI applications could support up to USD136 billion worth of economic benefits for just four sub-Saharan countries (Ghana, Kenya, Nigeria and South Africa) by 2030, based on current growth rates and scope of analysis. ‘To put this in perspective, this figure is higher than Kenya’s current GDP and represents 12.7% of the 2022 GDP for these four economies,’ it says.
Making the move to high-density
AI is already transforming people’s everyday lives, with local use of technologies such as ChatGPT, virtual assistants, navigation apps and chatbots on the upswing. And, just as it is transforming every industry, it is also beginning to fundamentally reshape data centre infrastructure, driving significant changes in how high-performance computing (HPC) is powered and cooled.
To put this into perspective, consider that a typical IT rack used to run workloads of five to 10 kilowatts (kW), and racks running loads higher than 20 kW were considered high-density. AI chips, however, can require around five times as much power and five times as much cooling capacity[1] in the same space as a traditional server. So, we are now seeing densities of 40 kW per rack, and in some instances more than 100 kW.
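As a rough illustration of that scaling, the short sketch below simply applies the five-times multiplier cited above to the traditional 5–10 kW rack range; the kW figures and multiplier come from this article, while the calculation itself is only an indicative back-of-the-envelope check, not a sizing tool.

```python
# Indicative rack-density arithmetic using the figures cited in the article.
TRADITIONAL_RACK_KW = (5, 10)    # typical legacy IT rack load
HIGH_DENSITY_THRESHOLD_KW = 20   # the old benchmark for "high-density"
AI_POWER_MULTIPLIER = 5          # ~5x the power (and cooling) of a traditional server

ai_rack_kw = tuple(kw * AI_POWER_MULTIPLIER for kw in TRADITIONAL_RACK_KW)

print(f"Traditional rack: {TRADITIONAL_RACK_KW[0]}-{TRADITIONAL_RACK_KW[1]} kW")
print(f"Same footprint with AI chips: {ai_rack_kw[0]}-{ai_rack_kw[1]} kW")
# Output: 25-50 kW, broadly in line with the 40 kW racks now common in AI
# deployments, with denser GPU configurations pushing past 100 kW.
```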
This will require extensive capacity increases across the entire power train, from the grid to the chips in each rack. It also means that, because traditional cooling methods cannot handle the heat generated by the GPUs running AI calculations, introducing liquid-cooling technologies into the data centre white space, and eventually the enterprise server room, will be a requirement for most deployments.