How AI revolutionizes data center power and cooling

Vlad Galabov, Omdia Director of Digital Infrastructure, spoke during Data Center World 2025’s Analyst Day. Image: Used with permission from Data Center World

AI workloads will run on more than 50% of global data center capacity and account for more than 70% of the revenue opportunity, according to Omdia’s Director of Digital Infrastructure Vlad Galabov, who said that massive AI-driven productivity gains across industries will fuel this growth. In a speech during Data Center World 2025’s Analyst Day, Galabov made a number of other predictions about the industry:

  • Nvidia’s and the hyperscalers’ 1 MW-per-rack ambitions will probably not be realized for another couple of years, until engineering innovation catches up with the power and cooling requirements.
  • By 2030, over 35 GW of data center power is expected to be self-generated, making off-grid and behind-the-meter solutions no longer optional for those who want to build new data centers, as many utilities are struggling to supply the necessary power.
  • Annual data center capital expenditure (CAPEX) is expected to reach $1 trillion globally by 2030, up from less than $500 billion at the end of 2024.
  • The strongest area of CAPEX growth is physical infrastructure, such as power and cooling, where spending is increasing at a rate of 18% per year.
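As a rough sanity check, these CAPEX figures can be compounded in a few lines of Python. Note one assumption: only the 18% physical-infrastructure rate is quoted directly; the roughly 12.2% total-CAPEX growth rate below is inferred from the stated doubling from about $500 billion to $1 trillion over six years.

```python
def project(base: float, annual_rate: float, years: int) -> float:
    """Compound a base value at a fixed annual growth rate."""
    return base * (1 + annual_rate) ** years

# Total CAPEX: ~$500B at end of 2024 reaching ~$1T by 2030 implies
# roughly 12.2% compound annual growth (2 ** (1/6) - 1 ≈ 0.122).
total_2030 = project(500, 0.122, 6)    # ≈ $998B, consistent with ~$1T

# Physical infrastructure (power and cooling) grows faster, at 18%/yr,
# so its share of total CAPEX rises over the same period.
infra_multiple = project(1.0, 0.18, 6)  # ≈ 2.7x its 2024 level by 2030
```

Because 18% outpaces the roughly 12% total growth, power and cooling would claim a steadily larger slice of each CAPEX dollar through 2030.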

“As compute densities and rack densities climb, the investment in physical infrastructure accelerates,” said Galabov. “We expect a consolidation of server counts, where a small number of scaled-up systems is preferred over a scale-out server strategy. The cost per byte and per compute cycle also falls.”

Data Center Power Capacity Explodes

Galabov highlighted the explosion AI has caused in data center power needs. When the AI wave began at the end of 2023, installed power capacity in data centers around the world was less than 150 GW. But with 120 kW rack designs on the immediate horizon and 600 kW racks only about two years away, he predicts nearly 400 GW of cumulative data center capacity by 2030. With new data center capacity additions approaching 50 GW per year by the end of the decade, it won’t be long before half a terawatt becomes the norm.
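These capacity numbers roughly reconcile under a simple assumed ramp of annual additions. In the sketch below, the per-year addition figures are illustrative, not from Omdia; only the ~150 GW starting point, the ~50 GW/yr end rate, and the ~400 GW total come from the talk.

```python
# Hypothetical linear ramp of annual capacity additions, chosen so the
# article's figures reconcile: ~150 GW installed at end of 2023,
# additions approaching 50 GW/yr by 2030, ~400 GW cumulative by 2030.
installed = 150.0                          # GW, end of 2023
additions = [22, 27, 31, 36, 41, 45, 50]   # GW/yr, 2024-2030 (assumed)
for gw in additions:
    installed += gw
print(round(installed))  # prints 402
```

Any ramp averaging about 36 GW/yr over 2024–2030 lands near the 400 GW figure, so the trajectory is plausible even if the exact yearly pace differs.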

But not everyone will survive the wild west of the AI data center market. Many startup DC campus developments and neoclouds will not build a long-term business model, as some lack the expertise and business acumen to survive. Don’t rely on a single provider, Galabov warned, as some are likely to fail.


AI Drives Liquid Cooling Innovation

Omdia principal analyst Shen Wang laid out the cooling implications of the AI wave. Air cooling hit its limit around 2022, he said. The consensus is that it can deliver up to 80 W/cm2, with a few suppliers claiming they can take air cooling higher.

Beyond that threshold, single-phase direct-to-chip (DTC) cooling takes over: water or another liquid is brought to cold plates that sit directly on top of the computer chips to remove heat. Single-phase DTC can go as high as 140 W/cm2.

“Single-phase DTC is the best way to cool chips right now,” Wang said. “By 2026, the threshold for single-phase DTC will be exceeded by the latest racks.” That’s when two-phase liquid cooling should start to see a ramp-up in adoption. Two-phase cooling delivers liquid to the chip at higher temperatures, causing it to turn to vapor as part of the cooling process, thereby increasing cooling efficiency.
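The heat-flux thresholds Wang cites suggest a simple decision rule, sketched below. This is illustrative only: the function name and hard cutoffs are ours, and real deployments also weigh cost, facility water availability, and vendor support.

```python
def cooling_method(heat_flux_w_per_cm2: float) -> str:
    """Pick a cooling approach from chip heat flux, using the
    thresholds cited by Omdia: air up to ~80 W/cm2, single-phase
    direct-to-chip up to ~140 W/cm2, two-phase beyond that."""
    if heat_flux_w_per_cm2 <= 80:
        return "air"
    if heat_flux_w_per_cm2 <= 140:
        return "single-phase DTC"
    return "two-phase liquid"

print(cooling_method(60))   # air
print(cooling_method(120))  # single-phase DTC
print(cooling_method(180))  # two-phase liquid
```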

“Advanced chips at 600 watts and above see the heaviest adoption of liquid cooling,” Wang said. “By 2028, 80% of chips in this category will use liquid cooling, up from 50% today.”
