
Artificial intelligence (AI) is here, and it is here to stay. “Every industry will become a technology industry,” according to NVIDIA founder and CEO, Jensen Huang. The use cases for AI are virtually limitless, from breakthroughs in medicine to high-accuracy fraud prevention. AI is already transforming our lives just as it is transforming every single industry. It is also beginning to fundamentally transform data center infrastructure.

AI workloads are driving significant changes in how we power and cool the hardware that processes data as part of high-performance computing (HPC). A typical IT rack used to run workloads of 5-10 kilowatts (kW), and racks running loads above 20 kW were considered high density – a rare sight outside of very specific applications with narrow reach. IT is now being accelerated with GPUs to support the computing needs of AI models, and these AI chips can require about five times as much power and five times as much cooling capacity¹ in the same space as a traditional server. Mark Zuckerberg announced that by the end of 2024, Meta will spend billions to deploy 350,000 NVIDIA H100 GPUs. Rack densities of 40 kW per rack are now at the lower end of what is required for AI deployments, and densities surpassing 100 kW per rack are expected to become commonplace, at large scale, in the near future.
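The roughly fivefold gap described above can be sketched as simple arithmetic. The server counts and per-server power figures below are illustrative assumptions (not Vertiv data), loosely mirroring the footnoted rack-level comparison:

```python
# Illustrative sketch: estimate rack power density from per-server draw
# and servers per rack. The ~10.2 kW per accelerated server and the
# packing counts are assumptions for illustration, not vendor figures.
def rack_density_kw(servers_per_rack: float, kw_per_server: float) -> float:
    """Total rack IT load in kW."""
    return servers_per_rack * kw_per_server

legacy = rack_density_kw(20, 0.5)   # 20 traditional 1U servers at ~0.5 kW each
ai = rack_density_kw(5, 10.2)       # 5 accelerated servers at ~10.2 kW each
print(f"Legacy rack: {legacy:.1f} kW, AI rack: {ai:.1f} kW, ratio: {ai/legacy:.1f}x")
```

With these placeholder numbers, the accelerated rack lands at roughly 51 kW – about five times the legacy rack's load, in line with the densities discussed above.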

This will require extensive capacity increases across the entire power train, from the grid to the chips in each rack. Introducing liquid-cooling technologies into the data center white space, and eventually into enterprise server rooms, will be a requirement for most deployments, as traditional cooling methods cannot handle the heat generated by GPUs running AI calculations. The investments needed to upgrade the infrastructure that powers and cools AI hardware are substantial, and navigating these new design challenges is critical.

The Transition to High-Density

The transition to accelerated computing will not happen overnight. Data center and server room designers must look for ways to make power and cooling infrastructure future-ready, with room to accommodate workload growth. Getting enough power to each rack requires upgrades from the grid to the rack; in the white space specifically, this likely means high-amperage busway and high-density rack PDUs. To reject the massive amount of heat generated by hardware running AI workloads, two liquid cooling technologies are emerging as primary options:

  1. Direct-to-chip liquid cooling: Cold plates sit atop the heat-generating components (usually chips such as CPUs and GPUs) to draw off heat. Pumped single-phase or two-phase fluid carries heat from the cold plate out of the data center, exchanging heat but not fluid with the chip. This can remove about 70-75% of the heat generated by equipment in the rack, leaving 25-30% that air-cooling systems must remove.
  2. Rear-door heat exchangers: Passive or active heat exchangers replace the rear door of the IT rack with heat-exchanging coils through which fluid absorbs the heat produced in the rack. These systems are often combined with other cooling systems, either as a strategy to maintain room neutrality or as a transitional design that starts the journey into liquid cooling.

While direct-to-chip liquid cooling offers significantly higher-density cooling capacity than air, it is important to note that there is still excess heat the cold plates cannot capture. This heat will be rejected into the data room unless it is contained and removed through other means, such as rear-door heat exchangers or room air cooling. For more detail on liquid cooling solutions for data centers, check out our white paper.
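The 70-75% capture rate described above translates directly into a residual air-cooling load that the room design must still absorb. A minimal sketch, using the capture fractions cited above (the rack loads themselves are hypothetical):

```python
# Split a rack's IT heat load between the liquid loop and residual air
# cooling, using the 70-75% direct-to-chip capture range cited above.
def heat_split(rack_kw: float, capture_fraction: float = 0.72) -> tuple[float, float]:
    """Return (liquid_kw, residual_air_kw) for a given rack IT load."""
    liquid = rack_kw * capture_fraction
    return liquid, rack_kw - liquid

liquid, air = heat_split(100, 0.75)  # hypothetical 100 kW rack, 75% capture
print(f"Liquid loop removes {liquid:.0f} kW; air systems still reject {air:.0f} kW")
```

Even at the top of the capture range, a 100 kW rack leaves 25 kW for air systems – more than an entire legacy rack's worth of heat.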

High-Density Designs for Retrofits and New Builds

To simplify high-density infrastructure design and deployment, Vertiv™ has introduced Vertiv™ 360AI, which includes a complete portfolio of power, cooling, and service solutions that solve the complex challenges arising from the AI revolution. The platform includes a wide range of comprehensive designs supporting up to 132 kW per rack for a diverse set of use cases, from pilot testing and Edge inferencing to an AI Factory.

 

Designs for new builds

Rack density  Rack count  GPU count  Design ID (NA / EMEA / ASIA)  Cooling technology
20kW 18 248 RD002 RD002E RD002A Air
40kW 10 248 RD003 RD003E RD003A Air
40kW 10 248 RD004 RD004E RD004A Air
73kW 88 2304 RD006 RD006E RD006A Liquid + Air
73kW 110 2880 RD007 RD007E RD007A Liquid + Air
132kW 36 1152 RD014 RD014E RD014A Liquid + Air
132kW 54 1728 RD015 RD015E RD015A Liquid + Air
132kW 72 2304 RD016 RD016E RD016A Liquid + Air
300kW - - RD300 RD300E RD300A Liquid
500kW - - RD500 RD500E RD500A Liquid

Designs optimized for retrofits

Rack density  Rack count  GPU count  Design ID (NA / EMEA / ASIA)  Cooling technology
40kW 4 128 4X160R 4X160RE 4X160RA Air
70kW 1 64 1L70R 1L70RE 1L70RA Liquid + Air
100kW 1 88 1L100R 1L100R 1L100RA Liquid + Air
100kW 4 368 4L400R 4L400RE 4L400RA Liquid + Air
100kW 4 368 4XL400 4XL400 4XL400A Liquid + Air
100kW 5 460 5L500 5L500 5L500A Liquid + Air
100kW 12 1104 12XL1200 12XL1200 12XL1200A Liquid + Air
100kW 14 1288 14L1400 14L1400 14L1400A Liquid + Air

 

These designs offer multiple paths for system integrators, colocation providers, cloud service providers, and enterprise users to achieve the data center of the future, now. Each facility may have nuances in rack count and rack density dictated by IT equipment selection. As such, this collection of designs provides an intuitive way to narrow down to a base design and then tailor it to the deployment's exact needs.

When retrofitting or repurposing existing environments for AI, our optimized designs help minimize disruption to existing workloads by leveraging available cooling infrastructure and heat rejection where possible. For example, we can integrate direct-to-chip liquid cooling with a rear-door heat exchanger to maintain a room-neutral cooling solution. In this case, the rear-door heat exchanger prevents excess heat from escaping into the room. For an air-cooled facility looking to add liquid cooling equipment without any modifications to the site itself, we have liquid-to-air design options available. This same strategy can be deployed in a single rack, in a row, or at scale in a large HPC deployment. For multi-rack designs, we have also included high amperage busway and high-density rack PDUs to distribute power to each rack.
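As a rough feasibility check for the room-neutral combination described above, one can compare the heat the cold plates miss against the rear-door unit's capacity. The capacity figures below are hypothetical placeholders, not Vertiv product specifications:

```python
# Check whether a rear-door heat exchanger (RDHx) can absorb the residual
# heat that direct-to-chip (DTC) cold plates leave behind, keeping the
# rack room-neutral. All capacity figures are illustrative assumptions.
def is_room_neutral(rack_kw: float, dtc_fraction: float, rdhx_capacity_kw: float) -> bool:
    """True if the rear door can reject everything the cold plates miss."""
    residual_kw = rack_kw * (1 - dtc_fraction)
    return residual_kw <= rdhx_capacity_kw

print(is_room_neutral(100, 0.70, 35))  # ~30 kW residual fits a 35 kW door
print(is_room_neutral(132, 0.70, 35))  # ~39.6 kW residual exceeds the door
```

A check like this makes the retrofit trade-off explicit: as rack density climbs, either the direct-to-chip capture fraction or the rear-door capacity must rise to preserve room neutrality.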

These options are compatible with a range of different heat rejection options that can be paired with liquid cooling. This establishes a clean and cost-effective transition path to high-density liquid cooling without disrupting other workloads in the data room. Check out our AI Data Room Solutions to learn more.

While many facilities were not designed for high-density systems, Vertiv has extensive experience helping customers develop deployment plans that transition smoothly to high density for AI and HPC.

¹ Management estimates: comparison of power consumption and heat output at the rack level for 5 NVIDIA DGX H100 servers vs. 21 Dell PowerStore 500T and 9200T servers in a standard 42U rack, based on manufacturer spec sheets.
