


“Historically, infrastructure was considered an expense, a cost center. Now it's a value creator and driver of intelligence. Our job is to give organizations the scalable, efficient systems they need to support AI workloads.”

Greg Stover, Vertiv Global Director of High-Tech Development

On Day 3 of DCD’s AI Week, Kat Sullivan, Head of Channels Compute at DCD, sat down with Ali Heydari, Director of Data Center Cooling Infrastructure at NVIDIA, and Greg Stover, Global Director of High-Tech Development at Vertiv, to discuss what “AI scale” really means, why mechanical, electrical, and plumbing (MEP) systems are now strategic, and how digital twins and reference designs are accelerating deployments.

What is “AI scale” and how do we define it?

Ali Heydari: AI scale is when we move beyond traditional IT workloads—email, video, social media—and build mission-focused AI factories producing intelligence at scale. Historically, data centers topped out at a few hundred megawatts. Now, we’re talking gigawatts.

Rack densities that held at 10–20 kW for decades are suddenly 120, 200, even 600 kW and beyond. This isn’t about density for density’s sake—it’s about optimizing for performance per watt and tokens per joule, and enabling close GPU-to-GPU communication and inference at unprecedented speeds.
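The “tokens per joule” figure of merit mentioned above is simply throughput divided by power draw, since a watt is a joule per second. A minimal sketch, with entirely hypothetical rack numbers (the 120 kW / 600,000 tokens-per-second values below are illustrative, not from the discussion):

```python
def tokens_per_joule(tokens_per_second: float, power_watts: float) -> float:
    """Inference energy efficiency: tokens generated per joule consumed.

    (tokens/s) / (J/s) leaves tokens per joule, so the ratio of
    throughput to power draw is the metric directly.
    """
    return tokens_per_second / power_watts

# Hypothetical rack: 120 kW draw serving 600,000 tokens/s.
print(tokens_per_joule(600_000, 120_000))  # 5.0 tokens per joule
```

Raising density only pays off when this ratio improves; a denser rack that produces fewer tokens per joule is a step backward.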

Greg Stover: Scale will come in two forms: giant AI factories, and also smaller-scale deployments for inference and edge AI. We need reference designs that scale from a single rack to gigawatts, with modular, repeatable building blocks.

The NVIDIA Partner Network (NPN) is about enablement and scalability. It brings together the full OT and IT stack to create reference architectures that deliver proven performance, reduce risk, and accelerate time-to-deployment.

How has data center infrastructure evolved in the AI era?

Greg Stover: Ali and I had a very exciting day yesterday. We were in a meeting with industry leaders, and the CEO of the most valuable company in the world—Jensen Huang—came in and told us something critical: MEP matters. Historically, infrastructure was considered an expense, a cost center. Now it’s a value creator and driver of intelligence. It was exciting to hear Jensen say that, because it really validates how critical our work is for enabling the AI revolution.

Ali Heydari: Absolutely. It’s not just a data center anymore; it’s an AI factory. Just like any factory, it produces tokens (intelligence). What Jensen highlighted was the cost: if 20% of the power budget is going to MEP inefficiencies, that’s power not being used for intelligence generation. We are scaling to levels where data centers, once measured in hundreds of megawatts, are now tens of gigawatts. Efficiency becomes mission-critical. This is why we’re working with partners like Vertiv to rethink every aspect of cooling and power.
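The “20% of the power budget” figure maps directly onto Power Usage Effectiveness (PUE), the ratio of total facility power to IT power. A quick sketch of that arithmetic, using a hypothetical 100 MW facility purely for illustration:

```python
def pue(total_facility_kw: float, it_kw: float) -> float:
    """Power Usage Effectiveness: total facility power over IT power.

    A PUE of 1.0 would mean every watt entering the facility
    reaches the IT load; anything above 1.0 is MEP overhead.
    """
    return total_facility_kw / it_kw

# If 20% of the power budget goes to MEP overhead (cooling, power
# conversion, etc.), only 80% reaches the IT load:
total_kw = 100_000           # hypothetical 100 MW facility
it_kw = total_kw * 0.80      # 80 MW left for compute
print(round(pue(total_kw, it_kw), 3))  # 1.25
```

At gigawatt scale the same 20% overhead becomes hundreds of megawatts that produce no tokens, which is why the speakers treat efficiency as mission-critical.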

How are digital twins used in modern data center design?

Ali Heydari: Digital twins are critical for this journey. Historically, designing a large data center could take six to eight months just to produce a bill of materials. With digital twins, we can use AI and simulation to design optimized systems in days, even hours. These are not just pretty 3D models—they’re physics-based, high-fidelity simulations that let us optimize everything from chip to atmosphere in real time. This is essential for speed, accuracy, and energy efficiency.

Greg Stover: Digital twins let us eliminate guesswork and compress time-to-market. They allow us to model full systems—power and cooling chains together—so we can optimize as a single integrated solution.

Can you share examples of new approaches coming from these collaborations?

Ali Heydari: One is a Department of Energy–backed project we’re working on with Vertiv, universities, and industry partners. It’s a containerized megawatt-scale data center using two-phase refrigerant-based cooling and advanced CDU designs. The target PUE is as low as 1.05. That level of efficiency is only achievable when you rethink the full thermal chain and validate it through simulation.
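To put the cited 1.05 target in context, the MEP share of total facility power implied by a given PUE is 1 − 1/PUE. A small sketch comparing the target against the 20%-overhead scenario raised earlier (a PUE of 1.25):

```python
def mep_overhead_fraction(pue: float) -> float:
    """Fraction of total facility power consumed by non-IT (MEP) loads.

    Derived from PUE = total / IT: the IT share is 1/PUE, so the
    overhead share is 1 - 1/PUE.
    """
    return 1.0 - 1.0 / pue

# 1.25 -> 20% overhead; the 1.05 target -> under 5% overhead.
for p in (1.25, 1.05):
    print(p, round(mep_overhead_fraction(p), 3))
```

Cutting overhead from roughly 20% to under 5% of facility power is the scale of improvement the two-phase cooling project is targeting.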

Greg Stover: That’s where partnerships matter. No single company can build AI-scale infrastructure in isolation. What NVIDIA is doing with the NPN is powerful because it brings OT and IT together—chip designers, power engineers, cooling specialists, operators. That collaboration allows us to develop reference designs that aren’t theoretical, but validated and deployable.

What practical guidance would you give to operators facing AI-driven demand?

Ali Heydari: Start with the workload, not legacy assumptions. Plan for higher density than you think you’ll need. And design with liquid cooling in mind—it’s no longer optional for racks over 100 kW.

Greg Stover: Treat infrastructure as strategy. Use reference designs instead of custom builds. And remember that efficiency is value. Every kilowatt saved in MEP flows back to the GPUs, whose output is what actually matters.

What’s next for AI-scale infrastructure?

Greg Stover: MEP matters. It’s no longer an expense—it’s a strategic enabler of AI. To succeed, we need collaboration, alignment, and reference designs that make it easy for organizations to adopt AI infrastructure, whether upgrading legacy facilities or building greenfield AI factories. We have the tools—single-phase, two-phase, air, hybrid—and the partnerships. The goal is to create solutions that maximize ROI, minimize waste, and scale from one cabinet to gigawatts.

Watch the complete discussion: DCD AI Week – Redefining density for AI scale
