DCD AI Week: Energy independence strategies for AI beyond the grid

"Utility grids are designed for steady baseload conditions only. We have to protect the load, the utility, and each other. Because as sites get bigger, they can destabilize the grid."

Peter Panfil, Vertiv VP for Global Power

On the second day of DCD’s AI Week, Peter Panfil, Vice President of Global Power at Vertiv, shared his insights on how AI-driven GPU growth reshapes energy management in data centers. From powertrain challenges and dynamic workloads to Bring Your Own Power (BYOP) strategies and future-ready GPU deployments, he explained how the industry is evolving to deliver energy holistically, from the utility grid to the chip.

Energy independence is a big topic right now, especially with all the AI-driven growth and talk about density. How are AI-driven workloads changing how we're thinking about power?

Peter Panfil: With AI, we start the data center design at the chip. Before, we started with the data center's available power, such as the utility transformer and generator plant, and worked downstream to size the UPS, the distribution path, and stripe it out to the racks.

AI is a predefined compute set, so it defines the modular "chunk." That chunk drives the rest of the design. For example, liquid cooling can never go down, so all designs start with thermal. The way you deliver cooling to the chip also defines how you configure that GPU deployment. That configuration tells you what your end state will be and how you need to provide power to it.

Normalizing that GPU or pod configuration gives you speed and prevents you from having many bespoke designs. Once you know the pod size, you figure out how to get the power to that pod at the needed levels.
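
To make that "start at the chip" flow concrete, here is a minimal sketch of working upstream from a normalized pod to a power requirement. The pod size, per-GPU draw, cooling overhead, and UPS headroom below are all illustrative assumptions, not Vertiv reference figures.

```python
# Hypothetical "start at the chip" sizing sketch. All figures are
# illustrative assumptions, not Vertiv reference numbers.

GPU_POWER_KW = 1.0       # assumed per-GPU draw, including conversion losses
GPUS_PER_POD = 1024      # assumed normalized pod ("chunk") size
COOLING_OVERHEAD = 0.15  # assumed liquid-cooling plant overhead (fraction)
UPS_HEADROOM = 0.20      # assumed margin above steady-state load

def pod_power_requirements(gpus=GPUS_PER_POD, kw_per_gpu=GPU_POWER_KW):
    """Work upstream from the chip: pod IT load -> cooling -> UPS sizing."""
    it_load_kw = gpus * kw_per_gpu
    cooling_kw = it_load_kw * COOLING_OVERHEAD
    ups_kw = (it_load_kw + cooling_kw) * (1 + UPS_HEADROOM)
    return {"it_load_kw": it_load_kw, "cooling_kw": cooling_kw, "ups_kw": ups_kw}

if __name__ == "__main__":
    for stage, kw in pod_power_requirements().items():
        print(f"{stage}: {kw / 1000:.2f} MW")
```

Because every pod in the sketch is identical, sizing one pod sizes them all, which is the speed advantage of normalizing the configuration rather than producing bespoke designs.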

AI factories are driving GPUs to exascale, but stepping back, it’s also about power challenges and what utilities are facing. What are the power providers concerned about?

Energy providers are concerned about the size, scale, always-on power, and dynamic growth of AI data centers. GPU deployments create highly dynamic power draw patterns that start at idle, spike, then settle, with thousands of GPUs running in parallel. Utility grids are designed for steady baseload conditions only. We have to protect the load, the utility, and each other, because as sites get bigger, they can destabilize the grid.
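
A rough way to see why this pattern worries utilities: model each GPU's training step as idle, spike, then settle, and sum thousands of near-synchronized profiles. The per-GPU figures and fleet size below are illustrative assumptions, not measured values.

```python
import random

# Hypothetical sketch of the idle -> spike -> settle pattern described
# above, aggregated across GPUs stepping in near-lockstep.
# All numbers are illustrative assumptions.

N_GPUS = 4096
IDLE_KW, PEAK_KW, SETTLED_KW = 0.1, 1.2, 0.9  # assumed per-GPU draw

def gpu_profile(jitter=0.02):
    """One synchronized step: 5 ticks idle, 2 at peak, 13 settled."""
    phases = [IDLE_KW] * 5 + [PEAK_KW] * 2 + [SETTLED_KW] * 13
    return [p * (1 + random.uniform(-jitter, jitter)) for p in phases]

# Near-lockstep steps mean per-GPU swings add up instead of averaging out.
site = [sum(step) for step in zip(*(gpu_profile() for _ in range(N_GPUS)))]
peak_mw, avg_mw = max(site) / 1000, sum(site) / len(site) / 1000
print(f"site peak: {peak_mw:.1f} MW, average: {avg_mw:.1f} MW, "
      f"peak-to-average: {peak_mw / avg_mw:.2f}")
```

With these assumed numbers the site swings by megawatts within a single step, which is exactly the dynamic behavior a baseload-oriented grid is not designed to absorb.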

The ecosystem—utilities, data center operators, and equipment makers—is working together to model entire sites and share data so utilities know what's coming. Because getting a new grid connection can take 36 to 48 months, operators are adopting "bring your own power" strategies (gas turbines, natural gas generators, even small modular reactors) to bridge the gap or to keep permanent control. Hydrogen-ready generators let customers deploy when and where they need, control how much power they draw, and shift to backup or add capacity when the utility comes online.
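
As a back-of-the-envelope view of that bridging math, assume a site ramps toward its target load while waiting on a grid connection, with on-site generation covering whatever the grid can't yet serve. The ramp rate, target load, and grid-ready date below are hypothetical.

```python
# Hypothetical "bring your own power" bridge plan. All dates and
# capacities are illustrative assumptions.

SITE_LOAD_MW = 100      # assumed target site load
RAMP_MW_PER_MONTH = 5   # assumed GPU deployment ramp
GRID_READY_MONTH = 42   # grid connections can take 36 to 48 months

def byop_capacity_needed(month):
    """On-site capacity must cover whatever load the grid can't yet serve."""
    load = min(month * RAMP_MW_PER_MONTH, SITE_LOAD_MW)
    grid = SITE_LOAD_MW if month >= GRID_READY_MONTH else 0
    return max(load - grid, 0)

peak_byop = max(byop_capacity_needed(m) for m in range(49))
print(f"peak on-site (BYOP) capacity to procure: {peak_byop} MW")
# After the grid-ready month, the same plant can shift to backup
# or provide added capacity, as described above.
```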

Are there any myths that still persist around GPU deployments?

Early on, people thought a GPU was just a faster CPU. From a compute perspective, a GPU might have 20 to 50 times the power of a CPU, but people assumed they could deploy GPUs like CPUs or spread racks around to manage thermal and power. That doesn't work, because GPUs are deployed as compute pods that act as huge parallel computers and require concentrated liquid cooling, with valving, piping, and cooling units nearby. One early enterprise GPU deployment adapted its CPU design to flexibly support liquid-cooled GPUs and CPUs.

Let's talk about operations of vastly different sizes. How do those myths differ between enterprises and hyperscalers?

Enterprises often need flexibility between CPU and GPU workloads. Hyperscalers deploy GPUs at much larger scales—1.5 to 100 megawatt AI factory chunks—where they commit fully to matching power and cooling to GPU needs. While enterprises handle mixed workloads, hyperscalers plan for dedicated GPU blocks. We are now focused on mapping GPU families and making future-ready and fungible designs for both CPUs and GPUs. That's the reality we're working with today.

AI and GPU growth is creating new challenges across the full powertrain. How will data centers evolve to manage energy holistically from the grid to the GPUs?

We see customers defining normalized blocks that can be built in a factory and deployed at speed and scale, supported by a full range of Bring Your Own Power technologies from day one. Even as we push these deployments to scale, being a good grid citizen remains essential. Vertiv works to orchestrate how BYOP resources integrate with utility power to deliver an end-to-end solution from the chip to the source, avoiding stranded capacity. Think of a GPU as a high-performance fighter jet; it needs an aircraft carrier to land, and the critical infrastructure industry has to provide that landing place.

Watch the full conversation: Energy independence strategies for AI beyond the grid - DCD
