NVIDIA GTC: How AI factories scale faster with repeatable, factory-assembled blocks for power, cooling, and infrastructure.
AI has created a step change in how digital infrastructure must be designed, manufactured, and deployed.
AI factories have now entered a new era in which extreme densification and the ability to scale to gigawatt capacity are key requirements. This forces a complete rethink across every layer of the data center, from the shell to the rack to the fluids that keep GPUs running.
AI facilities are increasingly assembled from repeatable, standardized modular building blocks: prefabricated shells and prefabricated rows and aisles. This is the shift from bespoke, onsite construction to truly industrialized infrastructure at a factory level. We're driving new design architectures, new scale, and new densification, all simultaneously.
The move to higher-voltage DC architectures, which reduce conversion losses and support extreme power densities, is fast approaching. Adaptive, resilient liquid cooling loops will act as the facility's circulatory system. At the same time, the push toward onsite energy autonomy (battery energy storage systems, microgrids, and alternative generation) is accelerating.
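The case for higher-voltage DC comes down to arithmetic: every conversion stage in the power chain multiplies in another loss. A back-of-envelope sketch makes the point; the stage counts and per-stage efficiencies below are illustrative assumptions, not measured figures for any real product.

```python
# Back-of-envelope sketch of why fewer conversion stages matter.
# All efficiency figures here are illustrative assumptions.

def end_to_end_efficiency(stage_efficiencies):
    """Multiply per-stage efficiencies to get end-to-end efficiency."""
    eff = 1.0
    for e in stage_efficiencies:
        eff *= e
    return eff

# Conventional AC distribution: more conversion steps in the chain
# (e.g., UPS, transformer, server PSU, voltage regulator).
ac_chain = [0.97, 0.96, 0.95, 0.97]

# Higher-voltage DC distribution: fewer steps between source and silicon.
dc_chain = [0.98, 0.97]

print(f"AC chain: {end_to_end_efficiency(ac_chain):.3f}")  # ~0.858
print(f"DC chain: {end_to_end_efficiency(dc_chain):.3f}")  # ~0.951
```

At gigawatt scale, even a few points of end-to-end efficiency translate into tens of megawatts that never have to be generated, distributed, or rejected as heat.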
Together, these elements demand engineered building blocks and system designs that hold up even as the technology keeps changing, growing, scaling, and evolving.
In AI factories, coolant flow is the lifeblood of the facility. A disruption in flow doesn't cause a minor temperature spike; it causes immediate GPU de-rating or shutdown. Performance depends on two managed variables: flow, having enough volume at the right rate, and balance, ensuring every rack, every row, and every device position receives precisely what it needs, when it needs it. The difference between a functional loop and a high-performance AI engine comes down to these two variables.
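The two variables above can be sketched as a simple loop health check. This is an illustrative model, not a Vertiv implementation; all names, readings, and thresholds (minimum flow, allowed imbalance) are hypothetical.

```python
# Illustrative sketch of the two managed cooling variables: flow
# (enough volume per rack) and balance (even distribution across racks).
# All identifiers, thresholds, and readings are hypothetical.

from dataclasses import dataclass
from statistics import mean

@dataclass
class RackReading:
    rack_id: str
    flow_lpm: float  # measured coolant flow, liters per minute

def check_loop(readings: list[RackReading],
               min_flow_lpm: float = 40.0,
               max_imbalance: float = 0.10) -> dict:
    """Evaluate flow (is every rack above its minimum?) and balance
    (is flow evenly distributed around the loop average?)."""
    flows = [r.flow_lpm for r in readings]
    avg = mean(flows)
    # Flow: any under-supplied rack risks immediate GPU de-rating.
    starved = [r.rack_id for r in readings if r.flow_lpm < min_flow_lpm]
    # Balance: largest fractional deviation from the loop average.
    imbalance = max(abs(f - avg) / avg for f in flows)
    return {
        "starved_racks": starved,
        "imbalance": round(imbalance, 3),
        "healthy": not starved and imbalance <= max_imbalance,
    }

readings = [
    RackReading("rack-01", 45.0),
    RackReading("rack-02", 44.0),
    RackReading("rack-03", 38.0),  # under-supplied: flags the loop
]
print(check_loop(readings))
```

In a real facility these checks run continuously against live telemetry and drive valve and pump control rather than a printed report, but the two questions being asked, enough flow and even balance, are the same.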
Supporting future AI workloads will require new architectures, new design disciplines, and new approaches to operating models. The technologies enabling this shift from DC power to advanced liquids to distributed energy are already emerging. The organizations that succeed will be those that industrialize, standardize, modularize, and replicate to take AI factories to the next level.
As we described in our recent Vertiv Frontiers report, dynamic digital twin environments, powered by Universal Scene Description (OpenUSD) and NVIDIA Omniverse™ libraries, are also key to delivering this change. It's easy to dismiss a digital twin as just a visual; in reality, a digital twin is governed engineering and operational truth. We view our pivotal contribution as industrializing the infrastructure layer. When we think about how we deliver that, we think about NVIDIA Omniverse, digital twins, and parametric, configuration-controlled building blocks: think powertrain and thermal chain, represented as digital assets that can be configured, validated, and visualized. That means fewer unknowns, fewer redesign loops, and a faster path from concept to build to commission.
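To make "parametric, configuration-controlled building block" concrete, here is a minimal sketch of a thermal-chain block as a digital asset that can be configured and validated before anything is built. This is not Vertiv's or NVIDIA's actual tooling; every field name and engineering limit below is a hypothetical placeholder.

```python
# Illustrative sketch of a configuration-controlled building block as a
# parametric digital asset. Field names and limits are hypothetical.

from dataclasses import dataclass

@dataclass(frozen=True)  # frozen: the configuration is controlled, not mutated
class ThermalChainBlock:
    block_id: str
    racks: int
    kw_per_rack: float
    cdu_capacity_kw: float  # coolant distribution unit capacity

    def validate(self) -> list[str]:
        """Return a list of rule violations; empty means the
        configuration is valid and can move toward build."""
        errors = []
        demand = self.racks * self.kw_per_rack
        if demand > self.cdu_capacity_kw:
            errors.append(
                f"thermal demand {demand:.0f} kW exceeds CDU capacity "
                f"{self.cdu_capacity_kw:.0f} kW")
        if not 1 <= self.racks <= 24:
            errors.append("racks per block must be between 1 and 24")
        return errors

block = ThermalChainBlock("row-A1", racks=16, kw_per_rack=120.0,
                          cdu_capacity_kw=2000.0)
print(block.validate())  # demand 1920 kW fits within 2000 kW: []
```

Validating configurations as data, before steel is cut, is what turns the digital twin from a picture into engineering truth: fewer unknowns and fewer redesign loops follow directly.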
Overall, the most important shift is that we are moving from an era of fast builds to an era of repeatable engineering: AI factories designed and built as systems. Getting this right is the key to scaling AI efficiently, responsibly, and at the speed the market now demands.

Scott Armul will be presenting ‘Scaling the AI Factory with Full-Stack, Digitally Orchestrated Infrastructure’ at NVIDIA GTC 2026 on Tuesday, March 17, 2026. Join Vertiv at NVIDIA GTC 2026: Building AI-ready infrastructure.
Learn how AI factories are moving beyond bespoke builds toward repeatable, system‑engineered infrastructure, where standardized building blocks, advanced liquid cooling, higher‑voltage architectures, and digital twins as governed engineering truth come together to accelerate scale with confidence. Watch the video.