Two data center leaders share how modular design, flexible power planning, and evolving cooling options keep facilities aligned with the rapid shifts in GPU technology.
Artificial intelligence (AI) hardware is advancing faster than current data center design cycles. New platforms push rack densities higher, introduce heat patterns legacy cooling can’t manage, and require power that doesn’t always arrive when deployments do. Teams now have to make faster, more adaptable decisions about cooling, layout, and power capacity to keep new builds aligned with changing graphics processing unit (GPU) requirements. The lead time between new hardware and the facilities that must host it is shrinking, and designs need to anticipate that compression from the start.
Two leaders working in these conditions shared how they’re adapting during “Evolving the Datacenter in the Age of AI,” a fireside chat at the Vertiv™ AI Solutions Roadshow in Santa Clara on October 7 and Atlanta on October 9. Vertiv VP for Enterprise Sales Tony DeSpirito spoke with Yuval Bachar, founder and CEO of ECL, and John Dumler, Senior Vice President of Data Center Design and Engineering at DC Blox. ECL builds off-grid, hydrogen-powered modular capacity, while DC Blox scales regional sites within existing utility and construction limits. Their environments differ, but both are adjusting to the same fast-moving demands of AI data centers.
Their discussion highlighted the choices that determine whether new capacity keeps up with fast-moving GPU requirements. Yuval and John surfaced where flexibility matters most—power planning, cooling strategy, and the modular elements that let teams adjust when hardware changes mid-build. For operators planning AI-ready sites, their perspectives offer a direct look at how experienced teams design for movement rather than a fixed load profile.
Early signs of AI-driven design shifts
Tony DeSpirito, Vertiv: AI workloads have changed expectations for performance and capacity. What signaled to you that a major shift was underway?
Yuval Bachar, ECL:
We saw it before the current wave of large models. In 2021, we designed for 75 kilowatts (kW) per rack. People told us no one would ever use those high-density racks. But the opposite happened—GPU clusters (see Figure 1) arrived with higher loads than expected, and the growth curve made it clear that air-cooled white space alone couldn’t support what was coming. Once we moved to full liquid cooling, we saw how far legacy layouts were from the thermal and power behavior of high-density compute.

Figure 1. Side-by-side comparison of traditional IT and accelerated IT: the traditional IT stack is composed of CPUs only, while accelerated IT pairs GPUs with CPUs. Source: Vertiv
John R. Dumler, DC Blox:
The acceleration was real. Five or 10 kW per rack was common not long ago, and 35 kW felt high. Then AI racks appeared that drew as much power as a full legacy room. That kind of increase forces changes to electrical distribution, cooling, and deployment pacing. Customers also revise hardware plans more quickly than before, so the room has to accommodate these changes.
Legacy data center design limits
Tony: What showed you that the old design model no longer matched the requirements?
Yuval:
The first indicator was how often customers changed hardware definitions during construction. We saw platforms switch from one generation to the next within a nine-month window. The second was thermal behavior at higher loads. Even with roughly 110 kW per rack handled by direct-to-chip liquid cooling, the racks still push 35 to 40 kW of heat out as air through the sides, back, and top, nothing like the front-to-back airflow older rooms were built for. The team placed rear-door heat exchangers (RDHxs) in unconventional positions to pull that residual heat out where standard layouts couldn’t.
John:
For us, it was the mismatch between what customers expected to deploy and what they actually brought. Some planned heavy liquid cooling. Others wanted a hybrid mix. We needed electrical and mechanical headroom from day one so we could adjust without redesigning the room.
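As a rough back-of-envelope illustration of the thermal split Yuval describes, the short Python sketch below estimates how much of a rack’s load still has to be rejected to air once direct-to-chip liquid cooling captures most of it. The total rack load and the liquid-capture fraction are illustrative assumptions, not figures from either operator.

```python
def residual_air_heat_kw(rack_load_kw: float, liquid_capture_fraction: float) -> float:
    """Heat (kW) that must still be rejected to air for one rack."""
    return rack_load_kw * (1.0 - liquid_capture_fraction)


# Illustrative assumptions, not vendor or operator specifications.
rack_load_kw = 145.0            # assumed total rack load
liquid_capture_fraction = 0.75  # assumed share captured by direct-to-chip cold plates

air_kw = residual_air_heat_kw(rack_load_kw, liquid_capture_fraction)
liquid_kw = rack_load_kw - air_kw
print(f"Liquid side: {liquid_kw:.0f} kW, air side: {air_kw:.0f} kW")
# With these assumptions: about 109 kW to liquid and 36 kW to air,
# broadly in line with the 110 kW / 35-40 kW split described above.
```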
Power limits in high-density builds
Tony: Power availability has become a defining factor in deployment speed. How are you addressing that constraint?
Yuval:
We moved off the grid because utility timelines no longer match deployment timelines. Hydrogen fuel cells have enabled us to add capacity in controlled increments—a single container delivers about 2.6 megawatts (MW) today. To replace diesel generators, we developed lossless hydrogen storage. The control layer around the fuel cells handles transients, so the system behaves like a stable grid. That stability is what makes high-density, off-grid operation feasible.
John:
Most regional operators stay on the grid, but the delays are significant. Meanwhile, AI customers work on much shorter cycles. We overbuild upstream systems to support denser loads while waiting for additional utility capacity. There is a cost trade-off, but the alternative is a stranded facility that cannot support the next hardware cycle.
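For a sense of the increment-based planning Yuval describes, here is a minimal sketch that rounds a target IT load up to a whole number of fuel-cell containers. The 2.6 MW per-container figure comes from the conversation above; the usable-output derating is an assumption added purely for illustration.

```python
import math

CONTAINER_MW = 2.6      # nominal output per hydrogen fuel-cell container (figure from the discussion)
USABLE_FRACTION = 0.9   # assumed derating for transients and redundancy (illustrative)

def containers_needed(target_it_load_mw: float) -> int:
    """Round up to the number of containers that covers the target IT load."""
    usable_per_container_mw = CONTAINER_MW * USABLE_FRACTION
    return math.ceil(target_it_load_mw / usable_per_container_mw)

for load_mw in (5, 12, 25):
    print(f"{load_mw} MW IT load -> {containers_needed(load_mw)} containers")
# Under these assumptions: 5 MW -> 3, 12 MW -> 6, 25 MW -> 11 containers.
```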
Designing for fast hardware cycles
Tony: Hardware requirements change every nine to 12 months. How do you design facilities that stay relevant through those shifts?
Yuval:
We use repeatable building blocks between 1.5 and 5 MW. Each block includes a full liquid-cooling loop. When customers update hardware plans, we adjust the next block without affecting the rest of the site.
John:
Flexibility is the goal. We add enough mechanical and electrical capacity to support different rack configurations. If the room has the right pathways and heat-rejection capacity, you can adjust without structural changes.
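A quick way to see why repeatable power blocks help absorb hardware churn is to compare how many racks the same block supports at different densities. The sketch below uses the 1.5 to 5 MW block sizes Yuval mentions; the per-rack densities are hypothetical examples, and the math ignores cooling and distribution overhead.

```python
def racks_per_block(block_mw: float, rack_kw: float) -> int:
    """Whole racks of a given density that one power block can feed (overhead ignored)."""
    return int(block_mw * 1000 // rack_kw)

# Block sizes echo the 1.5-5 MW range above; rack densities are hypothetical examples.
for block_mw in (1.5, 5.0):
    for rack_kw in (35, 75, 130):
        count = racks_per_block(block_mw, rack_kw)
        print(f"{block_mw} MW block at {rack_kw} kW/rack -> {count} racks")
```

The same electrical block might hold dozens of mid-density racks or only a handful of high-density ones, which is why the mechanical and electrical headroom both speakers emphasize matters more than any single layout.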
Cooling strategies for high-density racks
Tony: Liquid and hybrid cooling are now central to high-density deployments. How do you see that evolving over the next decade?
Yuval:
Full liquid cooling is the baseline for the systems we deploy. Once racks reach around 100 kW, air alone won’t be enough. We use isolation at each cabinet and a control system that manages flow and temperature across the loop. The designs are built for these loads from the start.
John:
Most operators will run hybrid environments for a while. Newer sites use slab floors with trenches or overhead distribution. I do not expect one method to dominate. The approach depends on the customer mix and the scale of the deployment.
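To ground the liquid-cooling discussion, the sketch below works through the basic flow-rate arithmetic for a roughly 100 kW rack using Q = ṁ·cp·ΔT. The water properties and the 10 K coolant temperature rise are illustrative assumptions; real loops often run treated water or glycol mixtures at operator-specific setpoints.

```python
WATER_DENSITY_KG_M3 = 997.0  # approximate density of water near 25 C
WATER_CP_KJ_KG_K = 4.18      # approximate specific heat of water

def coolant_flow_lpm(heat_kw: float, delta_t_k: float) -> float:
    """Liters per minute of coolant needed to carry heat_kw at the given temperature rise."""
    mass_flow_kg_s = heat_kw / (WATER_CP_KJ_KG_K * delta_t_k)  # from Q = m_dot * cp * dT
    vol_flow_m3_s = mass_flow_kg_s / WATER_DENSITY_KG_M3
    return vol_flow_m3_s * 1000.0 * 60.0                       # m^3/s -> L/min

print(f"{coolant_flow_lpm(100.0, 10.0):.0f} L/min for 100 kW at a 10 K rise")
# Roughly 144 L/min (about 38 gpm) under these assumptions.
```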
Partnering for high-density growth
Tony: What do you expect from partners as you scale into these conditions?
Yuval:
We work with partners that can support high-density cooling at volume. Vertiv’s coolant distribution unit (CDU) systems run our Mountain View site. They provide the engineering depth and manufacturing scale needed for a modular build cycle.
John:
We look for consistent performance. When we face a problem, we need someone who has seen a similar issue before. Vertiv’s systems have been reliable across our deployments. That predictability reduces downstream risk, which matters more than unit cost.
Engineering for rapid change
AI data centers must now operate under moving targets. Racks draw more power and shed more heat, and hardware definitions can change before construction finishes. These conditions shape every decision about power, cooling, and layout. ECL and DC Blox described a common approach: design for variation rather than a fixed load, and build electrical and mechanical capacity that can absorb change without reworking the site. That mindset is what keeps a facility usable as GPU cycles shorten and densities increase.
Teams planning new capacity need to leave room for movement. Power paths, cooling systems, and rack layouts should support different densities, timelines, and cooling methods without major structural changes. The right partners can help by delivering high-density systems at volume and supporting adjustments when hardware plans shift mid-project.
The demands will continue to rise as GPU platforms evolve, and the lead time between hardware availability and facility readiness will keep shrinking. Designs that stay flexible will last longer than those built around a static set of assumptions. Operators who plan for change, not a single end state, will be able to bring capacity online at the speed AI workloads require.
To see how these design principles translate into real deployments, visit the Vertiv™ AI Hub for additional context and technical guidance.