When liquid cooling becomes mission-critical, service strategy can’t wait until deployment. NVIDIA and Vertiv unpack how standardization turns 10- to 15-year infrastructure into adaptable systems.
Artificial intelligence (AI) data centers are hitting a service inflection point. As rack densities climb toward multi-megawatt levels and liquid cooling replaces air as the standard, thermal management has shifted from component selection to system integration, where service strategy determines performance, scalability, and uptime.
The critical question: when do service teams enter the conversation? Operators who involve lifecycle experts during design, rather than after deployment, gain infrastructure that scales predictably across 10- to 15-year horizons.
During Vertiv's Management & Operations Innovation Day 2025, hosted by DatacenterDynamics (DCD), NVIDIA’s Dr. Ali Heydari and Vertiv’s Jaclyn Schmidt outlined how standardization across the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE), the Open Compute Project (OCP), and hyperscalers is enabling modular approaches where piping infrastructure stays fixed while coolant distribution units (CDUs) and servers upgrade around it. The conversation revealed how “shift left” manufacturing, predictive maintenance, and system-level thinking separate resilient AI data centers from fragile ones.
Alex Dickins, Content Director, DatacenterDynamics (DCD): How are you designing infrastructure today that can support rack densities five to 10 years from now?
Dr. Ali Heydari, Data Center Technologist, NVIDIA:
We designed the technical cooling system (TCS) loop to handle the extreme rack densities of today and the future. The TCS loop is the piping network that delivers liquid from the CDUs to the racks. Proper pipe sizing from the start—eight to 10 inches for mains and two to four inches for drops—supports flow rates for densities ranging from 100 kilowatts (kW) to multi-megawatts.
The loop remains in place for 10 to 15 years while racks, CDUs, and servers can be upgraded as technology evolves. A five-megawatt CDU occupies only slightly more footprint than a one-megawatt CDU, enabling a modular, Lego-style approach that keeps major infrastructure intact while scaling capacity.
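To make the link between heat load and pipe sizing concrete, here is a back-of-the-envelope sketch of the underlying relationship, Q = ṁ · cp · ΔT: the coolant flow a loop must deliver scales linearly with rack power. The coolant properties and the 10 °C temperature rise below are illustrative assumptions, not figures from the conversation.

```python
# Back-of-the-envelope coolant flow sizing (illustrative assumptions only).
# Q = m_dot * cp * dT  =>  m_dot = Q / (cp * dT), then convert to volume flow.

def required_flow_lpm(heat_kw: float,
                      delta_t_c: float = 10.0,          # assumed coolant temperature rise
                      cp_kj_per_kg_k: float = 3.8,      # approx. PG-25 specific heat
                      density_kg_per_m3: float = 1030.0 # approx. PG-25 density
                      ) -> float:
    """Volumetric flow in liters/minute needed to remove heat_kw of rack load."""
    mass_flow_kg_s = heat_kw / (cp_kj_per_kg_k * delta_t_c)  # kg/s
    vol_flow_m3_s = mass_flow_kg_s / density_kg_per_m3       # m^3/s
    return vol_flow_m3_s * 1000.0 * 60.0                     # L/min

# Flow demand grows linearly from a 100 kW rack to a multi-megawatt loop,
# which is why mains are sized for the end state from day one.
for load_kw in (100, 500, 1000, 5000):
    print(f"{load_kw:>5} kW -> {required_flow_lpm(load_kw):8.1f} L/min")
```

Under these assumed properties, a 100 kW rack needs on the order of 150 L/min, and a 5 MW loop roughly fifty times that, which illustrates why the mains are sized for the end state rather than the initial deployment.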
“Gigawatt-scale AI factories will enable discoveries that were extremely difficult in the past. Meeting the challenges of maintenance, serviceability, and speed-of-light deployment unlocks that potential.”
- DR. ALI HEYDARI, NVIDIA Data Center Technologist
Alex: Beyond working fluids, how is the data center liquid cooling ecosystem maturing with standards and best practices?
Dr. Ali Heydari, NVIDIA:
We standardized 25% propylene glycol (PG-25) as the working fluid, replacing air in high-density deployments. This alignment across ASHRAE, OCP, and major cloud service providers removed concerns around immersion fluids, oils, or deionized water. Once the fluid is fixed, focus shifts to AI data center design and infrastructure commissioning.
We use computational fluid dynamics (CFD), flow network modeling, and pipe sizing tools to optimize each design. We then stress-test it with dummy loads to simulate megawatt-level heat, pump failures, and other potential failure modes before going live.
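The pump-failure stress test described above can be sketched as a simple redundancy check: does aggregate flow still meet demand when one or more pumps drop out? This is a hypothetical sketch with made-up numbers; real validation uses CFD, flow network modeling, and physical dummy loads, as Dr. Heydari describes.

```python
# Illustrative N+1-style pump redundancy check (hypothetical numbers;
# not a substitute for CFD modeling or dummy-load stress testing).

def flow_after_failures(pump_count: int, pump_lpm: float, failed: int) -> float:
    """Aggregate flow (L/min) remaining after `failed` pumps drop out."""
    return max(pump_count - failed, 0) * pump_lpm

def survives_failure(required_lpm: float, pump_count: int, pump_lpm: float,
                     failed: int = 1) -> bool:
    """True if the loop still meets demand with `failed` pumps lost."""
    return flow_after_failures(pump_count, pump_lpm, failed) >= required_lpm

# Example: four pumps at 2,000 L/min each against a 6,000 L/min demand.
print(survives_failure(6000, 4, 2000, failed=1))  # one pump lost: 6,000 L/min remain
print(survives_failure(6000, 4, 2000, failed=2))  # two lost: 4,000 L/min, short
```

Running failure modes like this on paper first, then confirming them against megawatt-level dummy loads, is what lets a design go live with known margins rather than assumed ones.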

[Watch the full conversation: Mastering liquid cooling services for AI environments]
Alex: What does modular, high-density infrastructure mean for service strategy?
Jaclyn Schmidt, Liquid Cooling & High-Density Service Offering Manager, Vertiv:
With AI data centers, we need to view liquid cooling as a single, integrated system rather than separate components. CDUs, secondary fluid networks, cold plates, and racks interact continuously, so liquid cooling maintenance covers the full lifecycle: design, installation, commissioning, and ongoing performance tuning.
When installation and commissioning are handled as disconnected activities, the cooling loop becomes fragmented, undermining the coordinated control and balanced flow the system needs to respond to thermal load changes. High-density environments expose these gaps because thermal fluctuations are sharper and more frequent.
Early system-level planning aligns design and deployment, enabling cleaner integration and predictable loop performance across the system’s operational life.
“Building resilience into design, deployment, and maintenance strategy starts long before the first day of deployment. Service and lifecycle teams need to be involved when you're planning the infrastructure.”
- JACLYN SCHMIDT, Vertiv Liquid Cooling & High-Density Service Offering Manager
Alex: Customers demand speed and scale, and the industry keeps pushing the term “future-ready.” From a service standpoint, what does future-ready cooling actually mean?
Jaclyn Schmidt, Vertiv:
Future-ready cooling centers on adaptability and modularity. Systems must operate effectively today and evolve with accelerating AI workloads. Flexibility across CDUs, racks, servers, and supporting infrastructure allows operators to handle higher densities and changing thermal profiles without major redesign. Aligning closely with technology roadmaps from partners like NVIDIA enables AI data center designs to anticipate upcoming requirements. A unified service strategy combines flexibility, performance, and maintainability, allowing operators to scale confidently as workloads grow.
Alex: How do you keep service ecosystems aligned with rapidly changing requirements?
Jaclyn Schmidt, Vertiv:
We capture insights through site visits, field engineers, and direct customer conversations. This feedback informs serviceability and lifecycle strategy for systems, not just individual components. Industry standards improve predictability, but consistent, high-quality delivery remains the differentiator. Reference designs guide deployments worldwide, and Vertiv’s global footprint allows these designs to be supported anywhere. Partnerships with technology leaders, such as NVIDIA, help us anticipate future workloads and integrate those needs into our service planning.
Alex: Where will the next wave of innovation come from—technology or services?
Jaclyn Schmidt, Vertiv:
Both are evolving quickly. Services now play a central role alongside technological advances. AI-assisted service systems and predictive maintenance allow operators to anticipate issues before they occur. At the same time, innovations such as two-phase cooling and new materials continue to push performance. Services must advance in parallel, using approaches like “shift left,” prefabricating secondary fluid networks off-site for consistent quality, faster deployment, and predictable integration on-site.
Alex: What action should operators prioritize today?
Jaclyn Schmidt, Vertiv:
Involve service and lifecycle teams during the design phase. Embedding liquid cooling maintenance and serviceability into the infrastructure from day one builds a resilient system that operates efficiently immediately and adapts over years. Early collaboration supports smooth deployment, high uptime, and a future-ready facility.
Watch the full discussion on why service teams must shape infrastructure design from day one
Gain deeper insight into designing, commissioning, and maintaining high-density AI data centers. Discover how integrated service strategies, predictive maintenance, and modular liquid cooling systems enable scalable and reliable performance.

[Watch the full conversation: Mastering liquid cooling services for AI environments]