
Understanding fluid networks in liquid-cooled data centers: A complete guide

As data center power densities continue to climb, driven by AI, high-performance computing (HPC), and advanced processors, traditional air cooling is reaching its practical limits. Liquid cooling has emerged as the solution for managing heat loads exceeding 30-50 kW per rack, but implementing it requires understanding a critical component: the fluid network.

A fluid network is the complete system of pipes, manifolds, pumps, and heat exchangers that transports coolant throughout a data center to remove heat from IT equipment. Much like the electrical distribution system delivers power from utility connections to individual servers, the fluid network carries heat from the graphics processing units (GPUs) and central processing units (CPUs) to the heat rejection or reuse systems outside. Understanding this system's three main layers (in-rack manifolds, row manifolds, and facility loop) is essential for anyone planning, designing, or operating liquid-cooled infrastructure.
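The scale of heat transport a fluid network must handle follows from the basic sensible-heat relation Q = ṁ · cp · ΔT. A minimal Python sketch, assuming plain water as the coolant and using an illustrative 50 kW rack with a 10°C coolant temperature rise (both figures are example assumptions, not product specifications):

```python
# Sketch: coolant flow needed to carry a given rack heat load.
# Assumes water (cp ≈ 4186 J/(kg·K), density ≈ 1000 kg/m³);
# the 50 kW load and 10 °C rise are illustrative values only.

def required_flow_lpm(heat_kw: float, delta_t_c: float,
                      cp: float = 4186.0, rho: float = 1000.0) -> float:
    """Volumetric coolant flow (L/min) needed to remove heat_kw with
    a coolant temperature rise of delta_t_c (from Q = m_dot * cp * dT)."""
    m_dot_kg_s = heat_kw * 1000.0 / (cp * delta_t_c)   # mass flow, kg/s
    return m_dot_kg_s / rho * 1000.0 * 60.0            # volume flow, L/min

print(round(required_flow_lpm(50.0, 10.0), 1))  # ≈ 71.7 L/min for a 50 kW rack
```

Numbers of this magnitude, multiplied across dozens of racks, are why the row and facility layers described below use large-diameter headers and dedicated pumping.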

The primary loop (facility water system)

The primary loop, also called the facility water system (FWS) by the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE), is the backbone of any liquid cooling system. This network connects the data center's heat rejection or reuse equipment, typically chillers, cooling towers, or dry coolers, to the computer room air handlers (CRAHs) and coolant distribution units (CDUs) positioned throughout the facility.

In most modern deployments, this primary loop circulates facility water or a water-glycol mixture at temperatures ranging from 62.6°F to 113°F (17°C to 45°C) as outlined by ASHRAE W-Classes W17 to W+, depending on climate and equipment specifications. The loop includes large-diameter piping (often 4 inches or larger), isolation valves, expansion tanks, and pumps that maintain the pressure and flow rates necessary to serve multiple CDUs simultaneously.
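The ASHRAE W-classes are defined in Celsius, so it helps to be able to move between the two temperature scales cited above. A small sketch confirming the stated range bounds:

```python
# Sketch: Celsius/Fahrenheit conversion for the facility-water
# temperature range cited above (ASHRAE classes are defined in °C).

def c_to_f(temp_c: float) -> float:
    """Convert Celsius to Fahrenheit: F = C * 9/5 + 32."""
    return temp_c * 9.0 / 5.0 + 32.0

print(c_to_f(17.0))  # 62.6
print(c_to_f(45.0))  # 113.0
```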

The primary loop operates independently of IT equipment, providing the separation needed to protect sensitive servers from potential quality issues in facility water. This separation is crucial for reliability and maintenance. Facility-side work can often proceed without impacting production IT loads.

Row distribution (secondary fluid network)

Between the facility loop and individual server racks sits what's commonly called the secondary fluid network (SFN), or technology cooling system. This intermediate layer is where the CDU, the bridge between facility infrastructure and IT equipment, distributes conditioned coolant to server rows or pods.

CDUs perform several critical functions:

  • Use heat exchangers (typically liquid-to-liquid) to transfer heat from the secondary loop to the facility loop.
  • Filter and condition the coolant that directly contacts IT equipment.
  • Regulate temperature, pressure, and flow rate.
  • Monitor fluid quality and system performance.
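The monitoring function amounts to continuously comparing sensor readings against operating limits. A minimal sketch of that kind of limit check; the field names and threshold values here are hypothetical examples, not Vertiv product specifications:

```python
# Sketch: the kind of limit checks a CDU's monitoring function performs.
# Field names and thresholds are hypothetical examples only.
from dataclasses import dataclass

@dataclass
class CduReading:
    supply_temp_c: float        # secondary-loop supply temperature
    supply_pressure_kpa: float  # secondary-loop supply pressure
    flow_lpm: float             # secondary-loop flow rate

def check_reading(r: CduReading,
                  temp_max_c: float = 45.0,
                  pressure_range_kpa: tuple = (100.0, 500.0),
                  flow_min_lpm: float = 50.0) -> list:
    """Return a list of alarm strings for any out-of-range values."""
    alarms = []
    if r.supply_temp_c > temp_max_c:
        alarms.append("supply temperature high")
    lo, hi = pressure_range_kpa
    if not (lo <= r.supply_pressure_kpa <= hi):
        alarms.append("supply pressure out of range")
    if r.flow_lpm < flow_min_lpm:
        alarms.append("flow below minimum")
    return alarms

print(check_reading(CduReading(47.0, 300.0, 80.0)))  # ['supply temperature high']
```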

From each CDU, the secondary fluid network extends through row-level manifolds that run overhead or under the floor along server racks. These manifolds typically feature 4-inch or 6-inch stainless steel or polypropylene headers with standardized connection points for individual racks. The network includes supply and return lines, flow-control valves, maintenance isolation points, and quick-disconnect fittings for easy rack connections, along with drip pans for leak protection as an industry best practice.

This row-level distribution system must balance multiple requirements:

  • Deliver consistent cooling across all connected racks.
  • Allow individual rack servicing without system shutdown.
  • Accommodate future density increases and rack additions.
  • Maintain clean fluid paths to protect sensitive cold plates.
  • Integrate with monitoring systems for visibility.
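One of these requirements, accommodating future density increases and rack additions, can be sketched as a simple capacity check against the CDU's available flow. All figures below are illustrative assumptions, not product ratings:

```python
# Sketch: checking whether a CDU / row manifold has flow headroom for
# planned rack additions. All figures are illustrative assumptions.

def flow_headroom_lpm(rack_flows_lpm: list, cdu_capacity_lpm: float) -> float:
    """Remaining flow capacity after serving the connected racks."""
    return cdu_capacity_lpm - sum(rack_flows_lpm)

racks = [70.0, 70.0, 55.0, 55.0]            # current per-rack flow demands
print(flow_headroom_lpm(racks, 400.0))      # 150.0 L/min left for growth
```

In practice a designer would also hold back a redundancy margin so the row can survive a pump or CDU failure, which further reduces the flow available for new racks.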

In-rack manifolds

The final distribution layer brings the coolant from the row manifolds directly to components within each rack. In-rack manifolds are compact distribution assemblies mounted vertically at the rear of the rack as a pair, featuring supply and return connections to the row manifold, branch lines to individual servers or cooling zones, and quick-connects compatible with server manufacturers' cold plate designs.

Modern in-rack manifolds support various cooling strategies that use row-based or in-rack CDUs to deliver direct-to-chip cooling for extreme workloads. The design must accommodate different server configurations and allow hot-swappable server replacement without draining the system.

Design considerations for complete fluid networks

Whether you're retrofitting an existing facility or designing new construction, understanding fluid networks is the first step toward successful liquid cooling deployment. Implementing a fluid network successfully requires attention to several factors:

  • Material compatibility throughout the system.
  • Proper fluid selection.
  • Adequate redundancy at each level to maintain uptime during maintenance.
  • Commissioning procedures, including flushing, pressure testing, and fluid quality verification.
  • Ongoing monitoring of temperature, pressure, flow rates, and fluid chemistry.

The right approach balances immediate cooling needs with future scalability, integrates with existing mechanical infrastructure, and simplifies operations through thoughtful design.

Ready to explore fluid network solutions for your facility? Learn more about our complete liquid cooling infrastructure offerings.
