
As data center power densities continue to climb, driven by AI, high-performance computing (HPC), and advanced processors, traditional air cooling is reaching its practical limits. Liquid cooling has emerged as the solution for managing heat loads exceeding 30-50 kW per rack, but implementing it requires understanding a critical component: the fluid network.

A fluid network is the complete system of pipes, manifolds, pumps, and heat exchangers that transports coolant throughout a data center to remove heat from IT equipment. Much like the electrical distribution system delivers power from utility connections to individual servers, the fluid network carries heat from the graphics processing units (GPUs) and central processing units (CPUs) to the heat rejection or reuse systems outside. Understanding this system's three main layers (in-rack manifolds, row manifolds, and the facility loop) is essential for anyone planning, designing, or operating liquid-cooled infrastructure.

The primary loop (facility water system)

The primary loop, also called the facility water system (FWS) by the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE), is the backbone of any liquid cooling system. This network connects the data center's heat rejection or reuse equipment, typically chillers, cooling towers, or dry coolers, to the computer room air handlers (CRAHs) and coolant distribution units (CDUs) positioned throughout the facility.

In most modern deployments, this primary loop circulates facility water or a water-glycol mixture at temperatures ranging from 62.6°F to 113°F (17°C to 45°C) as outlined by ASHRAE W-Classes W17 to W+, depending on climate and equipment specifications. The loop includes large-diameter piping (often 4 inches or larger), isolation valves, expansion tanks, and pumps that maintain the pressure and flow rates necessary to serve multiple CDUs simultaneously.
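As a rough illustration of what these flow rates imply, the coolant flow needed to carry away a given heat load follows from the heat balance Q = ṁ·cp·ΔT. The sketch below uses assumed, illustrative figures (water properties, a 50 kW rack, a 10°C supply/return rise), not values from any vendor specification:

```python
# Illustrative sketch: coolant flow required to absorb a given heat load,
# from Q = m_dot * cp * dT. All figures are assumptions for illustration.

CP_WATER = 4186.0   # J/(kg*K), specific heat of water
RHO_WATER = 997.0   # kg/m^3, density of water near room temperature

def required_flow_lpm(heat_load_kw: float, delta_t_c: float) -> float:
    """Volumetric flow (litres per minute) needed to remove heat_load_kw
    with a supply/return temperature rise of delta_t_c."""
    m_dot = (heat_load_kw * 1000.0) / (CP_WATER * delta_t_c)  # kg/s
    return m_dot / RHO_WATER * 1000.0 * 60.0                  # L/min

# Example: a hypothetical 50 kW rack with a 10 degC rise across the loop
print(round(required_flow_lpm(50.0, 10.0), 1))  # 71.9
```

The same relation explains why warmer return temperatures are attractive: doubling the allowable ΔT halves the flow the pumps and piping must sustain.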

The primary loop operates independently of IT equipment, providing the separation needed to protect sensitive servers from potential quality issues in facility water. This separation is crucial for reliability and maintenance. Facility-side work can often proceed without impacting production IT loads.

Row distribution (secondary fluid network)

Between the facility loop and individual server racks sits what's commonly called the secondary fluid network (SFN), or technology cooling system (TCS). This intermediate layer is where the CDU, the bridge between facility infrastructure and IT equipment, distributes conditioned coolant to server rows or pods.

CDUs perform several critical functions: they use heat exchangers (typically liquid-to-liquid) to transfer heat from the secondary loop to the facility loop, filter and condition the coolant that directly contacts IT equipment, regulate temperature, pressure, and flow rate, and monitor fluid quality and system performance.
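One consequence of the liquid-to-liquid heat exchanger is that the secondary loop can never be cooler than the facility supply plus the exchanger's approach temperature. The sketch below makes that bound explicit; the 32°C facility supply, 3°C approach, and 40°C server inlet limit are illustrative assumptions, not specifications:

```python
# Illustrative sketch of the temperature chain across a CDU's
# liquid-to-liquid heat exchanger. All numbers are assumptions
# chosen for illustration, not vendor or ASHRAE specifications.

def secondary_supply_c(facility_supply_c: float, approach_c: float) -> float:
    """Coolest secondary-loop supply the heat exchanger can produce:
    facility supply plus the exchanger's approach temperature."""
    return facility_supply_c + approach_c

def meets_server_limit(facility_supply_c: float, approach_c: float,
                       server_inlet_limit_c: float) -> bool:
    """True if the CDU can hold secondary coolant at or below the
    server manufacturer's inlet temperature limit."""
    return secondary_supply_c(facility_supply_c, approach_c) <= server_inlet_limit_c

# Example: 32 degC facility water with a 3 degC approach, against a
# hypothetical 40 degC server inlet limit
print(secondary_supply_c(32.0, 3.0))        # 35.0
print(meets_server_limit(32.0, 3.0, 40.0))  # True
```

This is why the facility water temperature class and the exchanger approach must be chosen together: a warm facility loop plus a large approach can push the secondary supply past what the IT equipment tolerates.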

From each CDU, the secondary fluid network extends through row-level manifolds that run overhead or under the floor along server racks. These manifolds typically feature 4-inch or 6-inch stainless steel or polypropylene headers with standardized connection points for individual racks. The network includes supply and return lines, flow-control valves, maintenance isolation points, and quick-disconnect fittings for easy rack connections, along with drip pans for leak protection as an industry best practice.

This row-level distribution system must balance multiple requirements:

  • Deliver consistent cooling across all connected racks.
  • Allow individual rack servicing without system shutdown.
  • Accommodate future density increases and rack additions.
  • Maintain clean fluid paths to protect sensitive cold plates.
  • Integrate with monitoring systems for visibility.
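A first-order check on the "consistent cooling" and "future density" requirements above is simply whether the manifold's available flow covers the sum of per-rack demands, with headroom for additions. The per-rack flows and manifold capacity below are hypothetical figures for illustration:

```python
# Illustrative sketch: does a row manifold's available flow cover the
# demand of every connected rack? Flow figures are assumptions only.

def manifold_has_headroom(rack_flows_lpm: list[float],
                          manifold_capacity_lpm: float) -> bool:
    """True if total rack demand fits within the manifold's capacity."""
    return sum(rack_flows_lpm) <= manifold_capacity_lpm

# Example: four hypothetical racks on a manifold rated for 300 L/min
racks = [70.0, 70.0, 55.0, 55.0]            # per-rack demand, L/min
print(manifold_has_headroom(racks, 300.0))  # True (250 <= 300)
```

In practice hydraulic balancing is more involved (pressure drop varies along the header), but a capacity check like this is the starting point before detailed flow modeling.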

In-rack manifolds

The final distribution layer brings the coolant from the row manifolds directly to components within each rack. In-rack manifolds are compact distribution assemblies mounted vertically at the rear of the rack as a pair, featuring supply and return connections to the row manifold, branch lines to individual servers or cooling zones, and quick-connects compatible with server manufacturers' cold plate designs.

Modern in-rack manifolds support various cooling strategies that use row-based or in-rack CDUs for direct-to-chip cooling of extreme workloads. The design must accommodate different server configurations and allow servers to be hot-swapped without draining the system.

Design considerations for complete fluid networks

Whether you're retrofitting an existing facility or designing new construction, understanding fluid networks is the first step toward successful liquid cooling deployment. Successfully implementing a fluid network requires attention to several factors:

  • Material compatibility throughout the system.
  • Proper fluid selection.
  • Adequate redundancy at each level to maintain uptime during maintenance.
  • Commissioning procedures, including flushing, pressure testing, and fluid quality verification.
  • Ongoing monitoring of temperature, pressure, flow rates, and fluid chemistry.

The right approach balances immediate cooling needs with future scalability, integrates with existing mechanical infrastructure, and simplifies operations through thoughtful design.

Ready to explore fluid network solutions for your facility? Learn more about our complete liquid cooling infrastructure offerings.
