With cooling systems consuming up to 40% of an AI data center's total energy, pumped two-phase (P2P) direct-to-chip cooling is transforming the industry, reducing cooling energy consumption by up to 82%.
The increasing demand for high-performance computing (HPC) workloads has necessitated advanced liquid cooling solutions. Modern artificial intelligence (AI) chipsets now carry thermal design power (TDP) ratings from several hundred watts up to 2,000 watts (W): the Gaudi HL-2080 at 600 W, the AMD MI300X at 750 W, and the NVIDIA Blackwell GPU at 2,000 W. To address the growing thermal requirements of these components, chipmakers, universities, and manufacturers like Vertiv collaborated to develop P2P direct-to-chip liquid cooling.
Research presented at the American Society of Mechanical Engineers International Technical Conference and Exhibition (ASME InterPACK) 2024 validates the commercial viability and technical efficacy of P2P direct-to-chip cooling for high-power density chips. The system reliably manages thermal load in AI data center environments under varying conditions. Detailed performance metrics and technical specifications follow.
Refrigerant-to-liquid pumped two-phase test apparatus.
Source: Maturation of Pumped Two-Phase Liquid Cooling to Commercial Scale-Up Deployment
What is P2P direct-to-chip cooling?
Pumped two-phase direct-to-chip cooling involves placing cold plates directly on the primary heat sources of HPC servers. The system operates as a closed loop, using phase change to transport heat away from the chips. Coolant distribution units (CDUs) pump liquid or refrigerant through these cold plates, where the coolant absorbs heat from the server and evaporates. The resulting vapor then returns to the CDU, where it condenses back into liquid through a heat exchange process. Effective P2P direct-to-chip cooling requires precise alignment of the silicon power map with the cold plate, a design aspect managed by IT manufacturers.
Operational process
From CDU to cold plates
A CDU pump circulates refrigerant through the loop, starting at the CDU via a liquid row manifold for uniform distribution. The refrigerant flows through liquid rack manifolds, hoses, and quick disconnects (QDs) to individual IT components. Inside the cold plate arrays attached directly to the heat-generating components, the refrigerant absorbs heat from the chips and changes phase from liquid to vapor. This phase change process, known as flow boiling, allows the refrigerant to capture heat.
Return to CDU condenser for recirculation
The mixture of liquid and vapor refrigerant moves through two-phase rack manifolds and return lines to the condenser in the CDU. In the condenser, the refrigerant releases the absorbed heat to the primary cooling medium, which can be an aqueous solution or conditioned air. The refrigerant then condenses back into a liquid state. The liquid refrigerant returns to the pump, completing the closed loop.
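To make the closed-loop energy balance concrete, the short Python sketch below estimates the minimum refrigerant mass flow needed to keep the leaving vapor quality at or below the 0.75 design cap discussed later in this article. The latent heat value is an approximate figure for R-134a, one of the refrigerants tested; all numbers here are illustrative assumptions, not Vertiv specifications.

```python
# Minimal sizing sketch for a closed-loop P2P cooling circuit.
# Values are illustrative assumptions, not Vertiv specifications.

H_FG = 163e3       # approx. latent heat of vaporization of R-134a near 40 degC, J/kg
X_EXIT_MAX = 0.75  # design cap on leaving vapor quality (per the study)

def required_mass_flow(heat_load_w: float) -> float:
    """Refrigerant mass flow (kg/s) that keeps exit vapor quality
    at or below X_EXIT_MAX for a given absorbed heat load."""
    # Heat absorbed = m_dot * h_fg * x_exit  =>  solve for m_dot at the cap.
    return heat_load_w / (H_FG * X_EXIT_MAX)

if __name__ == "__main__":
    for load_kw in (42, 84, 136, 170):  # step loads used in the CDU tests
        m_dot = required_mass_flow(load_kw * 1e3)
        print(f"{load_kw:>4} kW -> {m_dot:.2f} kg/s minimum refrigerant flow")
```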

Learn more
Vertiv™ 360AI includes a complete portfolio of power, cooling and service solutions that solve the complex challenges arising from the AI revolution.
Types of P2P direct-to-chip cooling systems
P2P direct-to-chip cooling systems come in two main types: refrigerant-to-air (R2A) and refrigerant-to-liquid (R2L). Both use CDUs to circulate refrigerant but differ in their heat transfer mechanisms to the external cooling medium. Each type suits different AI data center environments and cooling needs.
Refrigerant-to-air (R2A)
R2A systems (see Figure 1) use microchannel coil condensers with variable-speed fans to transfer heat to the primary fluid. R2A CDUs offer up to 40 kilowatts (kW) of cooling capacity in a 600-millimeter (mm) rack, enabling the operation of high-density HPC servers in air-cooled data center environments. These systems provide a transition path toward full liquid cooling in AI clusters.
Figure 1. Prototype of a Vertiv™ R2A P2P direct-to-chip liquid cooling system
Source: Maturation of Pumped Two-Phase Liquid Cooling to Commercial Scale-Up Deployment
Refrigerant-to-liquid (R2L)
R2L systems (Figure 2) use brazed plate heat exchangers and aqueous solutions with chilled water control valves. These systems can be integrated with facility cooling systems for more extensive cooling needs. Operators implement R2L systems in high-power density environments due to liquid’s superior thermal transport properties.
Figure 2. Example of a Vertiv™ R2L P2P direct-to-chip liquid cooling system.
Source: Maturation of Pumped Two-Phase Liquid Cooling to Commercial Scale-Up Deployment
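As a rough illustration of how the two CDU types map to deployment scenarios, the sketch below encodes the selection logic implied above. The 40 kW R2A capacity figure comes from this article; the decision rule itself is a simplified assumption, not a Vertiv sizing guideline.

```python
# Illustrative selection helper for the two P2P CDU types described above.
# Only the 40 kW R2A capacity comes from the article; the rule is assumed.

def select_cdu_type(rack_heat_load_kw: float, facility_water_available: bool) -> str:
    """Pick a P2P CDU type for a rack, following the rough rules above."""
    R2A_CAPACITY_KW = 40  # per-CDU cooling capacity in a 600 mm rack
    if rack_heat_load_kw <= R2A_CAPACITY_KW and not facility_water_available:
        return "R2A"  # rejects heat to room air; good transition option
    return "R2L"      # brazed plate heat exchanger to facility water loop

print(select_cdu_type(35, facility_water_available=False))   # -> R2A
print(select_cdu_type(120, facility_water_available=True))   # -> R2L
```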
Maturation of P2P direct-to-chip liquid cooling
As part of the research titled "Maturation of Pumped Two-Phase Liquid Cooling to Commercial Scale-Up Deployment," Vertiv and Intel conducted comprehensive evaluations to assess the readiness of P2P direct-to-chip liquid cooling for commercial deployment.
The study indicates that P2P direct-to-chip liquid cooling has achieved a technology readiness level (TRL) of 7 and a commercial readiness level (CRL) of 2. These ratings reflect the system’s successful demonstration in an operational environment and its progress toward small-scale commercial deployment.
Key tests showed that P2P direct-to-chip liquid cooling can dissipate up to 170 kW of IT heat load while maintaining a maximum case temperature of 56.4°C at a volumetric flow rate of 0.48 liters per minute per kilowatt (LPM/kW). Furthermore, the system demonstrated effective operation under high-pressure conditions using refrigerants such as R-515B and R-134a.
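A quick unit conversion shows what that flow metric means at full load; this is plain arithmetic on the reported figures:

```python
# Back-of-envelope check of the reported test point: 170 kW dissipated at
# 0.48 LPM/kW. Unit conversion only; no additional data involved.

IT_LOAD_KW = 170
FLOW_PER_KW_LPM = 0.48
LPM_PER_GPM = 3.785  # liters per US gallon

total_lpm = IT_LOAD_KW * FLOW_PER_KW_LPM
print(f"Total refrigerant flow: {total_lpm:.1f} LPM "
      f"({total_lpm / LPM_PER_GPM:.1f} gpm)")
# -> Total refrigerant flow: 81.6 LPM (21.6 gpm)
```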
Distribution and flow regulation
The researchers regulated refrigerant flow to each cold plate array, maintaining a pressure range of 2 to 32 pounds per square inch differential (PSID). Flow regulators maintained consistent pressure across each circuit, enabling proper flow distribution despite varying IT loads. The researchers monitored and adjusted the system to prevent issues like dry-out, which occurs when the liquid in the cold plates fully evaporates and the exiting vapor becomes superheated. The cold plates captured and transported heat by flow boiling, maintaining a leaving vapor quality of less than 0.75 to avoid hot spots in the silicon.
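The monitoring logic implied by these safeguards can be sketched as a simple guard function. The 2-32 PSID band and the 0.75 vapor quality cap come from the study; the function interface and warning format are illustrative assumptions.

```python
# Sketch of the per-circuit monitoring implied above: keep each flow
# regulator in its 2-32 PSID band and leaving vapor quality below 0.75.
# Interface and thresholds format are illustrative, not study hardware.

PSID_MIN, PSID_MAX = 2.0, 32.0
VAPOR_QUALITY_MAX = 0.75

def circuit_warnings(psid: float, exit_quality: float) -> list[str]:
    """Return a list of warnings for one cold plate circuit."""
    warnings = []
    if not (PSID_MIN <= psid <= PSID_MAX):
        warnings.append(f"regulator out of band: {psid:.1f} PSID")
    if exit_quality >= VAPOR_QUALITY_MAX:
        warnings.append(f"dry-out risk: exit quality {exit_quality:.2f}")
    return warnings

print(circuit_warnings(psid=18.0, exit_quality=0.55))  # [] -> healthy
print(circuit_warnings(psid=1.5, exit_quality=0.80))   # two warnings
```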
CDU stability across varying IT loads
The researchers tested the stability of the CDU over a range of IT loads from 0-100%, including transient and asymmetric loading (see Figure 3). They kept a constant liquid pump flow rate of 39 gallons per minute (gpm) during testing. The CDU handled rapid changes in IT load, adjusting quickly to enable consistent cooling performance. The return working fluid temperature stayed within the design range, even at high IT loads up to 170 kW.
Figure 3. Results of step load increases from zero to 42 kW, 84 kW, 136 kW, and 170 kW. The power steps are approximately one rack at a time. The working fluid supply flow rate was held between 36 and 39 gpm, maintaining the design liquid supply to each cooling loop with flow regulators operating in the design 2-32 PSID regulation range.
Source: Maturation of Pumped Two-Phase Liquid Cooling to Commercial Scale-Up Deployment
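A toy first-order model gives a feel for how the loop responds to these step loads. Only the load steps come from the study; the time constant and temperature gain below are invented for illustration.

```python
# Toy first-order model of the Figure 3 step-load test: IT load steps up
# roughly one rack at a time while pump flow stays fixed. The time constant
# and gain are invented; only the step loads come from the study.

STEPS_KW = [0, 42, 84, 136, 170]   # reported step loads
TAU_S = 60.0                       # assumed loop time constant, seconds
K_DEG_PER_KW = 0.05                # assumed steady-state temp rise per kW

def simulate(dt: float = 1.0, hold_s: float = 300.0) -> list[float]:
    """Return fluid temperature rise (degC above supply) over the test."""
    temp, trace = 0.0, []
    for load in STEPS_KW:
        target = K_DEG_PER_KW * load
        for _ in range(int(hold_s / dt)):
            temp += (target - temp) * dt / TAU_S  # first-order lag
            trace.append(temp)
    return trace

print(f"final temperature rise: {simulate()[-1]:.1f} degC")  # ~8.5 degC at 170 kW
```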
Startup and hot-swapping
The researchers demonstrated the system’s ability to operate stably with both zero and full server populations, including successful hot-swapping of servers. This process involved removing and installing servers while the system was running. They followed detailed procedures, such as pressure decay leak checks, vacuum evacuation, and vapor charging, to prevent air or non-condensable gas seepage. During these tests, the liquid pump maintained a constant flow rate of 39 gpm, and the system handled transitions smoothly.
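The commissioning sequence reads naturally as an ordered checklist, sketched below. The step names follow the article; the descriptions and server identifier are placeholders.

```python
# The hot-swap procedure above as an ordered checklist. Step names follow
# the article; descriptions and the server identifier are placeholders.

HOT_SWAP_STEPS = [
    ("pressure_decay_leak_check", "hold test pressure; reject if decay exceeds limit"),
    ("vacuum_evacuation",         "pull vacuum to remove air and non-condensable gases"),
    ("vapor_charging",            "charge with refrigerant vapor before opening QDs"),
]

def hot_swap(server_id: str) -> None:
    """Run the commissioning sequence before connecting a server to the live loop."""
    for step, description in HOT_SWAP_STEPS:
        print(f"[{server_id}] {step}: {description}")
    print(f"[{server_id}] connected via quick disconnects; pump stays at 39 gpm")

hot_swap("rack3-server07")
```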
Safe operation under abnormal conditions
The researchers verified safe operation during pump switch-over and loss of heat rejection scenarios. They established high-pressure cut-out (HPCO) set points at 190 pounds per square inch gauge (psig) for R2L and 210 psig for R2A. Tests simulated pump failures and cooling loss to confirm the system could shut down safely or switch to backup systems without releasing refrigerant or causing damage. The researchers monitored the system’s response to high-pressure conditions and activated the HPCO before reaching the pressure relief valve (PRV) set point to prevent refrigerant release.
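The protective logic reduces to a few lines: trip the HPCO before the pressure relief valve set point is ever reached. The HPCO set points below come from the study; the PRV values and controller shape are assumptions for illustration.

```python
# Sketch of the high-pressure cut-out (HPCO) logic described above. HPCO set
# points come from the study; PRV values and the check itself are assumed.

HPCO_PSIG = {"R2L": 190, "R2A": 210}

def check_pressure(system_type: str, pressure_psig: float, prv_psig: float) -> str:
    """Trip the HPCO before the mechanical relief valve would ever lift."""
    hpco = HPCO_PSIG[system_type]
    assert hpco < prv_psig, "HPCO must be set below the PRV set point"
    if pressure_psig >= hpco:
        return "HPCO trip: safe shutdown or backup switch-over, no refrigerant release"
    return "normal operation"

print(check_pressure("R2L", 185, prv_psig=225))  # normal operation
print(check_pressure("R2A", 212, prv_psig=235))  # HPCO trip
```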
Learn more
Explore the latest advancements in P2P direct-to-chip cooling for high-power density chips by reading the detailed study presented at the ASME InterPACK 2024 conference, Maturation of Pumped Two-Phase Liquid Cooling to Commercial Scale-Up Deployment.
Evaluation of P2P direct-to-chip cooling in AI data centers
Vertiv, NVIDIA, and Binghamton University evaluated a P2P direct-to-chip liquid cooling system's ability to manage and dissipate heat in high-density rack environments (see Figure 4) as part of the study “Advancing in Data Centers Thermal Management: Experimental Assessment of Two-Phase Liquid Cooling Technology.” The system features a Vertiv™ in-row R2L CDU with a cooling capacity of 160 kW, integrated with row and rack manifolds and server cooling loops. Testing included both hydraulic and thermal assessments to validate system performance.
Figure 4. The experimental setup used in the study: (a) P2P CDU, multi-racks, and PSU; (b) TTV and cooling loops.
Hydraulic testing
Hydraulic tests were conducted at a constant refrigerant supply temperature (RST) of 22°C, starting at 20 LPM per rack (or 4 LPM per cooling loop (CL)) and increasing to 36 LPM per rack (or 7.2 LPM per CL). The row manifold demonstrated efficient flow distribution, with a maximum pressure drop of 0.23 pounds per square inch (psi). However, the rack manifold and cooling loops exhibited higher pressure drops, reaching 7.6 psi at full heat load, due to the presence of flow regulators and QDs.
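The per-rack and per-loop figures quoted above imply five cooling loops per rack (20 LPM / 4 LPM). The sketch below reproduces that arithmetic across the flow sweep; the intermediate steps are assumed, while the endpoints come from the study.

```python
# The quoted flows imply 20 / 4 = 5 cooling loops per rack. This sketch just
# reproduces that arithmetic; sweep endpoints are from the study, steps assumed.

LOOPS_PER_RACK = 20 / 4  # 20 LPM per rack at 4 LPM per cooling loop -> 5 loops

for rack_lpm in (20, 24, 28, 32, 36):
    per_loop = rack_lpm / LOOPS_PER_RACK
    print(f"{rack_lpm} LPM/rack -> {per_loop:.1f} LPM per cooling loop")
# The 36 LPM/rack endpoint matches the reported 7.2 LPM per CL.
```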
Thermal testing
Thermal tests characterized various parameters under different heat loads, including pressure drop, saturation temperature (Tsat), Delta T subcooling (ΔTsub), cold plate thermal resistance, vapor exit quality, and heater case temperature. The system achieved a maximum heater case temperature of 56.4°C and a maximum vapor exit quality of 58%. The cold plate thermal resistance was calculated to be 0.012°C/W at Tsat, indicating efficient heat transfer.
Pressure drops
The row manifold maintained a low pressure drop of 0.23 psi, while the rack manifold and cooling loops experienced higher pressure drops, peaking at 7.6 psi at full heat load, primarily due to flow regulators and QDs.
Saturation temperature (Tsat)
The Tsat of the refrigerant is influenced by the chilled water source. Adjustments to Tsat are necessary to maintain the heater case temperature below specified thresholds, allowing efficient operation and preventing overheating.
Delta T subcooling (ΔTsub)
The ΔTsub entering the cold plates is affected by the pressure drop on the return side. As the heat load increases, the pressure drop rises by 4.8 times, increasing the subcooled temperature difference by 2.8 times. This highlights the importance of managing pressure drops to maintain optimal thermal performance.
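The mechanism is that local saturation temperature tracks pressure, so a higher cold plate inlet pressure yields more subcooling for the same liquid temperature. The sketch below illustrates this using the CoolProp property library (assuming it is installed) for R-134a; the pressures and liquid temperature are hypothetical values, not test data.

```python
# Illustration of how pressure raises subcooling: T_sat tracks pressure, so a
# higher cold plate inlet pressure means more subcooling at a fixed liquid
# temperature. Pressures and liquid temperature below are made up.

from CoolProp.CoolProp import PropsSI

def subcooling_degc(inlet_pressure_pa: float, liquid_temp_degc: float) -> float:
    """Delta-T subcooling = T_sat(P_inlet) - T_liquid at the cold plate inlet."""
    t_sat = PropsSI("T", "P", inlet_pressure_pa, "Q", 0, "R134a") - 273.15
    return t_sat - liquid_temp_degc

for p_bar in (8.0, 9.0, 10.0):
    dt_sub = subcooling_degc(p_bar * 1e5, liquid_temp_degc=25.0)
    print(f"{p_bar:.0f} bar inlet -> dTsub = {dt_sub:.1f} degC")
```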
Cold plate thermal resistance
The thermal resistance of the cold plates was measured under various heat loads. At a full load of 10 kW per CL, the maximum thermal resistance recorded was 0.012°C/W, indicating efficient heat transfer from the cold plates to the refrigerant.
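For readers wanting the underlying formula, thermal resistance is the case-to-refrigerant temperature difference divided by the heat passing through the plate. The saturation temperature and per-plate heat split below are assumptions chosen only to reproduce the reported 0.012°C/W order of magnitude.

```python
# How a cold plate thermal resistance like 0.012 degC/W is computed:
# R_th = (T_case - T_sat) / q, with q the heat through that cold plate.
# T_sat and the per-plate heat split are assumptions for illustration.

def thermal_resistance(t_case_degc: float, t_sat_degc: float, heat_w: float) -> float:
    """Case-to-refrigerant thermal resistance of one cold plate, degC/W."""
    return (t_case_degc - t_sat_degc) / heat_w

# e.g., a 56.4 degC case over an assumed 30 degC Tsat at 2.2 kW per plate:
print(f"R_th = {thermal_resistance(56.4, 30.0, 2200):.3f} degC/W")  # ~0.012
```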
Heater case temperature
The system maintained a maximum heater case temperature of 56.4°C, confirming its ability to keep electronic components within safe operating limits.
Vapor exit quality
The researchers measured vapor exit quality from the cold plates to evaluate phase change efficiency. At 10 kW per CL, the maximum vapor quality reached 58%. At 1 kW per CL, the exit vapor quality dropped to 5%, with 95% of the refrigerant remaining in the liquid phase. This variation demonstrates the system’s effective adaptation to varying heat loads.
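Vapor exit quality follows from a simple energy balance, x = q / (ṁ · h_fg). The sketch below uses an assumed per-loop mass flow and an approximate R-134a latent heat to reproduce the reported order of magnitude; it is not the study's calculation.

```python
# Exit vapor quality from an energy balance: x = q / (m_dot * h_fg). The mass
# flow and latent heat are assumed; only the 10 kW and 1 kW per-CL loads and
# the 58% / 5% reported qualities come from the study.

H_FG = 163e3  # approx. latent heat of R-134a near 40 degC, J/kg

def exit_quality(heat_w: float, m_dot_kg_s: float) -> float:
    """Fraction of refrigerant vaporized leaving the cold plates (0..1)."""
    return heat_w / (m_dot_kg_s * H_FG)

m_dot = 0.106  # kg/s per cooling loop, chosen so 10 kW gives ~58% quality
for load_w in (10_000, 1_000):
    print(f"{load_w / 1000:.0f} kW -> x = {exit_quality(load_w, m_dot):.2f}")
# -> 10 kW gives x = 0.58; 1 kW gives x = 0.06, close to the reported 5%.
```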
Learn more
Explore the latest advancements in P2P direct-to-chip cooling for high-power density chips by reading the detailed study presented at the ASME InterPACK 2024 conference, Advancing in Data Centers Thermal Management: Experimental Assessment of Two-Phase Liquid Cooling Technology.
Adapt to growing demands
As data centers evolve to support higher power densities, efficient heat dissipation from high-power components has become critical. Vertiv, in collaboration with industry partners, is conducting research and testing on P2P direct-to-chip cooling systems to address these thermal challenges. These studies validate the effectiveness of P2P cooling technology, paving the way for future data center cooling innovations.
Vertiv designs advanced cooling solutions that directly address the thermal management needs of high-performance systems. Implementing P2P direct-to-chip cooling enables efficient heat dissipation using cold plates placed directly on primary heat sources. This innovative approach not only enhances cooling efficiency but also supports the scalability and reliability of high-density deployments.