Delivering future-ready digital infrastructure requires intense focus, system-level integration, and ecosystem collaboration.
We are in a period of unprecedented acceleration. The pace of change is opening the door to breakthroughs once considered out of reach and pushing the frontiers of innovation in every direction.
Pushing those limits, however, cannot be done unilaterally. It requires an ecosystem.
That collaborative effort includes knowledge sharing; part of our contribution is the Vertiv™ Frontiers report, which explores the macro forces and technology trends we see reshaping future digital infrastructure. Together, we believe they provide a framework for anticipating future innovation.
Macro forces
Powerful macro forces, fueled by the rise of AI and accelerated compute, are influencing every layer of digital infrastructure, spanning technologies, architecture, and industry segments.
Technology trends
In response to these macro forces, we identified five key trends set to impact specific technology and market segments.
Looking first at those macro forces in a little more depth:
Extreme densification
This is the defining macro force, the effects of which are felt across the entire data center and technology landscape. AI has transformed the chip and rack density required to unlock the performance needed for this level of compute. Rack densities have jumped from the 6 to 10 kW of yesterday to the 140 kW racks supporting today's models, and are quickly advancing toward 600-plus kW racks and beyond. We are now moving into the age of the megawatt rack. That has tremendous implications for the underlying infrastructure in the data center space.
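To make the implication concrete, consider the airflow it would take to cool these racks with air alone. The back-of-envelope sketch below is purely illustrative (the 15 K air temperature rise and standard air properties are our assumptions, not figures from the report), but it shows why air cooling runs out of road well before the megawatt rack.

```python
# Illustrative back-of-envelope arithmetic: volumetric airflow required to
# remove a rack's full heat load with air, assuming a 15 K inlet-to-outlet
# temperature rise and standard air properties. All values are assumptions.
CP_AIR = 1.005    # kJ/(kg*K), specific heat of air
RHO_AIR = 1.2     # kg/m^3, air density near 20 degC
DELTA_T = 15.0    # K, assumed air temperature rise across the rack

def airflow_m3_per_s(rack_kw: float) -> float:
    """Volumetric airflow needed to absorb rack_kw of heat."""
    mass_flow_kg_s = rack_kw / (CP_AIR * DELTA_T)
    return mass_flow_kg_s / RHO_AIR

for rack_kw in (10, 140, 600, 1000):
    m3s = airflow_m3_per_s(rack_kw)
    print(f"{rack_kw:>5} kW rack -> {m3s:5.1f} m^3/s (~{m3s * 2118.88:,.0f} CFM)")
```

A 10 kW rack needs roughly 0.6 m^3/s of air; a 140 kW rack needs about fourteen times that through the same footprint, which is where liquid cooling takes over.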
Gigawatt scaling at speed
The most fascinating development to gain momentum recently is the onset of the gigawatt campus. That scale and breadth of equipment is hard to fathom until you see it in person. We're talking about sites that require hundreds of chillers and hundreds of power systems, with rows 40-plus feet long of multiple-hundred-kilowatt racks stitched together. That gigawatt scale requires a different level of thinking. It requires a different level of site design and planning, and ultimately, a different delivery model.
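Some rough arithmetic conveys the scale. The numbers below are hypothetical sizing assumptions, not Vertiv design figures, but they show how quickly the equipment counts mount at a gigawatt of IT load.

```python
# Hypothetical sizing arithmetic for a 1 GW campus. The per-unit capacities
# and PUE are illustrative assumptions, not Vertiv design figures.
campus_it_load_mw = 1000.0    # 1 GW of IT load
rack_kw = 140.0               # today's high-density AI rack
chiller_capacity_mw = 2.5     # assumed heat rejection per chiller
pue = 1.2                     # assumed power usage effectiveness

racks = campus_it_load_mw * 1000 / rack_kw
chillers = campus_it_load_mw / chiller_capacity_mw
total_draw_mw = campus_it_load_mw * pue

print(f"~{racks:,.0f} racks, ~{chillers:,.0f} chillers, ~{total_draw_mw:,.0f} MW total draw")
```

Even under these generous assumptions, that is roughly 7,000 high-density racks and several hundred chillers on a single site, which is exactly why gigawatt scale forces a different delivery model.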
Data center as a unit of compute
Data centers designed for AI, whether for training or inference, inherently perform better when they are designed and thought about as an entire system. A laptop is a complete turnkey system designed not only around a CPU; its memory, power infrastructure, and cooling are packaged into a system optimized to work together. We're seeing a similar approach emerge around the entire data center infrastructure. We need to take a more systematic approach and move away from point components and individual product thinking.
Silicon diversification
Much of what we have seen to date has been data centers built specifically for training AI models. We are now entering the next pivot with the proliferation of inference, which can take a range of forms. We will see enterprise-level models, inference in the cloud, and inference at scale. That pivot is opening the door for customers to think about various types of silicon. We are seeing additional use of Tensor Processing Units (TPUs), custom ASICs, and a lot of in-house developed silicon that is either application specific or takes a different approach to compute performance.
"Densification is driving different thinking. That means we need to undo and rethink certain paradigms that we have held sacred for the last couple of decades."
Powering up for AI
Densification is driving different thinking. That means we need to undo and rethink certain paradigms we have held sacred for the last couple of decades. To physically deliver this amount of electricity, we must think through different power architectures and topologies. That will let us remove some of the physical barriers to data center densification. This boils down to considering different AC power and higher-voltage DC architectures within the white space and the overall data center facility.
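The physics behind that shift is simple: for a given load, current scales inversely with voltage, and conductor size and resistive losses scale with current. The quick comparison below is a hypothetical sketch using common industry voltage options, not prescriptions from the report.

```python
# Why distribution voltage matters at rack densities of 140 kW and up:
# current (and with it busbar size and I^2*R loss) falls as voltage rises.
# The voltages compared are common industry options, chosen for illustration.
import math

P_RACK_W = 140_000.0  # one 140 kW rack

def ac_three_phase_amps(p_w: float, v_line_to_line: float, pf: float = 0.95) -> float:
    """Line current for a three-phase AC feed."""
    return p_w / (math.sqrt(3) * v_line_to_line * pf)

def dc_amps(p_w: float, v_dc: float) -> float:
    """Current for a DC feed at the given pole-to-pole voltage."""
    return p_w / v_dc

print(f"415 V three-phase AC : {ac_three_phase_amps(P_RACK_W, 415):>6,.0f} A")
print(f" 48 V DC             : {dc_amps(P_RACK_W, 48):>6,.0f} A")
print(f"800 V DC (+/-400 V)  : {dc_amps(P_RACK_W, 800):>6,.0f} A")
```

Delivering 140 kW at 48 V DC would take nearly 3,000 A; at 800 V DC it takes under 200 A, which is what makes higher-voltage DC architectures so attractive in the white space.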
Distributed AI
We will start to see distributed deployments for AI inference, which can mean a variety of applications. It can be a small one- or two-rack deployment in an enterprise, an on-prem deployment at a local hospital or school district, or a multi-megawatt cloud deployment. Data centers will start to look and feel different. Those deployments will be characterized by purpose-built infrastructure made for each application.
Energy autonomy accelerates
One of the critical bottlenecks we have as an industry is power availability for data centers. Demand for data center capacity far outstrips our ability to deploy pure utility power, both in the US and globally. The data center segment overall is very creative and very resilient, and one of the primary mechanisms for overcoming constraints in utility power availability is behind-the-meter power solutions. We're starting to see increased momentum around onsite power generation. Whether that means natural gas, nuclear, or other mechanisms, it will be very closely coupled to the data center infrastructure itself.
Digital twin-driven design and operation
One of the ways we can accelerate as an industry is by moving much more of the design process into the virtual world. Digital twins enable physics-based simulation for hardware development, prototype analysis, and even the design of entire facilities around our reference architectures. Digital twin technology allows us to simulate in a digital format before we ever put physical infrastructure together, giving us a much better path to accelerated deployment.
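As a flavor of what physics-based simulation means in practice, here is a deliberately minimal sketch: a lumped thermal model of a liquid-cooled rack stepped forward in time, the kind of building block a digital twin composes at far higher fidelity. Every parameter below is an illustrative assumption, not a Vertiv reference value.

```python
# Minimal lumped thermal model of a liquid-cooled rack, stepped forward in
# time. A real digital twin composes far higher-fidelity physics, but the
# principle is the same: test the design virtually before building it.
# All parameters are illustrative assumptions.
RACK_LOAD_KW = 140.0         # heat generated by the IT load
COOLANT_FLOW_KG_S = 4.0      # assumed coolant mass flow
CP_WATER = 4.186             # kJ/(kg*K), specific heat of water
THERMAL_MASS_KJ_K = 500.0    # assumed lumped thermal mass of rack + loop
SUPPLY_TEMP_C = 30.0         # coolant supply temperature
DT_S = 1.0                   # time step, seconds

return_temp_c = SUPPLY_TEMP_C
for _ in range(600):         # simulate ten minutes
    heat_in_kw = RACK_LOAD_KW
    heat_out_kw = COOLANT_FLOW_KG_S * CP_WATER * (return_temp_c - SUPPLY_TEMP_C)
    return_temp_c += (heat_in_kw - heat_out_kw) * DT_S / THERMAL_MASS_KJ_K

print(f"coolant return temperature settles near {return_temp_c:.1f} degC")
```

Sweeping flow rates or supply temperatures against a model like this, before any hardware exists, is the accelerated-deployment path digital twins open up.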
Adaptive, resilient liquid cooling
Liquid cooling has grown tremendously, and it's been exciting to be at the forefront of that innovation. Liquid cooling is now becoming the basis of design for high-density deployments. We're also just scratching the surface of the capability and intelligence possible within an entire liquid cooling system, as opposed to just a coolant delivery unit. I think we're poised for significant evolution in systems thinking for liquid cooling.
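To give a feel for what loop-level intelligence could look like beyond a standalone coolant delivery unit, here is a hypothetical sketch: a simple proportional-integral controller trimming pump speed to hold coolant return temperature as the rack load steps up. The toy plant model, gains, and limits are our assumptions, purely for illustration.

```python
# Hypothetical system-level control sketch: a PI loop trims pump speed to
# hold coolant return temperature at setpoint through a load step. The
# first-order plant model, gains, and limits are illustrative assumptions.
SETPOINT_C = 40.0
KP, KI = 0.08, 0.005
return_temp_c, pump_speed, integral = 40.0, 0.5, 0.0

for t in range(1200):
    load_kw = 100.0 if t < 600 else 140.0      # rack load steps up at t = 600 s
    # toy plant: excess heat warms the loop; pump speed scales heat removal
    return_temp_c += load_kw / 1000.0 - pump_speed * 0.15
    error = return_temp_c - SETPOINT_C         # positive when running hot
    integral += error
    pump_speed = min(1.0, max(0.1, 0.5 + KP * error + KI * integral))

print(f"after the step: pump at {pump_speed:.2f}, return temp {return_temp_c:.1f} degC")
```

The interesting shift is architectural: sensing and control distributed across the whole loop, from chips to chillers, rather than confined inside one unit.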
From macro forces to infrastructure reality
To sum up, from a Vertiv perspective, we almost feel an obligation to help carry the banner for the change that will be required and the innovation that will be necessary to go from today’s performance capabilities to where we need to be over the next five years and beyond.
It won't happen without a very collaborative, partner-oriented effort to change how we design systems, to rearchitect how we think about data centers, and to drive incredible innovation at an infrastructure level. It will require an intense focus on our roadmap, engineering investments, and technology innovations to make possible what once seemed almost impossible.
