
The AI era is defined by pushing the frontiers of what's possible

5 min. read
Delivering future-ready digital infrastructure requires intense focus, system-level integration, and ecosystem collaboration.

We are in a period of unprecedented acceleration. The pace of change is unlike anything the industry has seen, opening the door to breakthroughs once considered out of reach and pushing the frontiers of innovation in every direction.

Pushing those limits, however, cannot be done unilaterally. It requires an ecosystem.

That collaborative effort includes knowledge sharing; part of our contribution is the Vertiv™ Frontiers report, which explores the macro forces and technology trends we see reshaping future digital infrastructure. Together, we believe they provide a framework for anticipating future innovation.

Macro forces

Powerful macro forces, fueled by the rise of AI and accelerated compute, are influencing every layer of digital infrastructure, spanning technologies, architecture, and industry segments.

Technology trends

In response to these macro forces, we identified five key trends set to impact specific technology and market segments.

Let's look first at those macro forces in a little more depth.

Extreme densification

This is the defining macro force, the effects of which are felt across the entire data center and technology landscape. AI has transformed what is required of chip and rack density to unlock the performance needed for this level of compute. Rack densities have jumped from yesterday's 6 to 10 kW to the 140 kW racks supporting today's models, and are quickly advancing toward 600-plus kW racks and beyond. We're now moving into the age of the MW rack. That has tremendous implications for the underlying infrastructure in the data center space.
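As a rough, purely illustrative sketch, the scale of that jump is easy to quantify. The densities below simply reuse the figures cited above; they are back-of-envelope numbers, not Vertiv design values.

```python
# Back-of-envelope growth in rack power density, using the figures in the text.
# Illustrative only; not Vertiv specifications.

rack_density_kw = {
    "yesterday": 10,       # typical 6-10 kW racks
    "today": 140,          # racks supporting today's AI models
    "near future": 600,    # 600-plus kW racks
    "MW era": 1000,        # the megawatt rack
}

baseline = rack_density_kw["yesterday"]
for era, kw in rack_density_kw.items():
    print(f"{era:>12}: {kw:>5} kW per rack ({kw / baseline:.0f}x yesterday)")
```

Even before reaching the megawatt rack, today's densities already represent more than an order of magnitude over what mainstream facilities were built for.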

Gigawatt scaling at speed

The most fascinating development to gain momentum in the recent past is the onset of the gigawatt campus. That scale and breadth of equipment is hard to fathom until you see it in person. We're talking about sites that require hundreds of chillers and hundreds of power systems, with rows of 40-plus feet of multi-hundred-kilowatt racks stitched together. That gigawatt scale requires a different level of thinking. It requires a different level of site design and planning, and ultimately, a different delivery model.
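To put that scale in rough numbers, here is a hypothetical, overhead-free calculation using the 140 kW rack density mentioned earlier; it ignores cooling and power overhead entirely and is not a site design figure.

```python
# Roughly how many of today's high-density racks fill a 1 GW campus.
# Ignores cooling/power overhead (PUE); illustrative arithmetic only.

campus_it_load_w = 1_000_000_000   # 1 GW of IT load (assumption)
rack_power_w = 140_000             # 140 kW per rack, per the text

racks = campus_it_load_w // rack_power_w
print(f"A 1 GW campus at 140 kW per rack holds roughly {racks:,} racks")
```

Thousands of racks of this density at a single site is what drives the need for the different planning and delivery models described above.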

Data center as a unit of compute

Data centers designed for AI, whether for training or inference, inherently perform better when they are designed and thought about as an entire system. A laptop is a complete turnkey system: it is designed not only around a CPU, but also around memory, power infrastructure, and cooling, all packaged into a system optimized to work together. We're seeing a similar approach emerge around the entire data center infrastructure. We need to take a more systematic approach and move away from point components and individual product thinking.

Silicon diversification

A lot of what we have seen to date is data centers being built specifically for training AI models. We are now entering the next pivot with the proliferation of inference, which can take a range of different forms. We will see enterprise-level models, inference in the cloud, and inference at scale. That pivot is opening the doors for customers to think about various types of silicon. We are seeing additional use of Tensor Processing Units (TPUs), custom ASICs, and a lot of in-house developed silicon that is either application-specific or takes a different approach to compute performance.

Powering up for AI

Densification is driving different thinking. That means we need to undo and rethink certain paradigms we have held sacred for the last couple of decades. To physically deliver this amount of electricity, we must think through different architectures and topologies within power. That will enable us to unlock some of the physical barriers to data center densification. This will boil down to considering different AC power and higher-voltage DC architectures within the white space and the overall data center facility.

Distributed AI

We will start to see distributed deployments of AI inference, which can mean a variety of different applications. It can be a small one- or two-rack deployment in an enterprise, an on-prem deployment at a local hospital or school district, or a multi-megawatt cloud deployment. Data centers will start to look and feel different. Those infrastructure deployments will be characterized by purpose-built infrastructure made for that application.

Energy autonomy accelerates

One of the critical bottlenecks we have as an industry is power availability for data centers. The demand for data center capacity far outstrips our ability to deploy pure utility power, both in the US and globally. The data center segment overall is very creative and very resilient. One of the primary mechanisms for us to overcome constraints in utility power availability is behind-the-meter power solutions. We're starting to see increased momentum around onsite power generation. Whether that uses natural gas, nuclear, or other mechanisms, it will be very closely coupled to the data center infrastructure itself.

Digital twin-driven design and operation

One of the ways in which we can accelerate as an industry is by leveraging a lot more of the design process in a virtual world. Digital twins enable a physics-based simulation of hardware development, prototype analysis, and even entire facility design around our reference architectures. Digital twin technology allows simulation in a digital format before we ever need to put physical infrastructure together. It gives us a much better path to accelerated deployment.

Adaptive, resilient liquid cooling

Liquid cooling has grown tremendously, and it's been exciting to be at the forefront of that innovation. Liquid cooling is now becoming the basis of design for high density deployments. We're also just scratching the surface on the capability and intelligence within an entire liquid cooling system as opposed to just a coolant delivery unit. I think we're poised for significant evolution in systems thinking for liquid cooling.

From macro forces to infrastructure reality 

To sum up, from a Vertiv perspective, we almost feel an obligation to help carry the banner for the change that will be required: the innovation necessary to go from today's performance capabilities to where we need to be over the next five years and beyond.

It won't happen without a very collaborative, partner-oriented effort to change how we design systems, to rearchitect how we think about data centers, and to drive incredible innovation at an infrastructure level. It will require an intense focus on our roadmap, engineering investments and technology innovations to unlock and make possible what once seemed almost impossible.


Blog post · Artificial intelligence · Data center innovation · Digital first design · Extreme densification · Frontiers · Gigawatt-scale campuses
