
The innovation and rate of growth in artificial intelligence (AI) and high-performance computing (HPC) applications are staggering. Autonomous cars, fraud detection, business intelligence, affinity marketing, personalized medicine, Alexa, Siri, smart cities and the Internet of Things (IoT) are just a few of the commercial and consumer-driven applications exploding in use on a global scale. Underlying these offerings are dense computing platforms and IT infrastructures that require highly specialized data center environments in order to perform reliably and scale efficiently.

Legacy on-premises and colocation data centers, built years ago to support general-purpose computing servers, are becoming obsolete in certain markets because of the intense power and cooling requirements of HPC and AI servers. These servers are built on processing-intensive components such as graphics processing unit (GPU) cards and dual central processing unit (CPU) architectures, and they routinely draw between 500 watts and one kilowatt (kW) of power per server rack unit deployed. This means a fully packed HPC or AI server cabinet, with 45 rack units available, can easily draw 25-30 kW or more of critical power when running multiple workloads, generating extreme heat.
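
To put those figures in perspective, here is a minimal back-of-the-envelope sketch in Python using the per-rack-unit draws quoted above. The numbers are the article's illustrative examples, not a sizing tool.

```python
# Rough cabinet power estimate from the per-rack-unit draws cited above
# (500 W to 1 kW per rack unit, 45-RU cabinet). Illustrative only.

def cabinet_draw_kw(watts_per_ru: float, rack_units: int = 45) -> float:
    """Total draw in kW for a fully packed cabinet."""
    return watts_per_ru * rack_units / 1000.0

for watts in (500, 750, 1000):
    print(f"{watts} W/RU x 45 RU = {cabinet_draw_kw(watts):.1f} kW per cabinet")
# 500 W/RU  -> 22.5 kW per cabinet
# 1000 W/RU -> 45.0 kW per cabinet
# consistent with the 25-30+ kW figure cited for real mixed workloads
```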

Houston, we have a problem!

Legacy data center environments have been engineered with air-based systems to cool 100-150 watts of power per square foot, equating to 3-5 kW of critical draw per cabinet. This amount of power and cooling is insufficient to support the rollout of AI and HPC servers and represents a massive potential bottleneck in AI innovation.
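
For comparison, the legacy figures above can be tied together with a rough floor-space assumption. The roughly 30 square feet allocated per cabinet (cabinet footprint plus aisle share) in the sketch below is an assumption on my part, used only to show how watts per square foot maps to kilowatts per cabinet.

```python
# Legacy air-cooled design point: 100-150 W per square foot.
# Assuming ~30 sq ft of floor space allocated per cabinet
# (an assumed allocation including aisles, not a published spec),
# this works out to the 3-5 kW per cabinet cited above.

SQFT_PER_CABINET = 30  # assumed allocation, including aisle space

for w_per_sqft in (100, 150):
    kw_per_cabinet = w_per_sqft * SQFT_PER_CABINET / 1000.0
    print(f"{w_per_sqft} W/sq ft -> ~{kw_per_cabinet:.1f} kW per cabinet")
# 100 W/sq ft -> ~3.0 kW; 150 W/sq ft -> ~4.5 kW
```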

Robust power and cooling capabilities in the data center are essential to delivering on the full promise of AI. Without a modern data center environment, AI hardware becomes expensive to deploy, difficult to scale, and prone to significant degradation in performance and operational reliability. HPC and AI footprints cannot be optimized unless cabinets can be fully packed from top to bottom, with every server rack unit utilized, and sufficient power and cooling is available to support these dense workloads.

Colovore and Vertiv have teamed up for six years now to address these challenges. Together our companies enable the growth of AI and HPC applications by delivering a data center environment optimized for high-density IT infrastructure deployments.

Based in Silicon Valley, Colovore’s nine-megawatt (MW) colocation facility is built on Vertiv’s electrical and mechanical infrastructure, using liquid as the primary cooling medium through rear-door heat exchangers (RDHx). Every cabinet, wall to wall, offers 35 kW of power and cooling capacity, enabling robust yet efficient and scalable operations. This allows AI footprints to be provisioned in the most compact manner possible, which yields significant operating and capital expense savings and dramatically improves scalability. Scaling up within a data center cabinet is, for many reasons, far superior to scaling out across more and more cabinets, floor space, and associated infrastructure.
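
A quick sketch of why scaling within the cabinet matters: the cabinet counts below compare a hypothetical 1 MW IT deployment at the legacy and high-density capacities discussed above. The 1 MW load is an assumed example, not a customer figure.

```python
import math

# Cabinets needed for the same IT load at legacy (~5 kW) versus
# high-density (35 kW) cabinet capacities. The 1 MW load is hypothetical.

def cabinets_needed(it_load_kw: float, kw_per_cabinet: float) -> int:
    return math.ceil(it_load_kw / kw_per_cabinet)

it_load_kw = 1000.0  # assumed 1 MW deployment
for capacity in (5.0, 35.0):
    print(f"{capacity:.0f} kW cabinets: {cabinets_needed(it_load_kw, capacity)} needed")
# 5 kW cabinets  -> 200 cabinets
# 35 kW cabinets -> 29 cabinets
```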

Founded in 2014, Colovore was determined from the outset to provide high-density colocation services, so we designed our facility around liquid cooling. After evaluating a number of solutions, we standardized on the Vertiv Liebert DCD passive rack door, and the results have been impressive: six years in service, 100 percent uptime to date, and a power usage effectiveness (PUE) of 1.1.
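
For readers unfamiliar with the metric, PUE is total facility power divided by IT power, so a PUE of 1.1 means roughly 10 percent overhead for cooling and power distribution. The kilowatt figures in the sketch below are illustrative assumptions, not Colovore's actual loads.

```python
# PUE = total facility power / IT power. Illustrative numbers only.

def pue(total_facility_kw: float, it_load_kw: float) -> float:
    return total_facility_kw / it_load_kw

it_load_kw = 1000.0   # assumed 1 MW of IT load
overhead_kw = 100.0   # assumed cooling and power-distribution overhead
print(f"PUE = {pue(it_load_kw + overhead_kw, it_load_kw):.2f}")  # -> 1.10
```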

Our customers range from Silicon Valley startups to Fortune 500 enterprises, with deployment sizes of 10 kW to multiple MW of capacity. Vertiv’s cabinets and rear-door thermal management capabilities have ensured we can reliably support the deployment of thousands of HPC and AI servers drawing intense power in each cabinet and generating extreme heat.

While we all marvel at the rapid uptake of consumer and professional applications leveraging real-time AI and HPC processing and analytics capabilities, it is important to understand that the workhorses of these applications — the dense and powerful computing platforms themselves — require specialized environments. The Colovore and Vertiv teams have combined our expertise to deliver these environments to the AI industry, and we are excited about continuing to meet those unique requirements going forward.

Read the Case Study

Subscribe to the Vertiv blog today to stay on top of the latest technology trends, technologies, and news.