
“AI is becoming as fundamental as electricity or water—it’s infrastructure. If you don’t have it, you can’t compete.”

Martin Olsen, VP for Segment Strategy and Deployment for Data Centers at Vertiv

On the final day of DCD AI Week, Stephen Worn of DCD hosted a discussion on the challenges of building and operating AI-capable data centers. He was joined by Martin Olsen, Vertiv’s VP for Segment Strategy and Deployment for Data Centers, and Jim McGregor, founder and principal analyst of TIRIAS Research. The conversation focused on managing high-density workloads, optimizing operational efficiency, and adapting existing infrastructure to meet growing demand.

How is AI reshaping the tech industry and society?

Jim McGregor: AI is more than a technology shift—it’s a societal one. It’s changing how people learn, work, and live. For years, AI was in the background, quietly powering things like battery management in mobile devices. Now it’s embedded in daily routines through large language models, AI agents, and enterprise applications. The demand growth is staggering. We’ve seen token generation rise from the trillions today toward projected quadrillions by 2030, and that doesn’t even include growth in images, video, and gaming. This demand is driving massive pressure on data centers.
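The trillions-to-quadrillions figure implies roughly a 1,000x increase. A quick back-of-the-envelope check of the implied annual growth rate, assuming (our assumption, not the source's) that the growth spans about six years:

```python
# Back-of-the-envelope check of the implied token-generation growth rate.
# Assumptions (illustrative, not from the source): ~1,000x total growth
# (trillions -> quadrillions) spread over ~6 years (2024-2030).
growth_total = 1_000   # quadrillion / trillion
years = 6

# Constant annual growth factor g satisfies g**years == growth_total.
annual_factor = growth_total ** (1 / years)
annual_pct = (annual_factor - 1) * 100

print(f"Implied annual growth: {annual_factor:.2f}x (~{annual_pct:.0f}% per year)")
```

Even under these rough assumptions, the implied compound growth is on the order of tripling every year, which is consistent with the "massive pressure" McGregor describes.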

What’s the main challenge with deploying AI infrastructure at scale?

Martin Olsen: The challenge is speed and scale. Traditional deployment is too slow for AI. You can source GPUs in six to nine months, but the infrastructure—power, cooling, and networking—still takes 24 to 36 months to deliver. Data centers need to be treated like utilities, with compute delivered as a packaged unit. At Vertiv, we think in terms of the “unit of compute,” where power, cooling, networking, and IT come together in modular, prefabricated systems. This industrialization cuts deployment timelines from years to months.

What’s holding data centers back from meeting AI demand?

Jim McGregor: It’s not GPUs anymore. The real bottleneck is the data center itself. We’ve built a culture of customization—every facility is unique. That makes projects slow and complex. We need to think differently, standardizing and modularizing so that building a data center feels closer to assembling pre-engineered components than starting from scratch.

How do modular and prefabricated systems help?

Martin Olsen: Prefabrication takes work out of the field and into the factory. Instead of relying on months of installation, commissioning, and fit-out, operators receive standardized building blocks that are tested and ready to deploy. This approach reduces time, improves quality, and creates a more predictable path to scaling. In many ways, the data center has to be thought of like a server—appliance-like and productized at a very large scale.

Jim McGregor: Exactly. Think of it as Lego-like building blocks: IT pods, e-houses, power skids, racks. Standardization makes deployment faster and upgrades easier.

How should data centers prepare for future AI workloads?

Jim McGregor: Flexibility is key. New GPU generations arrive every year, but other processors are coming too—quantum and neuromorphic systems. Nobody wants to build a billion-dollar facility tailored to one chip that becomes obsolete in two years. Data centers must be modular and designed for hybrid environments. Incremental upgrades and refreshes need to be possible without downtime.

Martin Olsen: That’s why interfaces matter—electrical, mechanical, structural, and digital. They need to be tightly integrated, efficient, and compartmentalized so operators can refresh a portion of infrastructure while the rest keeps running. With densities rising to 600 kW or more per rack, any inefficiency or delay can carry serious financial risk.

How do rising densities and power demands change design strategy?

Martin Olsen: Densities are scaling quickly, with line of sight to 600 kW per rack and beyond. That requires new approaches to power distribution, like moving to 800-volt or even 1,500-volt DC architectures. Fewer conversions mean higher efficiency. On the thermal side, we’ve already shifted to liquid cooling, but as densities rise further, we’ll need new methods to handle the heat.
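Olsen's point about voltage and conversion stages can be sketched numerically. The figures below are illustrative assumptions, not Vertiv specifications: current draw for a 600 kW rack at a few candidate bus voltages, and the cumulative efficiency of hypothetical conversion chains.

```python
# Illustrative sketch (assumed figures, not Vertiv specs): why higher bus
# voltages and fewer conversion stages matter at AI-scale rack densities.

RACK_POWER_W = 600_000  # 600 kW per rack, the density cited in the discussion

# Current draw I = P / V: higher voltage means less current per rack,
# hence less copper and lower resistive (I^2 * R) distribution losses.
for volts in (415, 800, 1_500):
    amps = RACK_POWER_W / volts
    print(f"{volts:>5} V bus -> {amps:,.0f} A per rack")

# End-to-end efficiency of a conversion chain is the product of the
# stage efficiencies, so removing a stage helps multiplicatively.
def chain_efficiency(stage_efficiencies):
    eff = 1.0
    for stage in stage_efficiencies:
        eff *= stage
    return eff

legacy = chain_efficiency([0.98, 0.96, 0.94])  # assumed three-stage AC path
direct_dc = chain_efficiency([0.98, 0.97])     # assumed two-stage DC path
print(f"Assumed legacy chain:    {legacy:.1%}")
print(f"Assumed direct-DC chain: {direct_dc:.1%}")
```

With these hypothetical stage efficiencies, dropping one conversion stage recovers several percentage points of end-to-end efficiency, which at hundreds of kilowatts per rack translates directly into wasted power and cooling load.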

Jim McGregor: This is why the traditional server rack design will have to evolve. Pizza-box servers won’t cut it. Future designs may be denser, brick-like compute modules or entirely new pod structures.

What about power supply—can utilities keep up?

Martin Olsen: No, utilities can’t keep pace. That’s why on-site generation and microgrids are critical. Natural gas turbines with a roadmap toward hydrogen, combined with advanced storage, create energy sovereignty. You still use the grid, but you augment it with local capacity to maximize uptime and scalability.

Jim McGregor: We’re already seeing hyperscalers and colocation providers invest in their own generation for this reason. It’s the only way to reliably scale.

Can existing data centers handle AI?

Jim McGregor: Not most of them. Over 90 percent of existing data centers can’t handle the weight, density, or power needs of racks like NVIDIA’s NVL72. Many raised floors can’t support the load. Retrofitting can help, but much of the new AI capacity will come from new builds.

Martin Olsen: That doesn’t mean everything has to be greenfield. Many facilities can be retrofitted or augmented with microgrids. But AI is forcing operators to think in terms of digital twins—designing and testing facilities virtually before they’re built, and then carrying that model through design, deployment, operation, and optimization.

What are the key takeaways for building AI-ready data centers?

Jim McGregor: The only constant is change. From chips to racks to facilities, everything is evolving faster than ever. Demand isn’t slowing. Operators must embrace modularity, flexibility, and a willingness to rethink traditional models.

Martin Olsen: Three things stand out. First, industrialization—treat data centers as standardized units of compute. Second, digital twins—make facilities born-digital and carry that fidelity across their lifecycle. Third, energy sovereignty—develop microgrids and on-site generation to keep pace with power demand.

View the full session: The AI data center of the future: Where are we headed? - DCD
