Artificial intelligence is reshaping data center demands across Latin America. The rapid growth of high-density workloads requires a new approach to power, cooling and scalability in critical infrastructure. Learn the key design imperatives to build an AI-ready environment.
AI is one of the technology areas with the greatest economic potential, changing how companies operate and innovate, creating new business opportunities, and transforming industries. According to IDC's FutureScape report, by 2027 the top 5,000 companies in Latin America will allocate more than 25% of core IT spending to AI initiatives, leading to a double-digit increase in the rate of product and process innovation.
In the region, this represents an opportunity for data center operators to respond to these needs. Research from Morgan Stanley predicts that the power demand of generative AI will grow at an annual rate of 70% through 2027. This makes power management one of the biggest challenges for AI-enabled data centers, where ensuring consistent power availability and quality is critical for maintaining efficiency and operational reliability.
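To put the Morgan Stanley projection in perspective, a quick compounding sketch shows what 70% annual growth means over a few years. The baseline figure below is an illustrative placeholder, not a number from the report; only the 70% rate comes from the text above.

```python
# Sketch: cumulative effect of 70%/year growth in generative-AI power demand.
# baseline_mw is an illustrative assumption; 0.70 is the cited growth rate.

def projected_demand(baseline_mw: float, annual_growth: float, years: int) -> float:
    """Compound the baseline power demand over the given number of years."""
    return baseline_mw * (1 + annual_growth) ** years

# At 70%/year, demand roughly quintuples in three years (1.7 ** 3 ~ 4.9x).
for year in range(4):
    print(year, round(projected_demand(100.0, 0.70, year), 1))
```

The takeaway is the compounding curve itself: facilities planned against today's demand can be undersized within a single refresh cycle.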
Rethinking infrastructure design for success
The design strategies and processes that have been employed for decades also need to be updated. To meet these challenges, Vertiv has developed AI-specific design principles to meet new workload and density requirements:
- Design power and cooling holistically: A holistic approach to infrastructure is required to meet AI’s simultaneous power and cooling demands. By pairing highly efficient integrated technologies such as direct-to-chip liquid cooling with advanced power infrastructure, holistically designed solutions improve overall efficiency, enable scalability, and ensure that AI workloads are not throttled or slowed by infrastructure limitations.
- Make effective use of available power: AI is projected to create unprecedented growth in data center power consumption. AI racks must use every watt as efficiently as possible, necessitating designs that eliminate stranded power by aligning AI clusters to data center capacity blocks and leveraging the latest advances in equipment efficiency. Real-time monitoring and optimization of power distribution through out-of-band management helps eliminate inefficiencies and optimize resource usage. In Latin America, data centers with more than 50 MW of capacity are slated to open or begin construction in 2025; the more efficiently those systems serve this growing demand, the less power that processing will consume.
- Balance TCO, redundancy, and blast radius: Maximizing the value of AI infrastructure requires a careful analysis of total costs, redundancy, and the potential scope of damage that could occur in the event of a failure (blast radius). Achieving the proper balance optimizes capital investment, risk management, scalability, and reliability. In the event of system failures, remote out-of-band management can reduce recovery times from hours to minutes.
- Prepare for AI workload surges: AI workloads can have significant variances in their resource requirements, leading to dynamic computing demand. Infrastructure must be designed to accommodate dynamic workloads through buffer capacity and the use of advanced system-level controls.
- Leverage liquid- and air-cooling technologies: Combining liquid and air cooling technologies allows the strengths of each technology to complement each other. This results in a solution that is flexible to address varying cooling needs across workloads and is efficient and scalable.
- Design for the future: Anyone designing for AI today must have an eye on the future. While AI is delivering value across multiple industries, it is still in the earliest phases. Vertiv is planning for a future in which the computing capacity of a 1 MW data center will be packed into a single rack.
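The "stranded power" point in the list above can be made concrete with a small packing calculation: power is stranded when whole AI racks do not divide evenly into a fixed capacity block. The block and rack sizes below are illustrative assumptions, not Vertiv specifications.

```python
# Sketch: stranded power when AI cluster racks don't align with a
# data center's capacity blocks. Sizes are illustrative assumptions.

def stranded_power_kw(block_kw: float, rack_kw: float) -> float:
    """Power left unusable in one capacity block after packing whole racks."""
    racks_per_block = int(block_kw // rack_kw)
    return block_kw - racks_per_block * rack_kw

# A 1,000 kW block filled with 132 kW AI racks fits 7 racks and
# strands 76 kW; sizing racks to divide the block evenly strands none.
print(stranded_power_kw(1000, 132))
print(stranded_power_kw(1000, 125))
```

Multiplied across dozens of blocks, that per-block remainder is why the design principle calls for aligning clusters to capacity blocks rather than sizing them independently.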
Simplify the transition to high density
Some companies have run into difficulties due to a lack of space for high-density racks, or have found limitations on the electrical side that require a structural redesign. One success story implemented by Vertiv is that of Colovore, a Silicon Valley data center specifically designed to support high-density loads associated with AI, machine learning and big data. With up to 50 kW of capacity per rack and a pay-per-kW model, Colovore has optimized its infrastructure to maximize energy and thermal efficiency in a limited space.
To facilitate the implementation of AI in your data center, consider modular systems that allow the infrastructure to grow as the IT load increases. This includes chillers that operate at high water temperatures, free cooling systems, UPS systems whose capacity can be expanded through additional power modules, and electrical distribution via busway, among others.
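The modular-UPS idea above can be sketched as a simple sizing step: as the IT load grows, only additional power modules are added, while keeping a redundant spare (N+1). The module rating and load steps are illustrative assumptions, not product figures.

```python
# Sketch: N+1 modular UPS sizing for a growing IT load.
# module_kw and the load steps are illustrative assumptions.
import math

def ups_modules_needed(it_load_kw: float, module_kw: float, redundancy: int = 1) -> int:
    """Whole modules needed to carry the load, plus redundant spares (N+X)."""
    n = math.ceil(it_load_kw / module_kw)
    return n + redundancy

# As the load doubles, the frame and distribution stay in place;
# only the module count changes.
for load_kw in (200, 400, 800):
    print(load_kw, ups_modules_needed(load_kw, 250))
```

The design choice this illustrates is deferring capital: capacity is bought module by module as demand materializes, rather than all up front.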
To simplify this move to high density, Vertiv offers Vertiv™ 360AI, a portfolio of solutions designed to enable enterprises to run AI systems even in environments that are not ready for high density. These solutions combine power and cooling with remote management and lifecycle maintenance to deliver a complete solution that is easy to deploy, with up to 50% less deployment time than typical infrastructure installations.
Learn more about the solutions that can prepare your data center to be AI Ready here.