
How AI Is Driving Data Center Transformation - Part 1

Data Center Customers Demand Specialized Infrastructure to Process AI Workloads

With the introduction of generative AI and ChatGPT at the end of 2022, enterprises began exploring AI use cases and running pilots for private LLMs - with a few already in production. Conventional server-grade CPUs are not suited to AI processing, so data centers are adapting their infrastructure to meet the demands of AI workloads. But fitting GPUs into server racks has created new challenges, including excessive heat generation and higher power requirements - calling for new data center innovation and redesign.


The increasing adoption of AI in enterprises cannot be attributed solely to the introduction of ChatGPT and generative AI. Businesses are exploring ways to use AI to increase the efficiency of business processes, improve decision-making, and enhance existing products and services.

Thirty-four percent of the 2,443 global respondents to Gartner's 2024 CIO and Technology Executive Survey said they have already deployed AI workloads in production, and another 22% said they plan to deploy AI workloads in production within the next 12 months.


"Almost every customer we speak to has an AI agenda. They are either running or intend to run pilots. Some want to scale. Others have identified use cases for which they want to deploy GPUs or AI clouds. And yet others want to consume AI services from the cloud," said Sumit Mukhija, executive director and CEO at ST Telemedia Global Data Centres (India). "We are at a nascent stage where [business leaders] want to understand what AI can do for their businesses. I see AI as an extension of the cloud and digital adoption."

The data center business was growing steadily at 13.5% annually before the emergence of AI, Doug Adams, CEO and president of NTT Global Data Centers and Submarine Cables, told ISMG. Industry forecasts now predict a substantial acceleration, with 23% growth for the global data center industry, he said.

"The landscape has rapidly evolved in the past six months to a year, with the advent of the new era of AI technology," Adams said. "This technological shift has propelled the industry into a remarkable growth trajectory, a significant leap influenced primarily by AI."

The demand for data centers is driven by generative AI and other AI workloads, said Jon Lin, EVP and GM of data center services, Equinix. "This [demand] is seen across most of the major verticals, but it is particularly pronounced across financial services, healthcare, gaming, e-commerce applications, education, manufacturing and technology," he said.

Each sector uses generative AI for specific applications. Financial institutions use AI for fraud detection and risk analysis, while healthcare institutions use it for diagnostics and personalized treatment. In manufacturing, AI enhances operational efficiency, including machine learning-driven robotics in factories and logistics. The technology sector employs AI for a range of innovative applications, both operational and customer-facing.

What's Driving Demand?

The AI transformation trend in enterprises is taking a path similar to that of cloud computing in 2012 and 2013, when infrastructure was offered as a service through a pay-as-you-use model. Data center providers, including Equinix, ST Telemedia Global Data Centres (India), NTT Global Data Centers and Yotta Data Services, are preparing to offer advanced data center infrastructure and AI services, such as GPU as a service, to cater to the bespoke needs of their customers. To do this, they will need to invest in adapting their data centers to process AI workloads.

Most enterprises lack the financial resources to make infrastructure investments on a large scale. For instance, the flagship Nvidia H100 GPU (14,592 CUDA cores, 80GB of HBM3 capacity, 5,120-bit memory bus), which is designed for generative AI, costs $30,000 as of August 2023.
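
For a rough sense of the sums involved, here is a back-of-the-envelope sketch that assumes the $30,000-per-GPU figure above and a hypothetical eight-GPU server configuration (networking, storage, power and cooling excluded) - the GPU bill alone for even a modest cluster runs into millions of dollars:

# Back-of-the-envelope estimate of GPU hardware cost for a small AI cluster.
# Assumptions (illustrative only): $30,000 per H100, eight GPUs per server,
# 32 servers; networking, storage, power and cooling are excluded.
GPU_PRICE_USD = 30_000
GPUS_PER_SERVER = 8
SERVERS = 32

total_gpus = GPUS_PER_SERVER * SERVERS
gpu_capex_usd = total_gpus * GPU_PRICE_USD
print(f"{total_gpus} GPUs -> ${gpu_capex_usd:,} in GPU hardware alone")
# Prints: 256 GPUs -> $7,680,000 in GPU hardware alone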

There is also a scarcity of Tensor Core GPUs, as data center providers, including Yotta Data Services, place bulk orders. It has been widely reported that Nvidia's manufacturing partner TSMC can barely meet the demand for GPUs.

For now, enterprises depend on hyperscalers - Microsoft, AWS and Google - and specialized AI companies such as H2O.ai, OpenAI, Clarifai and AlphaSense, among others, to process their AI workloads. Some infrastructure providers, including Hewlett Packard Enterprise and Equinix, have partnered with Nvidia. But relying on external providers can lead to extremely high long-term total cost of ownership and create compliance challenges, which has increased the demand for regional data centers - an option that may also substantially reduce costs.

"To harness the potential of generative AI, enterprises need adaptable, scalable hybrid infrastructure in their local markets to bring AI supercomputing to their data," said Charles Meyers, president and CEO of Equinix.

Enterprises also want complete control over their AI models and must ensure data privacy in compliance with policies and regulations. Some governments insist on data residency, requiring local data centers to store and process data.

"As enterprises increasingly seek to harness AI for competitive advantage, they often need a non-public environment where they can retain full control over their sensitive and proprietary data," said Dave McCarthy, research vice president, cloud and edge infrastructure services, IDC.

Equinix is encouraging enterprises to start building private AI and is pitching private AI services built on Nvidia DGX infrastructure.

In response, local infrastructure providers are building more data centers purpose-built for AI services. For instance, India's Yotta Data Services will launch its Shakti Cloud AI platform in February 2024. It will include various PaaS offerings, including foundation AI models and applications, to help Indian enterprises create powerful AI tools and products.

Yotta Data Services has already placed a large order for Nvidia H100 Tensor Core GPUs - powerful processors for AI and HPC workloads - and plans to go operational with 16,384 GPUs by June 2024. It has committed an investment of INR 16,000 crore (approximately $1.92 billion) to build Shakti Cloud, a platform that will be powered by GPU clusters.
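
To put those figures in rough perspective - an illustrative calculation only, assuming the roughly $30,000-per-GPU price cited earlier and an exchange rate of about INR 83 per U.S. dollar - the GPUs themselves would account for only part of the committed investment, with the remainder presumably covering facilities, networking, power, cooling and other costs:

# Illustrative scale check for the Shakti Cloud figures cited above.
# Assumptions: roughly $30,000 per GPU and roughly INR 83 per U.S. dollar.
GPUS = 16_384
GPU_PRICE_USD = 30_000
INVESTMENT_INR_CRORE = 16_000        # 1 crore = 10 million INR
INR_PER_USD = 83

gpu_capex_usd = GPUS * GPU_PRICE_USD                               # ~$0.49 billion
investment_usd = INVESTMENT_INR_CRORE * 10_000_000 / INR_PER_USD   # ~$1.93 billion
print(f"GPU hardware: ~${gpu_capex_usd / 1e9:.2f}B of a ~${investment_usd / 1e9:.2f}B commitment")
# Prints: GPU hardware: ~$0.49B of a ~$1.93B commitment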

The demand for GPUs and associated power and cooling infrastructure in data centers is coming from almost every industry, but the immediate demand is from startups and system integrators, said Sunil Gupta, co-founder, MD and CEO of Yotta Data Services. A few large enterprises that already have AI workloads in production will also require specialized data center infrastructure. Additionally, the government wants to partner with private service providers, he said.

"As of now, I do not see demand from CIOs who are just exploring business use cases. CIOs will depend on startups or system integrators to develop their LLMs," Gupta said.

On the service delivery side, the scarcity of GPUs and enterprises' need to scale are driving demand for specialized infrastructure in data centers. This also poses challenges for infrastructure providers, who must find innovative ways to cope.

*The second part of this feature will explore how data center providers are coping with challenges and adapting their infrastructure for AI workloads.

*With inputs from Sandhya Michu, Senior Assistant Editor - CIO.inc, ISMG


About the Author

Brian Pereira


Sr Executive Editor - CIO.inc, ISMG

Pereira has nearly three decades of journalism experience. He is the former editor of CHIP, InformationWeek and CISO MAG. He has also written for The Times of India and The Indian Express.



