Overcoming Legacy Tech Constraints for AI Integration
Only 17% of Organizations Have AI-Ready Infrastructure, Highlighting Readiness Need
Despite the prevalence of generative AI discussions within enterprise technology organizations, its integration is hampered by outdated infrastructure. The inability of existing IT systems to handle generative AI workloads poses a significant short- to mid-term roadblock to innovation.
Cisco's 2023 global AI Readiness Index reveals a glaring gap between intent and preparedness. Only 17% of organizations have their infrastructure fully ready for AI adoption, with 53% either unprepared or having limited preparedness. This global trend underscores the pressing need for infrastructure readiness.
Generative AI's demands, including computational power, fast data processing, extensive storage and advanced networks, exacerbate the struggle. For GPU-centric generative AI models, the poor scalability of outdated infrastructure and inflexible data architectures become formidable barriers.
Cost Implications of Legacy Infrastructure
Running generative AI on legacy systems carries real cost implications. Training and deploying computationally intensive models on non-optimized infrastructure can drive up costs and limit functionality.
"Integrating legacy systems with generative AI may necessitate substantial modifications, impacting costs significantly," said Hugo Huang, public cloud alliance director at Canonical, in a Harvard Business Review (HBR) post.
The degree of upgrades or modifications in computing and tooling infrastructure depends on the specific generative AI use case being implemented. Therefore, organizations must find efficient ways to either create better integrations or adopt entirely new infrastructure capabilities that enable the required outputs.
Infrastructure Components for Generative AI
For organizations aiming to fully leverage generative AI capabilities, a comprehensive assessment of existing infrastructure is essential to identify deficiencies in each component, encompassing compute resources, processing power, storage and data systems, network, cybersecurity, and power.
Processing power stands out as a critical component for generative AI workloads, demanding high-performance GPUs. According to Cisco's AI Readiness Index, only 24% of companies possess robust GPU infrastructure for current and future AI tasks. Meanwhile, 70% have just-enough or limited GPU resources for ongoing projects or experimental purposes. To sustain and expand generative AI plans, these companies will need to augment their GPU capabilities.
Infrastructure preparedness issues extend to I/O limitations, impacting data exchange efficiencies. Organizations express lower confidence in the scalability and adaptability of in-house networks for handling the complexity and high data volumes of AI workloads, according to the report.
The foundation of IT infrastructure suitable for generative AI encompasses power infrastructure designed to handle heightened consumption during the training of large language models and processing massive datasets. In terms of power consumption, 44% of organizations are well-prepared with dedicated infrastructure to optimize AI deployment.
The cybersecurity overlay is crucial for safeguarding AI models and the extensive data processed by generative AI. While organizations employ advanced encryption to protect data used in AI models, they often lack the capability to detect and prevent attacks on AI models.
The Way Forward
Assessing the infrastructure essential for leveraging AI necessitates an accompanying IT audit to gauge the extent of legacy sprawl, enabling prioritized upgrades.
A generative AI strategy typically begins with a comprehensive infrastructure overhaul, with cloud technology playing a pivotal role in providing high-performance, dynamic infrastructure that incorporates advanced computing capabilities such as GPUs and Tensor Processing Units.
Optimizing power consumption and reducing energy costs can be achieved through the upgrade of power infrastructure with cost-efficient and sustainable technologies. Advanced storage and networking solutions have become indispensable for managing voluminous data and ensuring high-speed data transfer, crucial for model training.
However, a fundamental infrastructure overhaul is not a straightforward path. Huang, in his HBR post, recounts an experience in which a client attempted to reduce training expenses by grouping multiple GPUs for distributed training, assigning a distinct shard of the dataset to each GPU to process concurrently. Limited bandwidth for inter-GPU communication became a bottleneck that slowed the training process. The inefficiency was addressed by implementing a solution allowing direct memory-to-memory transfer between GPUs, bypassing the standard networking path.
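Huang's anecdote illustrates why interconnect bandwidth, rather than raw GPU count, often governs distributed training speed: each training step must synchronize gradients across all GPUs. The sketch below is a rough back-of-envelope estimate, not from the article; the model size, GPU count, and link speeds are illustrative assumptions. It models the time for one ring all-reduce of a model's gradients at two link speeds, assuming communication is the bottleneck:

```python
def allreduce_seconds(num_params: int, bytes_per_param: int,
                      link_gbps: float, num_gpus: int) -> float:
    """Rough time for one ring all-reduce of a model's gradients.

    A ring all-reduce sends about 2 * (N - 1) / N of the gradient
    volume over each GPU's link, so link bandwidth dominates step
    time when compute is fast relative to communication.
    """
    grad_bytes = num_params * bytes_per_param
    traffic_bytes = 2 * (num_gpus - 1) / num_gpus * grad_bytes
    link_bytes_per_s = link_gbps * 1e9 / 8  # Gb/s -> bytes/s
    return traffic_bytes / link_bytes_per_s

# Illustrative: 1B-parameter model, fp32 gradients (4 bytes), 8 GPUs
slow = allreduce_seconds(1_000_000_000, 4, 10, 8)    # ~10 Gb/s Ethernet
fast = allreduce_seconds(1_000_000_000, 4, 1600, 8)  # ~200 GB/s direct GPU link
print(f"over Ethernet: {slow:.1f} s, over direct GPU link: {fast:.3f} s")
```

With these assumed numbers, a single gradient synchronization takes several seconds over commodity Ethernet but tens of milliseconds over a direct peer-to-peer GPU link, which is why bypassing the standard networking path removed the bottleneck in Huang's example.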
To navigate such challenges, organizations must plan for the unplanned, anticipating and accommodating unexpected infrastructure needs and deviations. Swift adaptation to evolving infrastructure requirements will be a critical factor in harnessing the full potential of generative AI technology.