
The 4 Checkboxes of Scalable AI in the Enterprise

Discover the key strategies enterprises need — talent, efficiency, infrastructure, and cohesion — to scale generative AI projects and sustain long-term competitive advantage.

Industry Perspectives

November 18, 2024


By Tzvika Zaiffer, Spot by NetApp

Many established enterprises interpret the race to deploy generative AI applications and capabilities as a sprint. Get to market first with a superior customer service chatbot, or a coding copilot that transforms application delivery, or an insight engine that accurately foresees new trends and opportunities — that's the "win" condition. Leverage those generative AI advantages to gain traction, and try to do it before competitors can get out of the gate.

Reality hasn't proved quite so cut and dried. While emerging startups can be AI-native from day one, adjusting for AI within larger, established businesses isn't as easy. Enterprises pacing for the sprint are failing to heed the lessons of past revolutions where new technology was suddenly requisite to compete. For example: The transition to the cloud and the subsequent emergence of Kubernetes each demonstrated the many issues that large organizations face when the time comes to scale their crucial new technology deployments. Companies that charge forth without the right strategic foresight can stumble and lag as scale becomes essential. Those that anticipate the marathon ahead achieve stable long-term practices and a lasting edge.

Boil down the attributes of the enterprises most capable of scaling generative AI effectively, and a pattern emerges: these businesses embed the talent, the efficiency, the infrastructure, and the cohesion required to launch and maintain generative AI initiatives beyond their initial phases. Here are your four non-negotiables.


Talent

The talent available across an enterprise's AI developer, data engineer, and data scientist roles defines its velocity in completing successful and scalable AI projects. Given the relative newness of generative AI as a field, expertise and veteran experience in actually implementing AI workload best practices and taking projects across the finish line are in high demand (even in an unsteady tech job market) — and can be hard to come by.

Recruitment and retention of this particular talent is a considerable challenge, and it becomes an even more serious issue as large businesses require additional personnel to scale. The enterprises that prevail in this battle for talent are often those that can equip that talent with tools they actually want to use, and streamline their roles with a clear focus on delivering great AI products. If one potential workplace requires AI/ML and data teams to spend significant parts of their days maintaining the cloud operations foundational to its AI models and applications, and another makes sure to keep that less-fulfilling work off their plates, the choice isn't hard. Enterprises that create the most inviting environments for AI talent will have an advantage in getting and keeping it (and will get the superior AI results to show for it).


Efficiency

Enterprises are in a tough spot when it comes to AI costs, with gargantuan investments required to enable potential dividends later. That said, many are thus far undeterred: IDC's March 2024 Future Enterprise Resiliency & Spending Survey found that most enterprises are committed to AI spending no matter the economic conditions, because the long-term competitive benefits of the technology are so crucial.

At the same time, massive AI-related bills dampen the potential for generative AI projects to deliver a positive return on investment and become sustainable anytime soon, and the runway for these projects may not be as long as current enthusiasm suggests. The will to continue to scale AI investments may become hard to muster if it means doubling down on worrisome near-term losses.

Enterprises that keep their AI expenses in check with cloud cost controls — such as automated provisioning, cost- and size-optimized instances, and active use of real-time cost insights — will have the efficiency to explore AI with far less scrutiny from CFOs and others watching the bills, and will see positive numbers sooner. Those that support AI teams with FinOps experts will be particularly well equipped to get the most out of their resources and spending.
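To make the cost-control idea concrete, here is a minimal sketch of the kind of right-sizing check a FinOps-minded team might automate. The instance names, utilization figures, and thresholds are purely illustrative, not drawn from any particular cloud provider's tooling:

```python
# Hypothetical sketch: flag underutilized instances as right-sizing candidates.
# All names, numbers, and thresholds below are illustrative assumptions.

def rightsizing_candidates(instances, cpu_threshold=0.30, mem_threshold=0.40):
    """Return instances whose average CPU and memory use both fall below
    the given thresholds -- candidates for a smaller (cheaper) size."""
    return [
        i["name"]
        for i in instances
        if i["avg_cpu"] < cpu_threshold and i["avg_mem"] < mem_threshold
    ]

fleet = [
    {"name": "training-gpu-1", "avg_cpu": 0.82, "avg_mem": 0.75},
    {"name": "etl-worker-3",   "avg_cpu": 0.12, "avg_mem": 0.22},
    {"name": "inference-2",    "avg_cpu": 0.55, "avg_mem": 0.35},
]

print(rightsizing_candidates(fleet))  # → ['etl-worker-3']
```

In practice, the utilization data would come from a monitoring pipeline and the thresholds from policy, but the point stands: a few lines of automation can surface savings that manual bill review would miss.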

Infrastructure

The right cloud and data infrastructure can make the difference between an enterprise ready to scale its AI workloads at will and one that runs up against frustrating reliability and performance limitations. The larger the AI model, the more GPU, memory, and compute resources it requires. Enterprises must also determine whether on-prem, hybrid cloud, public cloud, or some mix of those is the right infrastructure for their particular needs.

Relating back to efficiency, expert infrastructure management (or lack thereof) is a determining factor in whether AI workloads have the resources they need and whether enterprises can afford them at scale. For example, cases where DevOps and Operations teams try to back AI/ML and big data projects with manual processes alone are sure to miss the mark on realizing an infrastructure's potential and keeping costs in check. Enterprises that fully execute on cloud infrastructure optimization via automation, and further automate with methods like MLOps, put themselves in position to scale AI projects without the growing pains.
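As one illustration of replacing a manual capacity decision with automation, consider a simple autoscaling rule that sizes a worker fleet to a pending job queue. This is a hedged sketch, not a reference implementation; the queue depths, per-replica throughput, and replica bounds are assumed values:

```python
# Hypothetical sketch: a basic autoscaling rule for AI/ML worker replicas.
# Queue depths, throughput, and bounds are illustrative assumptions.
import math

def desired_replicas(queue_depth, jobs_per_replica, min_replicas=1, max_replicas=20):
    """Scale worker replicas to the pending job queue, clamped to a safe range."""
    needed = math.ceil(queue_depth / jobs_per_replica)
    return max(min_replicas, min(max_replicas, needed))

print(desired_replicas(queue_depth=57, jobs_per_replica=8))   # → 8
print(desired_replicas(queue_depth=0, jobs_per_replica=8))    # → 1
print(desired_replicas(queue_depth=500, jobs_per_replica=8))  # → 20
```

Real deployments would delegate this logic to platform tooling (for example, a Kubernetes autoscaler), but the clamped-scaling pattern is the essence of letting automation, rather than manual processes, match resources to demand.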

Cohesion

Finally, launching successful AI projects often requires new processes and perspectives: in short, change. That change comes far easier when AI teams and those in adjacent functions such as DevOps and Operations teams, product managers, finance, FinOps, and other stakeholders are in sync and all pulling in the same direction. Achieving that synergy takes clear leadership, culture change, and a willingness to break down silos while ensuring each team can contribute efficiently. It's enterprises that keep internal teams cohesive and maintain practices they know how to scale that can ramp up AI projects best.

Cloud and Personnel Best Practices Set the Stage for Scalable AI

Launching an AI project in hopes of a mass audience is a bit like nursing a baby hippo: You want it to take off, grow, and flourish — but are you ready for what it will become? Have you anticipated the massive changes ahead, and prepared your team and your processes accordingly? On the technical side, enterprises readying to successfully scale AI projects should focus on the powerful cloud infrastructure and efficient operations necessary to support AI workloads as they expand. On the human side, building an environment magnetic to AI talent and nurturing internal cohesion across teams will result in a culture able to make AI goals a reality. Enterprises that tick all these boxes will be leading lights as the AI technology revolution continues to unfold.

About the author:

Tzvika Zaiffer is the Solutions Director at Spot by NetApp. Tzvika is responsible for crafting and evangelizing Spot's infrastructure optimization value proposition. He has extensive experience in software, telecom, and security, and is a results-driven leader who uniquely combines strategic thinking with a hands-on approach.
