The AI Hype Is Now Very Real for Businesses
While the AI hype is largely justified, organizations must still proceed carefully, prioritizing governance, safety, and their own unique requirements.
Is the AI hype really justified?
According to a recent survey conducted by Domino Data Lab, 90% of data science executives, leaders, practitioners, and IT platform owners believe that the buzz around generative AI is indeed well-founded. Additionally, 55% of survey respondents expect AI to make a significant impact on their business within the next one to two years. The survey, conducted by Domino Data Lab at its Rev4 conference, gathered insights from 162 participants.
One of the most notable impacts of generative AI on businesses will likely be the significant investment required to tailor AI technologies to their specific needs. Companies will face an important decision between developing their own generative AI technologies in-house or adopting third-party commercial offerings.
When weighing the benefits and drawbacks of these choices, organizations must consider several factors, noted Bradley Shimmin, chief analyst for AI platforms at research firm Omdia. “Every organization will have to assess their financial model, technical expertise, and the risks they are able to take,” Shimmin said. “Then it will come down to which horn you want to hang your hat on: a simpler development model that offers less control, or a more tedious one that offers more.”
Large enterprises might have the option to adopt pre-existing generative AI solutions, such as those from OpenAI. However, the vast majority of respondents surveyed by Domino Data Lab (94%) believe that companies will need to invest in generative AI rather than rely solely on features offered by independent software vendors and other business partners. More than half of respondents said their organizations plan to create differentiated customer experiences on top of third-party foundation models, while 39% believe their organizations must develop their own generative AI models.
As Ruben Shaubroeck, a senior partner at McKinsey & Company, sees it, there are three adoption categories that organizations can consider: Takers, Shapers, and Makers.
Takers work closely with third parties and integrate ready-made, off-the-shelf generative AI offerings into their workflows with little to no customization.
Shapers augment existing generative AI models to meet their specific organizational requirements, using proprietary data and insights to fine-tune these models.
Makers develop and train their own generative AI tools. These tools are tailored precisely to the organization’s unique needs, Shaubroeck said.
Kjell Carlsson, head of data science strategy and evangelism at Domino Data Lab, recommended a “Shaper” approach for companies, in which they experiment with the generative AI capabilities of their business applications while also building on third-party proprietary models.
“[Organizations] should be thinking about operationalizing Gen AI models from the get-go,” Carlsson said. “Many of the most popular Gen AI models, such as ChatGPT, GPT-4, PaLM 2, etc., are too large to cost-effectively put into production, run with sufficiently low latency, or finetune so that they are sufficiently accurate to use without significant risks.”
Carlsson explained that popular AI models can be useful for experimentation or demonstrating possibilities but are limited in practical applications. “They are effectively dead ends,” Carlsson said. “[Organizations] should instead be building the scalable platforms necessary to ingest, finetune, put into production, and monitor foundation[al] models from any source, and the capabilities to govern this process from end to end.”
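As one illustration of the “Shaper” path Carlsson describes, the sketch below fine-tunes a small open foundation model on a proprietary corpus using the Hugging Face transformers and datasets libraries. The model choice (distilgpt2), the file name proprietary_corpus.txt, and all hyperparameters are illustrative assumptions, not details from the article.

```python
# A minimal "Shaper" sketch: fine-tune a small open foundation model on
# proprietary text. Model name, data file, and hyperparameters are
# illustrative assumptions only.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "distilgpt2"  # stand-in for any small open foundation model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2-family models lack a pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical proprietary corpus: one document per line in a plain-text file.
raw = load_dataset("text", data_files={"train": "proprietary_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

train_set = raw["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="shaper-finetune",
        per_device_train_batch_size=4,
        num_train_epochs=1,
        logging_steps=50,
    ),
    train_dataset=train_set,
    # Causal LM objective (mlm=False): the model learns next-token prediction.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("shaper-finetune/final")  # versioned artifact for serving
```

A model of this size can be served with low latency and retrained as proprietary data changes, which is the operational flexibility Carlsson contrasts with the largest proprietary models.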
AI Risks, Challenges, and Disruptions
Generative AI can potentially enhance the robustness of AI models, thereby helping to mitigate certain AI-related risks. However, it also brings its own set of risks. In a recent Omdia survey, IT leaders cited several concerns, including privacy exposure, security vulnerabilities, broader risk exposure, and a lack of technical expertise, Shimmin said. Notably, respondents are now reporting a new fear: potentially being displaced by generative AI offerings.
Another prevalent concern is the difficulty of governance. In the Domino Data Lab survey, 76% of C-level/VP execs pointed to governance as a key hurdle in using generative AI. Specific concerns included the potential for private data leaks (70%), bad decision-making (35%), damage to company reputation (27%), and regulatory fines (25%).
To mitigate these risks, organizations must implement strong governance capabilities throughout the entire process of developing and deploying generative AI models. The hype around generative AI has created a sense of urgency, often leading to rapid adoption without proper safety measures in place. McKinsey’s 2023 State of AI report highlighted that just 21% of organizations reporting AI adoption have “established policies governing employee use of generative AI,” Shaubroeck noted. Furthermore, the excitement surrounding the integration of generative AI into organizations can breed an environment of competition and secrecy.
According to Arun Chandrasekaran, vice president analyst at Gartner, organizations are becoming more secretive about their AI architectures and may not be taking sufficient steps to mitigate the risks or prevent the potential misuse of these powerful services. “Organizations need to examine and mitigate both internal and external risks caused by generative AI and create a robust AI governance strategy,” Chandrasekaran said.
Enterprises can benefit from AI without compromising on safety and control by prioritizing visibility and governance throughout the AI lifecycle, Carlsson said. This entails tracking, monitoring, and governing each connected stage in the AI development process (a code sketch follows the list below):
Training dataset snapshots: Maintain records of every training dataset used in AI model development.
Version control: Track every version of the code used to analyze the data and train the model.
Library and framework tracking: Document every library or framework that is used in the development process.
Model versioning: Record versions of the AI model as it evolves.
Output monitoring: Continuously monitor and document the output generated by the AI model.
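A minimal sketch of how the lineage tracking above might be wired up, here using MLflow as one example tracking backend. The file names, the git commit tag, and the flagged_output_rate metric are illustrative assumptions; any experiment-tracking system with artifact logging and a model registry could play the same role.

```python
# Hedged sketch of the five governance stages listed above, using MLflow.
# All paths, names, and values are illustrative assumptions.
import hashlib
import mlflow

def file_sha256(path):
    """Fingerprint a training-dataset snapshot so it can be audited later."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

with mlflow.start_run(run_name="genai-finetune-v1"):
    # 1. Training dataset snapshot: log the file and a content hash.
    mlflow.log_artifact("proprietary_corpus.txt", artifact_path="datasets")
    mlflow.log_param("dataset_sha256", file_sha256("proprietary_corpus.txt"))

    # 2. Version control: record the code commit that produced this run.
    mlflow.set_tag("git_commit", "abc1234")  # e.g. from `git rev-parse HEAD`

    # 3. Library and framework tracking: capture the exact environment.
    mlflow.log_artifact("requirements.txt", artifact_path="environment")

    # 4. Model versioning: register the trained model in a model registry.
    # (Assumes `model` is the fine-tuned model from the earlier sketch.)
    # mlflow.transformers.log_model(model, "model",
    #                               registered_model_name="shaper-model")

    # 5. Output monitoring: log sampled outputs and quality metrics over time.
    mlflow.log_metric("flagged_output_rate", 0.02)  # placeholder value
```

With this lineage in place, any production output can be traced back to the exact dataset snapshot, code commit, environment, and model version that produced it, which is the precondition Carlsson sets for the tooling discussed next.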
“Only after these capabilities are in place are additional tools, such as for explainability, bias mitigation, or fairness monitoring impactful, and only then are processes such as assessments using responsible AI frameworks or approvals by AI ethics councils valuable,” Carlsson said.
While it’s widely agreed that the hype around AI is warranted and will shape how organizations function in coming years, tech workers are also bracing for disruptions within their industries, Carlsson added. These disruptions are expected to change how companies compete with one another. For example, pharmaceutical and biotech companies are already using generative AI to create new candidate proteins for treating chronic illnesses like diabetes and heart disease.
“The best way to defend against these disruptive threats is, in the words of Clay Christensen [who coined the term “disruption” with this meaning], to 'disrupt yourself' and leverage these new technologies,” Carlsson said.