Zombie AI: The Growing Threat of Ungoverned AI in Business
Unregulated "Zombie AI" poses a threat to performance, reputation, and compliance, underscoring the urgent need for scalable, automated governance to safeguard and maximize AI investments.
November 4, 2024
By Kjell Carlsson, Domino Data Lab
As businesses deploy AI and ML across more and more parts of the organization, a hidden risk is emerging from the shadows: "Zombie AI." These are risky AI applications that were developed sloppily, without rigorous governance, and that continue to operate without meaningful oversight or control. They may look like a triumph of grass-roots ingenuity and rapid innovation, but in reality each carries a disease that dooms the organization to decaying performance and, eventually, debilitating financial, reputational, and legal damage.
Organizations need to take immediate action to prevent the proliferation of Zombie AI before these applications overwhelm their AI initiatives and cause broader harm to their competitiveness. The cure lies in implementing AI governance best practices that identify business, legal, and ethical risks upfront; ensure those risks are tracked and monitored throughout development and deployment; validate quality and performance; and drive ongoing actions to mitigate risk and ensure the health of AI applications.
The Swarming Mobs of Zombie AI
Zombie AI stems from a lack of governance. Without governance, it is too easy for projects to ignore potential risks, use sensitive data, leverage insecure libraries, rack up uncontrolled infrastructure costs, and push subpar, unreliable models into production. Once deployed, these models then operate without monitoring and oversight, often driving critical decisions in areas like customer service, fraud detection, and supply chain optimization. Their performance will inevitably decay, because the documentation and reproducibility needed to continuously improve them were never created, and sooner or later a catastrophic failure will occur when a model encounters one of its unmitigated vulnerabilities.
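The article does not prescribe a particular monitoring technique, but a small example makes the decay concrete. The sketch below is a minimal Python illustration of one common drift check, the population stability index (PSI); the distributions, the 0.2 alert threshold, and all variable names are illustrative assumptions, not details from any specific deployment.

```python
import numpy as np

def population_stability_index(baseline, live, bins=10):
    """Population Stability Index: how far live data has drifted
    from the distribution the model saw at training time."""
    # Bin edges come from the baseline so both samples are compared
    # on the same scale.
    edges = np.histogram_bin_edges(baseline, bins=bins)
    expected, _ = np.histogram(baseline, bins=edges)
    actual, _ = np.histogram(live, bins=edges)
    # Convert counts to proportions; a small epsilon avoids log(0)
    # for empty bins.
    eps = 1e-6
    expected = expected / expected.sum() + eps
    actual = actual / actual.sum() + eps
    return float(np.sum((actual - expected) * np.log(actual / expected)))

# Illustrative check: live scores drawn from a shifted distribution
# trip the alert that an ungoverned "zombie" model would never raise.
rng = np.random.default_rng(0)
baseline_scores = rng.normal(0.0, 1.0, 10_000)  # scores at training time
live_scores = rng.normal(0.8, 1.3, 10_000)      # scores in production today

psi = population_stability_index(baseline_scores, live_scores)
if psi > 0.2:  # 0.2 is a commonly used "significant drift" rule of thumb
    print(f"ALERT: PSI={psi:.2f} -- model inputs have drifted; review required")
```

In a governed deployment, a check like this would run on a schedule against every production model, with alerts routed to the team accountable for that model.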
Case Study: Zillow's Home-Buying Algorithm
A canonical example of Zombie AI is the set of models Zillow used to value homes that the company would buy and sell directly. These models did not adequately address the risk of rapid shifts in the housing market. Combined with insufficient oversight of the models' performance, the result was that Zillow systematically overpaid for homes. Ultimately, Zillow had to write off $304 million in Q3 2021, close its Zillow Offers home buying division, and lay off 25% of its workforce.
Much like "real" zombies, which are easy to defeat individually but unstoppable en masse, the risks from Zombie AI are compounded by the rapid growth of AI within organizations. The damage caused by underperforming models was minimal when production AI and ML models were few or confined to low-impact use cases; they could be adequately overseen by the small teams that built them. Today, those teams are increasingly the victims of their own success, with little ability to oversee and control the sprawl of AI projects pursued across the company, and these models now power mission-critical applications across the business.
The challenge of Zombie AI has also become harder because of the sheer speed of AI innovation and increasing efforts to regulate AI. Advances in generative AI have brought with them a proliferation of evolving use cases, technologies, and immature offerings, as well as a host of new hallucination, privacy, security, cost, and, occasionally, ethical risks. The ensuing wave of AI regulations (in 2024, 35 U.S. states enacted some form of new AI legislation, and the EU enacted the EU AI Act) further increases the legal and regulatory risks.
All in all, the risks from Zombie AI have never been higher. Organizations are developing and deploying AI and ML solutions at a dramatically accelerated pace. They are applying AI in new, untested ways, using a fractured ecosystem of immature technologies, while needing to navigate a growing regulatory environment. Zombie AI can now inflict not only vastly more visible damage, in the form of reputational harm and regulatory penalties, but also unseen damage in the form of worsening business performance and a vicious cycle of stagnation and underinvestment in transformative technologies like AI.
Stopping the Spread: The AI Governance Cure
The antidote to Zombie AI is rigorous, scalable AI governance across the lifecycle of AI applications — from development through deployment and maintenance. However, many organizations struggle to implement effective governance. Most efforts stop at setting high-level principles and frameworks without drilling down into the specific actions necessary to manage risk. Governance is not just a matter of principles, councils, or audits — it requires action across every part of the AI lifecycle.
Leading AI teams in highly regulated sectors like financial services and biopharma offer a starting point for AI governance. These teams have built sophisticated governance processes that focus on key activities across the AI lifecycle, from risk assessments and access control to continuous monitoring and remediation.
Like these advanced teams, all organizations looking to govern AI need:
Unified Visibility: At a minimum, AI projects must be logged, tracked, and monitored continuously across their lifecycle. Organizations should implement systems that provide visibility into model performance, risks, and compliance across all projects and deployed models, whether in the cloud or on-premises (the first sketch after this list shows a minimal form of such a record).
Auditability and Reproducibility: Teams must be able to reproduce the conditions under which the AI projects were developed and deployed. This requires capturing detailed information on the data, code, and processes used in model development and making these available for continuous improvement and remediation efforts should the models fail.
Access Management: Governing who has access to data, models, code, and infrastructure is critical for managing risks related to privacy, security, cost, and often legal compliance. Organizations must have automated controls in place to manage access and prevent unauthorized use.
Policy Management and Enforcement: Organizations need to ensure that AI models comply with evolving regulations. This requires automation to align policies with regulatory frameworks like the EU AI Act and to enforce these policies consistently across the organization (the second sketch after this list illustrates this pattern).
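As a concrete illustration of the first two requirements, here is a minimal sketch of the kind of lineage record and staleness check a model registry might implement. The ModelRecord structure, its field names, and the 90-day review window are illustrative assumptions, not a reference to any particular product.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class ModelRecord:
    """One registry entry: enough lineage metadata to find, audit,
    and reproduce a deployed model."""
    name: str
    owner: str
    use_case: str
    risk_tier: str           # e.g. "low", "medium", "high"
    training_data_uri: str   # pointer to the exact dataset version used
    code_commit: str         # git SHA the model was built from
    environment: str         # container image or lockfile reference
    deployed_on: date
    last_reviewed: Optional[date] = None

def find_zombies(registry, max_review_age_days=90):
    """Flag deployed models with no recent review -- the 'zombies'
    operating without meaningful oversight."""
    today = date.today()
    zombies = []
    for m in registry:
        age = None if m.last_reviewed is None else (today - m.last_reviewed).days
        if age is None or age > max_review_age_days:
            zombies.append((m.name, m.owner, age))
    return zombies

# Illustrative usage with a hypothetical model entry.
registry = [
    ModelRecord(
        name="churn-scorer-v7", owner="ml-platform-team",
        use_case="customer retention", risk_tier="high",
        training_data_uri="s3://datalake/churn/v7",
        code_commit="9f2c1ab", environment="acme/train-env:2024.10",
        deployed_on=date(2024, 3, 1), last_reviewed=None,
    ),
]
for name, owner, age in find_zombies(registry):
    print(f"Zombie candidate: {name} (owner: {owner}, days since review: {age})")
```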
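For the enforcement requirement, one widely used pattern is policy as code: machine-checkable rules that gate deployment. The sketch below assumes hypothetical policy names and metadata fields; a production system would map such checks to concrete obligations, for example under the EU AI Act.

```python
# Each policy is a named predicate over a model's governance metadata;
# deployment is blocked until every applicable policy passes.
POLICIES = {
    "has_owner": lambda m: bool(m.get("owner")),
    "lineage_recorded": lambda m: bool(m.get("code_commit"))
                                  and bool(m.get("training_data_uri")),
    "high_risk_needs_signoff": lambda m: m.get("risk_tier") != "high"
                                         or bool(m.get("human_signoff")),
}

def deployment_gate(model):
    """Return the list of violated policies; an empty list means cleared."""
    return [name for name, check in POLICIES.items() if not check(model)]

# Illustrative candidate: a high-risk model with no reviewer sign-off.
candidate = {
    "name": "churn-scorer-v7",
    "owner": "ml-platform-team",
    "risk_tier": "high",
    "code_commit": "9f2c1ab",
    "training_data_uri": "s3://datalake/churn/v7",
    "human_signoff": None,  # no reviewer has signed off yet
}

violations = deployment_gate(candidate)
if violations:
    print(f"BLOCKED: {candidate['name']} violates {violations}")
else:
    print(f"{candidate['name']} cleared for deployment")
```

Because the rules are plain code, they can be versioned, reviewed, and updated as regulations evolve, which is what makes this kind of enforcement scalable.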
In the most advanced teams, these governance practices are already in place, but even they struggle with the manual effort, time, and cost of governance, especially as their portfolio of AI use cases grows. The real challenge for all organizations lies in scaling these processes to govern hundreds of AI projects in fragmented environments across the entire organization. The solution lies in automation: reducing the manual effort that currently bogs down even the best AI teams while preserving human oversight and control.
Defeating Zombie AI with Scalable Governance
Zombie AI is an ever-growing risk to organizations, one that is becoming more acute as AI adoption accelerates and one that businesses can no longer afford to ignore. To stop its spread, organizations must move beyond high-level frameworks and ethics committees to implement governance from the ground up to ensure that their AI applications are safe, accurate, and compliant.
The blueprint for successful AI governance already exists, thanks to the advanced AI teams in regulated industries. What remains is for organizations to apply these practices more broadly and to leverage automation to make governance scalable. By doing so, they will not only mitigate the risks posed by Zombie AI but will also unlock the full potential of their AI investments, driving greater innovation, trust, and impact across their business.
About the author:
Kjell Carlsson is head of AI strategy at Domino Data Lab.