What It Takes To Make AI Safe and Effective

Here's how and why organizations that apply rigorous artificial intelligence trust, risk and security management (AI TRiSM) move more valuable AI models into production.

Gartner Blog Network

October 31, 2022


Are you ready for an AI Bill of Rights? The recent U.S. blueprint aims to protect society from harmful AI, reminding all developers and users of AI models that they need to build safeguards into their AI models and strategies. A rigorous approach to AI TRiSM is needed. 

Gartner defines AI TRiSM as a framework that supports AI model governance, trustworthiness, fairness, reliability, robustness, efficacy and privacy. It includes solutions, techniques and processes for model interpretability and explainability, privacy, model operations and adversarial attack resistance, for both customers and the enterprise.

“IT leaders must spend time and resources on supporting AI TRiSM. Those who do will achieve improved AI outcomes in terms of adoption, business goals and both internal and external user acceptance,” says Gartner Distinguished VP Analyst, Avivah Litan. “AI threats and compromises (malicious or benign) are continuous and constantly evolving, so AI TRiSM must be a continuous effort, not a one-off exercise.”

Gartner expects that by 2026, organizations that operationalize AI transparency, trust and security will see their AI models achieve a 50% improvement in adoption, business goals and user acceptance.


AI has become more prevalent. Gartner survey results indicate that organizations have deployed hundreds or thousands of AI models that some IT leaders cannot explain or interpret. This lack of knowledge and understanding can have serious consequences: as dependencies on these models grow, the impact of a misperforming model is amplified.

Organizations that don’t manage AI risk are much more likely to experience negative AI outcomes and breaches. Models won’t perform as intended, and there will be security and privacy failures, financial and reputational loss, and harm to individuals. Poorly implemented AI can also lead organizations to make poor business decisions.

AI TRiSM Implications and Operations

AI regulations are increasing, but even before protections are mandated, it is important to implement practices that ensure trust, transparency and consumer protection. IT leaders need to apply new AI TRiSM capabilities to ensure model reliability, trustworthiness, privacy and security.

Don’t wait until models are in production to apply AI TRiSM. Waiting only exposes the process to avoidable risk. IT leaders should familiarize themselves with forms of compromise and use the AI TRiSM solution set so they can properly protect AI.
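One way to apply AI TRiSM before production is to make trust, fairness and explainability checks part of the release decision itself. The sketch below is a minimal, hypothetical pre-deployment gate; the metric names and thresholds are illustrative assumptions, not part of any Gartner specification.

```python
# A minimal, hypothetical sketch of a pre-deployment AI TRiSM gate.
# Metric names and thresholds are illustrative assumptions only.

RELEASE_CRITERIA = {
    "accuracy": 0.85,         # model must meet its efficacy target
    "fairness_parity": 0.90,  # outcome parity across protected groups
    "explainability": 0.75,   # share of predictions with usable explanations
}

def ready_for_production(metrics: dict) -> bool:
    """Return True only if every TRiSM metric meets its threshold."""
    return all(metrics.get(name, 0.0) >= floor
               for name, floor in RELEASE_CRITERIA.items())

candidate = {"accuracy": 0.91, "fairness_parity": 0.88, "explainability": 0.80}
print(ready_for_production(candidate))  # fairness_parity below floor → False
```

A gate like this makes the point concrete: a model that scores well on accuracy alone still fails the release check if fairness or explainability falls short, so risk management happens before deployment rather than after an incident.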

AI TRiSM requires a cross-functional team to work together. This includes staff from the legal, compliance, security, IT and data analytics teams. Set up a dedicated team if possible, or a task force if not, to gain the best results. Ensure appropriate business representation for each AI project.

The benefit extends beyond regulatory compliance: organizations improve the business outcomes they derive from their use of AI.

In short:

  • AI TRiSM capabilities ensure model reliability, trustworthiness, security and privacy.

  • To attain better outcomes in terms of AI adoption, achieved business goals and user acceptance, organizations need to manage AI trust, risk and security. 

  • Consider AI TRiSM a solution set to properly protect AI.

This article originally appeared on the Gartner Blog Network

About the Author

Gartner Blog Network

The Gartner Blog Network has expert views on today’s technology and business topics and trends. 
