How to Overcome Barriers that Lead to AI Failure
At this week’s AI Summit, AI leaders from IBM and Google examined the factors that contribute to AI failure and shared best practices for getting past those obstacles.
NEW YORK--Business leaders see tremendous competitive advantage in artificial intelligence, due to the promise it holds to greatly improve customer satisfaction, retention and loyalty; increase revenues; and save money. Yet, only five percent of companies are actually using AI extensively, said Beth Smith, IBM’s general manager for Watson AI, at this week’s AI Summit. And, according to Gartner, about 85 percent of AI projects end in failure.
Why is there such a gap between hype and reality?
Experts at AI Summit, held here, had answers to that question, as well as guidance to help companies overcome obstacles that lead to AI failure.
Smith, citing research from IBM’s 2018 Institute for Business Value AI study as well as other studies, pointed to four main barriers to AI adoption: lack of skills, regulatory constraints, data quality and trust, with lack of skills cited by more than 60 percent of business leaders. “Even if I have the skills, can I really trust what is happening? Can I trust that black box to be sure that I understand what is happening in it and can I trust that it complies with my processes, my guidelines, my procedures, as well as whatever regulatory constraints there are?” she asked. Smith said that customers also often get stuck on the issue of data quality and data usability. “We know that 80 percent of the data that enterprises have is inaccessible [for use by AI systems]: unorganized and not ready to be easily leveraged.”
Cassie Kozyrkov, chief decision scientist for Google Cloud, took another tack in explaining why projects succumb to AI failure: “Who here knows how a microwave works well enough to build me a new one from scratch?” she asked. Just two people in the audience raised their hands. “You have no idea how they work, yet you use them anyway.”
“That’s OK,” she continued, “because you’re not going to trust a microwave by reading its wiring manual. You’re going to trust it because of what you need it to do for you. And then you’re going to check to make sure that it delivers that.”
Kozyrkov’s analogy illustrated her point that businesses don’t need to understand how AI works in order to use it. Yet many businesses are approaching AI projects by hiring people who can, in essence, build microwave ovens from scratch.
“The reason a lot of businesses end up failing here is actually that they don’t know what business they’re in.”
Businesses spend considerable effort on the research side of AI, while what’s really needed are applied-side efforts, or projects focused on how to make use of AI, she said.
Both presenters shared best practices to get past these issues.
IBM’s Smith pointed to three requirements for a successful AI implementation: data readiness, organizational readiness and value readiness. The accessibility of the data—how organized it is—shapes the implementation timeline and end user experience, she said. Organizational readiness, meanwhile, starts with executive support. “There are a lot of grassroots projects that [succeed] because leaders in the company recognize the benefit.” And, she said, “It is as critical to have the technical people as it is to have the subject matter experts and the business owners or the business knowledge workers be a part of it.”
Skills, the lack of which can torpedo an AI project before it gets off the ground, are critical to success. Companies need a range of skills, from deep data science expertise to business leaders who understand how AI can be leveraged, Smith said. And new roles will emerge as AI projects take hold. She cited the example of an IBM customer in the UK that has established a conversational analyst role. That job requires a person who “understands the business value they’re trying to accomplish, the interaction that they’re trying to have with clients and understands the business well enough to tie all that together to … advance what their virtual agent is able to do,” she said.
Value readiness, finally, defines business outcomes. “It’s important to understand the outcomes you’re going after. … And accept the fact that you need to move fast, fail fast, learn fast and build from there.”
IBM has developed a prescriptive approach, which it calls the AI Ladder, to help get companies to AI success. The first step, collect, revolves around the idea of making data simple and accessible. The second step, organize, involves having a trusted analytics foundation. The third, analyze, calls for scaling insights with machine learning everywhere in the business. And the fourth, infuse, calls for deploying trusted AI-driven business processes. “Trusted means a few things: You can explain it, you can trace it, you have the lineage of it, you have governance around it. You have all those things that are rattling around in your head when you’re saying, Wait a minute: How can I put this black box into my business processes?”
Google’s Kozyrkov, for her part, laid out six guidelines for avoiding common pitfalls that lead to AI failure:
Know what business you’re in. “If there’s one take-home I leave you with today, it’s that you need to know whether you are cooking or building microwaves. If you don’t know, you’re going to have some serious problems. And the business leaders are going to say, ‘The data scientists are useless,’” she said.
Never forget the basics of learning and teaching. Sometimes, data scientists “get all mathemagical” about AI, she said. “Common sense goes out the window, and I don’t know why that is. But what applies in the basics of learning still applies here.”
Use enough examples. Referring to the difference between traditional information technology, which operates based on instructions, and machine learning, which operates on examples from which it teaches itself, she said: “We are in the game of expressing our wishes and explaining ourselves with examples, so of course we’re going to need good examples to explain ourselves with examples. And we’re going to need enough of them.” The more complex the task, the more examples that are required.
Use relevant examples. Kozyrkov said she tells Google engineers: “The world represented by your data is the only world you can expect to succeed in. What’s in your textbooks is the only thing your students will learn.”
Testing keeps you safe. “Don’t test your systems using the same examples that you used to teach them. Have new data [available that you can use to check whether your AI system] actually does the job for you,” she said.
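Kozyrkov’s advice here describes the standard train/test split in machine learning: hold some labeled examples back and grade the system only on data it has never seen. A minimal sketch of the idea, using made-up data and a deliberately trivial threshold “model” (both are illustrative assumptions, not anything from her talk):

```python
import random

# Hypothetical labeled examples: (value, label) pairs, where the true
# rule we hope the model learns is simply "value > 50".
random.seed(0)
examples = [(x, x > 50) for x in random.sample(range(100), 60)]

# Hold out 20 percent of the examples for testing; the model never sees them.
random.shuffle(examples)
split = int(len(examples) * 0.8)
train, test = examples[:split], examples[split:]

# A deliberately simple "model": learn a decision threshold from the
# training data (midpoint between the top negative and bottom positive).
pos = [x for x, label in train if label]
neg = [x for x, label in train if not label]
threshold = (min(pos) + max(neg)) / 2

def predict(x):
    return x > threshold

# Evaluate only on the held-out test set, never on the training data.
accuracy = sum(predict(x) == label for x, label in test) / len(test)
print(f"held-out accuracy: {accuracy:.2f}")
```

Scoring the model on its own training examples would report a flattering number that says nothing about how it handles new data, which is exactly the trap the quote warns against.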
Hire for a diversity of skills and perspectives. “It’s really a team sport. It’s not just a standard idea of an AI nerd who will make your projects happen. You’re going to need decision makers, data scientists, ethicists, program and project managers, statisticians, reliability folks, software and AI engineers, analysts: a diverse team.” Among those team members, the decision maker is the most important. “It’s the decision maker who sets the exam, who says how high the bar is,” she said. “What all these big data technologies are are huge levers. There’s no such thing as technology that’s free of humans. It’s just an echo of the wishes of decision makers. We are building a proliferation of magic lamps. It’s not the lamp or genie who’s dangerous, it’s an unskilled wisher.”
Editor’s note: The AI Summit is owned by Informa, which also publishes ITProToday.com.