Ethical AI Use Stymies Executives but Consumers Demand It

A new report finds that the enterprise push to implement artificial intelligence is causing ethical headaches.

Terri Coles, Contributor

October 24, 2020


This year will be remembered for a lot of things, and the ramp-up of automation as a way to adjust to pandemic operations looks like it will be one of them. But the need to pivot to solutions rooted in artificial intelligence hasn’t eliminated the ethical concerns many clients and consumers have about the technology, a new report from Capgemini found.

“Naturally, the [COVID-19] pandemic has forced many companies to adjust how they operate and look toward automation as a solution to reduce in-person contact and interactions,” said Dan Simion, Capgemini’s vice president of AI and analytics.

The ethical questions around artificial intelligence are many and complicated. Some are primarily a concern for researchers and academics. But plenty of others matter to enterprises, whether they are using AI products and services themselves or offering them to clients and customers.

We’ve seen it happen repeatedly: an emerging technology advances more quickly than the regulatory and ethical frameworks surrounding its use. Artificial intelligence is no exception. The result is policies that vary by company or jurisdiction, or don’t exist at all, leaving executives in a potentially vulnerable position for lack of both protection and information.

Given the situation, and how quickly it is changing, executives might be tempted to leave ethical AI considerations to someone else. But getting AI right is not just the ethical path forward; it’s also the best one for business, according to the new survey from the Capgemini Research Institute, even (or especially) in a year when COVID-19 upended normal operations.

“2020 was a disruptive year,” Simion said. “There has been a significant need for businesses to leverage AI to try and solve the new problems that the pandemic created. With this significant increase in leveraging AI solutions, a lot of the ethics around AI came into question.”

Consumer Confidence and Loyalty

Many of the AI-enhanced solutions that can help us get through this time rely on personal data, Simion said, so it’s important that clients and consumers feel confident about the ethics of how that data is used. For the new report, Capgemini surveyed 1,580 executives at 510 organizations and more than 4,400 consumers around the world, in countries including the United States, the United Kingdom, China, Germany and France.

The report's findings show that consumers and citizens want companies to use AI ethically. More than half of respondents said they would place more trust in an organization whose AI interactions they see as ethical, Capgemini found, that they would share their experience with friends, and that they would be more loyal to the company.

“Customers that previously were loyal will want to have that same level of trust with a brand, even when they are experiencing an AI-enabled interaction,” Simion said.

Conversely, almost 40% of respondents said they would complain to the company and expect an explanation if they experienced unethical behavior involving AI. And 40% of consumers believe they’ve been affected by an ethical issue involving AI, the report found.

“One of the issues we’re seeing, and this is highlighted in the report, is that if your customers are losing trust in AI, then the adoption of AI will suffer and eventually the customers will go elsewhere,” Simion said. “That’s the biggest problem.”

Regulatory Concerns

Despite the risks and concerns the survey results illustrate, AI implementation continues. In fact, the push for implementation is one of the reasons these ethical problems exist, executives told Capgemini: they identified the pressure to implement AI urgently as the top cause of ethical issues arising from its use.

There is not yet a national regulatory framework around AI ethics in the United States, though some jurisdictions have brought in rules for the use of related technologies such as facial recognition and data analysis. Some industry players have pushed back against regulations like the California Consumer Privacy Act (CCPA), but consumers and citizens made it clear to Capgemini that they expect the government to act. Seventy-six percent of consumer respondents said that governments, independent bodies and regulators should introduce principles around AI use.

“Right now, companies should work to find solutions in order to be able to comply with any new regulations that may come into place,” Simion said. “If they can proactively work to find ways to meet future standards, then when those standards are implemented, they will already be compliant.”

About the Author

Terri Coles

Contributor

Terri Coles is a freelance reporter based in St. John's, Newfoundland. She has worked for more than 15 years in digital media and communications, with experience in writing, editing, reporting, interviewing, content writing, copywriting, media relations, and social media. In addition to covering artificial intelligence, machine learning, big data, and other topics for IT Pro Today, she writes about health, politics, policy, and trends for several different publications.
