5 Ways to Prevent AI Bias

As artificial intelligence gains traction in critical business processes, organizations must learn how to prevent AI bias.

Poornima Apte, Contributor

September 23, 2022

Artificial intelligence is increasingly involved in weighty business processes like assessing creditworthiness and sifting through resumes to determine ideal candidates. As a result, AI and its outcomes are understandably coming under the microscope. The key question worrying implementers: Is the AI algorithm biased?

Bias can creep in through multiple channels, including sampling practices that ignore large swaths of the population, and confirmation bias, where a data scientist only includes those datasets that conform to their worldview.

Here are several ways data scientists are addressing the problem.

1. Understand the Potential for AI Bias

Supervised learning, one of the subsets of AI, operates on rote ingestion of data. By learning under “supervision,” a trained algorithm makes decisions on datasets that it has never seen before. Following the “garbage in, garbage out” principle, the quality of the AI decision can only be as good as the data it ingests.

Data scientists must evaluate their data to ensure it is an unbiased representation of the real-life equivalent. To address confirmation bias, the diversity of data teams is also important.
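One simple way to evaluate whether data mirrors its real-life equivalent is to compare group proportions in the training sample against known population shares. The sketch below is a hypothetical illustration — the group names, shares, and 5% tolerance are all assumptions, not figures from any real dataset:

```python
from collections import Counter

# Illustrative population shares (assumed values, not real census data).
POPULATION_SHARES = {"group_a": 0.50, "group_b": 0.30, "group_c": 0.20}

def representation_gaps(sample_groups, population_shares, tolerance=0.05):
    """Return groups whose share in the sample deviates from the
    population share by more than the given tolerance."""
    counts = Counter(sample_groups)
    total = len(sample_groups)
    gaps = {}
    for group, expected in population_shares.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > tolerance:
            gaps[group] = (observed, expected)
    return gaps

# A skewed sample: group_c is badly underrepresented.
sample = ["group_a"] * 60 + ["group_b"] * 35 + ["group_c"] * 5
print(representation_gaps(sample, POPULATION_SHARES))
# → {'group_a': (0.6, 0.5), 'group_c': (0.05, 0.2)}
```

A check like this only catches sampling bias along attributes you already know to look at — which is one reason diverse teams, who notice different blind spots, matter.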

2. Increase Transparency  

AI remains challenged by the inscrutability of its processes. Deep learning algorithms, for example, use neural networks modeled after the human brain to arrive at decisions. But exactly how they get there remains unclear.

“Part of the move toward ‘explainable AI’ is to shine light on how the data is being trained and how you’re using which algorithms,” said Jonathon Wright, chief technology evangelist at Keysight Technologies, a testing technology provider.

While making AI explainable won’t entirely prevent biases, understanding the cause of a bias is a critical step. Transparency is especially important when enterprises use AI programs from third-party vendors.

3. Institute Standards

When deploying AI, organizations should follow a framework that will standardize production while ensuring ethical models, Wright said.

Wright pointed to the European Union’s Artificial Intelligence Act as a game-changer in the effort to scrub the technology free of bias.

4. Test Models Before and After Deployment

Testing AI and machine learning models is one way to prevent biases before releasing the algorithms into the wild.

Software companies built specifically for this purpose are becoming more commonplace. “It’s where the industry is going right now,” Wright said.
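One common pre-deployment test measures demographic parity: whether a model hands out positive decisions (say, loan approvals) at similar rates across groups. The function and the 0.1 threshold below are illustrative assumptions, not a specific vendor's method:

```python
def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-decision rate between any two groups.
    predictions: 0/1 model decisions; groups: matching group labels."""
    rates = {}
    for pred, group in zip(predictions, groups):
        n_pos, n_total = rates.get(group, (0, 0))
        rates[group] = (n_pos + pred, n_total + 1)
    shares = [pos / total for pos, total in rates.values()]
    return max(shares) - min(shares)

# Hypothetical audit: group "x" is approved 75% of the time, "y" only 25%.
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["x", "x", "x", "x", "y", "y", "y", "y"]
gap = demographic_parity_gap(preds, groups)
print(f"parity gap: {gap:.2f}")  # → parity gap: 0.50
```

In practice a gap this large (0.50 against a typical threshold like 0.10) would block release until the training data or model was reworked; the same metric can be monitored after deployment to catch drift.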

5. Use Synthetic Data

You want datasets that are representative of the larger population, but “just because you have real data from the real world does not mean that it is unbiased,” Wright noted.

Indeed, there is a real risk of AI learning biases from the real world. Synthetic data is one potential solution, said Harry Keen, CEO and co-founder of Hazy, a startup that creates synthetic data for financial institutions.

Synthetic datasets are statistically representative versions of real data sets and are often deployed when the original data is bound by privacy concerns.
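A toy illustration of “statistically representative”: fit summary statistics from a real numeric column, then sample fresh values from that distribution. Production synthetic-data tools (such as Hazy's) model far richer joint structure; the figures below are invented for the example:

```python
import random
import statistics

random.seed(0)  # fixed seed so the sketch is reproducible

# Hypothetical real incomes (assumed values).
real_incomes = [32_000, 41_000, 55_000, 48_000, 61_000, 37_000]
mu = statistics.mean(real_incomes)
sigma = statistics.stdev(real_incomes)

# Draw synthetic records that mirror the original statistics
# without reproducing any individual's actual value.
synthetic = [random.gauss(mu, sigma) for _ in range(1000)]

print(round(statistics.mean(synthetic)))  # close to mu, ~45,667
```

Because the synthetic records track the original distribution rather than copying rows, they can also be shared when privacy rules forbid releasing the real data.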

Keen emphasized that the use of synthetic data to address bias is “an open research topic” and that rounding out datasets – for example, introducing more women in models that vet resumes – might introduce a different kind of bias.

Synthetic data is seeing the most traction in evening out “lower dimensional structured data” like imagery, Keen said. For more complex data, “it can be a bit of a game of Whack-a-Mole, where you might solve for one bias but introduce or amplify some others. ... Bias in data is a bit of a thorny problem.”

Yet it is a problem that must be solved, given that the AI market is growing at an impressive annual rate of 39.4%, according to a Zion Market Research study.

About the Author

Poornima Apte

Contributor

Poornima Apte is a trained engineer turned writer who specializes in the fields of robotics, AI, IoT, 5G, cybersecurity, and more. Winner of a reporting award from the South Asian Journalists’ Association, Poornima loves learning and writing about new technologies—and the people behind them. Her client list includes numerous B2B and B2C outlets, who commission features, profiles, white papers, case studies, infographics, video scripts, and industry reports. Poornima reviews literary fiction for industry publications, is a card-carrying member of the Cloud Appreciation Society, and is happy when she makes “Queen Bee” in the New York Times Spelling Bee.

https://www.linkedin.com/in/poornimaapte/
