Artificial Intelligence Is Coming for Hiring, and It Might Not Be That Bad


Bloomberg

August 8, 2018


(Bloomberg) -- Artificial intelligence promises to make hiring an unbiased utopia.

There’s certainly plenty of room for improvement. Employee referrals, a process that tends to leave underrepresented groups out, still make up the bulk of companies’ hires. Recruiters and hiring managers also bring their own biases to the process, studies have found, often favoring people with the “right-sounding” names and educational backgrounds.

Across the pipeline, companies lack racial and gender diversity, with the ranks of underrepresented people thinning at the highest levels of the corporate ladder. Fewer than 5 percent of chief executive officers at Fortune 500 companies are women, and that number will shrink further in October when PepsiCo CEO Indra Nooyi steps down. Racial diversity on Fortune 500 boards is almost as dismal: four out of five new board appointees in 2016 were white. There are only three black CEOs in the same group.

“Identifying high-potential candidates is very subjective,” said Alan Todd, CEO of CorpU, a technology platform for leadership development. “People pick who they like based on unconscious biases.”

AI advocates argue the technology can eliminate some of these biases. Instead of relying on people’s feelings to make hiring decisions, companies such as Entelo and Stella.ai use machine learning to detect the skills needed for certain jobs. The AI then matches candidates who have those skills with open positions. The companies claim not only to find better candidates, but also to pinpoint those who may have previously gone unrecognized in the traditional process. 

Stella’s algorithm, for example, assesses candidates based only on skills, said founder Rich Joffe. “The algorithm is only allowed to match based on the data we tell it to look at. It’s only allowed to look at skills, it’s only allowed to look at industries, it’s only allowed to look at tiers of companies.” That limits bias, he said.
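
To make that concrete: a matcher can be built so the scoring function literally never sees anything outside a whitelist of fields. The sketch below is hypothetical; the field names and scoring are invented for illustration, not taken from Stella’s code:

ALLOWED_FIELDS = {"skills", "industry", "company_tier"}

def restricted_view(candidate: dict) -> dict:
    # Drop every field the matcher is not permitted to consider.
    return {k: v for k, v in candidate.items() if k in ALLOWED_FIELDS}

def match_score(candidate: dict, job: dict) -> float:
    # Score the overlap between whitelisted skills and the job's needs.
    skills = set(restricted_view(candidate).get("skills", []))
    required = set(job["required_skills"])
    return len(skills & required) / max(len(required), 1)

candidate = {
    "name": "A. Person",                 # stripped before scoring
    "skills": ["python", "sql", "etl"],
    "industry": "finance",
    "company_tier": 2,
}
job = {"required_skills": ["python", "sql", "airflow"]}
print(round(match_score(candidate, job), 2))  # 0.67; the name played no part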

Entelo today released Unbiased Sourcing Mode, a tool that further anonymizes hiring. The software allows recruiters to hide names, photos, school, employment gaps and markers of someone’s age, as well as to replace gender-specific pronouns—all in the service of reducing various forms of discrimination. 
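
That kind of redaction is conceptually simple; a toy version, illustrating the idea rather than Entelo’s actual software, might mask identifying fields and neutralize pronouns like this:

import re

# "her" is ambiguous (object vs. possessive); a production system needs
# more linguistic care than this simple mapping.
PRONOUNS = {"he": "they", "she": "they", "him": "them",
            "his": "their", "her": "their", "hers": "theirs"}

def neutralize(text: str) -> str:
    def swap(m):
        word = m.group(0)
        repl = PRONOUNS[word.lower()]
        return repl.capitalize() if word[0].isupper() else repl
    pattern = r"\b(" + "|".join(PRONOUNS) + r")\b"
    return re.sub(pattern, swap, text, flags=re.IGNORECASE)

profile = {"name": "Jane Doe", "school": "State University",
           "summary": "She led her team through a major replatforming."}
redacted = {"name": "[REDACTED]", "school": "[REDACTED]",
            "summary": neutralize(profile["summary"])}
print(redacted["summary"])  # They led their team through a major replatforming.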

AI is also being used to help develop internal talent. CorpU has formed a partnership with the University of Michigan’s Ross School of Business to build a 20-week online course that uses machine learning to identify high-potential employees. Those ranked highest aren’t usually the individuals who were already on the promotion track, Todd said, and often exhibit qualities such as introversion that are overlooked during the recruitment process. 

“Human decision-making is pretty awful,” said Solon Barocas, an assistant professor in Cornell’s Information Science department who studies fairness in machine learning. But we shouldn’t overestimate the neutrality of technology, either, he cautioned.

Barocas’s research has found that machine learning in hiring, much like its use in facial recognition, can result in unintentional discrimination. Algorithms can carry the implicit biases of those who programmed them. Or they can be skewed to favor certain qualities and skills that are overwhelmingly exhibited among a given data set. “If the examples you’re using to train the system fail to include certain types of people, then the model you develop might be really bad at assessing those people,” Barocas explained.

Not all algorithms are created equal—and there’s disagreement among the AI community about which algorithms have the potential to make the hiring process more fair.

One type of machine learning relies on programmers to decide which qualities should be prioritized when looking at candidates. These “supervised” algorithms can be directed to scan for individuals who went to Ivy League universities or who exhibit certain qualities, such as extroversion. 
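
In its simplest caricature, that kind of hand-specified screen is just a weighted checklist. The sketch below hard-codes the article’s two examples, Ivy League schooling and extroversion, with weights invented purely for illustration:

IVY = {"Brown", "Columbia", "Cornell", "Dartmouth",
       "Harvard", "Penn", "Princeton", "Yale"}

def screen(candidate: dict) -> float:
    # The weights encode the programmer's priorities, not evidence.
    score = 0.0
    if candidate.get("school") in IVY:
        score += 1.0
    score += 0.5 * candidate.get("extroversion", 0.0)  # assume a 0-1 scale
    return score

print(screen({"school": "Cornell", "extroversion": 0.9}))           # 1.45
print(screen({"school": "State University", "extroversion": 0.9}))  # 0.45

Whatever the machinery around it, the bias enters through those hand-picked criteria.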

“Unsupervised” algorithms determine on their own which data to prioritize. The machine makes its own inferences based on existing employees’ qualities and skills to determine those needed by future employees. If that sample only includes a homogeneous group of people, it won’t learn how to hire different types of individuals—even if they might do well in the job. 
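
That failure mode is easy to reproduce in miniature. In the fabricated sketch below, a model builds a prototype of “what our employees look like” from a homogeneous sample, then ranks any different profile poorly, however capable the person behind it:

# All data is invented for illustration.
def centroid(rows):
    # Average feature vector of the training examples.
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def similarity(a, b):
    # Negative squared distance: higher means "more like our employees."
    return -sum((x - y) ** 2 for x, y in zip(a, b))

# Features: [top-tier school, years at a big firm, extroversion]
current_employees = [[1, 8, 0.9], [1, 7, 0.8], [1, 9, 0.9]]
prototype = centroid(current_employees)

typical_candidate  = [1, 8, 0.9]   # fits the existing mold
atypical_candidate = [0, 2, 0.2]   # different path, could still excel

print(similarity(typical_candidate, prototype))   # ~ -0.001: ranked highly
print(similarity(atypical_candidate, prototype))  # ~ -37.4: screened out

The model never learns anything about the atypical candidate’s actual ability; it only learns that they don’t resemble the people already there.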

Companies can take measures to mitigate these forms of programmed bias. Pymetrics, an AI hiring startup, has programmers audit its algorithm to see whether it’s giving preference to any gender or ethnic group. Software that weighs ZIP code heavily, for example, will likely be biased against black candidates, because ZIP code strongly correlates with race. An audit can catch these prejudices and allow programmers to correct them.
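
A common version of that audit is a disparate-impact check: compare selection rates across groups and flag the model when the ratio drops below a threshold (0.8, the “four-fifths rule” used in U.S. employment-discrimination analysis, is the conventional cutoff). The counts below are invented:

def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants

def impact_ratio(rates: dict) -> float:
    # Lowest group selection rate divided by the highest.
    return min(rates.values()) / max(rates.values())

rates = {
    "group_a": selection_rate(selected=48, applicants=100),
    "group_b": selection_rate(selected=30, applicants=100),
}
ratio = impact_ratio(rates)
print(f"impact ratio: {ratio:.2f}")  # 0.62
if ratio < 0.8:
    print("audit flag: the model may be favoring one group")

Catching the gap is the easy part; the programmers then have to find the proxy variable, like ZIP code, that produced it and remove or reweight it.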

Stella also has humans monitoring the quality of the AI. “While no algorithm is ever guaranteed to be foolproof, I believe it is vastly better than humans,” said founder Joffe. 

Barocas agrees that hiring with the help of AI is better than the status quo. The most responsible companies, however, admit they can’t completely eliminate bias, and they tackle it head-on. “We shouldn’t think of it as a silver bullet,” he cautioned.
