Facebook Wants AI to Screen Content, But Fairness Issues Remain
May 1, 2019
(Bloomberg) -- One of Facebook Inc.’s biggest issues in trying to stop the spread of fake news on its platform is finding good examples of truth and falsehood on which to train its algorithms.
"Often there is not common agreement on whether something is false news or not," Joaquin Quinonero Candela said in a phone interview ahead of his talk at the F8 developer conference in San Jose, California. "At our scale, there are not enough professional fact-checkers in the world to do it."
Facebook has been under pressure from governments and users around the world for not doing enough to check the spread of misinformation, extremist propaganda and hate speech on its platform.
The company in April unveiled new artificial intelligence tools to help flag posts potentially containing false information by pointing to trusted sources that contradict them. But Candela acknowledged such a system could potentially be gamed, particularly in countries where most news sources have political biases, or by users teaming up to flag an accurate piece of information as false.
"This is a huge concern," he said. "It is very important not to let the bias flow into the labels themselves."
Alongside developing its AI, Facebook has sought to address the issue by hiring thousands of human reviewers, often through contractors, but the company has been continually caught out -- for instance, failing to block the live video transmission of the gunman who attacked two mosques in New Zealand in March and then struggling to prevent the same video from being reposted.
Mark Zuckerberg, Facebook’s chief executive officer, has repeatedly told U.S. lawmakers that artificial intelligence would soon be able to automatically filter content from Facebook’s two billion users to flag objectionable posts. But today the technology remains too immature to do this well.
Candela said that even if ground truth could be determined, Facebook needed to guard against bias in the way the algorithm classified content, and in how moderators chose to act when confronted with content flagged as false, extreme or hateful.
"Our community reviewers bring personal opinions and biases to the process themselves and we want to make sure all content is being treated the same no matter where it is coming from," he said.
This ambition may prove too difficult for Facebook. Candela said that in deciding when the algorithm flags content for a reviewer, the company wants to apply the same rules to all content. But he said the company was aware that this definition of fairness -- that all groups be treated the same statistically -- might not satisfy all users.
For instance, certain language may be unacceptable for an outsider to use to refer to a member of a particular group, but suitable within that same group.
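The statistical notion Candela refers to can be made concrete with a small check: compare how often content from each group gets flagged and look at the gap. The sketch below is a minimal illustration of that parity-style comparison, using invented group names and records, not a description of how Facebook actually measures fairness.

```python
# A minimal sketch of the fairness notion the article attributes to
# Facebook -- "all groups be treated the same statistically" -- i.e.
# comparing how often a classifier flags content from each group.
# Group names and records here are hypothetical.
from collections import defaultdict

def flag_rates(records):
    """records: iterable of (group, was_flagged) pairs -> flag rate per group."""
    flagged = defaultdict(int)
    total = defaultdict(int)
    for group, was_flagged in records:
        total[group] += 1
        flagged[group] += int(was_flagged)
    return {g: flagged[g] / total[g] for g in total}

sample = [
    ("group_a", True), ("group_a", False), ("group_a", False),
    ("group_b", True), ("group_b", True), ("group_b", False),
]
rates = flag_rates(sample)
print(rates)

# A large gap between groups would violate this parity-style definition,
# but equal rates alone can still miss context -- for example, in-group
# uses of language that would be unacceptable coming from outsiders.
print("max gap:", max(rates.values()) - min(rates.values()))
```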
Candela said the company knows there are no easy answers to these questions. Referencing the time he spent learning the complex mathematics that underpins machine-learning algorithms and comparing it to the thorny problem of content moderation, Candela said, "I feel like when I was doing super-complicated math, that felt a lot easier than this."