Meta’s AI Chatbot Repeats Election and Anti-Semitic Conspiracies

Meta acknowledges that its chatbot may say offensive things, as it’s still an experiment under development. Chatbots have a history of taking reactionary turns.

Bloomberg News

August 8, 2022

(Bloomberg) -- Only days after its public launch, Meta Platforms Inc.’s new AI chatbot has been claiming that Donald Trump won the 2020 US presidential election and repeating anti-Semitic conspiracy theories.

Chatbots -- artificial intelligence software that learns from interactions with the public -- have a history of taking reactionary turns. In 2016, Microsoft Corp.’s Tay was taken offline within 48 hours after it started praising Adolf Hitler and making other racist and misogynistic comments it apparently picked up while interacting with Twitter users.

Facebook parent company Meta released BlenderBot 3 on Friday to users in the US, who can provide feedback if they receive off-topic or unrealistic answers. BlenderBot 3 can also search the internet to discuss different topics. The company encourages adults to engage the chatbot in “natural conversations about topics of interest” so that it learns to conduct naturalistic discussions on a wide range of subjects.

Conversations shared on various social media accounts ranged from the humorous to the offensive. BlenderBot 3 told one user its favorite musical was Andrew Lloyd Webber’s “Cats,” and described Meta CEO Mark Zuckerberg as “too creepy and manipulative” to a reporter from Insider. Other conversations showed the chatbot repeating conspiracy theories.

In a chat with a Wall Street Journal reporter, the bot claimed that Trump was still president and “always will be.”

The chatbot also said it was “not implausible” that Jewish people controlled the economy, saying they’re “overrepresented among America’s super rich.” 

The Anti-Defamation League says that assertions that Jewish people control the global financial system are part of an anti-Semitic conspiracy theory.

Meta acknowledges that its chatbot may say offensive things, as it’s still an experiment under development. The bot’s stated beliefs are also inconsistent; in other conversations with Bloomberg, it approved of President Joe Biden and said Beto O’Rourke was running for president. In a third conversation, it said it supported Bernie Sanders.

In order to start a conversation, BlenderBot 3 users must check a box stating, “I understand this bot is for research and entertainment only, and that it is likely to make untrue or offensive statements. If this happens, I pledge to report these issues to help improve future research. Furthermore, I agree not to intentionally trigger the bot to make offensive statements.”

Users can report BlenderBot 3’s inappropriate and offensive responses, and Meta says it takes such content seriously. Through methods including flagging “difficult prompts,” the company says it has reduced offensive responses by 90%.
