ChatGPT Risks Divide Biden Administration Over EU's AI Rules


Bloomberg News

May 31, 2023


(Bloomberg) — Biden administration officials are divided over how aggressively new artificial intelligence tools should be regulated — and their differences are playing out this week in Sweden.

Some White House and Commerce Department officials support the strong measures proposed by the European Union for AI products such as ChatGPT and Dall-E, people involved in the discussions said. Meanwhile, US national security officials and some in the State Department say aggressively regulating this nascent technology will put the nation at a competitive disadvantage, according to the people, who asked not to be identified because the information isn't public.

This dissonance has left the US without a coherent response to the EU's plan to subject generative AI to additional rules, a gap on display at this week's US-EU Trade and Technology Council gathering in Sweden. The proposal would force developers of artificial intelligence tools to comply with a host of strict requirements, such as documenting any copyrighted material used to train their products and more closely tracking how that information is used.

Commerce Secretary Gina Raimondo on Wednesday compared advancements in AI technology to social media, noting that in hindsight there should have been more restraint in its development. "The stakes are a whole lot higher" in AI, she said during a panel at the TTC meeting.


"Just because you can do it, doesn't mean you should," Raimondo said. "And so as we figure out the benefits of AI, I hope we're all really eyes-wide-open about the costs and do the analysis of whether we should do it."

National Security Council spokesman Adam Hodge said in a statement Tuesday the Biden administration is not divided and is working across the government to "advance a cohesive and comprehensive approach to AI-related risks and opportunities." He said the US has been "leading on these issues since long before the newest generative AI products."

How the EU decides to regulate AI arguably matters more than the debate in Washington. With Congress unlikely to pass binding rules for AI, the European bloc will be the first to dictate how tech giants including Microsoft Corp. and Google owner Alphabet Inc. develop the foundation models that underpin the next frontier of artificial intelligence.

Main Battlefield

These models rely on training data — often large samples of language pulled from the internet — to learn how to respond in various situations, rather than being designed for one specific task. This is the technology behind generative AI, which can respond to homework questions, design a PowerPoint or create fantastical images from text prompts.


The question for regulators is who should bear responsibility for the risks associated with the technology, such as the spread of misinformation or privacy violations. The proposed EU rules would add reporting requirements for companies, such as OpenAI, that develop the models used in chatbots.


Michelle Giuda, director of the Krach Institute for Tech Diplomacy and a former assistant secretary of state for global public affairs in the Trump administration, said one of the fundamental tasks for the TTC will be to strengthen trust between allies to foster innovation and keep ahead of China's advancements.

"The context is that innovation in AI is not happening in a vacuum — all of this is taking place in this 21st century contest between democracy and authoritarianism," Giuda said. "And you've got technology as the main battlefield."

High Risk

Until recently, the US and EU had a rough consensus to regulate uses rather than the technology itself, with a focus on high-risk areas such as critical infrastructure and law enforcement.

This approach was enshrined in the US's non-binding framework for AI systems, as well as the European Commission's initial proposals for the AI Act to regulate the technology. The last council meeting in December focused on end-use risk as well.

However, the release of ChatGPT made broader risks more apparent. This month an apparently AI-generated fake image of an explosion near the Pentagon spooked US markets, while the technology has already created corporate winners and losers.

"Europe is important, but this is bigger than Europe," Commission Executive Vice President Margrethe Vestager told reporters at the TTC. She said that, in partnership with the US, "we can push something that will make us all much more comfortable with the fact that generative AI is now in the world and is developing at amazing speeds."

The European Parliament has proposed new rules that specifically target the foundation models used for generative AI. Lawmakers in committee agreed earlier this month that more scrutiny should be on the companies that develop these foundation models. Most of those companies, including Microsoft and Google, are based in the US.

Simmering Resentment

This added to already simmering resentment among tech executives over the EU's antitrust and content moderation rules, which disproportionately affect US companies.

The tech industry has criticized the Biden administration for not doing more to stand up for US companies in the face of what they see as trade discrimination. With the EU's proposed changes, they warn that the AI Act could go from a bright spot of cooperation to another example of Europe targeting US tech.

The revised AI Act could get a vote in parliament in June, ahead of final negotiations with the EU's 27 member states. 

Dragos Tudorache, one of the lead authors of the bill in the parliament, said after meeting with US officials that "they consider our moves to also deal with generative AI a good move."

Some US officials disagree, warning that restricting foundation models could hurt US competitiveness, according to the people involved in the discussions. 

Sam Altman, the chief executive officer of OpenAI, became the public face of corporate concern over regulatory overreach when he suggested his company could pull products from the European market if the rules were too difficult to follow. EU Commissioner Thierry Breton responded with a tweet accusing Altman of "attempting blackmail."


Altman later said he would work to comply with the EU's rules. He will speak with Vestager on Wednesday and meet Commission President Ursula von der Leyen on Thursday.

European officials have resisted discussing the specifics of the AI Act with their US counterparts ahead of the TTC meeting, viewing it as inappropriate to bring Europe's democratic process into a multilateral debate, the people with direct knowledge of the talks said.

The EU is still debating regulation and there are European officials who think the parliament has gone too far, according to some of the people.

Generative AI will be mentioned in the TTC conclusion, according to a draft obtained by Bloomberg. The document affirms the transatlantic commitment to a risk-based approach, but it also highlights "the scale of the opportunities and the need to address the associated risks" of generative AI.

About the Author

Bloomberg News

The latest technology news from Bloomberg.
