EU AI Act Would Scrutinize Many ‘General’ AI Models – SXSW 2024
Anthropic's deputy general counsel and public policy lead break down the legal risks of AI models.
This article originally appeared on AI Business.
The passage of the EU AI Act today brings extra scrutiny to ‘general purpose’ AI models, based in part on the computational power used to train them, according to the public policy lead at Anthropic.
These are models such as OpenAI’s GPT-4, Google’s Gemini and Anthropic’s Claude, among other popular large language or multimodal models from which smaller, more bespoke models can be derived through further training.
“Interestingly, the EU has determined that any GPAI (general purpose AI) model that is trained at a computational power of 10^25 FLOPs or greater has a systemic risk,” said Rachel Appleton, public policy lead at Anthropic, during a fireside chat at SXSW 2024.
“That encompasses many of the models on the market today,” she said.
Models deemed to carry systemic risk “are subject to additional transparency and disclosure requirements around model evaluation, cybersecurity, red teaming, impact assessments and the like,” she added.
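To get a rough sense of where that threshold sits, the sketch below estimates training compute using the widely cited approximation of roughly 6 FLOPs per parameter per training token; the model sizes and token counts are illustrative assumptions, not figures from the talk.

```python
# Rough check of whether a training run would cross the EU AI Act's
# 10^25 FLOP systemic-risk threshold, using the common approximation of
# ~6 FLOPs per parameter per training token for dense transformers.
# The model sizes and token counts below are illustrative assumptions,
# not figures cited in the talk.

EU_SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimate_training_flops(num_params: float, num_tokens: float) -> float:
    """Approximate total training compute for a dense transformer."""
    return 6 * num_params * num_tokens

examples = [
    ("7B parameters, 2T tokens", 7e9, 2e12),
    ("400B parameters, 15T tokens", 400e9, 15e12),
]

for name, params, tokens in examples:
    flops = estimate_training_flops(params, tokens)
    over = flops >= EU_SYSTEMIC_RISK_THRESHOLD_FLOPS
    print(f"{name}: ~{flops:.1e} FLOPs -> systemic-risk presumption: {over}")
```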
Companies whose GPAI models are already on the market have three years to comply with the AI Act, while those whose GPAI models are still in development have one to two years of grace.
As for open source models, the AI Act grants an exemption, unless they are GPAI models or non-GPAI models posing “unacceptable or high risk,” Appleton said.
“So it ended up being a fairly narrow exception in the end,” she said.
Appleton also noted that the EU is setting up an AI Office to govern GPAI models. “This is quite an undertaking as we understand it.”
Penalties for violations are up to 7% of global annual revenue or €35 million ($38.3 million), whichever is higher. The actual fine is based on the level of risk posed by the AI model.
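As a minimal illustration of how the “whichever is higher” cap works, the sketch below compares 7% of global annual revenue against the €35 million floor; the revenue figures are hypothetical.

```python
# Minimal illustration of the AI Act's "whichever is higher" penalty cap:
# up to 7% of global annual revenue or EUR 35 million.
# The revenue figures below are hypothetical.

PENALTY_RATE = 0.07
PENALTY_FLOOR_EUR = 35_000_000

def maximum_fine_eur(global_annual_revenue_eur: float) -> float:
    """Upper bound on the fine for the most serious violations."""
    return max(PENALTY_RATE * global_annual_revenue_eur, PENALTY_FLOOR_EUR)

for revenue in (100_000_000, 10_000_000_000):
    print(f"Revenue EUR {revenue:,}: maximum fine EUR {maximum_fine_eur(revenue):,.0f}")
```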
Biden Invoked Wartime Powers for AI
In contrast to the EU’s AI Act, the U.S. does not have a comprehensive regulatory framework on AI passed into law, Appleton said.
She noted that President Biden signed an executive order last October containing more than 100 directives to federal agencies, one of the largest executive orders in history.
“One of the most interesting pieces is the Biden administration invoked their wartime powers under the Defense Production Act to require developers of dual-use foundation models trained at 10^26 FLOPs or higher, so just a little bit higher than the EU threshold, to report those training runs to the Department of Commerce,” she said.
These models are deemed to pose a risk to national security, public health and economic security. Their safety test results must be reported to the U.S. government, according to Appleton.
However, the next president could rescind Biden’s executive order with the stroke of a pen.
In Congress, Appleton said she was gratified to see AI emerge as one of the few issues with bipartisan support, and several initiatives to regulate AI are underway.
Sen. Chuck Schumer (D-NY) has hosted forums between the government and private sector to understand the socioeconomic implications of GPAI models.
Key Legal Issues Facing AI Models
Janel Thamkul, Anthropic’s deputy general counsel, who also spoke in the SXSW session, named the top legal concerns for GPAI models.
One is data privacy and security, both in the development and deployment phases.
On the development side, one major issue is the collection and processing of personal information that is already publicly available on the web, she said.
“Regulators and policymakers in Europe as well as in the States are trying to grapple with what the right balance is in terms of balancing the harms,” Thamkul said.
“This information is already public,” she added. “The more sensitive implications (revolve around) how people are feeling about the use of that data in this new technology and context – it’s a new space that people are still trying to wrap their heads around.”
While promising research efforts in privacy protection are underway, they “don’t yet adapt to the size of the generative AI models that you’re seeing in the landscape like GPT-4, Claude 3. The privacy research that is going on in the academic spheres has not fully been proven out with the size of model.”
As for risks in deployment, companies may have questions about what an AI model does with the proprietary data or customer data that it ingests. Thamkul said Anthropic models do not train on these two types of data.
The second key concern is algorithmic bias and discrimination, Thamkul said. Models can reinforce or perpetuate social biases and can be toxic and offensive. They can spread lies, automate disinformation campaigns and help create extremist texts.
One solution is having diversity in the training data, which actually “has a big impact” on results, Thamkul said.
The third key risk is IP rights. For now, the U.S. Patent and Trademark Office and the U.S. Copyright Office have ruled that an invention or artistic creation must be mostly created by a human to be patentable or copyrightable. Thamkul said other countries, such as India, might take a different approach.
The fourth key risk is liability and accountability. Who is responsible for the harmful outputs?
“We’re watching this: how liability should be apportioned between the different players in the ecosystem,” Thamkul said.
One mitigation is increasing the transparency of models. Anthropic is working on “mechanistic interpretability research” to aid in this effort, she added.