AI vs. Nuclear Weapons: Debating the Right Analogy for AI Risks
The Future of Life Institute, which called for a six-month pause on advanced AI development almost a year ago, faces off against Meta and IBM.
This article originally appeared on AI Business.
Last March, an open letter signed by the likes of Elon Musk, Steve Wozniak and many others called for a pause on the development of AI systems more powerful than GPT-4. Almost a year on, minds from both sides of the debate continue to argue.
During a panel discussion at the World AI Cannes Festival, Mark Brakel, director of policy at the Future of Life Institute, the group that published the letter, said that humanity has already decided certain technologies require careful handling, pointing to human cloning and open source nuclear research.
But Francesca Rossi, IBM’s AI ethics global leader, disagreed with the idea that AI was in the same ballpark as nuclear weapons, arguing that nuclear weapons are a specific application of a wider technology while AI can be applied in many different ways.
“We have to be careful about this analogy because they can be very frightening. But they need to have basic elements to have real analogies that make sense,” Rossi said.
A 'Ridiculous' Analogy
Agreeing with the IBM ethics leader was Meta Chief AI Scientist Yann LeCun, who said the analogy of AI and nuclear weapons was “ridiculous.”
“AI is a technology to make people smarter, whereas nuclear weapons are designed to wipe out entire cities,” LeCun said. “Any powerful technology can be used for good or bad.”
LeCun argued that AI should be judged on whether the human race would be better off with or without it, adding the matter is “not whether the technology is intrinsically dangerous.”
Both Rossi and LeCun agreed that there is no issue with regulating deployments and products, but that targeting research could have detrimental impacts on efforts to improve AI safety.
Moreover, Rossi contended that AI research itself “is a tool to mitigate the risk of AI.” Keeping AI development open source also puts more eyes on the underlying code and creates a more diverse community of developers working on it, LeCun added.
LeCun likened the tightening of AI rules to the Ottoman Empire's ban on the printing press, saying it was a way to exert more control but also to protect corporations with a vested interest.
It is the major corporations, the ones developing AI in secret, that are trying to convince governments of AI's existential risks, LeCun added.
“This is not an individual process, not by any single company, and any single process. There's not like the lone genius, that suddenly invents AI and turns on the robot that takes over the world, that’s not happening,” LeCun said.
“What's happening is an open community that does open research, shares information, tries to do the right thing. And it’s in the open, that's the most democratic thing you can imagine.”
Future of Life Institute Stands Firm
But Brakel argued “we're not talking about the certainty around existential destruction. We're talking about potential risks, that there might be a probability that AI leads to an existential disaster.”
Brakel also argued that talking about existential risks brings more attention, not less, to other harms around AI, such as bias and breaches of privacy, countering LeCun's earlier point that focusing on existential concerns would overshadow AI's real, existing harms.
“We can see that in the EU AI Act but also in the Biden White House Executive Order, which addresses biases, housing and unemployment, but it also tries to address existential risks by regulating some of these systems beyond a certain level of computational power,” Brakel added.
The EU is set to establish an AI agency to oversee the EU AI Act, and companies that violate its rules could face fines of up to 7% of their global annual revenue.
Brakel supported the idea of an agency overseeing AI, but warned against repeating the fallout from the General Data Protection Regulation (GDPR), under which companies moved their European headquarters to Dublin because Ireland's regulator was seen as weak.