Senate Democrats Demand OpenAI Detail Efforts To Make Its AI Safe
Following a Washington Post report, five Senate Democrats ask the artificial intelligence start-up to describe how it will ensure its tools don’t cause harm.
July 23, 2024
Senate Democrats demanded in a Monday letter that OpenAI turn over data about its efforts to build safe and secure artificial intelligence, following employee warnings, detailed in The Washington Post earlier this month, that the company rushed through safety testing of its latest AI model.
Led by Sen. Brian Schatz (D-Hawaii), the five lawmakers asked OpenAI’s chief executive Sam Altman to outline how the ChatGPT-maker plans to meet “public commitments” to ensure its AI does not cause harm, such as teaching users to build bioweapons or helping hackers develop new kinds of cyberattacks, in the letter obtained exclusively by The Post.
The senators also asked the company for information about employee agreements, which could have muzzled workers who wished to alert regulators to risks. In a July letter to the Securities and Exchange Commission, OpenAI whistleblowers said they had filed a complaint with the agency alleging the company illegally issued restrictive severance, nondisclosure and employee agreements, potentially penalizing workers who wished to raise concerns to federal regulators.
In a statement to The Post earlier this month, OpenAI spokesperson Hannah Wong said the company has “made important changes to our departure process to remove nondisparagement terms” from staff agreements.
The letter comes amid employee concerns that OpenAI is putting profit before safety in creating its technology. It cites a July report in The Post detailing how OpenAI rushed out its latest AI model, GPT-4 Omni, to meet a May release date. Company leaders moved ahead with the launch, despite employee concerns about the time frame, and sped through comprehensive safety testing, undermining a July 2023 safety pledge to the White House.
“Given OpenAI’s position as a leading AI company, it is important that the public can trust in the safety and security of its systems,” the senators wrote. “This includes the integrity of the company’s governance structure and safety testing, its employment practices, its fidelity to its public promises and mission, and its cybersecurity policies.”
“Artificial intelligence is a transformative new technology and we appreciate the importance it holds for U.S. competitiveness and national security,” OpenAI spokesperson Liz Bourgeois said in a statement. “We take our role in developing safe and secure AI very seriously and continue to work alongside policymakers to establish the appropriate safeguards going forward.”
Lawmakers, including Sen. Chuck Grassley (R-Iowa), have said employees at AI companies need to be able to offer Congress a clear understanding of the technology, including its concerns and risks, as lawmakers attempt to regulate it.
Senators in the letter asked OpenAI to commit to not enforcing nondisparagement agreements and “removing any other provisions” from employee agreements that could be used to punish those who raise concerns about company practices.
Senate Majority Leader Charles E. Schumer (D-N.Y.) and a bipartisan working group of senators released recommendations earlier this year to infuse $32 billion into AI research and development, but critics have said the plan is vague and has stymied other efforts in Congress to craft legislation. The chances of passing comprehensive legislation this year are dwindling as attention in Washington shifts to the 2024 election.
In the absence of new laws from Congress, the White House has largely relied on voluntary commitments from the companies to create safe and trustworthy AI systems. The Biden administration also issued a sweeping AI executive order requiring companies to share testing results about the most powerful models.
The letter also asked Altman whether OpenAI will dedicate 20 percent of its computing resources to research on AI safety, a commitment the company made last July when announcing a team dedicated to preventing existential risks. That group, the “Superalignment team,” has since been disbanded and its staff redistributed to other parts of the company.
Senate Democrats asked OpenAI if it will allow independent experts to assess the safety and security of its systems before release, and to make its next foundational AI model available to government agencies for predeployment testing. Legislators also asked OpenAI to outline what misuse and safety risks its staff have observed after releasing its most recent large language models.
Stephen Kohn, a lawyer representing OpenAI whistleblowers, said Senate Democrats’ requests are “not sufficient” to cure the chilling effect created by preventing employees from speaking about company practices. “What steps are they taking to cure that cultural message,” he said, “to make OpenAI an organization that welcomes oversight.”
Senate Democrats asked OpenAI to fulfill the requests by Aug. 13, including documentation on how it plans to meet its voluntary pledge to the Biden administration to protect the public from abuses of generative AI.
Kohn added that Congress must hold hearings and an investigation into OpenAI’s practices.
“Congressional oversight on this is badly needed,” Kohn said. “It’s essential that when you have a technology that has the potential risks of artificial intelligence that the government get in front of it.”