U.S. Takes First Step to Formally Regulate AI

The Biden administration follows China, Italy, Canada and the U.K.


The Biden administration said on Tuesday that it is seeking public comment on upcoming AI policies as the U.S. moves to put safeguards in place against harms like bias without dampening innovation.

In a first official step towards potential AI regulations at the federal level, the U.S. Commerce Department’s National Telecommunications and Information Administration (NTIA) wants public input on developing AI audits, assessments, certifications and other tools to engender trust from the public.

“The same way that financial audits created trust in financial statements for businesses, accountability mechanisms for AI can help assure that an AI system is trustworthy,” said Alan Davidson, assistant commerce secretary for communications and information, at an event in Pittsburgh, Pennsylvania.

“But real accountability means that entities bear responsibility for what they put out into the world,” he added.

Written comments must be submitted by June 10.

The NTIA is seeking input specifically on the types of certifications AI systems need before they can be deployed, what datasets are used and how they are accessed, how to conduct audits and assessments, what designs AI developers should choose, and what assurances the public should expect before an AI model is released, among other issues.


“Our initiative will help build an ecosystem of AI audits, assessments and the tools that will help assure businesses and the public that AI systems can be trusted,” Davidson said. “This is vital work.”

There already have been attempts to regulate AI: more than 130 bills were either passed or proposed in federal and state legislatures in 2021. That is a “huge” jump from the early days of social media, cloud computing and even the internet itself, Davidson said.

Meanwhile, China, Italy, Canada and the U.K. are stepping up scrutiny of generative AI.

Italy has temporarily banned ChatGPT and threatened to impose fines until OpenAI addresses its user-privacy concerns, while Canada’s privacy chief said his office will be scrutinizing the chatbot. The U.K.’s privacy watchdog, meanwhile, said organizations using or developing generative AI must ensure that people’s data is protected, as the law requires.



About the Authors

AI Business

AI Business, an ITPro Today sister site, is the leading content portal for artificial intelligence and its real-world applications. With its exclusive access to the global c-suite and the trendsetters of the technology world, it brings readers up-to-the-minute insights into how AI technologies are transforming the global economy - and societies - today.
