Guidance Published To Help Developers Build Safer AI
The National Institute of Standards and Technology also launched a series of tests AI model developers can implement to evaluate generative systems.
This article originally appeared on AI Business.
The National Institute of Standards and Technology (NIST) has released several guidance documents to help companies build AI more safely.
NIST published four draft publications: two designed to advise businesses deploying chatbots and text-to-image and video generation systems, plus documents on developing global AI standards and promoting transparency around AI-generated content.
The publications were initially released as drafts, and NIST is seeking feedback to help finalize them before a final release later this year.
The guides are designed to work with other AI-related NIST publications like the AI Risk Management Framework and Secure Software Development Framework.
“For all its potentially transformative benefits, generative AI also brings risks that are significantly different from those we see with traditional software,” said Laurie E. Locascio, NIST director and undersecretary of commerce for standards and technology. “These guidance documents will not only inform software creators about these unique risks but also help them develop ways to mitigate the risks while supporting innovation.”
The AI RMF Generative AI Profile contains a list of 13 potential risks of model output and more than 400 actions developers can take to mitigate them.
The document outlines the potential risks and categorizes each by the nature of its impact: technical issues, human misuse, or broader societal concerns.
The second publication, Secure Software Development Practices for Generative AI and Dual-Use Foundation Models, focuses on securing underlying software code.
Developers can use it to guard against potentially malicious data in their datasets: it provides guidance on training data collection processes, including recommendations for analyzing text for signs of bias and manipulation.
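The guidance describes practices rather than specific tooling, but a minimal sketch can illustrate the kind of analysis involved: scanning a text dataset for exact duplicates and unusually repeated phrases, two crude signals of manipulation or poisoning. The function name and thresholds below are hypothetical, not drawn from the NIST document.

```python
from collections import Counter

def scan_training_texts(texts, phrase_len=8, repeat_threshold=50):
    """Flag crude signs of dataset manipulation: exact duplicate
    records and short phrases repeated far more often than expected.
    Thresholds are illustrative, not from the NIST guidance."""
    findings = []

    # Exact duplicates can indicate scraped or injected content.
    counts = Counter(texts)
    for text, n in counts.items():
        if n > 1:
            findings.append(f"duplicate record ({n}x): {text[:60]!r}")

    # A phrase repeated across many records can indicate poisoning,
    # e.g. a trigger phrase inserted to bias model behavior.
    phrase_counts = Counter()
    for text in texts:
        tokens = text.split()
        for i in range(len(tokens) - phrase_len + 1):
            phrase_counts[" ".join(tokens[i : i + phrase_len])] += 1
    for phrase, n in phrase_counts.items():
        if n >= repeat_threshold:
            findings.append(f"repeated phrase ({n}x): {phrase!r}")

    return findings
```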
Reducing Risks Posed by Synthetic Content provides insight into AI-generated content, including how developers can implement transparency measures such as digital watermarking and metadata recording.
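The document discusses approaches rather than a single API, but the metadata-recording idea can be sketched roughly: a generator attaches a provenance record, keyed to a hash of the content, at creation time. The field names below are hypothetical, loosely modeled on content-provenance schemes such as C2PA manifests, which define their own schemas and cryptographically sign records.

```python
import hashlib
import json
from datetime import datetime, timezone

def record_provenance(content: bytes, model_id: str) -> str:
    """Build a provenance record for a piece of generated content.
    Fields are illustrative only; real schemes define richer schemas
    and sign the record so tampering is detectable."""
    return json.dumps({
        # The hash ties the record to this exact content; any edit breaks it.
        "sha256": hashlib.sha256(content).hexdigest(),
        "generator": model_id,
        "created": datetime.now(timezone.utc).isoformat(),
        "synthetic": True,  # explicit disclosure that the content is AI-generated
    })

# Example: record provenance for a generated caption.
meta = record_provenance(b"A sunset over the bay.", model_id="demo-model-v1")
print(meta)
```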
The fourth guidance document, A Plan for Global Engagement on AI Standards, focuses on information sharing and offers recommendations on standards development and international cooperation in AI.
NIST has also launched a series of tests AI model developers can implement to evaluate generative systems.
NIST GenAI tests whether detection systems can discriminate a generative AI system's output from human work. It also evaluates whether outputs are indistinguishable from human-produced content.
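The article does not describe NIST's test methodology, but the underlying idea can be sketched: train a simple detector on labeled human and AI text, and treat held-out accuracy near chance as evidence the outputs are hard to distinguish. The corpora and model choice below are placeholders using scikit-learn, not NIST's actual harness.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Placeholder corpora; a real evaluation would use large held-out samples.
human_texts = ["hand-written paragraph one...", "hand-written paragraph two..."]
ai_texts = ["model-generated paragraph one...", "model-generated paragraph two..."]

texts = human_texts + ai_texts
labels = [0] * len(human_texts) + [1] * len(ai_texts)

# A detector that cannot beat chance (~0.5 accuracy) suggests the
# generator's output is statistically hard to tell from human text.
detector = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
scores = cross_val_score(detector, texts, labels, cv=2)
print(f"detector accuracy: {scores.mean():.2f}")
```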
The tests currently work only on text generation systems, with support for more modalities such as images, video and code coming soon.
“The announcements we are making today show our commitment to transparency and feedback from all stakeholders and the tremendous progress we have made in a short amount of time,” said U.S. Secretary of Commerce Gina Raimondo. “With these resources and the previous work on AI from the department, we are continuing to support responsible innovation in AI and America’s technological leadership.”
NIST’s publications come 180 days after President Biden signed the AI executive order.
“In the six months since President Biden enacted his historic Executive Order on AI, the Commerce Department has been working hard to research and develop the guidance needed to safely harness the potential of AI, while minimizing the risks associated with it,” said Secretary Raimondo.
In addition to NIST’s publications, the U.S. Patent and Trademark Office (USPTO) has launched a request for comment on how AI could affect evaluations of whether an invention is patentable under U.S. law.
The USPTO, which resides in the Commerce Department, wants views from intellectual property experts on the potential impacts for an examiner determining what qualifies as prior art, a key assessment used to evaluate the novelty of an invention.