Microsoft Boosts Responsible AI Team From 350 to 400 Personnel

Microsoft disclosed the expanded headcount in its inaugural AI transparency report.

Bloomberg News

May 1, 2024

2 Min Read
Microsoft Copilot logo on a laptop screen
Bloomberg

(Bloomberg) -- Microsoft Corp. expanded the team responsible for ensuring its artificial intelligence products are safe, boosting personnel from 350 to 400 last year.

More than half of the group focuses on the task full-time, the company said Wednesday in its first annual AI transparency report, which outlines measures to ensure its services are rolled out responsibly. The team’s additional members include new hires as well as existing employees.

Last year, Microsoft dissolved its Ethics and Society team amid broader layoffs across the technology sector that gutted trust and safety teams at various companies, including Meta Platforms Inc. and Alphabet Inc.’s Google. 

Microsoft is keen to boost trust in its generative AI tools amid mounting concerns about their tendency to generate strange content. In February, the company investigated incidents involving its Copilot chatbot, whose responses ranged from weird to harmful

The following month, a Microsoft software engineer sent letters to the board, lawmakers and the Federal Trade Commission warning that the tech giant wasn’t doing enough to safeguard its AI image generation tool, Copilot Designer, from creating abusive and violent content. 

“At Microsoft, we recognize our role in shaping this technology,” the Redmond, Washington-based company said in the report. 

Related:Sam Altman and Satya Nadella Tapped for U.S. Government AI Security Board

Microsoft’s approach to deploying AI safely is based on a framework devised by the National Institute for Standards and Technology. The agency, which is part of the Department of Commerce, was tasked with creating standards for the emerging technology as part of an executive order issued last year by President Joe Biden. 

In its inaugural report, Microsoft said it has rolled out 30 responsible AI tools, including ones that make it harder for people to trick AI chatbots into acting bizarrely. The company’s “prompt shields” are designed to detect and block deliberate attempts — also known as prompt injection attacks or jailbreaks  — to make an AI model behave in an unintended way.

Read more about:

Microsoft

About the Author

Bloomberg News

The latest technology news from Bloomberg.

Sign up for the ITPro Today newsletter
Stay on top of the IT universe with commentary, news analysis, how-to's, and tips delivered to your inbox daily.

You May Also Like