Facial Recognition, Other ‘Risky’ AI Set for Constraints in EU

Facial recognition and other high-risk artificial intelligence applications will face strict constraints under new rules unveiled by the European Union.

Bloomberg News

April 22, 2021


(Bloomberg) -- Facial recognition and other high-risk artificial intelligence applications will face strict constraints under new rules unveiled by the European Union that threaten hefty fines for companies that don’t comply.

The European Commission, the bloc’s executive body, proposed measures on Wednesday that would ban certain AI applications in the EU, including those that exploit vulnerable groups, deploy subliminal techniques or score people’s social behavior.

The use of facial recognition and other real-time remote biometric identification systems by law enforcement would also be prohibited, unless used to prevent a terror attack, find missing children or tackle other public security emergencies.

Facial recognition is a particularly controversial form of AI. Civil liberties groups warn of the dangers of discrimination or mistaken identities when law enforcement uses the technology, which sometimes misidentifies women and people with darker skin tones. Digital rights group EDRI has warned that public security exceptions could create loopholes for use of the technology.

Other high-risk applications that could endanger people's safety or legal status, such as self-driving cars or systems used for employment or asylum decisions, would have to undergo checks of their systems before deployment and face other strict obligations.

The measures are the latest attempt by the bloc to leverage the power of its vast, developed market to set global standards that companies around the world are forced to follow, much as it did with its General Data Protection Regulation.

The U.S. and China are home to the biggest commercial AI companies -- Google and Microsoft Corp., Beijing-based Baidu, and Shenzhen-based Tencent -- but if they want to sell to Europe’s consumers or businesses, they may be forced to overhaul operations.


Key Points:

  • Fines of up to 6% of revenue are foreseen for companies that don’t comply with bans or data requirements

  • Smaller fines are foreseen for companies that don’t comply with other requirements spelled out in the new rules

  • Legislation applies both to developers and users of high-risk AI systems

  • Providers of risky AI must subject it to a conformity assessment before deployment

  • Other obligations for high-risk AI include use of high-quality datasets, ensuring traceability of results, and human oversight to minimize risk

  • The criteria for ‘high-risk’ applications include intended purpose, the number of potentially affected people, and the irreversibility of harm

  • AI applications with minimal risk, such as AI-enabled video games or spam filters, are not subject to the new rules

  • National market surveillance authorities will enforce the new rules

  • EU to establish European board of regulators to ensure harmonized enforcement of regulation across Europe

  • Rules would still need approval by the European Parliament and the bloc’s member states before becoming law, a process that can take years
