The AI Act Needs a Shared Responsibility Model, and Here’s Why

Gartner analyst Nader Henein discusses the need for a shared responsibility model in the EU's proposed AI Act.


I am an ardent believer that regulators need to protect consumers while fostering innovation.

As such, I have been on board with much of the data regulation coming out of Europe over the past five years. It is reassuring to see legislative committees open to passing sweeping laws and learning from their mistakes. The GDPR is an excellent example of four decades of organic regulatory evolution, starting with the OECD guidelines in the late '70s.

Under the GDPR, controllers are expected to assess the risk emanating from any solution or service they plan to use, determine whether the existing controls provide adequate protection for data subjects, and mitigate if need be.

In the same way, the AI Act mandates a risk-based approach, requiring organizations (referred to as deployers) to assess the risk when they “deploy” pre-trained models.

There are two fundamental challenges with that requirement:

  1. Most model developers do not publish the data deployers need to conduct an impact assessment.

  2. Assuming that model developers do publish the needed data, I would argue that 90% of organizations do not have the expertise to properly absorb this data and subsequently make a risk-based determination.

Addressing the first challenge means walking a very thin line: regulators must require adequate transparency from developers without crossing into intellectual property overreach.


The second challenge is much more complicated. As with encryption algorithms, the complexity of the math involved makes it impossible for most organizations to assess these models in-house, which is why many rely on independent certifications such as FIPS 140-2 when selecting their encryption algorithms.

But a complex model (such as a large language model) does not lend itself to this kind of certification, and if one were to be proposed, it would have to be a risk-scoring tool that would take years to finalize.

Having worked with thousands of organizations on privacy and data governance over the past decade, I offer my humble 2-euro-cent recommendation to the European Parliament, the EU Council of Ministers and the European Commission, which will be debating the AI Act as part of the trilogue process.

Instead of piling the responsibility on the deployer, make it a shared responsibility model where:

  • The developer of the model makes a series of assertions on specific aspects of the model defined by the AI Act.

  • The deployer then takes these assertions, uses them to conduct and document a risk-based assessment in line with their risk posture, and introduces mitigations accordingly.

  • The deployer would also have a duty to monitor model outcomes for outliers and maintain a log to demonstrate ongoing compliance in line with the earlier risk assessment (see the sketch after this list).
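To make that three-part split concrete, here is a minimal sketch of how a deployer might record developer assertions, the resulting risk assessment, and the outcome-monitoring log. It is purely illustrative: the class names, fields and risk categories are my own assumptions, not terminology from the AI Act or any certification scheme.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch only: field names and risk labels are illustrative,
# not taken from the AI Act or any existing framework.

@dataclass
class DeveloperAssertion:
    """A claim published by the model developer about one aspect of the model."""
    aspect: str          # e.g. "training-data provenance", "known failure modes"
    statement: str       # the developer's assertion, in plain language
    evidence_url: str    # pointer to supporting documentation

@dataclass
class RiskAssessment:
    """The deployer's documented, risk-based assessment built on those assertions."""
    model_name: str
    assertions: list[DeveloperAssertion]
    residual_risk: str   # e.g. "low", "medium", "high" per the deployer's risk posture
    mitigations: list[str] = field(default_factory=list)

@dataclass
class OutcomeLogEntry:
    """One monitoring record used to demonstrate ongoing compliance."""
    timestamp: str
    input_summary: str
    output_summary: str
    is_outlier: bool
    reviewer_note: str = ""

def log_outcome(log: list[OutcomeLogEntry], input_summary: str,
                output_summary: str, is_outlier: bool) -> None:
    """Append a monitoring entry; an outlier would trigger the deployer's review process."""
    log.append(OutcomeLogEntry(
        timestamp=datetime.now(timezone.utc).isoformat(),
        input_summary=input_summary,
        output_summary=output_summary,
        is_outlier=is_outlier,
    ))
```

The point of the sketch is simply the division of labor: the developer supplies the assertions, while the deployer owns the assessment, the mitigations and the ongoing log.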

This shared responsibility model would alleviate the complexity and speed up adoption by placing the responsibility with the parties who are able to appropriately assess the risks. 

This will not please model developers, but if you build a self-driving car that then drives into pedestrians on the sidewalk, you need to bear some of the responsibility.

N.B.: In the case of open-source models, even though there is a community of developers, this shared responsibility is not realistic; in that situation, organizations (deployers) would bear the brunt of the risk.

This article originally appeared on the Gartner Blog Network

About the Author(s)

Gartner Blog Network

The Gartner Blog Network has expert views on today’s technology and business topics and trends. 
