Major Tech Firms Come Out Against Police Use of AI Algorithms

Facebook, Microsoft, Alphabet Inc.'s Google and DeepMind, Amazon, Apple and International Business Machines are members of a consortium that says artificial intelligence algorithms should not be used by law enforcement to make decisions about jailing people.

Bloomberg

April 26, 2019


(Bloomberg) -- A consortium whose members include major technology firms has said that artificial intelligence algorithms should not be used by law enforcement to make decisions about jailing people.

The Partnership on AI said in a report published Friday that current algorithms aimed at helping police determine who should be granted bail, parole or probation, and which help judges make sentencing decisions, are potentially biased, opaque, and may not even work.

The group’s members include Facebook Inc., Microsoft Corp., Alphabet Inc.’s Google and DeepMind, Amazon.com Inc., Apple Inc., and International Business Machines Corp., as well as academic researchers.

The group said it opposed any use of these systems -- which work by trying to predict how likely a defendant or prisoner is to re-offend if released -- unless they are properly regulated.
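To make that mechanism concrete: below is a minimal, purely illustrative sketch of how a recidivism risk-assessment model can be built. Every feature name, number and threshold here is invented for illustration; it does not reflect COMPAS or any deployed system.

# Purely illustrative sketch of a recidivism "risk score" model.
# All features, data, and thresholds are hypothetical -- this is NOT
# how COMPAS or any deployed tool actually works.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic defendant records: age, prior arrests, age at first arrest.
X = np.column_stack([
    rng.integers(18, 70, 1000),
    rng.poisson(2, 1000),
    rng.integers(14, 40, 1000),
])
# Synthetic "re-offended within two years" labels.
y = rng.integers(0, 2, 1000)

model = LogisticRegression().fit(X, y)

# The model outputs a probability, which vendors typically bucket
# into a coarse scale shown to judges (e.g., low/medium/high).
defendant = np.array([[25, 3, 17]])
p = model.predict_proba(defendant)[0, 1]
label = "high" if p > 0.66 else "medium" if p > 0.33 else "low"
print(f"predicted re-offense probability: {p:.2f} -> {label} risk")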

Despite concerns about the fairness and efficacy of algorithms designed to help legal authorities make decisions about incarceration, the Partnership on AI found that such systems are already in widespread use in the U.S. and are gaining a foothold in other countries, too.

One of the best-known examples of the problems with these tools is Equivant's COMPAS algorithm (Correctional Offender Management Profiling for Alternative Sanctions). In 2016, an investigation by ProPublica found the algorithm was twice as likely to incorrectly label black defendants as being at higher risk than white defendants.
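ProPublica's "twice as likely" figure refers to a gap in false positive rates: among defendants who did not go on to re-offend, black defendants were far more often labeled high risk. A hedged sketch of that comparison, on invented data rather than ProPublica's, might look like this:

# Sketch of the false-positive-rate comparison behind ProPublica's
# finding. All numbers here are invented; this does not reproduce
# their dataset or analysis.
import numpy as np

def false_positive_rate(high_risk, reoffended):
    # Share of people who did NOT re-offend but were labeled high risk.
    return high_risk[~reoffended].mean()

rng = np.random.default_rng(1)
n = 10_000
reoffended = rng.random(n) < 0.4   # hypothetical base rate
group_a = rng.random(n) < 0.5      # hypothetical group membership

# Invented flagging behavior in which group A's non-reoffenders are
# labeled high risk twice as often as group B's.
flag_prob = np.where(reoffended, 0.6, np.where(group_a, 0.4, 0.2))
high_risk = rng.random(n) < flag_prob

fpr_a = false_positive_rate(high_risk[group_a], reoffended[group_a])
fpr_b = false_positive_rate(high_risk[~group_a], reoffended[~group_a])
print(f"false positive rate, group A: {fpr_a:.2f}")
print(f"false positive rate, group B: {fpr_b:.2f}")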

While Equivant disputed ProPublica's findings, subsequent academic studies found the algorithm performed no better than the estimates of untrained humans or a far simpler set of rules. Despite this, the algorithm remains in use in many places.

The Partnership on AI report highlighted that in many places where AI systems were being used to provide risk assessments, there was inadequate governance over how the judges, bail officers and parole boards who ultimately decide whether to put or keep someone in jail should use the risk score, and how heavily it should be weighed against other factors.

The report "highlights, at a statistical and technical level, just how far we are from being ready to deploy these tools responsibly," said Logan Koepke, senior policy analyst at Upturn, an organization that promotes equity in the design, governance and use of digital technology. Upturn is a member of the Partnership on AI.

Big technology firms have increasingly questioned the use of predictive algorithms, even though they helped pioneer the underlying techniques and use AI-based tools as a major lure for their cloud computing businesses.

Microsoft last year called for clearer laws around the use of facial recognition technology amid concerns that such software could be used by police and governments in ways that violate civil liberties. Amazon, which sells such technology, later said it shared those concerns.
