Internal Google Panel to Vet AI Projects Is Packed With Executives
April 8, 2019
(Bloomberg) -- Two weeks ago, Google established an external panel of experts to review thorny ethical issues related to artificial intelligence. It quickly imploded. After a staff revolt over the panel's composition, the company disbanded the group on Thursday afternoon. Its members never even got the chance to meet.
The episode illustrates the difficulty Google is having as it grapples with the societal implications of the powerful technology shaping its future. It’s hard to overstate how important AI is to the company. “It’s more profound than, I don’t know, electricity or fire,” said Sundar Pichai, the company’s chief executive officer, last year. But Google has been at the center of a widening public debate over how automated systems might disadvantage vulnerable groups or lead to large-scale job losses, and whether AI should be incorporated into weaponry.
How much power Google’s panel – the Advanced Technology External Advisory Council, or ATEAC – would have had to confront such issues was never entirely clear. But another group with a nearly identical name – the Advanced Technology Review Council – does hold real power to shape Google’s approach to its ethical quandaries.
The company describes this other council, which it assembled last year, as an attempt to “represent diverse, international, cross-functional points of view that can look beyond immediate commercial concerns,” according to documents viewed by Bloomberg News. It has not publicly released the names of its members, but it includes some of the most influential people at the company, who have worked for years with Pichai and his boss, Google co-founder Larry Page, according to people familiar with the group.
This council is run by Kent Walker, Google's chief legal officer. The other members are Urs Hölzle, Google's infrastructure chief; Jeff Dean, head of AI; Jacquelline Fuller, vice president for Google.org; Maggie Johnson, vice president of education and university programs; and Mustafa Suleyman, co-founder of DeepMind, a leading AI research firm owned by Google parent Alphabet Inc.
Critics familiar with the council see it as a whitewash. They say a board of top executives is unlikely to serve as a serious check on Google in situations where its stated ethical principles butt up against its financial interests. And multiple people inside the company said the agenda and decisions of this corporate board remain unclear months after its launch. These people asked not to be identified for fear of retaliation.
When asked for comment, a Google spokesman pointed to a blog post in December that first introduced the group, highlighting the company’s progress in implementing its ethical principles on AI. At that time, the company said it had conducted 100 such reviews. In the blog post, Walker noted that the company had decided to hold off on a general-use facial recognition tool “before working through important technology and policy questions.”
Many technology companies have recently laid out ethical principles to guide their work on AI. Microsoft Corp.; Facebook Inc.; and Axon, which makes stun guns and body cameras for police departments, all now have advisory boards on the issue. But experts question whether these actions are designed primarily to head off new government regulations. “Ethical codes may deflect criticism by acknowledging that problems exist, without ceding any power to regulate or transform the way technology is developed and applied,” wrote the AI Now Institute, a research group at New York University, in a 2018 report. “We have not seen strong oversight and accountability to backstop these ethical commitments.”
AI Now’s report noted that Google had pursued a censored version of its search engine for the Chinese market, even after it had released a set of principles that seemed to preclude such a project on human rights grounds.
Google’s AI strategy first ran into serious ethical concerns last year, when revelations about its work on a Pentagon drone initiative called Project Maven led to staff protests and work stoppages. Last June, it released a set of ethical AI principles and created multiple committees to vote on whether proposed products meet the criteria. These groups function much like the committees Google uses to review services, such as advertising and mobile software, for privacy guidelines and legal liability. For AI, there's a "Responsible Innovation team," led by 12-year Google veteran Jen Gennai, which handles "day-to-day operations" and refers issues to an overarching board, according to the December company blog post. Another, larger group of advisers, made up of less senior staff, is on call to provide in-house expertise on technical products and protocols.
Those two groups sit below the highest review body, the Advanced Technology Review Council, which will "help resolve complex issues surfaced by [Google's] review process," the internal document said.
The short life of the external council was a case study in the company’s vitriolic internal politics. Within hours of its formation, outrage built among staff members who objected to the inclusion of Kay Coles James, president of the Heritage Foundation, citing her prior public comments about gay and transgender people and climate change. A petition calling for her removal began circulating. One member of the board resigned, while another defended the decision to stay. Right-wing news outlets published leaked messages from employees criticizing James; Breitbart News described the harshest of them as “outright smears.” The Heritage Foundation did not immediately respond to a request for comment on Friday.
Joanna Bryson, a computer science professor at the University of Bath who was invited to Google’s external board, described its collapse as a major setback. “The bottom line is that we are back at the status quo, which personally I think is unacceptable,” she wrote in an email.
Google seems no closer to coming up with an approach that will satisfy everyone. On Thursday, when the company shuttered its external council, a company statement said it was "going back to the drawing board." It made no mention of the internal panel.