Does Ethical AI Require the Regulation of AI?
The AI Summit indicates that ethical AI is a top-of-mind issue for policymakers.
NEW YORK — Artificial intelligence is poised to fuel dramatic economic development across enterprises, regions and sectors. But untrammeled growth poses obstacles to the adoption of AI-enabled systems, not to mention threats to ethical AI outcomes.
Data ethics and transparency in AI were major themes at The AI Summit, which took place this week. Experts debated the sweet spot for regulation of artificial intelligence at the panel discussion "The Role of Policy and Regulation in the Development of AI."
“We have benefited so much from the growth of tech[nology],” said Ana Ariño, chief strategy officer for the New York City Economic Development Corporation, during the session. “But we are concerned about the next stage of growth ... if we don’t address the crisis of trust. To support innovation, we need to proactively address the concerns about the ethics of AI.”
AI could contribute up to $15.7 trillion to the global economy by 2030, more than the current output of China and India combined, according to a recent PwC study.
AI-enabled image recognition, for example, has made key strides in the early detection of diseases such as cancer, with far better accuracy rates than humans.
But AI is fueled by data, and data sets can carry bias when they aren’t sufficiently large or diverse. The training data from which algorithms learn to identify patterns, and in turn the AI-driven decisions built on it, can be distorted by underlying skews in the data.
Poor data quality can also undermine algorithmic rigor. “Garbage in, garbage out,” warned Anindya Ghose, Heinz Riehl chair and professor of business at Stern School of Business at New York University, during another AI Summit session on using AI and blockchain to bring transparency to digital marketing.
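To make that mechanism concrete, here is a minimal, hypothetical sketch, not drawn from the article: the data is synthetic and scikit-learn is assumed. A classifier trained on data dominated by one group inherits that group's patterns and systematically misjudges the under-represented group.

```python
# Hypothetical sketch: a classifier trained on data dominated by one
# group. All data is synthetic; scikit-learn is assumed to be installed.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Group A contributes 1,000 examples; group B only 20. Their feature
# distributions overlap, but their true outcomes differ.
X_a = rng.normal(loc=0.0, scale=1.0, size=(1000, 2))
X_b = rng.normal(loc=0.5, scale=1.0, size=(20, 2))
X = np.vstack([X_a, X_b])
y = np.concatenate([np.ones(1000), np.zeros(20)])

model = LogisticRegression().fit(X, y)

# Score the model on fresh samples from the under-represented group:
# shaped almost entirely by group A, it gets group B largely wrong.
X_b_new = rng.normal(loc=0.5, scale=1.0, size=(200, 2))
error_rate = (model.predict(X_b_new) != 0).mean()
print(f"error rate on under-represented group: {error_rate:.0%}")
```

Note that overall accuracy would still look high here, since the dominant group also dominates any typical test mix; that is exactly how such skews go unnoticed.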
AI-enabled systems in sectors that handle personally identifiable information and sensitive corporate data — which means most economic sectors these days — necessitate regulation as well.
Regulations such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act take a rigorous stance on data protection. GDPR, for example, includes stringent requirements for the deletion of data, which could affect data sets stored and used to train AI algorithms.
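As a rough illustration of why deletion requirements touch AI pipelines, consider the sketch below. The record schema and the `erase_user` helper are hypothetical, not taken from GDPR tooling or from the article.

```python
# Hypothetical sketch: honoring an erasure request against a stored
# training set. The schema and helper below are illustrative only.
training_records = [
    {"user_id": "u1", "features": [0.2, 0.7], "label": 1},
    {"user_id": "u2", "features": [0.9, 0.1], "label": 0},
    {"user_id": "u1", "features": [0.4, 0.5], "label": 1},
]

def erase_user(records, user_id):
    """Drop every stored record tied to the data subject.

    Deleting raw records is not the whole job: a model already trained
    on them may still encode the data, so honoring the request can
    also mean retraining the model on the purged set.
    """
    return [r for r in records if r["user_id"] != user_id]

training_records = erase_user(training_records, "u1")
print(len(training_records))  # 1: only u2's record remains
```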
“There is a real need in regulated areas like financial services, healthcare and other areas that involve individual identification and sensitive corporate information to have regulation,” said Bob Cohen, a senior fellow at the Economic Strategy Group, a public policy think tank that focuses on globalization.
Ethical and responsible AI also requires algorithmic transparency. Without sufficient insight into how algorithms are trained on data, they can become a black box. Some industries have used algorithms for decades, and their mathematical foundations are more transparent than others, wrote Bahar Gholipour in an article on transparency in AI.
“The regulations that prevent credit scoring companies from using inscrutable algorithms are absent in other areas, like the legal system and advertising,” Gholipour wrote.
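The credit-scoring case hints at what scrutable can mean in practice. Below is a minimal, hypothetical sketch, with synthetic data, made-up feature names and scikit-learn assumed: a linear model whose learned weights can be read off directly, unlike a black-box model.

```python
# Hypothetical sketch: a linear model whose reasoning is inspectable.
# Feature names are made up and the data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Synthetic "applicants": the outcome depends mostly on the first
# feature, slightly and negatively on the second, and not on the third.
X = rng.normal(size=(500, 3))
y = (X[:, 0] - 0.3 * X[:, 1] + 0.1 * rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Each learned weight can be read off directly: its sign and size say
# how that feature pushes the decision. This is the kind of scrutiny
# credit-scoring rules demand; black-box models cannot offer it
# without extra explanation machinery.
for name, coef in zip(["income", "debt_ratio", "age"], model.coef_[0]):
    print(f"{name}: {coef:+.2f}")
```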
Policymakers and other emerging technology experts have called for regulation of AI to ensure equitable — and accurate — data-driven outcomes.
“Do we need regulation to promote ethical AI? I think the answer is yes,” Ariño said. “We need to try to turn the principles that everyone agrees on into practice.”
AI has also begun to have an impact on trade and globalization; global regulation could encourage protectionism or greater openness of AI markets. Recently, China ordered that all foreign computer equipment and software be removed from government offices and public institutions within three years, the Financial Times reports. The ban could have a dramatic impact on technology providers such as Microsoft, whose core technologies include AI. It could also hamstring global efforts to develop ethical AI.
“There need to be agreements; otherwise, the entire market will be segmented,” Cohen warned.
Global regulation has become important terrain for the AI market, with the potential to promote innovation and collaboration rather than fragmentation. Global standards are also critical to ensuring ethical and transparent AI.
Some work has been done on this front. The EU has developed principles for ethical AI, as have the IEEE, Google, Microsoft, and other governments and corporations. The Organization for Economic Co-operation and Development (OECD), an intergovernmental organization that represents 37 countries, has also drafted a set of principles to guide the ethical and transparent development of AI.
Industry observers suggest that 2020 will be an important year for data ethics, as the inherent problems of data bias and lack of transparency could hinder adoption if they aren’t addressed.
“It will keep responsible and ethical AI at the forefront of everyone’s mind,” said Kathleen Walch, principal analyst at Cognilytica.
Ultimately, regulation of AI is as much about trust and transparency in the handling of data as it is about the emerging technology itself.
“The whole question is about trust,” said Tim Bradley, minister counselor at Australia’s Department of Industry, Innovation and Science. “If we don’t have community buy-in, community engagement, then there will be calls for a moratorium. AI won’t grind to a halt, but there will be an overcorrection.”