AI Summit London: AI Experts Share Best Responsibility Practices

Regulatory and consumer considerations are among the ideas proposed by Google, Bill.com, and Shell.


Computer vision is one of the core applications of AI. It allows systems to extract information from videos or images and turn it into insights for users.

It is used in everything from autonomous vehicles to Industry 4.0. For example, multinational energy giant Shell uses computer vision to monitor its structures.

Speaking on a panel at the AI Summit London, Amjad Chaudry, Shell’s capability center manager for data science and ML, said that his company uses computer vision to guide robots that navigate plants to monitor gauges.

Google and the Bank of England were among the other brands represented on the panel, which discussed future-proofing computer vision implementations and offering best practices to attendees.

Chaudry said the essential needs for his business are to embed security across a project and to keep an eye on the ever-changing regulatory landscape.

Joining Chaudry on the panel, Arunita Roy, senior data and ML scientist at the Bank of England, said her team looks at other regulators to see what they’re doing and how they’re incorporating AI.

Impact: Safety and society

Also on the panel was Toju Duke, program manager for responsible AI at Google. Duke spoke about the need to ensure that products using AI and ML do not have a harmful impact on society.


She referenced one example of an action Google took to ensure safety that was related to facial recognition. Google users can turn off Face Grouping, its facial recognition software, in apps like Google Photos.

Duke said it is important to try to get things right, but that businesses cannot get it right all the time. She said being responsible requires transparency, explainability, maintaining data quality, and using fairness tools and metrics.

“We should be trying to be responsible not just because of regulation,” she said.

Eitan Anzenberg, chief data scientist at Bill.com, supported Duke’s point, saying businesses should not wait for governments to do something and that it's up to brands to "do the right thing."

Describing privacy considerations as a “complex issue,” Anzenberg said that he often tries to take consumers’ perspectives when building models, asking himself how they might feel about what he’s doing end to end.

“Personal responsibility is very important,” he concluded.

This story originally appeared on AI Business, an ITPro Today sister publication.
 


About the Author

Ben Wodecki

Assistant Editor, AI Business

Ben Wodecki is assistant editor at AI Business, a publication dedicated to the latest trends in artificial intelligence.

