AI Summit Silicon Valley 2021: Accelerating AI through trust and ingenuity

The first day of the conference saw experts from Microsoft and IBM Watson offer insights into their AI work

Ben Wodecki, Jr. Editor

November 4, 2021

2 Min Read

Business leaders in AI must rely on the resources of their teams and not solely on technology, according to Mitra Azizirad, corporate VP for Microsoft AI and Innovation.

Azizirad told attendees of the 2021 AI Summit Silicon Valley that bringing together the ingenuity of people with technology can “truly change the world.”

“Adapting and changing is about so much more than tech – it’s the combination of human and machine that will help organizations both reimagine and transform their businesses,” she said, adding, “human ingenuity with AI is truly a force multiplier.”

Azizirad cited a McKinsey report which found that 61 percent of high-performing companies increased their investment in AI during the COVID-19 crisis.

“This underscores just how integral AI capabilities have become in terms of how work gets done,” she said.

"Even before the pandemic, my team and I were working with many customers around the best ways to inculcate an AI-ready culture in their organizations."

Transparency and trust

In a later session, IBM Watson’s chief AI officer, Seth Dobrin, stressed that trust is key to enabling the adoption of AI and driving it at scale.

Dobrin told the AI Summit attendees that achieving trustworthy AI requires thinking holistically.

"In business, we need to understand context, jargon, and code to get a better sense of data that hasn't been mined in a while."

During his speech, Dobrin touched on potential regulations related to trust in AI.

One jurisdiction pressing ahead on this is the EU – the bloc’s proposed ‘Artificial Intelligence Act’ would require all AI systems to be categorized by the risk they pose to citizens’ privacy, livelihoods, and rights.

Any system determined to pose ‘unacceptable risk’ would be outright banned, while those deemed ‘high risk’ would be subject to strict obligations before they can be put on the market.

Dobrin argued that such governance shouldn’t cover all AI, only systems that affect human health and employment.

“As corporations, it’s our responsibility that not just us produce trustworthy AI,” he said, adding that teams like his need to work with the wider community on governance.

He went on to discuss transparency, saying, “transparency drives trust – without [it] you’re not going to get people to trust AI.”

He likened his ideal for AI transparency to nutritional labels on foods, stressing to the audience that it should be easy to understand.

About the Author

Ben Wodecki

Jr. Editor

Ben Wodecki is the Jr. Editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to Jr. Editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others. He holds an MSc in Digital Journalism from Middlesex University.
