
AI Summit Silicon Valley 2021: Accelerating AI through trust and ingenuity

The first day of the conference saw experts from Microsoft and IBM Watson offer insights into their AI work

Business leaders in AI must rely on the resources of their teams and not solely on technology, according to Mitra Azizirad, corporate VP for Microsoft AI and Innovation.

Azizirad told attendees of the 2021 AI Summit Silicon Valley that bringing together the ingenuity of people with technology can “truly change the world.”

“Adapting and changing is about so much more than tech – it’s the combination of human and machine that will help organizations both reimagine and transform their businesses,” she said, adding, “human ingenuity with AI is truly a force multiplier.”

Azizirad cited a McKinsey report which found that 61 percent of high-performing companies increased their investment in AI during the COVID-19 crisis.

“This underscores just how integral AI capabilities have become in terms of how work gets done,” she said.

"Even before the pandemic, my team and I were working with many customers around the best ways to inculcate an AI-ready culture in their organizations."

Transparency and trust

In a later session, IBM Watson’s chief AI officer, Seth Dobrin, stressed that trust is key to enabling the adoption of AI and driving it at scale.

Dobrin told the AI Summit attendees that achieving trustworthy AI requires thinking holistically.

"In business, we need to understand context, jargon, and code to get a better sense of data that hasn't been mined in a while."

During his speech, Dobrin touched on potential regulations related to trust in AI.

One jurisdiction that's pressing ahead on this is the EU – with the bloc’s proposed 'Artificial Intelligence Act' potentially forcing all AI systems to be categorized in terms of their risk to citizens' privacy, livelihoods, and rights.

Any system determined to pose ‘unacceptable risk’ would be outright banned, while those deemed ‘high risk’ would be subject to strict obligations before they can be put on the market.

Dobrin argued that such governance shouldn’t cover all AI, but only systems that affect human health and employment.

“As corporations, it’s our responsibility that not just us produce trustworthy AI,” he said, adding that teams like his need to work with the wider community on governance.

He went on to discuss transparency, saying: “Transparency drives trust, without [it] you’re not going to get people to trust AI.”

He likened his ideal for AI transparency to nutritional labels on foods, stressing to the audience that it should be easy to understand.
