AI Business is part of the Informa Tech Division of Informa PLC

Fei-Fei Li: How To Build Human-Centered AI

by Ciarán Daly
Today's anxieties over job losses are just the start. Universities, corporations, and governments need to start working together to build human-centric AI, the Chief AI Scientist for Google Cloud argued in an op-ed for The New York Times last week.

Fei-Fei Li is world-renowned as one of the field's leading innovators. As well as serving as Chief AI Scientist at Google Cloud, Li heads the Stanford AI Lab and continues to work as a computer science professor. Most significantly, she played a leading role in ImageNet, a crowdsourced dataset of millions of training photographs compiled to advance machine vision technology.

"I worry that enthusiasm for A.I. is preventing us from reckoning with its looming effects on society," Li argues in the piece. "Despite its name, there is nothing "artificial" about this technology - it is made by humans, intended to behave like humans, and affects humans. So if we want it to play a positive role in tomorrow's world, it must be guided by human concerns."

This is what she calls 'human-centered AI', and it consists of three goals that she believes can responsibly guide the development of intelligent machines.

Fei-Fei Li: three goals for responsible AI development

[Image: Fei-Fei Li]

"First, AI needs to reflect more of the depth that characterizes our own intelligence. Consider the richness of human visual perception. It's complex and deeply contextual, and naturally balances our awareness of the obvious with a sensitivity to nuance. By comparison, machine perception remains strikingly narrow."

"Making AI more sensitive to the full scope of human thought is no simple task. The solutions are likely to require insights derived from fields beyond computer science, which means programmers will have to learn to collaborate more often with experts in other domains."

Such a collaboration would, she argues, represent a return to 'the roots' of the AI field. "Younger AI enthusiasts may be surprised to learn that the principles of today's deep-learning algorithms stretch back more than 60 years."

"Reconnecting AI with fields like cognitive science, psychology, and even sociology will give us a far richer foundation on which to base the development of machine intelligence. And we can expect the resulting technology to collaborate and communicate more naturally, which will help us approach the second goal of human-centered AI: enhancing us, not replacing us."


Error-prone, repetitive, dangerous

Li highlights a trend "toward automating those elements of jobs that are repetitive, error-prone, and even dangerous. What's left are the creative, intellectual and emotional roles for which humans are still best suited."

"No amount of ingenuity, however, will fully eliminate the threat of job displacement. Addressing this concern is the third goal of human-centered AI: ensuring the the development of this technology is guided, at each step, by concern for its effect on humans."

Additional potential pitfalls Li outlines include bias against underrepresented communities in machine learning; the tension between AI's appetite for data and the privacy rights of individuals; and the geopolitical implications of a global intelligence race.

She calls on universities to foster interdisciplinary connections between computer science, social science, and the humanities, and on governments to encourage greater computer literacy among young girls, racial minorities, and other 'underrepresented groups'. Corporations, meanwhile, should combine their aggressive investment in intelligent algorithms with ethical AI policies that 'temper ambition with responsibility'.

Fei-Fei Li continues to evangelize for a human-focused AI, and here, she's keen to highlight that 'human values are machine values': "No technology is more reflective of its creators than AI. It has been said that there are no 'machine' values at all, in fact; machine values are human values. A human-centered approach to AI means these machines don't have to be our competitors, but partners in securing our well-being. However autonomous our technology becomes, its impact on the world - for better or worse - will always be our responsibility."

Read Fei-Fei Li's full column over at The New York Times.
