Prominent AI expert calls for devs to conduct thorough technical analyses

Ben Wodecki, Jr. Editor

March 2, 2022

2 Min Read


AI developers must avoid biases that could arise when designing algorithms, Ricardo Baeza-Yates, director of research at the Institute for Experiential AI at Northeastern University, has warned.

Speaking at Mobile World Congress in a session organized by the Catalan Data Protection Authority (APDCAT), Baeza-Yates stressed the need to avoid biases, as well as the indiscriminate use of mass video surveillance with facial recognition.

“We cannot appreciate its impact or the risks it hides, because AI transforms the world in an unprecedented way,” he said.

Baeza-Yates told attendees to “maintain a critical view of AI” and to question systems that could harm citizens in ways that are not obvious from a technical point of view.

"Ethics must be incorporated into AI, and in the process, we must be demanding, beyond proposing ethical codes", added Borràs.

APDCAT director Meritxell Borràs i Solé also spoke, agreeing that AI today requires increased transparency and reliability.

Borràs also stressed the need for legal mechanisms that guarantee the protection of citizens’ rights.

Such legal mechanisms may come to fruition in the EU, after the European Commission proposed plans that would require all AI systems to be categorized according to the risks they pose to citizens’ privacy, livelihoods and rights.

Any system considered to pose an ‘unacceptable risk’ would be banned, while ‘high risk’ systems would be subject to risk assessments and ‘appropriate’ human oversight measures.

The ‘limited risk’ and ‘minimal risk’ categories carry few or no obligations and cover systems such as chatbots, AI-enabled video games, and spam filters. Most AI systems are expected to fall into one of these two categories.

Baeza-Yates went on to discuss discrimination, models that do not understand semantics, and the indiscriminate use of computing resources.

He highlighted three types of bias: bias in the data; bias in the algorithm itself, which can amplify the bias already present in the data; and bias in the interaction between the system and its users, which combines algorithmic and cognitive biases.

He also spoke about the challenges to be addressed in this area, such as the principles to be met by software, cultural differences, regulation and individual cognitive biases.

In addition to the ethical questions that must be asked before launching a system, the Chilean AI expert said developers should consider political and technical competence.

He stressed that developers should ask whether they have conducted a thorough technical analysis and whether they have weighed the possible individual and social impacts.

On what society can do to meet these challenges, Baeza-Yates insisted that responsible AI “implies a process that involves, from the beginning, all the actors in the design, implementation and use of software.”

Those actors, he said, form “a multidisciplinary team, from experts in the problem to be solved to computer scientists.”

About the Author

Ben Wodecki

Jr. Editor

Ben Wodecki is the Jr. Editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to Jr. Editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others. He holds an MSc in Digital Journalism from Middlesex University.

