March 1, 2023
At a Glance
- Trustworthy AI must give way to responsible AI to ensure systems are built properly.
- Humans must be in the loop from the start to the end of a product's or system's lifecycle.
- Responsible AI approaches must be practical and tailored to specific domains to work in real deployments.
Trustworthy AI (having confidence in a system's output) should give way to responsible AI (placing guardrails to ensure fairness) if AI is to be developed properly.
That is according to Ricardo Baeza-Yates, director of research, Institute for Experiential AI at Northeastern University. Speaking at Four Years From Now (4YFN), the startup-focused event co-located at Mobile World Congress, Baeza-Yates likened AI to elevator safety.
“No one would take an elevator if they had a sign saying, ‘this elevator doesn't work 1% of the time.’”
He instead proposed a responsible AI approach that would involve humans in the loop from the beginning to the end of a product's or system's lifecycle.
His approach, created by the team at his institute, includes embedding AI ethics training from the early stages, conducting technical audits, and establishing independent AI ethics advisory boards made up of technically minded individuals, user representatives, and experts in sociology and ethics.
However, Cristina Fonseca of Indico Capital Partners in Portugal said his view of the supervisory board was "generic." The general partner argued instead that only a vertical-specific, practical approach would make AI deployments effective.
“Ten years ago, the first AI companies hired a bunch of Ph.D.s, put them in a room and told them to build AI systems, but they were very far from (being able to address) the actual problems,” she said.
“Companies have been learning to group different stakeholders at the table to make sure everything is considered. But I think a supervisory board, that's generic. I'm not sure that's going to work perfectly because of domain-specific aspects of the different AI applications.”
EU AI Act
The panel also discussed the EU AI Act, prospective legislation that takes a risk-based approach, categorizing AI deployments according to their level of risk.
Baeza-Yates said that the lack of categorical thinking in the legislation would lead to definition problems, such as spelling out clearly when something stops being 'high risk.'
The academic noted that once AI is regulated, blockchain and quantum computing will likely follow. However, he argued that the technology itself should not be regulated; regulation should instead be reserved for more important areas, like access to food and medicine.
He also called out the U.S. AI Bill of Rights, saying the name was misleading because it focuses on AI systems themselves, not on people.
Fonseca also questioned whether regulations would keep humans in the loop, and stressed the important role humans will play in supervising AI systems.
“AI and all these generative models have a lot of limitations these days. … I think it's about asking the right questions and developing a critical mindset," she said. "You don't just trust everything that an algorithm tells you.”
“There’s huge potential in all these AI systems, but humans will have a very important role in supervising this.”
Fonseca pointed to the adoption of the ATM in the early 1970s and fears that human bank tellers would no longer be needed. Some 50 years on, human tellers and bank staff are still around.
She pointed to ChatGPT, OpenAI’s viral conversational AI chatbot, saying that she uses it “a lot” but only as an assistive tool.