Disparities between approaches to AI governance exposed, amid concerns that over-regulation could hinder progress
Leaders are at odds on how to craft responsible AI, diverging between a governance-led and experiment-first approach, with regulators struggling even more to get the balance right.
These insights were revealed at last week’s AI Leaders Forum, where industry leaders discussed the best approaches to AI adoption and governance.
Speaking on a panel about responsible AI, Andre Rogaczewski, CEO of Netcompany, a digital solutions provider, said there had been too much talk about ethics and regulation, and companies should just “make it happen.”
He said legitimate concerns about protecting data and about using AI for decision-making can be addressed, for example with data filters or by restricting AI to an assistant role. But the technology is maturing, he added, and companies that do not invest risk falling behind.
“I don’t think there’s much to discuss about the first 80% of what we can use AI for,” he said. “Companies should reuse what's already out there.”
Conversely, Alex Tyrell, CTO, health division and head of the AI Centre of Excellence at software solutions company Wolters Kluwer, said his company takes a governance-first approach and doesn’t “write a single line of code” before first considering the associated risks.
“There’s no way to move forward unless you've met certain explicit criteria to make sure it’s a good use case and that it’s used responsibly and ethically, as well as bringing value,” he said.
Wolters Kluwer had taken the buzz around AI and “tried to create engagement rather than friction” around its use to help employees experiment safely, Tyrell said. This is becoming increasingly important because companies have no control or transparency around how large language models (LLMs) are trained, he said.
A recent Deloitte survey revealed that two-thirds of companies are boosting their investment in generative AI, though many of these initiatives are still in the early phases. The survey also highlighted that challenges related to data, scalability and risk are restricting options and dampening leadership enthusiasm.
Companies are wary of how governments might regulate AI, while regulators are grappling with how to legislate safeguards without stifling innovation.
The European Union’s AI Act, enacted earlier this year as the first such regulation from a major regulator, highlights the challenges of getting the balance right. It was intended to provide certainty to enterprises, but Kai Zenner, head of office and digital policy adviser at the European Parliament, said the act’s complexity had done the opposite.
“There’s a realization in the European Union that we did too much as regulators, too many laws, often extremely incoherent, overlapping, contradicting and so on,” he said. “In Germany, companies are scared, they’re not investing if there's legal uncertainty.”
This is different from the US, where President-elect Donald Trump has promised to rescind President Joe Biden’s executive order on AI governance when he takes office.
Zenner said companies should engage with regulators to help them learn, and they will get “good will” in return, adding: “In an optimal world we would use this time to innovate and regulate later.”