EU AI Act Reaction: AI’s GDPR Moment as Big Tech ‘Sounds the Alarm’

Businesses should understand how their model works, stakeholders suggest

Ben Wodecki, Jr. Editor

June 19, 2023


At a Glance

  • AI Business rounds up reactions to last week’s historic EU AI Act vote

Amidst the fanfare of the AI Summit London, lawmakers in Brussels overwhelmingly voted to pass the EU AI Act.

The legislation classifies AI systems by risk – how likely they are to impact citizens’ rights – and now needs only the leaders of the European Council and Parliament to sign off on a final version.

AI Business takes a look at some of the reactions to the historic vote.

Lawmakers

Co-rapporteur Dragos Tudorache said the EU AI Act will go on to “set the tone worldwide in the development and governance of AI.”

The Romanian MEP said the bill would ensure AI is “used in accordance with the European values of democracy, fundamental rights and the rule of law.”

Following the vote, co-rapporteur Brando Benifei said: “While Big Tech companies are sounding the alarm over their own creations, Europe has gone ahead and proposed a concrete response to the risks AI is starting to pose.”

Benifei said nations like the U.S., Brazil and Canada are all “discussing systems for risk mitigation.”

“Maybe they’re a bit behind us on laws, but they are talking about the same sort of things,” the Italian MEP said.

The view from businesses

Following the AI Act vote, SambaNova Systems’ EMEA general manager Alex White said the best way to prepare for such regulations is “to know exactly what is happening inside your AI model.”

“The black-box nature of many AI models means that you can’t knowingly comply with regulation if you don’t know what data the model uses and how it generates outputs,” said White. “Owning your model outright is an excellent way to gain this insight. Looking ahead, enterprises will need to be able to prove that the models they are using are in step with regulation to avoid possible compliance issues.”

Informatica’s EMEA VP Greg Hanson said the vote will “get the AI Act off to the right start.”

Hanson said: “Ultimately, the EU’s AI Act is a strong starting point, but while the potential of AI is being overhyped in the short term, its transformative power in the long term is being underestimated. AI will enable companies to digitize, become more productive, operate intelligently and innovate.

“As the use of AI evolves, it’s likely that regulation will have to keep pace and grow in complexity to ensure organizations act responsibly and use AI to serve a common good as well as their bottom line.”

Osborne Clarke lawyer Tamara Quinn likened the EU AI Act to the General Data Protection Regulation (GDPR), predicting a “Brussels effect” that will “set the global gold standard for AI regulation.”

The EU passed GDPR some seven years ago, and the regulation went on to shape how online platforms collect user data, affecting platforms across the world.

“Although there are a lot of details and some fundamentals still to settle, the direction of travel under the Act is broadly clear, and we are seeing many businesses starting to align their AI development to future compliance,” Quinn said. “In the absence of AI-specific legislation from the government, many AI developers will choose to use the EU's standards as their benchmark for compliance."

Tim Wright, tech and AI regulation partner at UK law firm Fladgate, also likened the bill to GDPR, warning that non-compliance with the AI Act will come at a significant cost.

Wright said: “Failures to meet data governance and high-risk AI transparency obligations will attract penalties of up to the higher of EUR 20 million or 4% of global turnover, whilst most other obligations will risk penalties of up to the higher of EUR 10 million or 2% of global turnover.”

Brussels-based Patrick Van Eecke, head of law firm Cooley’s European cyber, data and privacy practice, said he was concerned about attempts to regulate such an early-stage technology.

“ChatGPT and similar generative AI tools were hardly known when the first drafts of the proposed AI Act circulated in April last year,” Van Eecke said. “Introducing a general legal framework to regulate the main principles of how to develop, market and use AI is a very good idea. But we run the risk that by creating an elaborate and complex law with many detailed rules, like the proposed AI Act, we miss the point, and we may be forced to go back to the drawing board in a matter of months because of new technological developments in AI.

“Drafting and changing laws cannot be done as fast as technological developments, so we have to make sure that the laws we create, especially in relation to AI, are sufficiently future-proof. You can only do that by focusing on main principles - not by creating detailed rules to regulate the specificities of AI as we know them today."

About the Author(s)

Ben Wodecki

Jr. Editor

Ben Wodecki is the Jr. Editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to Jr. Editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others. He holds an MSc in Digital Journalism from Middlesex University.
