UPDATE: EU Reaches Deal on Historic AI Act

Following 20 hours of deliberation, the European Parliament and the bloc's 27 member nations reached a deal. A vote is slated for 2024.

Ben Wodecki, Deborah Yao

December 8, 2023

3 Min Read
20 hours of continuous debate couldn't break the deadlock on the EU AI Act (Getty Images)

At a Glance

  • EU lawmakers kept up deliberations even past the Dec. 6 deadline. Two days later, they reached an agreement.
  • They overcame disagreements on biometric systems and regulating generative AI. A vote, largely a formality, is set for 2024.

UPDATE: On Dec. 8, the European Parliament and the bloc's 27 member nations finally agreed on the rules for what would be the world's first comprehensive regulations governing AI.

They overcame disagreements over controversial parts of the Act, including the use of facial recognition systems by law enforcement and how to regulate generative AI.

"Historic!" tweeted Thierry Breton, the EU’s Internal Market Commissioner. "The EU becomes the very first continent to set clear rules for the use of AI. The AI Act is much more than a rulebook - it's a launchpad for EU startups and researchers to lead in the global AI race."

The Act allows - but limits - the use of biometric identification systems by police, and bans social scoring and AI used to manipulate or exploit user vulnerabilities. It also gives consumers the right to file complaints and receive "meaningful" explanations, among other provisions.

Fines for violations of the Act range from €7.5 million ($8 million) or 1.5% of global revenue up to €35 million ($38 million) or 7% of global revenue.

"MEPs (members of the European Parliament) reached a political agreement with the Council (of the EU) on a bill to ensure AI in Europe is safe, respects fundamental rights and democracy, while businesses can thrive and expand," the European Parliament said in a statement. "This regulation aims to ensure that fundamental rights, democracy, the rule of law and environmental sustainability are protected from high risk AI, while boosting innovation and making Europe a leader in the field. The rules establish obligations for AI based on its potential risks and level of impact."

Related: Reflections on AI Governance Global 2023

All that remains is the formal vote by the European Parliament, largely seen as a formality, which is expected early next year.

Yann LeCun, Meta's chief AI scientist and a strong proponent of open-source models, tweeted a "kudos" to the French, German and Italian governments for "not giving up on open source models." The Act provided broad exemptions to open-source models.

Amanda Brock, CEO of OpenUK, was cautiously optimistic. Citing early reports of the draft legislation, which are not yet publicly available, she said that nonprofit organizations that "sell" open-source software but reinvest the proceeds in not-for-profit activities are exempt from the rules. "If this is indeed true that will be a significant victory for the open source communities," she said in emailed comments to AI Business.

Hours leading up to the agreement

Lawmakers had continued negotiating even as the Dec. 6 deadline to finalize the EU AI Act came and went.

Related: UK's AI Safety Summit: What They Discussed

Among the key sticking points were how to govern generative AI systems, like ChatGPT, and the use of AI in biometric surveillance systems, according to Reuters. Some 20 hours of continuous debate failed to resolve these issues, and a Council of the European Union press conference due to take place on Dec. 7 was postponed as negotiations continued.

This led Breton to post on X (Twitter): "New day, same trilogue."

After the European Parliament signed off on its position on the AI Act in June, member states and lawmakers had tried for months to get the legislation over the line.

The legislation would introduce a risk-based system that categorizes all AI systems based on their potential to impact citizens' rights. Those more likely to impede civil liberties would be subject to strict rules or could be banned outright.

The prospective AI Act would class biometric identification systems as 'high-risk' and would ban their use by law enforcement in publicly accessible spaces. However, some member states were unhappy, wanting exceptions for national security and military uses.

Parliamentarians introduced new rules on foundation models such as GPT-4 and Gemini this past summer, but France, Germany and Italy pushed for a late addition allowing self-regulation for generative AI systems.

The Dec. 6 deadline to finalize the legislation marked exactly one year since the Council of the EU adopted its position on the bill.

This article was updated to reflect that an agreement over the AI Act was reached.

About the Author(s)

Ben Wodecki

Jr. Editor

Ben Wodecki is the Jr. Editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to Jr. Editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others. He holds an MSc in Digital Journalism from Middlesex University.

Deborah Yao

Editor

Deborah Yao runs the day-to-day operations of AI Business. She is a Stanford grad who has worked at Amazon, Wharton School and Associated Press.
