May 11, 2023
At a Glance
- Key EU committees have voted in favor of the EU AI Act, with Parliament now set to vote on it in the summer.
- New amendments would require makers of foundation models like PaLM 2 to adhere to ‘transparency requirements.’
The EU AI Act has edged ever closer to fruition after leading parliamentary committees green-lighted the legislation in a vote.
The Internal Market Committee and the Civil Liberties Committee voted to adopt a draft negotiating mandate with 84 votes in favor, 7 against and 12 abstentions.
MEPs “substantially amended” parts of the bill, including adding new obligations for providers of foundation models.
New model-focused obligations would force vendors to register in an EU database and publish summaries of copyrighted data used for training.
The ‘high risk’ classification for AI systems was expanded to include harm to people’s health, safety, fundamental rights or the environment.
MEPs also added new items to the high-risk list, including AI systems used to influence voters in political campaigns and recommender systems used by social media platforms with more than 45 million users.
Revisions were also made to the ban on intrusive and discriminatory uses of AI systems, which proved a major sticking point leading up to the vote.
Following the vote, the committees said amendments were made to “ensure that AI systems are overseen by people, are safe, transparent, traceable, non-discriminatory, and environmentally friendly.”
“(MEPs) want to have a uniform definition for AI designed to be technology-neutral, so that it can apply to the AI systems of today and tomorrow,” the EU announced.
The bill now needs to be endorsed by the whole Parliament, with the committees saying a vote could be expected during the June 12 to 15 session. However, given how much wrangling it took to get here, take this with a pinch of salt.
Once the majority of the EU Parliament endorses the bill, negotiations will take place with the Council on the bill’s final form.
“Given the profound transformative impact AI will have on our societies and economies, the AI Act is very likely the most important piece of legislation in this mandate," AI Act co-rapporteur Dragos Tudorache said following the vote.
“We have worked to support AI innovation in Europe and to give start-ups, SMEs and industry space to grow and innovate while protecting fundamental rights, strengthening democratic oversight and ensuring a mature system of AI governance and enforcement."
While welcoming the vote, the Business Software Alliance (BSA) expressed concern that the allocation of responsibilities across the AI value chain, and the treatment of foundation models, do not reflect the roles companies actually play in the AI ecosystem.
BSA policy director Matteo Quattrocchi said, “The rules as currently written are not tailored to reflect companies' roles in the AI ecosystem or differences in business models and AI uses, and likely will not address some of the concerns raised by specific applications of some foundation models.”
Tim Wright, a tech and AI regulatory partner at British law firm Fladgate, said the U.S. approach is typically to experiment first and, once product-market fit is established, to retrofit offerings to other markets and their regulatory frameworks. "This approach fosters innovation whereas EU-based AI developers will need to take note of the new rules and develop systems and processes, which may take the edge off their ability to innovate."
While the U.K. is adopting a similar approach to the U.S., "the proximity of the EU market means that U.K.-based developers are more likely to fall into step with the EU ruleset from the outset; however the potential to experiment in a safe space - a regulatory sandbox - may prove very attractive.”
How did we get here?
MEPs were meant to have an agreement in place by March, one of the lead negotiators said in January. However, lawmakers were at loggerheads over several provisions, mostly about the use of biometric identification systems.
The AI Act had deemed all remote biometric identification systems “high-risk.” Further compounding this was a ban on law enforcement agencies using such systems in publicly accessible spaces. Exemptions were possible, but only if enforcement authorities received authorization from a judicial authority or an independent body.
Adding to the delay were questions over whether the provisions should cover generative AI, following the surge of interest in the wake of ChatGPT.
To become law, the legislation must be approved by both Parliament and the Council.