Lobbying group bankrolled by big tech argues over ‘expansive’ definition

Ben Wodecki, Jr. Editor

July 27, 2021

3 Min Read

The European Union’s recently proposed regulation on AI would “kneecap the EU’s nascent AI industry before it can learn to walk,” according to the US think tank Center for Data Innovation.

Under the draft regulation, all AI systems in the EU would be categorized according to the risk they pose to citizens' privacy, livelihoods, and rights.

Challenging the proposals, the CDI claimed that small and medium-sized enterprises (SMEs) with turnover of €10 million ($12m) that deploy 'high risk' systems could see as much as a 40 percent reduction in profit as a result of the legislation.

“Rather than focusing on actual threats like mass surveillance, disinformation, or social control, the Commission wants to micro-manage the use of AI across a vast range of applications,” Benjamin Mueller, CDI’s senior policy analyst, said.

“The EU should adopt a light-touch framework limited in scope and adapt it based on observed harms.”

Rosy outlook based on shibboleths

Unveiled in late April, the proposal for the Artificial Intelligence Act aims to guarantee the safety and fundamental rights of people and businesses while strengthening investment in AI and the uptake of the technology across the EU.

Among the rules, law enforcement agencies would be prohibited from using biometric identification systems in publicly accessible spaces. Such technology would be considered ‘high-risk’ and subject to strict controls.

Any system deemed to pose ‘unacceptable risk’ would be banned outright. These would include systems or applications that manipulate human behavior to circumvent users' free will, and systems that enable ‘social scoring’ by governments.

In its report, the Center for Data Innovation argued that the definition used in the proposal is “so expansive that it covers any software using standard machine learning techniques,” and that the requirements placed on ‘high-risk’ systems “would curtail the use of many socially beneficial applications of AI.”

“The consequences of the law are all too predictable: it will slow down the spread of AI, encourage innovators to build next-generation technology outside of Europe, and discourage European businesses from adopting AI,” Mueller said in the 16-page report.

He suggested that the innovation-based growth the EU hoped its legislation would foster would not materialize: “The rosy outlook is largely based on opinions and shibboleths rather than logic and market data.”

The CDI report estimates that the AI Act would cost European businesses €10.9 billion per year by 2025, and that the European economy as a whole would lose €31 billion ($36bn) by that date.

“The provisions of the AIA, however well intended, will extract a heavy price from an increasingly uncompetitive European economy,” Mueller said.

Who benefits?

It’s worth noting that the CDI is funded by the Information Technology and Innovation Foundation (ITIF), which in turn is bankrolled by major US tech corporations, including Apple, Google, and Facebook.

Former Google CEO turned National Security Commission on Artificial Intelligence (NSCAI) chair, Eric Schmidt, said in May that the proposed regulation and the associated transparency requirements would be "very harmful to Europe."

Last February, Google CEO Sundar Pichai, Facebook CEO Mark Zuckerberg, and John Giannandrea, Apple’s senior VP for machine learning and AI strategy, each held separate meetings with Margrethe Vestager, executive vice president of the European Commission and one of the architects of the regulation. The trio reportedly voiced their concerns over plans to regulate AI.

Andrew McAfee, principal research scientist at MIT, made similar points in a recent opinion piece in The Financial Times, arguing that the proposed law would hinder innovation.

“Restricting the field of potential innovators to those who can afford high upfront costs is a bad idea. It leads to slower progress and growth and fewer hometown success stories, which are also risks,” McAfee wrote.

About the Author(s)

Ben Wodecki

Jr. Editor

Ben Wodecki is the Jr. Editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to Jr. Editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others. He holds an MSc in Digital Journalism from Middlesex University.
