China wants to regulate algorithms as EU lawmakers argue over facial recognition rules

Ben Wodecki, Jr. Editor

March 25, 2022

AI Business brings you a detailed analysis of how jurisdictions around the world are looking to regulate AI.

From levels of trustworthiness in the EU to the U.S. lagging behind its rivals, here are some of the biggest changes and challenges.

I. EU

What’s on the table?

In April 2021, the EU introduced the first attempt to regulate AI at a supranational level.

Under the proposed ‘Artificial Intelligence Act,’ all AI systems in the EU would be categorized according to the risk they pose to citizens’ privacy, livelihoods, and rights.

‘Unacceptable risk’ covers systems deemed to be a “clear threat to the safety, livelihoods, and rights of people.” Any product or system that falls under this category would be banned. The category includes AI systems or applications that manipulate human behavior to circumvent users’ free will and systems that allow ‘social scoring’ by governments.

The next category, ‘high-risk,’ includes systems for critical infrastructure that could put life or health at risk, systems for law enforcement that may interfere with people’s fundamental rights, and systems for migration, asylum-seeking, and border control management, such as verification of the authenticity of travel documents.

AI systems deemed high-risk would be subject to “strict obligations” before they can be put on the market, including risk assessments, high-quality datasets, ‘appropriate’ human oversight measures, and high levels of security.

The ‘limited risk’ and ‘minimal risk’ categories carry limited or no obligations, covering chatbots, AI-enabled video games, and spam filters. Most AI systems will fall into these categories.
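To make the tiering concrete, here is a minimal illustrative sketch in Python. The tier names come from the proposal; the example mapping and the one-line obligation summaries are simplifications for illustration, not legal definitions.

```python
from enum import Enum

# Illustrative only: the AI Act defines these tiers in legal text, not code.
class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "strict obligations before market entry"
    LIMITED = "transparency obligations"
    MINIMAL = "no new obligations"

# Example systems cited in the proposal, mapped to their tiers
EXAMPLE_SYSTEMS = {
    "government social scoring": RiskTier.UNACCEPTABLE,
    "behavior-manipulating app": RiskTier.UNACCEPTABLE,
    "critical infrastructure control": RiskTier.HIGH,
    "travel document verification": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for system, tier in EXAMPLE_SYSTEMS.items():
    print(f"{system}: {tier.name} ({tier.value})")
```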

Also on the table is the new Machinery Regulation, which would replace the EU Machinery Directive and contains provisions forcing businesses that use AI-capable machinery to conduct a single conformity assessment.

Machines covered in the latest regulation would likely include 3D printers, construction machinery, and industrial production lines, the Commission said.

Both pieces of legislation are still in the early stages and have yet to be passed.

What could the changes mean?

For many companies using AI, the trustworthiness legislation simply means greater oversight: addressing bias and giving more consideration to how data is used. That shift is already underway in the AI sphere, with the likes of IBM pushing a more inclusive approach to development and deployment.

For some tech areas, it spells trouble. Take biometrics: under the EU’s prospective law, all remote biometric identification systems are considered ‘high-risk.’ Compounding this is a ban on law enforcement agencies using such systems in publicly accessible spaces.

The prohibition does carry exemptions: such systems may be used only in certain circumstances, and only with the authorization of a judicial or other independent body.

One notable class of systems that could be banned under the rules is AI hiring tools. Such offerings discriminate against applicants based on their ethnic, socio-economic or religious background, gender, age, or abilities, according to Natalia Modjeska, AI research director at analyst firm Omdia.

Modjeska said that biased systems “perpetuate structural inequalities, violate fundamental human rights, break laws, and cause significant suffering to people from already marginalized communities.”

“Let’s not forget about the reputational damage biased AI systems inflict,” Modjeska added. “Millennials and zoomers value diversity, inclusion and social responsibility, while trust is the fundamental prerequisite that underlies all relationships in business.”

The biggest stumbling block, however, is disagreement among the policymakers themselves. Lawmakers are split over the rule banning facial recognition technology, with the dispute delaying adoption plans by a year.

Axel Voss, the rapporteur of the EU’s controversial copyright regulation, is among those disgruntled MEPs. He joined calls for a “centralized, hybrid” approach in which basic implementation is left to national regulators and certain applications and impacts are left to the Commission.

Talks are ongoing, and the future of the prospective regulation remains up in the air.

Kneecapping European AI and stronger safeguards

The view from the U.S. is that the proposed regulation would “kneecap the EU’s nascent AI industry before it can learn to walk” – that’s according to the Center for Data Innovation (CDI) think tank.

The CDI suggests that small and medium-sized enterprises (SMEs) with a turnover of $12 million that deploy ‘high-risk’ systems could see as much as a 40% reduction in profit as a result of the legislation.

“Rather than focusing on actual threats like mass surveillance, disinformation, or social control, the Commission wants to micro-manage the use of AI across a vast range of applications,” Benjamin Mueller, CDI’s senior policy analyst, said last summer.

“The EU should adopt a light-touch framework limited in scope and adapt it based on observed harms.”

However, taking a different view was Oded Karev, general manager of robotic process automation at NICE (Neptune Intelligence Computer Engineering).

He told AI Business that the EU’s AI Act “should take a stronger approach to safeguard citizens.”

“Our recommendation is to regulate AI technologies according to the risk they pose, with lighter regulation on technologies using AI to identify processes that improve human productivity and help make work experiences more positive.

“AIA can and should do more to dispel legitimate human concerns and protect their interests."

II. The U.S.

What’s on the table?

Currently, there are no federal AI regulations in place in the U.S. That may change, however, as plans begin to take shape.

The National Institute of Standards and Technology (NIST), which falls under the Department of Commerce, is looking to develop voluntary risk management frameworks related to the trustworthiness of AI systems.

NIST sought stakeholder views on the AI Risk Management Framework (AI RMF), which aims to encourage privacy and avoid bias.

Sticking with bias, the Federal Trade Commission suggested in an April 2021 memo that it would use its authority under Section 5 of the FTC Act to pursue the use of biased algorithms. “Keep in mind that if you don’t hold yourself accountable, the FTC may do it for you” – stern words.

However, compared to the EU, there’s not much on the table. Policymaking related to AI under Trump was almost non-existent, and while the Biden administration has sought to step things up, the U.S. lags behind the leaders.

The Senate did introduce a social media transparency bill late last year that would allow university researchers to access raw platform data from sites like Facebook and Instagram. But this, too, mirrors the EU’s Digital Services Act, which is further along in its legislative life than the Platform Accountability and Transparency Act.

Also targeting social media sites is the proposed Justice Against Malicious Algorithms Act. Introduced in October, the bill would remove legal liability protections from tech giants whose algorithms lead to harm.

The legislation seeks to amend Section 230 of the Communications Decency Act, which provides broad liability protections to platform websites. The bill would limit those protections for platforms that knowingly or recklessly recommend third-party information that harms users.

What could the changes mean?

The U.S. regulatory focus so far has been on social media sites. And given the damning truths revealed by Facebook whistleblower Frances Haugen, it’s an understandable approach.

Speaking at the SXSW 2022 conference, Haugen said that at some point, AI will be able to solve the problems on such platforms. But for now, it’s the policymakers who are trying to step in.

The Justice Against Malicious Algorithms Act would hold platforms accountable until AI reaches that point.

The current draft of the bill provides exemptions for small businesses, which are less able to moderate their platforms, and for interactive computer services with “5,000,000 or fewer unique monthly visitors or users for not fewer than 3 of the preceding 12 months.” For reference, Facebook has around 200 million users in the U.S.
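Taking the quoted threshold at face value, the exemption test reduces to a simple count over the trailing year. Below is a minimal sketch of that reading in Python; the function name and data shape are hypothetical, not drawn from the bill.

```python
def small_platform_exempt(monthly_unique_users: list[int]) -> bool:
    """Hedged reading of the carve-out: exempt if the service had 5,000,000
    or fewer unique monthly visitors or users for not fewer than 3 of the
    preceding 12 months."""
    months_under_cap = sum(1 for n in monthly_unique_users[-12:] if n <= 5_000_000)
    return months_under_cap >= 3

# Under the cap in only 2 of the last 12 months: not exempt
print(small_platform_exempt([6_000_000] * 10 + [4_000_000] * 2))  # False
# Under the cap in 4 of the last 12 months: exempt
print(small_platform_exempt([6_000_000] * 8 + [4_000_000] * 4))   # True
```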

III. China

What’s on the table?

China’s moves have focused on regulating algorithms. The Cyberspace Administration of China (CAC) wants to control how platforms attract and retain users.

The draft ‘Internet Information Service Algorithmic Recommendation Management Provisions’ would force the likes of Taobao, TikTok (known in China as Douyin), and Meituan to have their internal mechanisms scrutinized. The draft proposals could bar models that encourage users to spend large sums of money.

The country’s cyber watchdog would be free to run the rule over any AI algorithms used to set pricing, control search results, make recommendations, or filter content.
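From a platform’s side, falling within that remit amounts to a scope check. Here is a hedged sketch, assuming hypothetical use-case tags for each algorithm; the tag names are illustrative, not from the draft provisions.

```python
# The four areas of scrutiny cited above, as illustrative tags
CAC_SCOPE = {"pricing", "search_ranking", "recommendation", "content_filtering"}

def needs_cac_review(declared_use_cases: set[str]) -> bool:
    """Return True if any declared use case falls within the watchdog's remit."""
    return bool(declared_use_cases & CAC_SCOPE)

print(needs_cac_review({"recommendation", "ad_targeting"}))  # True
print(needs_cac_review({"fraud_detection"}))                 # False
```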

Companies caught violating the CAC’s prospective new rules could face sizable fines or stricter punishments, including having their business licenses pulled or their apps and services taken down entirely.

Reaction: More regulatory clarity, please

Shameek Kundu, chief strategy officer and head of financial services at TruEra, told AI Business in November that algorithm developers need not worry about these rules, instead suggesting that regulation will encourage broader adoption of AI.

“More regulatory clarity can only help encourage responsible innovation in AI,” he said.

“I also like the fact that the draft rules call out a specific set of AI use cases, making it easier for firms to respond.”

One macro-level concern he raised, however, was the potential for regulatory fragmentation, as other jurisdictions move to create their own rules around the governance of algorithms.

“Of course, each country will have its own socio-economic priorities when bringing in such regulation, and the Chinese proposals are no exception to that,” Kundu said.

“But to the extent that there can be some degree of international harmonization of such rules, I think that would be very welcome, particularly for firms with multi-country operations.”

IV. The U.K.

What’s on the table?

Lawmakers in the U.K. are also considering regulating aspects of AI technologies.

Last July, a Westminster eForum conference focused on balancing trust and algorithmic transparency.

And a few months later, the U.K. turned its attention to protective rights, launching consultations on the extent to which patents and copyright should protect inventions and creative works made by AI. That consultation ended in early January.

Under current British patent and copyright laws, an AI system cannot obtain either protection.

On the patents side, the precedent for denying patent protection to AI-generated inventions stems from the DABUS case, in which a University of Surrey professor led efforts to obtain patent protection for innovations created by an AI system.

Such attempts to obtain IP rights were rejected in the U.K. across several instances, with the latest outing of the case, before the Court of Appeal, resulting in yet another denial. The case saw similar denials in the U.S. and Europe, with all three jurisdictions stipulating that “only natural persons” can obtain patent protection.

On the copyright side, systems that can generate works that would be subject to copyright protection if created by a human do raise a potential definitional problem. It is also worth noting that some works used to train AI systems may themselves be protected by copyright, requiring a license for those training the system to use the work.

New standards hub and a 10-year plan

The U.K. appears to be in a similar position to the U.S. in terms of slow regulatory movements covering AI.

The Johnson administration did unveil a National AI Strategy at last year’s AI Summit London. The strategy sets out a 10-year plan to “harness AI to transform the economy and society while leading governance and standards to ensure everyone benefits.”

The only other notable movement in Britain was a new Standards Hub, which opened earlier this year. The U.K. Government has tasked the hub with improving AI governance through the increased creation of educational materials.

The Alan Turing Institute is leading a pilot of the new hub along with the British Standards Institution (BSI) and the National Physical Laboratory (NPL). In its pilot phase, the hub will focus on collating information on technical standards and development initiatives, with tools and guidance made available to help businesses and other organizations engage with the creation of AI technical standards.

About the Author

Ben Wodecki

Jr. Editor

Ben Wodecki is the Jr. Editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to Jr. Editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others. He holds an MSc in Digital Journalism from Middlesex University.
