China’s plans to regulate algorithms: Impact on developers

Should developers be worried about this? Not really, says TruEra’s Shameek Kundu

Ben Wodecki, Jr. Editor

November 2, 2021

4 Min Read

China wants to regulate algorithms. Its cyberspace watchdog wants to tighten its grip on how platforms attract – and keep – their users.

The Cyberspace Administration of China (CAC) plans to establish ‘professional evaluation teams’ that will analyze algorithms to make them “fair and transparent.”

It also aims to “vigorously promote the research on algorithm innovations ... and enhance the core competitiveness of China’s algorithms.”

The likes of Taobao, TikTok (known in China as Douyin), and Meituan will have their internal mechanisms scrutinized, with draft proposals potentially barring models that encourage users to spend a lot of money.

Shameek Kundu, chief strategy officer and head of financial services at TruEra, told AI Business that algorithm developers need not worry about these rules, instead suggesting that regulation will encourage broader adoption of AI.

More regulatory clarity, please

TruEra offers a machine learning model intelligence platform that analyzes model quality and aims to provide enterprise-class explainability and governance tools.

Kundu said his team “looks forward to supporting AI adopters everywhere in adhering to regulations in this space.”

“More regulatory clarity can only help encourage responsible innovation in AI,” he added.

“I also like the fact that the draft rules call out a specific set of AI use cases, making it easier for firms to respond.”

One macro-level concern he raised, however, was the potential for regulatory fragmentation, as other jurisdictions move to create their own rules around the governance of algorithms.

“Of course, each country will have its own socio-economic priorities when bringing in such regulation, and the Chinese proposals are no exception to that,” Kundu said.

“But to the extent that there can be some degree of international harmonization of such rules, I think that would be very welcome, particularly for firms with multi-country operations.”

Advice for algorithm developers

Kundu noted that anyone seeking to do business in China and using algorithmic recommendation engines will be subject to the new rules.

“They do not appear to be targeted at any one specific company,” he said.

When asked whether this will hurt AI development and deployment in China, he suggested that some firms might have to build additional guardrails around their AI-powered systems as a result.

“Any potential short-term impact is likely to be compensated by the benefits arising from greater regulatory certainty.

“While some are focusing on what the new regulations are limiting, I believe that the positive use cases of AI are numerous and there is [a] great opportunity for further exploration in AI.”

Comparing China

China isn’t the only country looking to regulate AI – some of its draft guidelines mirror provisions in the EU’s proposed Digital Services Act.

The Digital Services Act includes provisions that protect users' fundamental rights online. Effectively, online platforms would face liability for third-party content and would have to vet their third-party suppliers.

Kundu noted that, in a similar fashion, firms in China will have to explain to individuals why they are seeing an ad, and let them opt out immediately if they choose.
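
To make that transparency requirement concrete, here is a minimal sketch of the kind of explanation-and-opt-out record a platform might attach to a recommended item. The field names and structure are illustrative assumptions, not taken from the draft rules or from any specific platform.

```python
# Hypothetical sketch only: a payload a platform might return alongside a
# recommended item so a user can see why it was shown and switch
# personalization off. All field names are illustrative assumptions.
from dataclasses import dataclass
from typing import List


@dataclass
class RecommendationExplanation:
    item_id: str
    reasons: List[str]          # human-readable factors behind the ranking
    personalized: bool = True   # whether behavioral data was used
    opt_out_url: str = "/settings/personalization"  # immediate opt-out entry point


explanation = RecommendationExplanation(
    item_id="sku-12345",
    reasons=["viewed similar items this week", "popular in your region"],
)
print(explanation)
```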

Businesses in China will also have to ensure that they are not pricing differently based on users’ online behavior or other personal data.

“Firms will be assessed on whether their algorithms are causing excessive spending. The proposal also puts ‘staff welfare’ obligations on firms that are using algorithms to schedule work for gig workers,” Kundu said.

While Singapore, the UK, and the US have all issued guidance or launched consultations on AI, none appear to be this explicit in their requirements.

“While not all of these proposals will translate to Western contexts, I suspect regulators in the US, Europe, and the rest of Asia will be looking at the Chinese proposals carefully as they firm up their own requirements.”

Don’t worry, be appy

Should developers be worried about this? Kundu doesn’t think so.

“Anyone who is building or using AI systems that can impact large parts of the population should expect a level of regulation anyway,” he opined.

“These draft rules, and the EU drafts of the Digital Services Act and the AI law, provide some idea of the direction in which things could move elsewhere in the world.”

Instead, his advice to firms was to get ready for the rules. How? By embedding the notion of ‘AI quality’ into the design, development, and monitoring of their AI systems.

“This would involve aspects such as ensuring transparency and auditability, assessing and mitigating algorithmic bias, and perhaps being more aware of the wider socio-economic impact of using AI in specific contexts.”
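
As a rough illustration of what embedding ‘AI quality’ checks could look like in practice, the sketch below computes a simple demographic-parity gap for a toy recommendation model and logs it for audit purposes. The function, data, and threshold are illustrative assumptions and are not drawn from TruEra’s product or the Chinese draft rules.

```python
# Illustrative sketch only: a minimal bias check a team might run before
# deployment. The metric, toy data, and threshold are all hypothetical.
from typing import Dict, Sequence, Tuple


def demographic_parity_gap(predictions: Sequence[int],
                           groups: Sequence[str]) -> float:
    """Largest difference in positive-recommendation rates across groups."""
    counts: Dict[str, Tuple[int, int]] = {}
    for pred, group in zip(predictions, groups):
        total, positives = counts.get(group, (0, 0))
        counts[group] = (total + 1, positives + pred)
    rates = [positives / total for total, positives in counts.values()]
    return max(rates) - min(rates)


if __name__ == "__main__":
    # Toy outputs: 1 = item recommended, 0 = not recommended
    preds = [1, 0, 1, 1, 0, 1, 0, 0]
    groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
    gap = demographic_parity_gap(preds, groups)
    print(f"Demographic parity gap: {gap:.2f}")  # logged for auditability
    if gap > 0.2:  # placeholder threshold a team might agree on internally
        print("Warning: gap exceeds internal threshold; review before release")
```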

About the Author

Ben Wodecki

Jr. Editor

Ben Wodecki is the Jr. Editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to Jr. Editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others. He holds an MSc in Digital Journalism from Middlesex University.
