AI Business is part of the Informa Tech Division of Informa PLC


In-Depth Analysis

You’re fired: the new EU law that threatens to hinder automating recruitment

by Ben Wodecki, Reporter
AI Business’s Ben Wodecki outlines how the new ‘AI Act’ will impact AI hiring technologies

For those looking to get back into work or change career paths, your CV could now be read and screened by an AI system.

Increasing numbers of firms, including the likes of Vodafone, PwC, and Unilever, are using such technologies to filter through applications to find the perfect candidate.

However, a new law that’s been proposed by the European Commission could prove troublesome for those looking to adopt new smart methods for hiring.

Under the ‘Artificial Intelligence Act,’ all AI systems in the EU would be categorized in terms of their risk to citizens’ privacy, livelihoods, and rights.

Any system determined to pose an ‘unacceptable risk’ would be outright banned, while those deemed ‘high risk’ would be subject to strict obligations before they can be put on the market.

Those developing AI-based recruitment tools would be required to conduct risk assessments, include ‘appropriate’ human oversight measures, and ensure high levels of security and dataset quality.

Battling biases

Why would recruitment technologies be considered high risk?

Some HR systems discriminate against applicants based on their ethnic, socio-economic, or religious background, gender, age, or abilities, explained Natalia Modjeska, AI research director at Omdia.

Modjeska said that biased systems “perpetuate structural inequalities, violate fundamental human rights, break laws, and cause significant suffering to people from already marginalized communities.”

Such tools could also harm the businesses deploying them – with high-performing candidates potentially left by the wayside.

“And let’s not forget about the reputational damage biased AI systems inflict. This is especially important because millennials and zoomers value diversity, inclusion, and social responsibility, and because trust is the fundamental pre-requisite that underlies all relationships in business and in life," she added.

The Omdia analyst even suggested that firms like Vodafone and Unilever, which deploy such systems, may be harming themselves – “biased AI may reject or overlook those high performers you’d want to have on your team.”


Modjeska pointed to Amazon’s sexist AI recruitment app as an example of how it can all go horribly wrong.

Back in October, the e-commerce giant hit the headlines when its algorithm-based recruitment tool was found to have been trained mostly on men’s CVs, meaning women were often given lower candidate scores.

Despite Reuters reporting that the company had realized back in 2015 that its system was not grading candidates in a gender-neutral way, the tool wasn’t scrapped until three years later.


Diverse datasets and the ‘golden’ rule

The law would also impact engagement with freelancers, with the bill referring to "persons in work-related contractual relationships," highlighted Shireen Shaikh, a lawyer from Taylor Wessing.

To avoid falling foul of the prospective law, Shaikh said developers should embrace transparency in terms of how their AI system makes its decisions about a candidate.

“The machine must not be left in charge, meaning the system's intelligence should be capable of human oversight. It will be for the provider to identify what 'human oversight measures' have been taken when designing the product and also which are available when operating it,” she said. 

Modjeska pointed out that there is “no way for developers of such systems to change how they are categorized.”

She stated that the legislation specifically says that the classification of an AI system as high-risk is based not only on its function, “but also the specific purpose and the modalities for which that system is used.”

To avoid falling foul of the law, Modjeska recommends that companies take care in how they design such systems, starting with the use of diverse datasets.

The Omdia analyst also suggested making use of bias detection tools, as well as frameworks like Datasheets for Datasets and Model Cards for Model Reporting.
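Bias detection tools of this kind often begin with a simple disparity check on screening outcomes. As a hedged sketch – not any particular vendor’s method – the widely used ‘four-fifths’ rule compares each group’s selection rate against the highest-scoring group’s:

```python
from collections import Counter

def selection_rates(outcomes):
    """Compute per-group shortlisting rates.

    outcomes: list of (group, was_shortlisted) pairs, where group is
    any hashable label (the labels here are purely illustrative).
    """
    totals, shortlisted = Counter(), Counter()
    for group, picked in outcomes:
        totals[group] += 1
        if picked:
            shortlisted[group] += 1
    return {g: shortlisted[g] / totals[g] for g in totals}

def passes_four_fifths(outcomes) -> bool:
    """Flag adverse impact if any group's selection rate falls below
    80% of the best-treated group's rate (the four-fifths rule)."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return all(rate >= 0.8 * best for rate in rates.values())

# Hypothetical screening results: group A shortlisted at 50%, group B at 20%.
data = ([("A", True)] * 5 + [("A", False)] * 5 +
        [("B", True)] * 2 + [("B", False)] * 8)
print(passes_four_fifths(data))  # False: 0.2 is below 0.8 * 0.5
```

A check like this is only a first filter – frameworks such as Datasheets for Datasets and Model Cards push teams to document where the training data came from in the first place.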

Modjeska also offered a far more general ‘golden’ rule: treat others as you yourself would want to be treated.

“Put yourself in other people’s shoes: how would you feel if your next interview were conducted by a biased AI system? Or if your, or your son or daughter’s, application were rejected because of who they are or where they come from, and not because of what they know and can do? How would you feel being responsible for having crushed their dreams?”

Juggling biases and mistakes

Any company using such systems that fails to comply could have a lot more to deal with than just shoddy recruitment.

Penalties for non-compliance include fines of up to €10m – or up to two percent of a company’s total worldwide annual turnover for the preceding financial year – whichever is higher.

Juggle Jobs is one platform that would be slapped with a ‘high risk’ tag under the proposed law. The company, which “helps organizations find and manage experienced professionals on a flexible basis,” said it supports “well thought through oversight when done correctly.”

Its CEO, Romanie Thomas, noted that AI-based hiring tools cut the average time to shortlist applicants, adding that among those chosen by its own system, over 65 percent of interviewed candidates were female, and 30 percent were non-white.

Thomas also noted that AI isn’t perfect – mistakes can be made – but stressed the need to take steps to mitigate them going forward.

“Design and development teams, for example, should comprise engineers and product leaders from diverse backgrounds.

That way, the problem can be addressed from multiple angles starting from a position of deep accountability from the companies who are at the forefront of building these innovative solutions and will ultimately benefit from people using them.”

It remains to be seen how much the proposed law will impact companies like Juggle Jobs. But one thing is certain: the automation and digitization of recruitment is only going to increase in the near future – and biases are expected to follow, no matter what measures are used to mask them.
