If you are one of those people who delights in discovering that a Zoom call is “audio only”, or who desperately checks that the camera is covered before joining a Microsoft Teams call with colleagues, imagine what it would be like if your company brought in facial recognition technology to use on you and other staff during lockdown.
No hiding those yawns or eye rolls now!
This is not just a hypothetical. The last few months have seen several new technology launches that integrate facial recognition for use on employees in response to Covid-19, and more will inevitably emerge. Such technologies and deployments may have their uses, but they are not without regulatory challenges.
In general, we are seeing two types of employer-based facial recognition solutions to deal with different Covid-19 challenges. In the first category are those aimed at getting staff back into the workplace safely. Here, facial recognition may be used, for example, to detect whether masks are being worn as required or for thermal fever detection.
In the second category are technologies deployed specifically for workers who cannot return to the workplace, for example using camera technology on laptops to track worker movements.
Both scenarios would require very careful consideration before deployment to establish whether they are viable. Data protection laws will apply to the processing of personal data, and data protection authorities have already raised serious concerns about facial recognition technologies.
The UK’s Information Commissioner took the step of publishing a formal Opinion, her first since the implementation of GDPR, focusing on live facial recognition being trialled by the Metropolitan Police Service and South Wales Police. It outlines the high statutory burden that must be met for facial recognition, as well as the various accountability and other obligations that must be satisfied, and calls for a statutory code to be put in place. In that respect, the Opinion reveals that, even though GDPR is largely technology neutral, we do not yet have sufficient clarity on how such technologies should be regulated in practice.
With both categories of facial recognition outlined above, employers would need a detailed data protection impact assessment before even considering deployment, which would take a company through a series of considerations and challenges. For instance, what is the lawful basis for the processing, given that consent won’t work in an employment context? A legitimate interests balancing test may also be hard to satisfy if less intrusive technology could be used.
Employers would also have to consider whether any sensitive personal data will be processed, which requires additional thresholds to be met, and the available conditions can be quite limited. Additionally, is only a minimal quantity of personal data being processed, and is it accurate? A particular challenge is that GDPR contains specific derogations for the processing of employee personal data, meaning that all of these questions would need to be considered separately for each European location, and additional rules may apply, including the need for works council approval in certain countries.
Does facial recognition have a place in our new age of home working?
The knotty legal challenges at play could certainly prove to be a minefield for employers looking to use surveillance technologies. However, the first category of facial recognition, aimed at returning employees to the workplace safely, would be less challenging (although still not clear cut), particularly if there was an obvious need for mask checking, for example in a food factory and if other technologies might be less effective.
The second deployment scenario, which could see facial recognition entering the home working domain, is clearly far more difficult and unlikely to prove viable in the EU at the current time, other than potentially for highly regulated sectors where there are legal requirements to monitor staff. A possible example would be requirements to monitor traders for criminal activity under financial services legislation. In an office environment, financial organisations may have various means of conducting such monitoring, including CCTV, phone recording, and even keystroke or behavioural analytics. All such technologies would still need to go through the steps indicated above and are not straightforward: they would require careful consideration of camera placement, for example, as well as appropriate notices and access controls.
However, putting them into a home working environment is potentially fraught with new problems. How can you ensure that other individuals in the home are not inadvertently monitored for example? How do you stop the intrusion into the private domain?
Many of us find it hard enough to keep kids out of the home office and find ourselves having to switch between different roles (parent, teacher, worker) often without warning. Would we be happy to do that all on camera and with our reactions and facial expressions being monitored and recorded?
Proportionality and consideration of the right to a private life are entrenched at the heart of privacy laws and human rights. It’s not clear that facial recognition technology has yet got to grips with this balance in our new age of homeworking.
Elle Todd is a partner at international law firm Reed Smith. Widely recognised as a leading practitioner in digital and data law, her clients range from the biggest international household names to disrupters and tech entrepreneurs. She advises on all matters relating to privacy law. In addition to client work, Elle also holds non-executive advisory roles for tech start-ups and supports various charities with their privacy requirements.