“Mr. DeMille, I’m [NOT] ready for my close-up!”

AI Business

April 21, 2020

7 Min Read

by Keith Oliver and Amalia Neenan, Peters & Peters

In 2015, Bill Gates gave a TED Talk on the biggest threat to the human population: not a war, but a virus. Today, we are living out that prediction. In 2018, Bill Gates’ company, Microsoft, was the first tech giant to warn of another threat to our way of life: facial recognition technology.

Without proper regulation, we have started to witness the often-undetected spread of this technology, much like a virus taking over a host body.

In these unprecedented times, it is not surprising that the Covid-19 pandemic and the rise of facial recognition technology are now intertwined, with the latter harnessed as part of the fight against the former. The worry, however, is what we do post-Corona. Will these extensive technological powers be scaled back once the threat has passed, or will they remain part of the new ‘normal’ under the guise of protecting the public?

Eye in the sky

In 2017, Beijing used facial recognition technology in public toilets to monitor and dispense each person’s quota of loo roll (28 inches, to be exact). Who would have thought that three years later we would see the ascent of the humble toilet roll to an almost currency-like status? In the wake of the pandemic, China built upon its previous expertise and rolled out extensive recognition technology to detect elevated temperatures in crowds. Additionally, Hanwang Technology Ltd claims to be able to identify people even when they are wearing facemasks – a feat not previously thought possible.

But governmental collection of data now appears limitless. This has ignited privacy fears worldwide, with charities such as Human Rights Watch suggesting that Covid-19 may be used to spark the permanent deployment of these systems, much as the 2008 Beijing Olympics were used to establish China’s existing surveillance regime. Who can insist that governments scale back, particularly when there is no codified law on how to process, store or discard the data?

A picture is worth a thousand words?

Much like the differing approaches taken by world governments in their efforts to contain Covid-19, different jurisdictions have tackled facial recognition regulation in varying ways, creating confusion. Most recently, the EU backtracked on imposing a five-year moratorium on the technology. Early drafts of the European Commission’s policy on artificial intelligence had indicated that there would be a ban so that problems and potential abuses could be analyzed. However, in its final form, the EU White Paper merely identifies key risks. For example, ‘by analyzing large amounts of data and identifying links among them, AI may be used to de-anonymize data about persons, creating new personal data protection risks.’ As a result, facial recognition should only be used when ‘duly justified, proportionate and subject to adequate safeguards.’ But what is ‘justified’ and ‘proportionate’ in one Member State may be completely different in another. EU countries have been left to their own regulatory devices, muddying the waters in their wake.

The EU’s reluctance to scale back operations has not been mirrored elsewhere. The US has taken a different stance, with San Francisco and San Diego issuing bans on certain forms of facial recognition. Illinois goes one step further, providing specific legislation to curtail misuse of the tech through the Biometric Information Privacy Act. The Act requires companies that collect biometric data to obtain written consent. Individuals can also seek damages under the Act, as was the case in March this year when Facebook was forced to settle a class action for $550 million over charges that the company had illegally mined biometric data using recognition software.

The UK saw its first legal challenge in May 2019 in the case of R (Bridges) v CCSWP and SSHD. Bridges contested the legality of the deployment of facial recognition cameras that had captured his image. He failed at first instance when the High Court, sitting in Cardiff, ruled that the tech had been lawfully used. However, permission to appeal was granted, which suggests that this will be an important test case.

Yet, it appears as if the UK has no intention of putting the technology on the back-burner. Instead, the Met police are now using it in key locations to identify people on watchlists. But should the police be harnessing these powers when there is no specific legislation in place to ensure their safe and lawful use? While the technology broadly falls under the Data Protection Act 2018/GDPR, the Protection of Freedoms Act 2012 and Article 8 of the Human Rights Act 1998, there is no single instrument that addresses facial recognition and associated technologies in detail. Rather, we have a patchwork framework that is no match for this technological sophistication.

Say ‘cheese’, Big Brother is watching you

Without a harmonized legal framework, what will prohibit manipulation for nefarious ends? The Covid-19 pandemic could provide the perfect conditions for such abuses. Take, for instance, ‘smile-to-pay’ technologies previously deployed in China, where facial recognition systems allow customers to pay for products by scanning their faces and matching the image against a linked bank account. Social distancing has changed the way we transact. Physically holding cash or tapping numbers on a card machine is now perceived as a risk. It seems logical that advancements that forgo the need for tactile action be promoted, which is why the UK is also considering installing these systems.

However, any process that involves a bank account will likely be targeted by fraudsters. For example, a 2018 Forbes study attempted to hack the smartphone ‘face-unlock’ feature. The LG G7 and Samsung Galaxy S9 could be tricked into unlocking with a 3D print-out of the owner’s head. Fraudsters with the right equipment could therefore gain access to your phone and, by proxy, your data, banking details, photos, email accounts and anything else that might be stored there.

Yet, despite the continual tension between privacy and security, in the right environment this technology could be the way of the future: a key weapon for fighting fraud. Passwords, for example, can be hacked, whereas ‘liveness detection’ in online banking would add another layer of biometric security to transactions. Such a simple step might someday eliminate this type of fraud altogether… or at least make inroads towards doing so.

What tomorrow brings is anyone’s guess. Without a clear and technologically adaptable regulatory framework, which specifically legislates for the proper storage and destruction of images, it is unlikely that these powers will be scaled back when we eventually emerge into the post-Corona world. It is vital to ensure that the pandemic does not legitimize exploitation in the interest of national safety. One final thought: if things are set to continue in this way, the Queen’s homage to the famous Vera Lynn lyric, in her public address to the nation earlier this month, needs an upgrade. ‘We’ll scan your face again, we’ll know where, but we don’t know when.’ A comforting thought brought directly into your home!

Keith Oliver is Head of International, and Amalia Neenan is Legal Researcher, at Peters & Peters Solicitors LLP
