AI Business is part of the Informa Tech Division of Informa PLC



Why AI and facial recognition should be seen as a force for good

by Rob Watts, Corsight AI
One of the most precious assets a company now holds is not liquidity or physical property; it is the data it possesses. Because the intelligence that can be derived from market or customer data is so vast, the EU has had to put regulations in place to ensure sensitive information is processed fairly, lawfully and transparently.

Applying logical rules to analyze this data therefore requires more than a calculator and a spreadsheet.

Artificial intelligence and its subsets, machine learning and deep learning, are now regularly used to sort through data quickly and efficiently and to detect patterns. Not only does AI learn as it goes, reducing error rates and improving accuracy; it can also pick up outliers that would take a human hours, days, weeks or even months longer to spot. From statistics to toothbrushing habits, we are now seeing the many benefits AI can bring to our lives.

Why, then, is there still a fundamental distrust around it and some of the technologies it powers, such as facial recognition?

When it comes to this specific technology, AI is often seen as the lesser evil, with sophisticated surveillance cameras threatening to take us one step closer to a totalitarian state by revealing where we go and what we do. However, it is time for a shift in the debate around AI facial recognition technology (FRT): we must begin to recognize how it can be used as a force for good.

Prioritising privacy

Part of the fear surrounding facial recognition comes from individuals wanting to protect their privacy. Data protection is essential; the recent backlash against WhatsApp's change in terms and conditions proves that society will not stand for an intrusion of privacy. What's more, biometrics are an incredibly personal form of data. Privacy therefore has to be at the very center of FRT, with manufacturers, distributors, implementers and end users integrating data protection into each and every process.

Leaders in technology, particularly those in the facial recognition industry, should strive for transparency and accountability, and ensure that privacy is fully ingrained in every FRT solution. As Tony Porter, former Surveillance Camera Commissioner and CPO at Corsight, suggests, watchlists should not be impermissibly wide: they must be compiled strictly on the basis of significant public interest, with clear justification for impinging on human rights, such as tracking a dangerous criminal.

One way to ensure privacy remains protected is to implement facial recognition technology on an opt-in basis. In the case of a missing child, parents could offer law enforcement the biometric data of their son or daughter, opting in to its use and allowing AI facial recognition technology to track them down. As well as monitoring live surveillance feeds for potential matches, the technology would take mere seconds to analyze recently recorded footage, dramatically speeding up the process and bringing a stressful investigation to a close.

At the level of sophistication AI facial recognition has now reached, a missing child could be identified even from oblique angles and with up to 50 percent of their face covered. Every other individual in that same footage who has not opted in to analysis of their biometric data would have their face blurred and their data stored for less than a second, ensuring complete protection of their privacy. We must take the anxieties surrounding privacy seriously, while also understanding that voluntary watchlists that could save the life of a missing child are certainly in the public interest. Most importantly, at no point would a police force allow AI facial recognition to solve a missing child case alone; human intervention should always be part of its usage. That way, privacy and ethics can be closely monitored and prioritized in every use case.

Voicing the benefits

Another example of how AI-powered facial recognition can benefit society is in allowing the NHS or the police to identify people with severe Alzheimer's. If such a person is alone, seemingly lost or in a confused state, the technology could help authenticate who they are, their condition and where they live, so that the best course of action can be taken. The typical process would involve being held in a hospital or police station, alone, until they are identified. But provided they have opted in to a dedicated watchlist, or a next of kin has done so for them, they could be taken straight home and handled in a far more comfortable way.

Alternatively, FRT could be leveraged to stop registered sex offenders from getting close to vulnerable children. The technology could detect, track and deter offenders from entering areas typically populated by children, such as parks. If an offender were to come into sight of a camera, the police would be alerted and could respond to a breach in real time. Rapid detection of those who pose a danger to an individual or to society as a whole could ultimately save lives.

There is also exciting potential for AI facial recognition to integrate with other sophisticated technologies, such as body scanners in an airport. If an alert is triggered after a potential bomb is detected, live facial recognition (LFR) would take less than a second to scan the individual and check them against terrorist watchlists. Compared with the hours this typically takes law enforcement, it is hard to argue against a use that frees up police time and resources while dramatically improving the safety of others.

The technology sector is under the microscope, with AI and facial recognition at the forefront. This is not uncommon: computers and mobile phones were once subject to the same scrutiny and distrust, showing how public perception can change over time when presented with logical information. We therefore need to get on the front foot and address the naysayers, because this technology is a game-changer and, thanks to its speed, accuracy and efficiency, can be used as a force for good.

To gain trust, we need to talk more about privacy and ensure FRT is used within a proper framework of regulation and best practice. If COVID-19 has taught us anything, it is that technology is a help, not a hindrance. So let's step forward and unleash the power for good that these technologies can harness, to make our society safer.


Rob Watts, CEO at Corsight AI, has over 20 years' experience of business leadership and outcome-focused client engagement in the technology industry. In recent years he has held leadership positions at Northgate Public Services and NEC's Public Safety business across Europe. He has a real desire to make a difference for Corsight clients and partners and is passionate about leveraging facial recognition as a force for good within society.
