Public expectations for safety are high - can AI be trusted to help?

An opinion piece by Motorola Solutions' country manager for the U.K. and Ireland

May 31, 2022

Global expectations for public safety have never been higher, especially in the U.K. where the experiences of the past two years remain at the forefront of public consciousness.

Extended periods of uncertainty and disruption caused by the COVID-19 pandemic have increased public pressure on authorities to respond more proactively to unforeseen and rapidly escalating threats. One technology that continues to evolve and can be used to improve public safety across the board is artificial intelligence (AI).

AI already powers many aspects of our everyday lives, from unlocking our phones with image recognition to making preference-based recommendations about what to watch on streaming services and beyond. New AI breakthroughs are also being made to improve patient treatment and drug discovery, solve complex supply chain issues and help businesses to better predict their operational and competitive risks.

AI is also emerging as an essential technology to protect communities and keep citizens safe while also helping public safety agencies to work more safely and efficiently. It’s well understood that data is what feeds the AI engine, and with industry forecasts putting the world on a path to 180 zettabytes of data by 2025, the volume of data is growing as quickly as the terms used to describe it.

When a city is rich in data, AI can be applied responsibly to enhance public safety. Video security cameras can help to identify a lost child among a sea of faces while historical crime records and other data can be quickly analyzed to help predict where trouble might occur next.

Examples like these may seem advanced, but in public safety, AI can make a difference at more fundamental levels too. One important use for AI is to support human decision-making by eliminating manual and repetitive tasks that can drain our attention spans.

Even the keenest of human eyes can miss critical details when sifting through video footage, but when AI is fed the right parameters of what to look for, it can quickly identify specific objects or people. Used in this way, AI can free up more time for public safety officials to work on more productive tasks.
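As an illustration of what "feeding AI the right parameters" can mean in practice, the short Python sketch below scans a video file for one specific object class (people) using OpenCV's pretrained pedestrian detector. The file name, window stride and scale values are illustrative assumptions, not a description of any particular deployment.

```python
# A minimal sketch of parameter-driven video analysis: scan frames of a
# recording for one object class (people) with OpenCV's pretrained detector.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

capture = cv2.VideoCapture("cctv_footage.mp4")  # hypothetical video file
frame_index = 0

while True:
    ok, frame = capture.read()
    if not ok:
        break
    # The window stride and scale are the "parameters" an operator tunes to
    # tell the system what to look for and how thoroughly to search each frame.
    boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)
    if len(boxes) > 0:
        print(f"Frame {frame_index}: {len(boxes)} possible person detections")
    frame_index += 1

capture.release()
```

In a real system the flagged frames would be queued for a human reviewer rather than acted on automatically, which is the assistive pattern the rest of this piece argues for.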

Ensuring trust and transparency in AI deployment

The successful implementation of AI in a public safety environment - both today and in the future - depends on ensuring trust and transparency around its usage.

Our recent research, conducted by Goldsmiths, University of London, found that only a little more than half (52%) of U.K. respondents would trust AI to analyze situations of threat, underlining the need for caution in how the technology is used. This is why - in all public safety scenarios - AI must be used as an assistive technology to support human judgement, not to replace it.

Additional safeguards must also be in place to ensure AI data is not exploited - for example, having controls to ensure that any data entered and stored within technology systems is owned and managed exclusively by those employing the technology.

A recent Deloitte study found that smart technologies such as AI could help cities reduce crime by up to 40% and cut emergency response times by up to 35%, but AI can also help organizations in ways that were previously unimagined.

The pandemic dramatically increased pressure across all sectors to support the safety of their employees and customers. When the pandemic was at its peak, AI algorithms were used to securely scan video feeds of passengers in high-risk spaces such as airport lounges and train platforms to ensure social distancing remained possible, enabling staff to take action if an area became too busy. This helped to minimize the spread of the virus and improve safety at a critical time.
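A minimal sketch of that occupancy-alert idea, assuming headcounts already arrive from a video-analytics pipeline; the zone names and capacity limits here are invented purely for illustration.

```python
# Flag a monitored zone as "too busy" when its headcount exceeds a safe limit,
# so staff can intervene. Counts would come from video analytics in practice.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Zone:
    name: str
    capacity: int  # maximum headcount before the area counts as too busy

def check_occupancy(zone: Zone, detected_people: int) -> Optional[str]:
    """Return an alert message when a zone is over its safe capacity."""
    if detected_people > zone.capacity:
        return f"ALERT: {zone.name} has {detected_people} people (limit {zone.capacity})"
    return None

# Example usage with an assumed platform and headcount.
platform = Zone(name="Platform 3", capacity=40)
alert = check_occupancy(platform, detected_people=57)
if alert:
    print(alert)  # staff can then restrict entry or redirect passengers
```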

When an emergency occurs, AI algorithms can also quickly analyze a combination of real-time alerts, instant voice communication and video analytics to recommend how to resolve an incident - the results of which can then be verified by a human who decides what to do next. For example, health and safety officials used AI-powered video analytics to identify individuals who were not wearing a mandatory face covering during the pandemic.
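To make that "assistive, not autonomous" loop concrete, here is a minimal sketch in which a recommendation is generated from incident signals but nothing is actioned until a human operator approves it. The names and the placeholder rule are assumptions for illustration, not a description of any vendor's actual system.

```python
# Human-in-the-loop pattern: the system recommends, a person decides.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Incident:
    location: str
    signals: List[str]  # e.g. real-time alerts, video-analytics flags

def recommend_response(incident: Incident) -> str:
    # Placeholder rule standing in for a trained model's output.
    if "crowd_surge" in incident.signals:
        return f"Dispatch crowd-management team to {incident.location}"
    return f"Continue monitoring {incident.location}"

def resolve(incident: Incident, operator_approves: Callable[[str], bool]) -> str:
    recommendation = recommend_response(incident)
    # The human stays in charge: the recommendation is only acted on if approved.
    if operator_approves(recommendation):
        return f"Actioned: {recommendation}"
    return "Recommendation declined; escalated for manual review"

incident = Incident(location="Terminal 2", signals=["crowd_surge", "noise_alert"])
print(resolve(incident, operator_approves=lambda rec: True))
```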

Importance of community support

While the benefits of AI are vast and yet to be fully realized, no organization can assume it has widespread public consent to use new technologies without public consultation. The public want and expect technology to be used in transparent, fair and inclusive ways, and for the benefits to be clearly understood. 

When citizens trust how technology is used and its objectives, they also become more willing to share their own data. In fact, 75% of U.K. citizens said they would trust organizations that hold their information, so long as it is used appropriately, according to the research from Goldsmiths, University of London.

When that occurs, it results in richer pools of shared data and, ultimately, the development of solutions that enable better outcomes for safety overall.

Where there is enough data, new AI systems can be introduced, or existing systems can be expanded and improved. By contrast, poor data quality is one of the biggest barriers to the greater uptake of AI and machine learning. Having a unified data platform for AI also makes it easier to manage user access and permission structures, further improving user accountability while protecting sensitive data.

As threats to public safety evolve, the responsible application of AI and ongoing public education about how it will be used is the way to build a safer and more prosperous future.
