Humans have always feared being replaced by someone – or something – better, faster, or stronger. The idea has fueled science fiction for a century, giving rise to popular stories like The Terminator, 2001: A Space Odyssey, and Blade Runner.
In a more realistic setting, when a body like the World Economic Forum (WEF) publishes a report on the future of automation in the workplace, saying that smart machines “could potentially replace a large proportion of existing human jobs,” maybe there is a reason to be worried.
The WEF seems to have revised its position, saying in a more recent publication that AI “may not lead to massive unemployment. Instead, AI technology will create more jobs than it automates.”
Will there be a time when such fears about technological unemployment are quashed for good? Yes, says Neil Sahota, book author, speaker, and artificial intelligence adviser to the United Nations – but it may not be for some time.
Sahota sat down with AI Business to discuss how sci-fi has been fanning the flames of fear, how to educate those wary of AI, and when we might see a time when no one is scared of robots.
Sahota noted that fearmongering about intelligent machines dates as far back as the 19th century. Pointing to the Industrial Revolution, he said that people generally don't like change, and look at new things with a fearful eye.
Less than a century later, humanity had reached the moon, probed the depths of the universe, and even fought off biological devastation in the form of a pandemic.
"Change has been happening faster and faster and as a result, we have fewer times to react and adapt – this has increased the anxiety around AI," he said.
Sahota said younger generations have become more used to the idea of AI – but that fear can also stem from cultural background. There is far more concern about AI in the US and the UK, he said, than in Japan, where most people see robots as helpers and “have a much more open mind towards it.”
In his role advising the UN, Sahota is tasked with talking with leaders to ease their concerns over AI and robotics – and offering examples of how the technology is being used for good.
Sahota started working with the UN around six years ago, and said that attitudes have shifted to a more positive outlook during that time.
“When I first started working with the UN, I was told most [representatives] thought it was Terminator time. They thought the machines were going to rise up – but I gave an optimistic view and didn't just talk about what AI is, but offered some public service use cases.”
“A year later, the UN was talking about having AI judges. That about-turn was phenomenal."
He continued: "Today, AI is almost everywhere. I don't know a sector or industry that's not really using it – it's in sports, HR, and telecommunications, but there remain some slow movers that are just starting.”
In terms of using AI for social good, he said the biggest roadblock was teaching people to understand how to apply AI, and what they can expect to achieve.
Military forces around the world emerged as some of the most prominent early adopters of AI.
Russia is reportedly looking to develop several autonomous weapon systems. In early July, British troops deployed an AI system in the field for the first time. And the US Defense Department is set to splurge $1.5 billion on AI over the next five years.
How can you assuage fears that the primary use case for AI is war? Sahota described this as an uphill battle. He jovially referred to a scene from the 1997 film Starship Troopers. “Who needs a knife in a nuke fight? All you’ve got to do is push a button,” a mouthy trooper asks his superior. The unruly trainee is instructed to place his hand on a wall, only for the officer to throw a knife into it, saying, “The enemy cannot push a button if you disable his hand.”
Sahota stressed that the most pressing concern of modern defense is cyber warfare. With ransomware attacks on the rise, he said, the emphasis is on securing physical infrastructure.
Instead of fearing technology as a weapon, Sahota argued, we need to find applications that aren’t solely about making money – like the growing number of ‘AI for good’ projects.
“I'm really proud of Gen Z as they have that attitude, they're really going to help transform the world because they're already up and coming with that social enterprise mindset. Getting people to think that way is key.”
If younger generations are more welcoming towards AI, how long will it be before there is no resentment left? Maybe 50 years, Sahota said, though he stressed that any prediction is difficult because the issue is more generational than previously thought.
“It's tougher for older people to change their minds. It's more about adoption and use – the more you're exposed to it, the more you see the value and the more accepting of it you are.”
He also noted the importance of traditional education: “Students are more open to actually using the tools, and once they enjoy it, they take it to the hospital or wherever they end up working. It's that kind of approach that will spearhead more acceptance of AI.”
To kick-start positive education of consumers, he sees a need for more exposure and transparency: “Actively working with educational institutions would help get this exposed. We're not trying to turn people into roboticists or programmers, just help them understand what AI is, what it can do, and what it can't – and spark their interest to maybe solve problems like climate change.
“This is not solely the realm of technologists. The most successful AI solutions I've seen were built by domain experts. We need the domain expertise to understand the area where it's being deployed, while technologists understand the capabilities – that marriage, or yin-yang, will create the solution,” Sahota concluded.