The double-edged sword: how can we harness artificial intelligence to power good?

Max Smolaks

December 17, 2019

By Wael Elrifai, Hitachi Vantara

The UK’s AI market is booming right now. It’s set to contribute around £232 billion to the economy by 2030, and the UK is already home to around a third of Europe’s AI companies.

There’s a reason the government is doubling down on investments in this area: AI stands to have a huge impact on some of the biggest challenges we will face as a species over the next few years – from climate change to mass urbanization.

It’s an exciting time, but behind this explosive growth and rapid innovation lie concerns about job displacement and security, as well as existential debates around the ethical implications of advancements in AI. As we move into the next decade of innovation, there’s work to be done to reconcile public mistrust with the pursuit of progress.

That starts with those in the industry speaking frankly about the technology – dispelling myths but also acknowledging flaws.

Defining artificial intelligence

AI today isn’t actually intelligent. ‘Artificial intelligence’ is something of a misnomer for the technology in its current form. In the 1990s, when IBM’s supercomputer Deep Blue beat Russian grandmaster Garry Kasparov at chess, people thought it was the start of the machine uprising. Yet more than two decades later, humans are still at the top of the food chain. Why? Because Deep Blue wasn’t intelligent by itself – it didn’t even know what chess was. Kasparov was essentially bested by a mathematical equation.

The truth is, most of us already interact with AI on a daily basis and, for the most part, we enjoy it. We like it when Spotify suggests a new song based on our listening habits or Netflix recommends a film based on our personal preferences – and that’s all powered by AI. It may make us uncomfortable when Amazon’s algorithm pushes uncannily personalized adverts our way, but when we fret over the ethical implications of AI, Alexa typically isn’t the Skynet-style tyrant we’re worried about. What we’re really talking about is super-intelligence – and that doesn’t exist yet.

One prominent researcher commented that worrying about the destruction of humanity at the hands of AI today is like worrying about overpopulation on Alpha Centauri. In other words, the prospect is so far in the future it’s not even worth considering. In fact, that same researcher believes we’ll actually reach Alpha Centauri – the nearest star system to our own – before we achieve super-intelligent AI.

I don’t really subscribe to that line of thinking – and even if super-intelligence is a long way off, that’s no reason to avoid the difficult questions today. We absolutely should be transparent about the real ethical challenges that advancements in AI pose now and in the future, which is why I’m writing about them here. I think Hitachi Vantara’s president and CEO Toshiaki Higashihara said it best when he suggested that all technologies have “light and shadow,” and I believe it’s the responsibility of those in the industry to cast a light on those shadows.

We need to think very carefully about how we strike the balance between risk and reward, and drive progress the right way.

Out of the shadows

So, what are some of those problems we risk running into as we continue to drive the development of smarter AI?

Stuart Russell, one of the world’s leading AI researchers, posited two major risks associated with creating a super-intelligent AI. The first is known as the ‘gorilla problem,’ and it’s somewhat comparable to current anxieties around automation causing job displacement. Essentially, he suggested that we face the danger of building a machine capable of outmoding human beings altogether, in the same way that humans came to dominate our gorilla kin.

The second problem is something I like to dub the ‘King Midas problem’. If you’re familiar with the myth, you’ll know that King Midas is the original cautionary tale of “be careful what you wish for.” In other words, human beings aren’t great at concisely articulating what they want. Our language is imbued with centuries of unspoken nuance and underlying meaning that a machine, which relies on precise instructions, wouldn’t pick up on.

For instance, imagine I tasked a super-intelligent AI with finding a cure for a deadly disease, and it decided to go out and infect every person on Earth just to run as many tests as possible. Everyone gets sick simply because I told it to find a cure, but never told it not to harm anybody in the process.

So the good news is: we’re pretty far off from building an AI that could evolve us out of existence. But the King Midas problem is more immediate, and we’re starting to see it play out already. What it really boils down to is an issue of value alignment – what a machine cares about (nothing, unless you tell it to) versus what humans care about. Of course, the above is an extreme example – but it does underscore one of AI’s biggest flaws today.
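To make the value-alignment point concrete, here’s a deliberately toy sketch in Python. Everything in it is invented for illustration: the ‘planner’ simply picks whichever plan scores highest against the objective it is given, and the harmful plan wins under the naive objective because nothing in that objective says harm matters.

# A toy illustration of the value-alignment problem: a planner that
# maximizes a stated objective will happily pick a harmful plan if the
# objective never mentions harm. All names and numbers are invented.

plans = [
    {"name": "recruit volunteer test subjects", "tests_run": 1_000, "people_harmed": 0},
    {"name": "infect the whole population", "tests_run": 8_000_000_000, "people_harmed": 8_000_000_000},
]

def naive_reward(plan):
    # "Find a cure" expressed only as: run as many tests as possible.
    return plan["tests_run"]

def aligned_reward(plan, harm_penalty=1_000_000):
    # The same objective, with the unspoken human value made explicit.
    return plan["tests_run"] - harm_penalty * plan["people_harmed"]

print(max(plans, key=naive_reward)["name"])    # -> infect the whole population
print(max(plans, key=aligned_reward)["name"])  # -> recruit volunteer test subjects

The second objective isn’t smarter than the first – it just has a human value written down that the first one left implicit. That, in miniature, is the King Midas problem.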

AI isn’t a sentient, empathetic entity. It’s a system that looks at past decisions and outcomes to determine the best course of action for future decisions – what we call supervised learning. But this somewhat simplistic methodology can be problematic when we apply it to complex real-world scenarios. One of the most troubling and topical examples we’re already starting to encounter is selection bias: algorithms used in the hiring process that discriminate based on ethnicity or gender, not because the machine itself is bigoted but because it is extrapolating from trends in past data.
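To sketch how that bias creeps in – and this is a deliberately simplified illustration with invented data, standing in for what a real supervised learner would extrapolate – consider a ‘model’ that scores candidates purely by imitating past hiring decisions:

# Invented historical hiring decisions: (group, qualified, hired)
history = [
    ("A", True, True), ("A", True, True), ("A", True, True), ("A", False, False),
    ("B", True, False), ("B", True, False), ("B", True, True), ("B", False, False),
]

def score(group):
    # The fraction of qualified candidates from this group hired in the
    # past -- exactly the pattern a model trained on this data would learn.
    outcomes = [hired for g, qualified, hired in history if g == group and qualified]
    return sum(outcomes) / len(outcomes)

# Two equally qualified candidates receive very different scores, not
# because the code is bigoted, but because the decisions it imitates were.
print(f"qualified candidate, group A: {score('A'):.2f}")  # 1.00
print(f"qualified candidate, group B: {score('B'):.2f}")  # 0.33

A genuine learning algorithm would reach the same ranking by a longer route, because the signal it is trained to reproduce is the biased decisions themselves.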

Finding the light

So, does this mean we should slam on the brakes? Well, to continue Higashihara’s metaphor, where there’s shadow, there must also be light. Businesses have a responsibility to drive innovation for the betterment of society and find that light.

Key thinkers are already considering how to mitigate some of the ethical challenges that may arise from the creation of an advanced AI – industry players like Google, Facebook and SpaceX have already come together to write guidelines on weaponization, biases and more. But in the meantime – right now – we need to make sure we’re using AI purposefully. That means starting with the impact we want and working backwards from there. After all, dramatizations aside, AI is just a tool and, like any tool, how it is used depends entirely on who is using it.

Here at Hitachi Vantara, we believe technology can and must power good. That isn’t just a brand slogan; it’s a philosophy that drives all the work my colleagues and I do here.

Wael Elrifai is VP of Big Data, IoT and AI at Hitachi Vantara – a company established in 2017 to unify the operations of Pentaho, Hitachi Data Systems and Hitachi Insight Group.
