Before AI makes God, may we have a word from God?

Artificial intelligence technologies are affecting all aspects of human life – including questions of faith

Len Strugatsky

July 24, 2020



Artificial intelligence. For some people it’s exciting, for some it’s scary, and for some of our readers, it’s a tool for interesting work.

For John Lennox, it’s a moral and a theological question.

AI, machine learning and neural networks solve otherwise-intractable problems, doing things that humans cannot. Techno-optimists like Ray Kurzweil predict that self-improving AIs will figure out how to make themselves smarter than humans; within the next 25 years, they say, godlike super-beings will either merge with humans or replace them in the so-called Singularity.

In 2015, researchers, philosophers and public figures including Stephen Hawking and Elon Musk signed an Open Letter on Artificial Intelligence, warning that AI research could have unforeseen consequences. Robot soldiers might go on a rampage or, less apocalyptically, AI personnel systems could make people unemployable for reasons nobody can explain.

Enter Oxford professor John Lennox: “The most urgent thing is the obvious remark that this technology outpaces the ethical considerations about what’s right and what’s wrong,” he tells AI Business in an interview. “This is a huge concern at the top of the industry: how are we going to organize this from an ethical perspective?”

Lennox is a mathematics professor emeritus, but mostly engages in philosophy, holding public debates with famed atheists like Richard Dawkins. We spoke to him about his latest book, 2084: Artificial Intelligence and the Future of Humanity (excerpt published here) and asked him how AI researchers should act, given the concerns around the technology.

Creating in the image

Lennox has concerns about both “narrow AI”, the systems which solve immediate problems, and “artificial general intelligence” (AGI), the machines that would think as well as, or better than, humans.

Narrow AI is pretty innocent, he says. It might help find a vaccine for Covid-19, or enable our snooping smartphones to guide us to the things we want – but it has led to “surveillance capitalism,” the term coined by Shoshana Zuboff for the uncontrolled use of personal data by Google and Facebook – or the “Big Other.”

He’s also concerned about “surveillance communism,” where CCTV cameras and data-gathering give states like China – but also some Western countries – powers like those the state wields in George Orwell’s 1984.

His book takes its title from Orwell’s. 1984 isn’t particularly prophetic about AI (though it does describe automatic transcription and artificially composed music), but Lennox uses the word “Orwellian” as a label for generalized concerns about powerful technology.

In theory, some of these concerns should be assuaged by the Asilomar principles, set out by the Future of Life Institute – the group behind the Open Letter. The group’s conference was held at the Asilomar conference center, where a landmark 1975 gathering set voluntary limits on another controversial technology: genetic engineering.

The 23 Asilomar Principles are a bit like Asimov’s Three Laws of Robotics. They aim to protect humanity from AI by ensuring that the technology is applied fairly and that future research is reined in, so as not to produce an uncontrolled super-intelligence.

The question of ethics has a commercial impact, Lennox says: “If you are in AI commercially, and you develop a system that gets a bad press, you are set to lose a large amount of money. You have to ask yourself not just how can it go right, but how could it go wrong?

“Practitioners in this area have to build in some sort of ethical norms into their systems,” he adds, warning that the ethics programmed into systems like self-driving cars are the ethics of the programmer. “There needs to be some kind of agreement on this kind of thing.”

“People are desperately trying to rein in AGI and the stuff that is up and running,” he says, pointing to AI systems that evaluate CVs and can encode bias against women or ethnic minorities. Like people, AIs trained to drive cars or do any other job will need to learn to do it safely.

“People working on this should consider the ethics,” Lennox says. “What we are trying to do is use the best of human decision making, based on a common morality that is recognized around the world, independent of philosophy.”

He’s right. And in fact AI safety and AI ethics are burgeoning disciplines. Leading researcher Stuart Russell has said that “in the future, moral philosophy will be a key industry sector”.

Looking for points of reference

Lennox is not interested in defining moral principles, because he has a ready-made answer: Christianity. He is an industrious Christian apologist, with his name on six books in the last year alone, including April 2020’s Where is God in a Coronavirus World? This latest work arrives courtesy of Zondervan, a Christian media division of HarperCollins.

Lennox’s worldview comes in pretty quickly. He lauds James Tour as “one of the currently most influential scientists in the world”. Tour is a nano-chemist who worked on graphene and created the single-molecule Nanocar (an organic vehicle with fullerene wheels), but began to doubt the theory of evolution.

Lennox quotes Tour saying DNA could not have arisen by chance: “The proposals offered thus far to explain life’s origin make no scientific sense”. This is Tour’s opinion, but Lennox calls it “the verdict of science,” and goes on to suggest that the “code” in DNA is the language of a creator, in a Watchmaker-style argument that says we should all believe in God.

Later on, Lennox makes an interesting, but fairly literal-sounding, appeal to the Book of Genesis to argue that artificial life is not possible, and artificial intelligence is not genuine intelligence: “It’s simulated intelligence,” he tells me in the interview.

In our conversation, I steer clear of arguing about God, but potential readers should be warned: the God Question is central to the book. The PR tells me its original title was AI, The Future of Humanity and The God Question, and she’s very keen that I quote the current title – but, to be honest, the earlier title was fairer. Lennox jumps swiftly from general arguments in favor of theism to a narrow application of Christianity as the answer to any problem.

His arguments against atheism are practiced, but not really of interest outside the kind of debates he has with Dawkins.

It’s more of a concern that the book doesn’t even wonder if there might be a Buddhist, Muslim, Jewish or Hindu contribution to the debate. He explains that by saying: “I feel it is up to them to do the explaining. It’s not up to me to say what an Islamic or Jewish view of AI would be.”

He advises AI practitioners to be guided by their conscience, though their work may be misused later: “You can develop a narrow AI system that does a lot of good, and there’s nothing to stop someone else coming in and saying ‘that’s just what I need to destroy a population’”.

In our conversation, he says control of AI will require new regulations at the highest level, but warns that in international regulations, “power comes into play.”

I don’t follow up: in the book, he floats the possibility of the EU becoming a world government, apparently as a literal fulfillment of Chapter 13 of the Book of Revelation. Time is limited, and I’m not really interested to know whether Lennox sees the Number of the Beast as a prophecy about AI.

Religion aside, he thinks AI researchers are more likely than AI practitioners to consider the ethics of their work: “The academics I read who are involved in this are constantly thinking of the ethical implications of this.”

As for the rest of us, he’s keen for more participation in the AI ethics debate. “I’m keen on stimulating public intelligent discussion,” he says. “I meet people who are scared stiff of AI; they don’t understand there’s always an important role for people to explain what’s going on.”

If nothing else, as well as ensuring AI systems are taught to behave themselves, governments need to make certain they are educating their citizens to live in a world where AI exists.

Lennox says the book is for a mixed audience, but acknowledges its Christian stance: “My own position is Christian. I want to demystify AI for Christians, to encourage the bright ones to get into it and make a real contribution – and to show my atheist friends that perhaps Christianity does have something to say in this area.”

About the Author(s)

Len Strugatsky

Len is an IT journalist with more than 20 years of experience across print and digital. He's got degrees in both physics and fine art, and if there were a degree in Marvel comics, he would be teaching it.

