Seven decades on, the Turing Test is yet to be passed in the general sense in which it was proposed – but does it even matter?

July 13, 2020

When computer scientist Alan Turing proposed what became known as the ‘Turing Test’ 70 years ago, he didn’t have the needs of business leaders in mind, or technology buying decisions. He was simply looking ahead to a day when computers might rival the finest brains in any field – or appear to do so.

Yet there are still some important takeaways for organizations from his famous concept of machine intelligence one day becoming indistinguishable from its human counterpart.

At the dawn of AI in 1950, four years before his death, Turing published a paper called ‘Computing Machinery and Intelligence’ in the journal Mind. The paper described a test – he called it the ‘imitation game’ – in which a machine might pass for human, regardless of subject and context.

In a text-based conversation, such a machine would convey the nuances of human speech without betraying its artificial nature. Would a person consistently be able to tell the machine from a human?

Decisions, thinking and consciousness

Today, we have become used to the hit-and-miss experience of speaking to Alexa, Siri, and other digital assistants, and to the idea of computers winning quiz shows – as IBM’s Watson did on Jeopardy! in 2011. But these devices are explicitly machines. They may be like us in some ways – a design decision made so that we feel comfortable interacting with them – but we don’t believe them to be human.

At least, not yet. Some academics believe things should stay that way: that machines should always make their artificial nature explicit to avoid the moral hazards of filling the world with fakes.

Androids – robots built to resemble us – are becoming more commonplace, yet behind the scenes there is often a room full of computers and human operators helping to simulate their apparent intelligence. We are not conversing with an uncanny machine; often, other humans are conversing with us through it.

We interact with chatbots and voice response systems, but we are always aware that they are operating within prescribed boundaries. In Turing Test terms, we have a long way to go. Or do we?

There have been a handful of recent examples of AI systems hailed for ‘passing’ the Test. In 2014, on the 60th anniversary of Turing’s death, Eugene Goostman (‘ghost man’), a computer program designed to simulate a 13-year-old boy, convinced one-third of the judges in a competition that it was human.

However, participants had been told that the supposed teenager was Ukrainian – to excuse any misunderstandings or poor use of English – and had themselves been chosen by the software’s designers. As a result, that test is now widely seen as having been rigged in the machine’s favor.

In May 2018, Google’s Duplex chatbot used a convincingly human voice and mannerisms to book a haircut at a salon. But that demonstration was not carried out under lab conditions: the evaluators – an audience at a Google event – knew it was a machine, and the context was narrow and pre-selected. So Duplex, too, is not seen as a pass.

Both were impressive technical achievements but, seven decades on, the Turing Test has yet to be passed in the general sense in which it was proposed.

Whether passing the Test should even be seen as an indicator of machine intelligence, or simply as proof that machines can mimic human conversation without understanding it, remains a hotly debated topic. A machine could gather reams of data about questions and answers on any subject and generate new answers from that data, yet have no concept of the subject itself – or even of what the words it uses mean.
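
To make that concrete, here is a toy sketch in Python – illustrative only, not a description of how any real system works. This ‘chatbot’ answers purely by word overlap with stored question-and-answer pairs; its replies can look plausible while it understands nothing at all.

```python
# A deliberately naive "chatbot": it returns the stored answer whose question
# shares the most words with the user's input. Pure surface matching - the
# program has no model of meaning.
import string

qa_pairs = {
    "what is the capital of france": "The capital of France is Paris.",
    "who wrote computing machinery and intelligence": "Alan Turing, in 1950.",
    "can machines think": "That is the question Turing set out to reframe.",
}

def reply(user_input: str) -> str:
    # Lower-case the input, strip punctuation, and compare word sets.
    cleaned = user_input.lower().translate(str.maketrans("", "", string.punctuation))
    words = set(cleaned.split())
    best = max(qa_pairs, key=lambda q: len(words & set(q.split())))
    return qa_pairs[best]

print(reply("Can a machine think?"))  # -> "That is the question Turing set out to reframe."
```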

For Turing, the point was imitation: that a machine would one day convince people it was intelligent, not that it would necessarily possess consciousness or intellect. For a materialist, intelligence is matter that considers and acts upon itself; language is just a by-product of that process, not its core.

Despite this, the Turing Test has taken on an almost mythic quality and is seen, wrongly, as the tipping point at which machine intelligence takes over – at a time when many people are alarmed by the spread of AI and supposedly job-stealing robots.

Turing himself came up with objections to his Test – to test the Test, as it were. For example, the ‘heads in the sand’ objection – that some people simply don’t like the idea of intelligence in a machine – persists today, but it is not a valid argument against the possibility of machine intelligence existing.

Another was the mathematical objection: there are provable limits to the power of ‘discrete state’ machines that humans appear not to share. Yet it cannot be proven that human intelligence is free of such limits either; indeed, policymakers increasingly see AI as augmenting our own limited ability to crunch data.
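
The best-known limit of this kind traces to Turing’s own 1936 work: the halting problem. Stated compactly (with h as a hypothetical decider, p a program, and x an input):

```latex
% No total computable function can decide halting:
\nexists\, h \text{ computable such that } \forall p, x:\quad
h(p, x) =
\begin{cases}
  1 & \text{if program } p \text{ halts on input } x,\\
  0 & \text{otherwise.}
\end{cases}
```

Humans cannot decide halting in general either, which is the article’s point: there is no proof that our own reasoning escapes such limits.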

The next generation of Natural Language Processing (NLP) programs, such as Baidu’s ERNIE and Google’s BERT, points the way to systems that can learn semantic relations, inference, and sentiment largely by processing vast amounts of text.
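
As a rough illustration (the tooling is my assumption – the article names none), the open-source Hugging Face transformers library exposes BERT-family models for exactly these tasks through a simple pipeline API:

```python
# Minimal sentiment-analysis sketch using a pre-trained BERT-family model.
# Assumes `pip install transformers` plus a backend such as PyTorch;
# the first call downloads a default English sentiment model.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")

result = classifier("The machine's answers were surprisingly convincing.")
print(result)  # e.g. [{'label': 'POSITIVE', 'score': 0.99}]
```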

Yet for many people, thinking implies consciousness – an awareness of self. So can machines think? Or, as Ada Lovelace suggested, do they simply do whatever we program them to do? Turing asked this question in 1950 and reasoned that machines may carry out something that “ought to be described as thinking”, even if it is very different from what a human brain does.

The ‘digital computers’ that Turing described in his paper – as distinct from the human computers (people employed to perform calculations) of his day – ran on electricity, but purely mechanical devices such as Charles Babbage’s proposed Analytical Engine would have been just as capable of calculating.

The implication of this was clear: decisions can be arrived at without the spark of life or consciousness. But Turing said that does not preclude the possibility that machines may one day think as humans do – and may even be just as illogical.

Human intelligence is linked to a reasonable degree of autonomy, allowing for experience and discovery that nurture the brain’s development. AI and machine learning may have to converge with robotics to gain a form of autonomy that allows them to ‘experience’ and learn from the world in a positive feedback loop.

A credible general AI would need to ‘live’ in the world to really fit in with the human crowd. But would we accept such a device? That may turn out to be the real test.

Mark Sheldon is CTO of AI firm Sidetrade
