Dichotomy of Intelligence – a Thorny Journey Towards Human-Level Intelligence

Robert Woolliams

November 14, 2016

by Michael Wei

Particularly since 2009, fueled by deep learning technology, machine learning has made historic breakthroughs in speech recognition, image recognition and translation, matching or even surpassing human capability in select domains, while many others are being intensively explored as we speak.

"While we embrace and celebrate this trophy out of decades of artificial intelligence endeavor, a question inevitably arises – how far are we still from human-level intelligence?"

The future is promising and exciting, but the journey is certainly far from easy, due to the Dichotomy of Intelligence. Andrew Ng stated in early 2016 at the Baidu AI conference that the areas where deep learning has proved superior are largely tasks that humans can do within a second. Those are primarily heuristic activities, such as vision and speech recognition. In the late 1980s, Hans Moravec, Rodney Brooks, Marvin Minsky and many others articulated “Moravec’s paradox”: tasks that are easy for humans, such as low-level sensorimotor skills, require enormous computational resources, while difficult tasks such as high-level reasoning require very little computation.

Today’s progress in deep learning endorses Moravec’s paradox – riding the exponential growth of computational power and available data, those “human easy” sensory tasks are what deep learning currently accomplishes best. What makes human intelligence fascinating certainly goes beyond sensory capability: it is the knowledge-based reasoning, concept abstraction and common sense. As John McCarthy pointed out, “the epistemological part of AI is as prominent as the heuristic part”. A large portion of the intelligence territory is not only under-explored but also largely beyond our current capability to explain.

Sensation and Knowledge

Although the computerised pursuit of artificial intelligence goes back to Alan Turing’s 1950 paper, where he posed the famous question – “Can machines think?” – humanity’s exploration of intelligence traces back thousands of years earlier, to when the ancient Greek philosopher Plato argued about sensation and knowledge in his dialogue “Theaetetus”. He claimed that the mere use of the senses could not be the source of our knowledge – sensing is unreliable, and only unchanging forms can be known. His argument rested on the claim that “no two people will ever hear or see the same thing in an identical way and, consequently, will never perceive sensory information in the same way either. This difference is beyond description, hence unreliable”. Plato further stated that sensation will always trick us, as “it keeps us busy in a thousand ways because of its need for nurture … Sensation would not give us true form, but recognition is possible”.

Does this argument resonate with the challenge that today’s deep learning is facing? The same model trained on the same data can generate distinct representations, hence different outputs. Many sensory domains that use artificial neural networks are only understood on a heuristic level, and we know empirically that specific training algorithms work well with a specific large set of data.
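To make the point concrete, here is a small illustration of my own (not from the article): the same tiny network, trained on the same XOR toy data with the same algorithm, differing only in its random seed, ends up with different internal representations. The network size, task and hyperparameters below are invented for demonstration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# XOR: a classic toy task with no linear solution.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def train(seed, steps=5000, lr=1.0):
    """Train a 2-4-1 MLP with plain gradient descent; only the seed varies."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
    W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)
    for _ in range(steps):
        h = sigmoid(X @ W1 + b1)              # hidden representation
        out = sigmoid(h @ W2 + b2)
        d_out = (out - y) * out * (1 - out)   # backprop, squared error
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)
    return W1, sigmoid(X @ W1 + b1)

W1_a, h_a = train(seed=0)
W1_b, h_b = train(seed=1)

# Identical model, identical data – yet the internal representations differ.
print("representations identical:", np.allclose(h_a, h_b))
```

Neither run is "wrong"; the two networks may behave similarly on the training data while encoding it in entirely different ways, which is part of why such models resist interpretation.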

There is, of course, the other side of the argument as well. Plato’s most famous student, Aristotle, considered experience of what happens the key to all demonstrative knowledge. In other words, he believed knowledge is the result of sensation. The key dispute between Plato and Aristotle is whether empirical observation can eventually lead to rational knowledge – a dispute that has been going on for thousands of years and, unfortunately, still remains unanswered.

Things in the sensible world resemble the Forms, and our senses do help us to recollect them; however, we must learn to distrust our senses, because we all too often overvalue sense experience and neglect to look beyond to the reality it only imitates. Take Searle’s “Chinese room” thought experiment as an example. Given a set of rules on how to distinguish Chinese words and correlate elements of Chinese with elements of English, a human agent in the closed room is capable of translating Chinese scripts into English. The question then arises whether this person, or the room, counts as intelligent. In this example, the essence of intelligence lies in the rules rather than in the room or the agent, which may be perceived as intelligent – as many computerised programs are – but are actually just unintelligent front-ends of another intelligent entity.

Then what is knowledge? In “Theaetetus”, Plato didn’t provide an assertion, but rather inferred that “real knowledge” is a bounded concept with three characteristics: true belief, composition, and logical structure. Plato’s inferred definition of knowledge might be too narrow, in my humble opinion, but he was among the few early philosophers to shrewdly point out that sensation, a seemingly intelligent outcome, does not equate to knowledge, nor, as a consequence, to intelligence.

"Intelligence is the ability to acquire, represent, store, transfer and utilise knowledge. To understand intelligence, we need to firstly understand knowledge".

“Knowledge by description” and “Knowledge by acquaintance”

Epistemology, which literally means “theory of knowledge” in Greek, was first proposed by the Scottish philosopher James Frederick Ferrier in 1854. He argued that there are three kinds of knowledge – “knowing how”, “knowing that” and “acquaintance-knowledge” – which are distinct in both scope and source. He explained by example: riding a bicycle is “knowing how”, which takes repetitive practice but won’t necessarily lead to designing a better bike, whereas a task that requires “knowing that” in physics is normally mastered through either logical thinking or study. In the early 20th century, the English philosopher Bertrand Russell took up epistemology and refined it into a dichotomy – “knowledge by description” and “knowledge by acquaintance” – which I believe draws a clearer distinction in both the form and the matter of intelligence.

What separates “knowledge by description” from “knowledge by acquaintance” at the core is rationalisation. J. F. Ferrier and Bertrand Russell’s dichotomy of knowledge is largely an inheritance of Plato’s argument on sensation and knowledge, and such views are not uncommon among philosophers. Ronald de Sousa speaks of a “two-track” mind that tracks the mental processes of being “intuitive” and “analytic”, where a comprehensive mind is created by the collaboration of brain and body – thinking and feeling respectively. “I know it is right” and “I feel it is right” are enormously distant.

The dichotomic view of knowledge has naturally divided scientific efforts across the 60 years of AI research. The first wave of AI research, dating back to the 1950s, was led by rationalists. Two pioneers of artificial intelligence, Simon and Newell, created the theory of “bounded rationality”, which stems from their finding that humans don’t make decisions by computing the optimal solution but rather settle for a satisfactory one. This mechanism is built upon the capability to formalise problems in a logical manner, under the belief that anything not understandable cannot be solvable, thereby dismissing sensation as part of artificial intelligence.

Rationalism dominated the trend of artificial intelligence research from the 1950s all the way to the 1990s, topped by the “expert system” mania of the early 1980s. In contrast, empiricism, led by machine learning, was not accepted into the artificial intelligence mainstream until as late as the 1990s, although the effort started much earlier, pioneered by Rosenblatt, who in the late 1950s developed the Perceptron and demonstrated the potential of recognising hand-written letters through machine training. From the same tribe, another artificial intelligence pioneer, Hans Moravec, once stated that “after all, intelligence is not only a symbolic approach”, and dedicated himself to research on the heuristic approach in the robotics domain. Empiricism started to show its dominance at the turn of the century, when exponential growth of computational power and available data finally overcame the hurdles of scalability and the curse of dimensionality, while the scalability constraints on humans remained largely unchanged for rationalism. The coin eventually flipped.
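For readers unfamiliar with Rosenblatt’s idea, the learning rule itself is remarkably simple. The sketch below is my own minimal illustration (the toy dataset and function names are invented, not from Rosenblatt’s work): whenever an example is misclassified, the weights are nudged toward it, until a linear boundary separates the classes.

```python
import numpy as np

def train_perceptron(X, y, epochs=20):
    """Rosenblatt-style perceptron: X is (n, d) inputs, y is labels in {-1, +1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= 0:   # misclassified -> nudge toward example
                w += yi * xi
                b += yi
    return w, b

# A linearly separable toy problem: label is +1 only when both inputs are on.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, -1, -1, 1])

w, b = train_perceptron(X, y)
preds = np.sign(X @ w + b)
print(preds)  # matches y once a separating boundary is found
```

The catch, famously highlighted by Minsky and Papert, is that this rule only converges when the data are linearly separable – which is one reason the empiricist line of work stalled until multi-layer networks and far more compute arrived.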

Two sides of the same coin

In David Gelernter’s recent book “The Tides of Mind”, he describes two tides of mind – a down-stream spectrum with story versus an up-stream spectrum with logic. Voice recognition (tone and volume), smell, images, imagination and dreaming sit on the down-stream end of the spectrum, which is often instinctive and reflexive, emotional and vivid. Language, reasoning and inference are up-stream spectrum activities, which are often logical and structural. Recent achievements in deep learning lie largely in the sensory world – mainly “know how” experience, the down-stream spectrum of mind – reached through large-scale machine learning on massive volumes of data. But if our goal is the up-stream spectrum, the logical and rational mind, we still have a long way to go.

The question that arises is whether the two distinct types of intelligence share the same architecture – in other words, whether sensory intelligence can eventually lead to logical intelligence. The same question was asked by Plato 2,500 years ago, and any answer today cannot be given with certainty. The fundamental problem is that “we cannot yet characterise in general what kinds of computational procedures we want to call intelligent”, as John McCarthy put it.

I believe J. F. Ferrier and Bertrand Russell were right to take a holistic view of knowledge and divide it into “how”, which is acquired through experience, and “that”, which is acquired through logical structure. “Know how” is learning to remember: stemming from memory, it is detail- and specifics-oriented. Repetitive exposure to a specific perceptual task enhances the capability of accomplishing that task by remembering subtle distinctions of detail. That is what happens in deep learning (memorised observation) until overfitting happens. “Know that”, on the contrary, is learning to forget. It is the process of filtering out non-critical information and focusing only on the critical. With repetitive exposure, the distinctions between specifics become blurry, and commonness starts to stand out. Quite often, when we face enormous amounts of information, we utilise a hypothesis to facilitate the filtering process and therefore settle for local optimisation, as the “bounded rationality” theory states. In that sense, “know how” and “know that” might not be architecturally compatible.
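The remember-versus-forget contrast can be sketched numerically. The following is my own toy illustration, not the author’s: a high-degree polynomial stands in for a high-capacity “memoriser”, a straight-line fit for an “abstracter”, on data I invented for the purpose.

```python
import numpy as np

# "Learning to remember" vs "learning to forget", sketched with curve
# fitting: a high-degree polynomial memorises every noisy point, while a
# low-degree one discards the noise and keeps the underlying trend.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 10)
y = 2.0 * x + rng.normal(scale=0.1, size=x.size)  # a noisy straight line

memoriser = np.polyfit(x, y, deg=9)   # "know how": remembers specifics
abstracter = np.polyfit(x, y, deg=1)  # "know that": filters them out

def train_error(coeffs):
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

# The memoriser scores (near-)zero error on what it has seen, yet its
# curve swings wildly between those points; the abstracter tolerates
# small errors in exchange for the general rule y ≈ 2x.
print("memoriser:", train_error(memoriser))
print("abstracter:", train_error(abstracter))
```

The memoriser wins on the training points precisely because it refuses to forget the noise – the statistical analogue of overfitting, and of why “know how” alone does not yield the general rule.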

This dichotomic view of intelligence also has a neuroscientific root. The Complementary Learning Systems (CLS) theory, first introduced by David Marr in 1971, drew a theoretical framework in which effective learning requires two complementary systems – the hippocampus and the neocortex – accounting for empirical data analysis and the rational perspective respectively. The hippocampus is “fast, episodic, isolated, and heuristic” while the neocortex is “slow, generalising, compositional and structural”. The characteristics, function, representation and composition of the two differ substantially, indicating systems distinct in both mechanism and architecture.

Intelligence is a complex phenomenon, itself a high-dimensional problem to solve. This dichotomic view of intelligence merely offers one angle from which to look at the complexity and provide potential hints, if any. With today’s technology, we have conquered select domains of intelligence, largely sensation, but many more remain mysterious.

"The fact is we’re not even close to understanding human intelligence in all its multi-faceted glory: reasoning, abstraction, generalization, consciousness, dreams, memory, imagination, quantum waves in our brains – there are so many questions that we’ve yet to answer. We should celebrate the progress to date".

Looking forward, we should not give up chasing structural knowledge, which might lead us down an alternative path, given the Dichotomy of Intelligence.

Michael Wei is Director of the Samsung AI research center, based in the US. In this position, he is responsible for technology strategy, key projects and collaboration with universities and startups. Prior to joining Samsung in September 2016, Michael was Director of the Huawei AI lab. He has 15 years of expertise in intelligence technology across positions at Lucent Bell Labs, IBM Watson, A.T. Kearney and Huawei. Michael received his MBA from the University of Texas at Austin and his Master’s in Computer Science from the University of Southern California.
