Semantic Reasoning: The (Almost) Forgotten Half of AI

Ciarán Daly

May 22, 2018

5 Min Read

By Larry Lefkowitz

BOSTON, MA - Since the dawn of Artificial Intelligence more than 60 years ago, the goal of data scientists has been to create systems capable of performing tasks that traditionally required (at least) human intelligence.

To achieve this, such systems would need knowledge about whatever function(s) they were intended to perform. From the start, there were two main approaches to making these systems knowledgeable: explicitly teach them what they need to know, or have them learn from experience. This dichotomy still holds true today.

The latter approach - learning from experience, i.e., machine learning (ML) - has been far more prominent in recent years and has, to some, become synonymous with AI. It has demonstrated great value across a range of classification and prediction tasks, such as categorizing help-desk requests, identifying potentially fraudulent transactions, making product recommendations, determining users’ intents in chatbots, and more.

However, that is not the full picture; AI can, and should, do so much more.

There are many tasks that require explicit reasoning using knowledge about the problem domain and, often, about the world in general (i.e., “common sense”). Consider, for example, assembling a project team, planning a dinner party, coordinating a response to a natural disaster, or, even more mundanely, preparing your kid’s lunch, buying a car, or understanding a newspaper article.

In each case, the task is more complex than selecting from a set of possible options or determining the value of a particular variable - and, critically, it relies on information or knowledge that was not part of the “input data”. Additionally, these tasks require being able to dynamically combine knowledge to answer questions or reach conclusions.

This type of machine reasoning requires that the knowledge be modeled in a way that a machine can efficiently process it, i.e., as an ontology or knowledge base.


Semantic Modeling, Reasoning, and Inference

In contrast to machine learning, which results in a network of weighted links between inputs and outputs (via intermediary layers of nodes), the semantic modeling approach relies on explicit, human-understandable representations of the concepts, relationships and rules that comprise the desired knowledge domain.

There are various levels of fidelity with which this knowledge can be represented, and corresponding levels of expertise and expense associated with modeling it. For simpler representations, such as a “knowledge graph” composed of triples of the form <subject predicate object>, it may be possible to (partially) automate the acquisition or creation of the knowledge. Richer representations, such as formal logic (e.g., predicate calculus), are both more complex and more powerful, and typically require human input in the acquisition/authoring process.
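To make the simpler end of that spectrum concrete, the sketch below (plain Python, with invented facts - not an example from any particular system) shows how a handful of <subject predicate object> triples might be stored and queried. A production knowledge graph would, of course, use a dedicated triple store and a query language such as SPARQL.

```python
# Minimal, hypothetical sketch of a triple-based knowledge graph.
# The facts below are made up purely for illustration.
triples = {
    ("Fido", "isA", "Dog"),
    ("Dog", "isA", "Mammal"),
    ("Mammal", "hasProperty", "warm-blooded"),
}

def objects_of(subject, predicate):
    """Return every object linked to `subject` by `predicate`."""
    return {o for s, p, o in triples if s == subject and p == predicate}

print(objects_of("Fido", "isA"))  # {'Dog'}
```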

Fortunately, one does not need to build such models from scratch; it is often useful to extend existing knowledge models, including domain-specific ontologies (such as the Financial Industry Business Ontology (FIBO) or numerous healthcare ontologies) and broader knowledge bases such as Cyc, SUMO or the DBpedia Ontology.
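As a rough illustration of that reuse, a library such as Python’s rdflib can load an existing OWL/RDF ontology so you can build on it rather than start from nothing. The sketch below is a minimal, hypothetical example - the file name is a placeholder for whatever ontology file you actually obtain from a project such as FIBO or DBpedia.

```python
# Hypothetical sketch: loading and inspecting an existing ontology with rdflib.
# "fibo_subset.owl" is a placeholder file name, not a real distribution.
from rdflib import Graph
from rdflib.namespace import RDF, RDFS, OWL

g = Graph()
g.parse("fibo_subset.owl")  # rdflib infers the serialization format

# List the classes defined in the ontology, with their human-readable labels.
for cls in g.subjects(RDF.type, OWL.Class):
    for label in g.objects(cls, RDFS.label):
        print(cls, "-", label)
```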

The value that derives from explicit, declarative knowledge modeling is that such knowledge can not only be retrieved but also mechanically reasoned with. Moreover, the machine reasoning, or inference, process can dynamically combine the knowledge to answer questions (backward inference) or to draw conclusions (forward inference) in ways that were not necessarily anticipated or algorithmically specified.
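To make the forward/backward distinction concrete, here is a deliberately tiny, hypothetical Python sketch (nothing like a real inference engine, and not drawn from any particular product): forward chaining repeatedly applies rules to derive new facts until nothing changes, while the backward routine starts from a question and works back through rules that could conclude it.

```python
# Toy rule engine, for illustration only. Facts and rules are invented.
facts = {("Fido", "isA", "Dog")}
rules = [
    # (condition, conclusion): if X isA Dog then X isA Mammal, etc.
    (("?x", "isA", "Dog"),    ("?x", "isA", "Mammal")),
    (("?x", "isA", "Mammal"), ("?x", "hasProperty", "warm-blooded")),
]

def forward_chain(facts, rules):
    """Forward inference: derive new facts until no rule adds anything."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (cs, cp, co), (hs, hp, ho) in rules:
            for s, p, o in list(derived):
                if p == cp and o == co:          # condition matches, binding ?x = s
                    new_fact = (s, hp, ho)
                    if new_fact not in derived:
                        derived.add(new_fact)
                        changed = True
    return derived

def prove(goal, facts, rules):
    """Backward inference: is the goal a known fact, or concluded by a provable rule?"""
    if goal in facts:
        return True
    s, p, o = goal
    for (cs, cp, co), (hs, hp, ho) in rules:
        if p == hp and o == ho:                  # this rule could conclude the goal
            if prove((s, cp, co), facts, rules):
                return True
    return False

print(("Fido", "hasProperty", "warm-blooded") in forward_chain(facts, rules))  # True
print(prove(("Fido", "hasProperty", "warm-blooded"), facts, rules))            # True
```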

That is, semantic solutions rely on modeling (aspects of) the world and use human-like reasoning over those knowledge models, rather than relying on procedural algorithms that specify how a task is to be done (i.e., traditional programming) or learned correlations between inputs and outputs (i.e., machine learning).

In addition to being able to address a different set of problems than ML, knowledge-based systems offer far greater transparency into the conclusions they reach than do more opaque ML models. This is important where auditability is required, as well as during system development. It also makes the model easier to maintain, especially when aspects of the world change in ways that would render previous results invalid.

What’s Hard is Easy and What’s Easy is Hard

Why, then, has machine learning, rather than semantic modeling and reasoning, dominated the AI mindshare?

In part, because such solutions are both relatively simple and highly useful, given the availability of inexpensive and scalable computing power, the vast amounts of data available as fodder, and the ubiquity of human-computer interactions (in the broad sense).

However, there is also a perhaps dangerous misconception that since this approach is working for an apparently wide range of problems, it won’t be long before it will address any remaining ones. If I can ask Siri for the start time of a movie, surely it won’t be long before it will engage me in a meaningful discussion about the movie’s plot and artistic merit, right?

Alas, it is surprisingly easy to simultaneously (1) underestimate the amount of knowledge required to achieve human-like behavior and (2) underappreciate the results of systems that accomplish this. In part, this is because the things that humans do trivially and the things at which machines excel tend to be quite different.

Discovering correlations from massive amounts of data is something machines do well. Drawing on a huge and diverse repository of knowledge, knowing what to apply when, and being able to combine this knowledge to solve complex problems is a very different and, arguably, much harder task. Playing a game of Go is, in many ways, much less complicated than walking down to the store to purchase the game.

As valuable as machine learning solutions are proving to be, they are going after the relatively low-hanging fruit. It is far from clear that this correlation-based approach is sufficient for a broader range of problems; a different paradigm is likely to be required.

Fortunately, most serious AI practitioners recognize, and have recognized for a long time, that one size doesn’t fit all and that emulating human intelligence, or at least performing at a human-like (or better) level, will require a more advanced suite of tools and techniques. Diversity within the AI field is a virtue, so don’t be surprised if tomorrow’s solutions are very different from what we’re seeing today.


Larry Lefkowitz, PhD, is Chief Scientist of the AI Practice at Publicis.Sapient
