AI Business is part of the Informa Tech Division of Informa PLC
CAMBRIDGE, MA - Artificial intelligence seems poised to change every aspect of our world, from robotics to recommendations. But one of the most practical problems AI can now solve is understanding the written word - and new advances in 'common sense AI' have taken that ability to unprecedented levels of productivity.
Over the years, many advances have transformed the way computers understand language. In the 1960s, we relied almost entirely on pattern-matching algorithms and keyword spotting. In the 1990s, the advent of rule sets and ontologies gave humans more manual control. And in the 2000s, unsupervised Bayesian machine learning techniques furthered the cause by letting humans encode prior beliefs into models to compensate for sparse data.
Then came deep learning, which finds patterns in input data at a sophistication and scale that hadn't been achieved before. In 2013, Google popularized word embeddings, a technique that applied the power of deep learning to natural language understanding. Large companies with accumulated big data could begin reading and interpreting the otherwise unanalyzable "dark data" locked in unstructured text.
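The core idea behind word embeddings is that each word maps to a vector, and words that appear in similar contexts end up close together in that vector space. The sketch below illustrates this with hand-written toy vectors and a cosine-similarity function; the values are purely illustrative and not taken from any real trained model such as word2vec or GloVe.

```python
import math

# Toy 4-dimensional word embeddings -- hand-picked illustrative values,
# not output from a real trained model.
embeddings = {
    "king":  [0.8, 0.6, 0.1, 0.2],
    "queen": [0.7, 0.7, 0.1, 0.3],
    "apple": [0.1, 0.2, 0.9, 0.8],
}

def cosine_similarity(a, b):
    """Similarity of two vectors: near 1.0 = same direction, near 0.0 = unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Related words score higher than unrelated ones.
print(cosine_similarity(embeddings["king"], embeddings["queen"]))
print(cosine_similarity(embeddings["king"], embeddings["apple"]))
```

In a real system the vectors have hundreds of dimensions and are learned from billions of words of text, but the geometry works the same way: similarity in meaning becomes similarity in direction.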
But deep learning also introduced a new problem: data acquisition. Training a deep learning system can take hundreds of millions of data points before it produces meaningful results. For instance, before outplaying the world's strongest chess engines, Google's AlphaZero played 68 million games against itself. And that was just to learn chess; AlphaZero can't generalize those lessons and apply them to checkers or Monopoly, let alone more modern, complex games. The difficulty is even more pronounced when trying to operationalize AI for business tasks. Global B2C organizations can't simulate a contact center calling itself 68 million times. Smaller data sets, like 5,000 survey responses or last week's trouble tickets, aren't enough to feed a deep learning system.
This flaw exists because these machine learning models must start all over each time they learn anything new. They rely only on the data they’re given, and are otherwise naive about the world outside that input. That’s not how people work. We generalize from known examples, and build analogies to understand complex concepts. We know a giraffe can’t fit through a door. We know you can pack a suitcase or stand on it to reach something. As humans, we have common sense about how the world works. That knowledge matters, and to make real progress in AI, the industry must get machine learning models to incorporate that common sense on top of whatever data they train on.
Teams pursuing ways to add common sense to machine learning models have renewed interest in domain adaptation techniques such as transfer learning, and in knowledge bases such as ConceptNet, with impressive results. They see unsupervised, automatic adaptation of models, even incorporating previously unencountered terminology, working on 1/1,000th of the data. Those results arrive in minutes rather than after months of manually hand-tuning ontologies. And the results are reproducible across several different languages.
The science strongly agrees. At the 2018 conference of the North American Chapter of the Association for Computational Linguistics (NAACL), the 12th annual SemEval event pitted commercial and academic NLP systems head-to-head on semantic tasks such as reading comprehension and lexical semantics, operating on unseen test data. The prior year, only a few participating systems incorporated a knowledge base; one of those common sense systems easily captured first place in its two tasks. This year, 10 of the 21 systems competing in one task used a knowledge base, claiming 8 of the top 10 scores. And, as the task organizers' research paper observes, those systems outperformed others because they exploited information from a knowledge base.
This trend of adding common sense to learning algorithms goes beyond language to other areas of artificial intelligence with data problems. Everything from robotic grasping to certain types of computer vision problems has benefited from adding a knowledge-graph-like structure.
Common sense AI will see further successes as more companies adopt the technology to tackle previously unassailable business problems. Companies using this improved approach to natural language get useful information from unstructured data in minutes, instead of spending months building their own models or using off-the-shelf industry models that miss new terminology and customer lingo. In practice, that means they can ask consumers more open-ended questions in surveys, confident they'll be able to read all the feedback instead of sampling answers for qualitative narratives. Customer experience and product design teams are already changing their approach to incorporating feedback, thanks to this technology. Maybe all that's needed to make these smart machines even smarter is a little common sense.
Dr. Catherine Havasi, a co-founder of the Common Sense Computing Initiative, is also the Chief Strategy Officer and co-founder of Luminoso Technologies.