AI Business is part of the Informa Tech Division of Informa PLC
By Jelani Harper
Aside from machine learning, natural language—in all of its manifestations, including Natural Language Processing (NLP), Natural Language Understanding (NLU), Natural Language Interaction (NLI) and Natural Language Generation (NLG)—is likely the most visible form of artificial intelligence in existence.
Traditionally, NLP has played an influential role in facilitating text analytics, and demand for that application won’t abate any time soon. Use cases for text analytics will continue to rise in the coming year, but the entire natural language suite of technologies is also making significant contributions to progressive speech-to-text and semantic search use cases.
Perhaps the greatest consequence of these trends is that in most instances, natural language’s aptitude is considerably enriched by the application of machine learning. The tandem of these technologies will not only continue to inform enterprise processes in 2019 but, more importantly, make AI more accessible and acceptable to a host of lay users not necessarily cognizant of their impact.
“Overall, I think that people understand that machine learning and natural language are the foundation to any AI system, just [in] the ability to communicate with us in a human way and to automate that learning process,” SAS Artificial Intelligence and Language Analytics Strategist Mary Beth Moore notes. “What you build on top of that, whether it’s predictive, prescriptive analytics, forecasting, optimization, wherever you want to go, that foundation always comes back to these technologies that have been around for decades.”
This partnership allows natural language to supplement machine learning and, in turn, machine learning to better natural language for more relatable AI directly affecting an array of business objectives.
Natural language’s role in AI is largely based on the following functionality of its various technologies:
Machine learning substantially aids natural language with applications of both supervised and unsupervised learning, especially in text analytics. Once NLP understands the terms in a document and their parts of speech, unsupervised learning can determine mathematical relationships between them.
For example, the deployment of Boolean operators and writing Boolean rules is a form of supervised learning, Moore mentions, and is an effective means of implementing rules for text analytics. Those rules are instrumental in creating the models on which text analytics is performed. “Unsupervised learning is when you have to ingest all of these documents and you’ve not given the machine any direction,” Moore says. “On the supervised piece what you’re saying is I want to give you some direction. That’s where the rules come into play. Supervised is more of what I call the intentional output.”
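The "intentional output" Moore describes can be sketched in a few lines: hand-written Boolean rules steer text analytics toward specific categories, in contrast to turning a machine loose on documents with no direction. The rule format, categories, and sample text below are hypothetical illustrations, not any vendor's actual rule syntax.

```python
# A minimal rule-based classifier in the "supervised" direction Moore
# describes: each rule pairs terms a document MUST contain (AND) with
# terms it must NOT contain (NOT). All rules and labels are made up.

def matches(rule, tokens):
    """A rule is (must_have, must_not_have): two sets of terms."""
    must, must_not = rule
    return must <= tokens and not (must_not & tokens)

RULES = {
    "billing_complaint": ({"bill", "charge"}, {"thanks"}),
    "praise":            ({"great"}, set()),
}

def classify(document):
    tokens = set(document.lower().split())
    return [label for label, rule in RULES.items() if matches(rule, tokens)]

print(classify("The charge on my bill is wrong"))  # ['billing_complaint']
```

A real text analytics platform layers such rules on top of parsing and part-of-speech tagging, but the principle is the same: the analyst supplies the direction, and the rules produce the intentional output.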
Partially because deep learning’s predominance has been in image recognition systems, Moore stated it was still “emerging” in text analytics. Still, its assistance to natural language is as considerable as it is multifaceted. Recurrent Neural Networks are gaining traction in certain text analytics platforms for document classification and entity tagging. “If you have a sentiment variable that had not just positive, negative and neutral but love, hate, anger and what have you, that’s basically a predictive outcome that techniques like Recurrent Neural Networks can leverage to give you a very accurate classification for, using the results of parsing,” SAS Product Manager for Advanced Analytics and Artificial Intelligence Simran Bagga said.
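Bagga’s point can be illustrated with a toy forward pass: a recurrent cell consumes a token sequence (the results of parsing) and emits a probability over fine-grained sentiment classes rather than just positive/negative/neutral. Everything here is a hypothetical stand-in — the hash-style embedding, the random weights, and the class set; a production system would learn these parameters from labeled documents.

```python
# Toy recurrent classifier over a token sequence, emitting probabilities
# for fine-grained sentiment classes. Weights are random placeholders;
# this only illustrates the shape of the computation, not a trained model.
import math
import random

CLASSES = ["love", "hate", "anger", "neutral"]
HIDDEN = 8

random.seed(0)  # deterministic placeholder weights

def rand_matrix(rows, cols):
    return [[random.uniform(-0.5, 0.5) for _ in range(cols)] for _ in range(rows)]

W_in = rand_matrix(HIDDEN, HIDDEN)        # input -> hidden
W_rec = rand_matrix(HIDDEN, HIDDEN)       # hidden -> hidden (the recurrence)
W_out = rand_matrix(len(CLASSES), HIDDEN) # hidden -> class logits

def embed(token):
    # Crude deterministic embedding; a real model learns this.
    v = [0.0] * HIDDEN
    for i, ch in enumerate(token):
        v[i % HIDDEN] += (ord(ch) % 31) / 31.0
    return v

def step(h, x):
    # One recurrent step: h' = tanh(W_in @ x + W_rec @ h)
    return [math.tanh(sum(W_in[i][j] * x[j] + W_rec[i][j] * h[j]
                          for j in range(HIDDEN)))
            for i in range(HIDDEN)]

def classify_sentiment(tokens):
    h = [0.0] * HIDDEN
    for t in tokens:
        h = step(h, embed(t))
    logits = [sum(W_out[k][j] * h[j] for j in range(HIDDEN))
              for k in range(len(CLASSES))]
    exps = [math.exp(z) for z in logits]  # softmax over the classes
    total = sum(exps)
    return dict(zip(CLASSES, (e / total for e in exps)))

print(classify_sentiment(["this", "meal", "was", "wonderful"]))
```

With random weights the predicted class is meaningless; the sketch only shows how a recurrence carries context across the sequence before a final classification layer scores each sentiment label.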
According to Moore, when used in conjunction with natural language, deep learning provides higher accuracy levels than machine learning does for sentiment classification and document classification. Deep learning is also being used more for the summaries produced by Natural Language Generation, for which it “provides more singular input and output,” Moore observes. “It’s not necessarily just pulling out the themes, but [for example] if you were to take a picture of your meal and then a machine automates a description of that picture. So, it’s really looking at that sequence of language, that generation output.”
Deep learning is a foundational technology for conversational speech recognition systems. Recently, it has begun to imbue chatbots with much more sophisticated intelligence than they conventionally had with their simplistic, template-based approaches. Although most chatbots still rely on natural language for text analytics (as opposed to formal speech recognition), the goal is for deep learning deployments to “enhance the experience of chatbots, making it seem more realistic: making it not so frustrating,” Moore comments.
The template approach to chatbots is gaining credence in business intelligence as a means of providing speech-to-text user interfaces for accessing reporting information. Despite the fact that this application of NLI relies on templates, the processing required is advanced, with “audio recognition, and then you’re converting the audio into text, which behind the scenes is going back to ones and zeros,” Moore reveals. “And then, going to find that answer, that kind of call and response, and take it from ones and zeros back to text and maybe the automated voice back to you.” The large amount of math in this procedure necessitates significant customization for speech-to-text systems. However, they’re an integral means of making BI more interactive and responsive to the needs of the business.
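The call-and-response Moore describes can be sketched as a template lookup: recognized speech arrives as text, a template maps the question to a canned query over report data, and the answer is rendered back as text (a voice UI would then synthesize it to audio). The report figures and template patterns below are invented for illustration.

```python
# Template-driven call-and-response for a BI speech interface sketch.
# Upstream speech recognition is assumed to have already produced text.
import re

# Hypothetical report data the templates query against.
SALES = {"north": 120_000, "south": 95_000}

# Each template pairs a question pattern with a response renderer.
TEMPLATES = [
    (re.compile(r"sales (?:for|in) the (\w+) region"),
     lambda m: f"Sales for the {m.group(1)} region were ${SALES.get(m.group(1), 0):,}."),
]

def answer(utterance):
    text = utterance.lower()
    for pattern, render in TEMPLATES:
        m = pattern.search(text)
        if m:
            return render(m)
    return "Sorry, I don't have a report template for that question."

print(answer("What were sales in the north region?"))
# -> Sales for the north region were $120,000.
```

The templates are rigid, which is exactly the limitation Moore notes: the heavy lifting (and the customization cost) sits in the speech-to-text conversion on either side of this simple lookup.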
The demand for semantic search is another trend projected to impact natural language and machine learning in the coming year.
The need to readily sift through an organization’s collection of documents for specific terms, concepts, and business requirements is critical, particularly in the context of increasing regulatory measures. According to Bagga, more organizations “want to be able to search; they want this intelligence coming from NLP and machine learning into a search based framework, to not only inject back into operations but they want to build intelligent search, or semantic search applications.”
Semantic search involves both NLP and Natural Language Understanding, and requires a granular comprehension of the core ideas contained within text. It’s also aided by certain facets of machine learning, specifically the sort of rules Moore mentioned that are associated with supervised applications. “It’s a direction—a trend—that’s always been there but all of a sudden now it’s jumping forward because you can see how semantic search makes your data accessible to more business users,” Moore explains.
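A minimal way to see the difference from literal keyword search is concept expansion: query terms are broadened through a curated concept map — the kind of hand-built rules Moore associates with supervised applications — so a search for one term surfaces documents phrased with its related terms. The concept map and documents below are illustrative only.

```python
# Sketch of semantic search via concept expansion. A query term is
# expanded to the synonym set of its concept before matching documents.
# All concepts and documents here are hypothetical examples.

CONCEPTS = {
    "revenue": {"revenue", "sales", "income"},
    "risk":    {"risk", "exposure", "liability"},
}

DOCS = {
    "q3_report": "Quarterly sales rose despite soft demand",
    "audit":     "The audit flagged new liability exposure",
    "memo":      "Office relocation planned for spring",
}

def expand(term):
    for synonyms in CONCEPTS.values():
        if term in synonyms:
            return synonyms
    return {term}  # no known concept: fall back to the literal term

def semantic_search(query):
    terms = set()
    for word in query.lower().split():
        terms |= expand(word)
    return sorted(name for name, text in DOCS.items()
                  if terms & set(text.lower().split()))

print(semantic_search("revenue"))  # ['q3_report']
```

A literal search for "revenue" would return nothing here; the expansion finds the sales document because the rule encodes the underlying concept, which is the accessibility gain for business users that Moore describes.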
Text analytics will likely remain the most widespread use case for natural language in 2019. However, these technologies will also become more prevalent in use cases involving speech-to-text, intelligent chatbots, and semantic search. Abetted by applications of deep learning and of supervised and unsupervised machine learning, the multitude of natural language technologies will continue to sculpt the communication capacity of cognitive computing.
Jelani Harper is an editorial consultant serving the information technology market, specializing in data-driven applications focused on semantic technologies, data governance and analytics.