2019 Trends in Natural Language Processing


Ciarán Daly

October 16, 2018


By Jelani Harper

Aside from machine learning, natural language—in all of its manifestations, including Natural Language Processing (NLP), Natural Language Understanding (NLU), Natural Language Interaction (NLI) and Natural Language Generation (NLG)—is likely the most visible form of artificial intelligence in existence.

Traditionally, NLP has played an influential role in facilitating text analytics, for which demand won’t abate any time soon. Although use cases for this application will continue to rise in the coming year, the entire natural language suite of technologies is also making significant contributions to progressive speech-to-text and semantic search use cases.

Perhaps the greatest consequence of these trends is that in most instances, natural language’s aptitude is considerably enriched by the application of machine learning. The tandem of these technologies will not only continue to inform enterprise processes in 2019, but more importantly, make AI more accessible and acceptable to a host of lay users not necessarily cognizant of their impact.

“Overall, I think that people understand that machine learning and natural language are the foundation to any AI system, just [in] the ability to communicate with us in a human way and to automate that learning process,” SAS Artificial Intelligence and Language Analytics Strategist Mary Beth Moore notes. “What you build on top of that, whether it’s predictive, prescriptive analytics, forecasting, optimization, wherever you want to go, that foundation always comes back to these technologies that have been around for decades.”

This partnership allows natural language to supplement machine learning and, in turn, machine learning to improve natural language, yielding more relatable AI that directly affects an array of business objectives.

Breakdown of natural language technologies

Natural language’s role in AI is largely based on the following functionality of its various technologies:

  • Natural Language Processing: Considered the umbrella term for the range of natural language technologies, NLP is leveraged within almost every text analytics solution. It’s the cognitive computing component focused on linguistics and language’s classification. “That’s really [for] the semantic structure of the language,” Moore remarked. “What are the nouns, what are the verbs, what is the vocabulary, how does [the] sentence structure fit together with your adverbs and pronouns? What are the different stems?”

  • Natural Language Understanding: NLU is largely regarded as a subset of NLP focused on the actual meaning of words, which might be at odds with how they’re semantically structured. It provides an understanding of how terms are used in context for situations involving sarcasm, irony, sentiment, humor, colloquialisms, and others. According to Moore, in most text analytics platforms relying on NLP, “usually NLU is a part of it because most people are now not looking to do sentiment or contextual analysis, or contextual extractions, separately. So, they’re certainly combined.”

  • Natural Language Generation: NLG is the converse of NLP in that it’s not an analysis of the semantic meaning of language, but the production or generation of it (usually in either text or speech). However, it also has a core element of summarization so that “generation can say I looked at these 100 documents, here’s a summary of the information,” Moore indicated.

  • Natural Language Interaction: NLI is somewhat of a conflation of these technologies—although it doesn’t have to involve NLU—in which users communicate with and evoke responses from systems via natural language. “It’s your ability to give a command by either typing it or speaking it,” Moore denoted. “That’s the interaction piece. Then it’s going to be able to actually generate a response. It will either be an automated voice back or a typed response.” Examples of the former include digital agents like Alexa or Siri, which produce these responses via NLG.
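The parsing step Moore describes for NLP—identifying nouns, verbs, and sentence structure—can be sketched in miniature as a part-of-speech lookup. This is only an illustration: `POS_LEXICON` and `pos_tag` are invented names, and production systems (such as SAS’s text analytics or open-source NLP libraries) use trained statistical models rather than a hand-built dictionary.

```python
# Minimal sketch of NLP's parsing step: tag each token with a part of
# speech from a tiny hand-built lexicon. Real systems use trained models.
POS_LEXICON = {
    "the": "DET", "a": "DET",
    "model": "NOUN", "documents": "NOUN", "summary": "NOUN",
    "reads": "VERB", "generates": "VERB",
    "quickly": "ADV",
}

def pos_tag(sentence: str) -> list:
    """Tokenize on whitespace and look up each token's part of speech."""
    return [(tok, POS_LEXICON.get(tok.lower(), "UNK"))
            for tok in sentence.split()]

tags = pos_tag("The model reads documents quickly")
```

Tokens absent from the lexicon fall back to an `UNK` tag, mirroring how real taggers must handle out-of-vocabulary words.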

Unsupervised and Supervised Learning

Machine learning substantially aids natural language with applications of both supervised and unsupervised learning, especially in text analytics. Once NLP understands the terms in a document and their parts of speech, unsupervised learning can determine mathematical relationships between them.

In this instance, unsupervised learning doesn’t necessarily understand the terms or what they mean, but “it’s your first kind of look to say these things are heavily correlated in your corpus of documents,” Moore says. Supervised learning is then based on the results of unsupervised learning’s relationship determinations: it enables organizations to fine-tune those results with business rules that address the complexity of the findings.
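The unsupervised “first look” Moore describes can be sketched as simple co-occurrence counting: with no labels or direction, counting which term pairs appear in the same documents surfaces the heavily correlated terms in a corpus. The corpus below is invented for illustration.

```python
from collections import Counter
from itertools import combinations

# Unsupervised sketch: count how often term pairs co-occur in the same
# document. No labels are given; the counts alone surface which terms
# are "heavily correlated" across the corpus.
corpus = [
    "invoice payment overdue",
    "invoice payment received",
    "shipping delay warehouse",
    "shipping delay invoice",
]

pair_counts = Counter()
for doc in corpus:
    terms = sorted(set(doc.split()))          # unique, ordered terms
    pair_counts.update(combinations(terms, 2))  # every pair in this doc
```

Here the pairs ("invoice", "payment") and ("delay", "shipping") each co-occur twice—exactly the kind of mathematical relationship a human would then inspect and refine with rules.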

For example, the deployment of Boolean operators and writing Boolean rules is a form of supervised learning, Moore mentions, and is an effective means of implementing rules for text analytics. Those rules are instrumental in creating the models on which text analytics is performed. “Unsupervised learning is when you have to ingest all of these documents and you’ve not given the machine any direction,” Moore says. “On the supervised piece what you’re saying is I want to give you some direction. That’s where the rules come into play. Supervised is more of what I call the intentional output.”
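The Boolean rules Moore mentions can be sketched as explicit match conditions that an analyst writes by hand—the “direction” given to the machine. The rule shape here (all required terms present, none of the excluded terms) is a deliberate simplification of real Boolean rule syntax, and the categories and terms are invented.

```python
# Supervised-style Boolean rules: analysts write explicit conditions,
# and a document matching a rule receives that rule's category.
RULES = [
    ("complaint", {"all": {"refund", "broken"}, "none": {"resolved"}}),
    ("praise",    {"all": {"great"}, "none": set()}),
]

def classify(text: str) -> list:
    """Return every rule label whose Boolean conditions the text satisfies."""
    tokens = set(text.lower().split())
    return [label for label, rule in RULES
            if rule["all"] <= tokens and not (rule["none"] & tokens)]
```

A document mentioning both “broken” and “refund” (but not “resolved”) is tagged as a complaint—the intentional output Moore contrasts with undirected learning.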

Deep Learning

Partially because of the predominance of deep learning in image recognition systems, Moore stated it was “emerging” in text analytics. Still, its assistance to natural language is as considerable as it is multifaceted. Recurrent Neural Networks are gaining traction in certain text analytics platforms for document classification and entity tagging. “If you have a sentiment variable that had not just positive, negative and neutral but love, hate, anger and what have you, that’s basically a predictive outcome that techniques like Recurrent Neural Networks can leverage to give you a very accurate classification for, using the results of parsing,” SAS Product Manager for Advanced Analytics and Artificial Intelligence Simran Bagga said.

According to Moore, when used in conjunction with natural language, deep learning provides higher accuracy levels than machine learning does for sentiment classification and document classification. Deep learning is also being used more for the summaries produced by Natural Language Generation, for which it “provides more singular input and output,” Moore observes. “It’s not necessarily just pulling out the themes, but [for example] if you were to take a picture of your meal and then a machine automates a description of that picture. So, it’s really looking at that sequence of language, that generation output.”
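The fine-grained sentiment Bagga describes—love, hate, and anger rather than just positive, negative, and neutral—is a multi-class prediction. A recurrent network learns such classes from word sequences, which is beyond a short sketch; the stand-in below only shows the multi-class output shape by scoring hand-picked cue words and taking the highest-scoring class. Every cue word and weight is invented.

```python
# Stand-in for multi-class sentiment: score invented cue words per class
# and return the argmax. A real RNN learns these signals from sequences.
CUES = {
    "love":  {"adore": 2.0, "love": 2.0, "wonderful": 1.0},
    "hate":  {"hate": 2.0, "awful": 1.5, "terrible": 1.5},
    "anger": {"furious": 2.0, "outraged": 2.0, "angry": 1.5},
}

def sentiment(text: str) -> str:
    """Return the class whose cue words best match the text."""
    tokens = text.lower().split()
    scores = {label: sum(cues.get(t, 0.0) for t in tokens)
              for label, cues in CUES.items()}
    return max(scores, key=scores.get)
```

Unlike this lookup, a sequence model also uses word order—which is why, per Moore, deep learning wins on accuracy for sentiment and document classification.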

Speech to Text

Deep learning is a foundational technology for conversational, speech recognition systems. Recently, it has begun to imbue chatbots with much more sophisticated intelligence than the simplistic, template-based approaches they conventionally relied on. Although most chatbots still rely on natural language for text analytics (as opposed to formal speech recognition), the goal is for deep learning deployments to “enhance the experience of chatbots, making it seem more realistic: making it not so frustrating,” Moore comments.

The template approach towards chatbots is gaining credence in business intelligence as a means of speech-to-text user interfaces for accessing information for reporting. Despite the fact that this application of NLI relies on templates, the processing required is advanced with “audio recognition, and then you’re converting the audio into text, which behind the scenes is going back to ones and zeros,” Moore reveals. “And then, going to find that answer, that kind of call and response, and take it from ones and zeros back to text and maybe the automated voice back to you.” The large amounts of math in this procedure necessitate significant customization for speech-to-text systems. However, they’re an integral means of making BI more interactive and responsive to the needs of the business.
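The call-and-response template approach described above can be sketched as pattern matching over a transcribed command: the text is matched against fixed templates, the answer is looked up, and a text response is generated (and could then be voiced back). The metric names, figures, and template wording are all invented for illustration.

```python
import re

# Template-based NLI for BI reporting: match a typed or transcribed
# command against fixed templates, look up the answer, generate a reply.
METRICS = {"revenue": "4.2M", "churn": "3.1%"}  # invented figures

TEMPLATES = [
    (re.compile(r"what (?:was|is) (?:the )?(\w+) last quarter"),
     "Last quarter's {metric} was {value}."),
]

def respond(command: str) -> str:
    """Return a generated response for the first matching template."""
    for pattern, reply in TEMPLATES:
        m = pattern.search(command.lower())
        if m and m.group(1) in METRICS:
            return reply.format(metric=m.group(1), value=METRICS[m.group(1)])
    return "Sorry, I don't have a template for that question."
```

The rigidity is the point: anything outside the template falls through to a fallback, which is exactly the frustration deep learning approaches aim to remove.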

Semantic Search

The demand for semantic search is another trend projected to impact natural language and machine learning in the coming year.

The need to readily sift through an organization’s collection of documents for specific terms, concepts, and business requirements is critical, particularly in the context of increasing regulatory measures. According to Bagga, more organizations “want to be able to search; they want this intelligence coming from NLP and machine learning into a search based framework, to not only inject back into operations but they want to build intelligent search, or semantic search applications.”

Semantic search involves both NLP and Natural Language Understanding, and requires a granular comprehension of the core ideas contained within text. It’s also aided by certain facets of machine learning, specifically the sort of rules Moore mentioned that are associated with supervised applications. “It’s a direction—a trend—that’s always been there but all of a sudden now it’s jumping forward because you can see how semantic search makes your data accessible to more business users,” Moore explains.
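The ranking backbone of such a search application can be sketched with TF-IDF scoring: documents are ranked by how distinctive the query terms are within the corpus. True semantic search layers NLU—synonyms, context, concepts—on top of ranking like this; the document collection below is invented.

```python
import math
from collections import Counter

# TF-IDF ranking sketch: score each document by term frequency times
# inverse document frequency for the query terms. Semantic search adds
# NLU (synonyms, context) on top of a ranking core like this.
docs = [
    "quarterly compliance report for the audit committee",
    "audit findings on data governance policy",
    "marketing plan for the new product launch",
]

def tfidf_scores(query: str) -> list:
    """Return one relevance score per document for the given query."""
    n = len(docs)
    tokenized = [d.split() for d in docs]
    df = Counter(t for toks in tokenized for t in set(toks))  # doc frequency
    scores = []
    for toks in tokenized:
        tf = Counter(toks)
        scores.append(sum(tf[q] * math.log(n / df[q])
                          for q in query.split() if q in df))
    return scores

scores = tfidf_scores("audit governance")
best = max(range(len(docs)), key=scores.__getitem__)
```

For the query “audit governance,” the second document wins: “audit” appears in two documents (low distinctiveness) while “governance” appears in only one, so the document containing both ranks highest.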

Cognitive Communication

Text analytics will likely remain the most widespread use case for natural language in 2019. However, these technologies will also become more prevalent in use cases involving speech-to-text, intelligent chatbots, and semantic search. Abetted by applications of deep learning, unsupervised and supervised machine learning, the multitude of natural language technologies will continue to sculpt the communication capacity of cognitive computing.

Jelani Harper is an editorial consultant servicing the information technology market, specializing in data-driven applications focused on semantic technologies, data governance and analytics.

