‘When I joined [Google] 15 years ago, nobody in the research group really talked about AI’
The field of artificial intelligence has changed drastically since the mid-2000s.
When Ryan McDonald joined Google in 2006, the world had only recently been introduced to Honda’s ASIMO robot, and AWS had just launched its cloud services. When he left the search engine giant in January 2021, AI systems were integrated into nearly every facet of the corporation’s work.
The former Google research scientist departed to take the chief scientist role at ASAPP – an AI and research-driven customer experience company.
In a recent interview with AI Business, he noted how the AI world has changed since he joined Google all those years ago, and discussed the impact large language models will have on businesses.
Your call is valuable to us, please hold
The truth is, customers dread ringing up a company for whatever reason – be it to rearrange flight times, book cinema tickets, or query an unexpected bill. That dread stems from irritating automated phone lines, poor service from human operators, an endless loop of hold music – or sometimes all three.
McDonald and the ASAPP team are trying to use AI to remove that dread and make customer experiences more bearable.
He noted that the industry has high employee attrition – at times as high as 40 percent. The monotonous, repetitive tasks that call center agents perform every day wear on them, often leading to low job satisfaction. Meanwhile, some estimate that bad customer experiences cost American companies around $1.6trn just from people switching services, McDonald said.
Pointing to a “big opportunity,” ASAPP’s chief scientist explained that companies are currently spending a lot of money on outdated solutions – “a lot of the legacy tools companies opt for focus on preventing people talking to agents; that’s containment, AI to keep people away from people,” he said.
“It's a great industry to test our main hypothesis: that people need to be at the center of AI, and to obtain a lot of willing partners who are eager to test this.”
“One of ASAPP's core tenets is that data is king. One thing I've seen in my career is that it is very easy to get swept up in advancements in modeling and machine learning and natural language processing, but the big impact comes from working backward from the data and the problems our customers have.”
The average Fortune 500 company still spends 80 percent or more of its contact center budget on voice calls, McDonald said, adding that this costly reliance worsens customer experiences and outcomes.
“For these big brands to stay competitive, especially right now, companies need to focus on building services that know how and when to talk to customers in the right way.”
Noting the changes brought on by the pandemic, McDonald said that agents’ lives have changed alongside those of their customers.
“They're sitting at home and, because of this, their opportunities for support are limited. We see this as a big area where AI can make a dent – in particular, looking at the attrition of agents, can we use AI to build better tools so they can develop their skills over time?
“Also building better tools to lower the mental fatigue of those agents and allow them to avoid mundane challenges and increase job satisfaction.”
When asked about his time as part of Google’s research team, McDonald said he witnessed the changes caused by AI firsthand.
“When I joined 15 years ago, nobody in the research group really talked about AI,” he joked, adding that at the time, information retrieval and data mining were the company's focus.
That focus would later shift to speech, NLP, and machine learning, but one constant throughout his 15 years at Google was its healthy opinion of the value of original research.
“A big thing that attracted me to ASAPP was that they shared that opinion,” he said. “I was excited about that narrow focus and the amount of effort that they were putting in to solve this problem.”
Looking at LaMDA
In mid-May, Google unveiled LaMDA – a new language model trained on dialogue. At the time, McDonald said that the company’s model may cause some issues in domain-specific settings.
“Getting to the point where LaMDA could be quite powerful (in potentially future enterprise offerings) will require many of the utterances to be based not on general domain knowledge, but specific to a user, offering, etc. It will require conditioning on more than just the conversation itself,” he told Voicebot.
When quizzed on LaMDA several weeks later, he told AI Business that his thoughts were not particularly controversial, “as it's no secret that these models are trained on general data.”
“If they have knowledge bases, they're usually things like Wikipedia with general knowledge background information. And in domain-specific settings, these models need to be adjusted accordingly.
“If we took something like LaMDA and just applied it out of the box to customer care, it's not going to work that well – it might seem to be fluent but it's certainly not going to help agents or customers solve their problems. They have to be adapted for the domain.”
A bigger issue for McDonald is what happens outside of the conversation: “It's not just about being a good conversationalist; AI needs to do more than that,” he said.
“The agents have several tools at their disposal; they're absorbing information from these systems, which guide them on next actions and on how to communicate those actions to the end customer. Any off-the-shelf language model is not going to be able to take advantage of that.”
He suggested that further research would be needed to take such large models and couple them holistically in task-oriented dialogue with all the tools that are available to agents – something to look out for going forward.
When asked about the next few years in AI, McDonald once again pointed to large language models – which he believes will make a huge impact on business.
“In the enterprise space, these huge models and the resources they consume have limitations. Training models and deploying them at cost-effective rates for our customers is a big challenge.
“We've done a lot of work recently in that space and we plan a lot more in the future. We built some architectures; one is called SRU++ and gets similar gains to large language models that have billions of parameters, but is up to five times more efficient.”
His team at ASAPP is currently working on few-shot learning – taking large language models and trying to adapt them to new domains and problems with as little data as possible.
And a final area he pointed to was task-oriented dialogue – conversations between two people that are specifically trying to solve a problem or task.
“This is a key area for us,” he said, adding that ASAPP is looking at collecting the right data resources “so that we can have good reliable metrics for measuring progress.”