Vodafone AI Expert Highlights Key Factors for Effective Business Chatbots

Alex Choi weighed key considerations like data quality, personalization, costs and avoiding robotic responses at AI Summit London

Ben Wodecki, Jr. Editor

June 13, 2024

3 Min Read

In the wake of ChatGPT, businesses are looking to augment their chatbots with large language models. Alex Choi, an AI chatbot specialist at Vodafone, weighed the pros and cons of using AI models in customer-facing chatbots.

Speaking at AI Summit London, Choi questioned the necessity of language models in chatbots, warning that some businesses may not need one at all.

For businesses looking to take advantage of language models, however, Choi highlighted considerations such as data quality, personalization and cost-effectiveness.

“Do you really need the latest GPT-4? Probably not,” he said, suggesting that not every business needs a large-scale language model.

“You don’t really need the latest and greatest model if all your customers are asking is how do they reset their password.”

One route might be employing an open source model and hosting it internally, though he warned that could pose security concerns. 

The Vodafone chatbot specialist said businesses should instead consider employing retrieval-augmented generation (RAG), which enables an AI model to pull in information from connected sources when answering a question.

Choi said RAG could help make chatbots smarter.  

“Companies like ours have loads of help and support articles,” he said. “If we put those all into a vector database, then when a user asks a question we can pull all the relevant bits of information from the database, sending that context along with the user’s question and our system prompts. We can use a large language model to generate a custom response.”
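Choi’s description maps onto a fairly standard RAG pipeline. The sketch below shows that flow in broad strokes, assuming the OpenAI Python SDK for embeddings and chat completions; the help articles, model names and prompt wording are illustrative, not Vodafone’s actual stack.

# Minimal RAG sketch along the lines Choi describes: embed help articles,
# retrieve the ones most relevant to a user question, and send them to a
# language model as context. Assumes the OpenAI Python SDK (>=1.0) and an
# API key in OPENAI_API_KEY; articles and model names are illustrative.
import numpy as np
from openai import OpenAI

client = OpenAI()

HELP_ARTICLES = [
    "To reset your password, go to Settings > Account > Reset password.",
    "You can check your data allowance in the My Vodafone app under Usage.",
    "Roaming is switched on by default for Pay Monthly plans in the EU.",
]

def embed(texts):
    """Return one embedding vector per input text."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in resp.data])

# Index the support articles once (a production system would use a vector database).
article_vectors = embed(HELP_ARTICLES)

def answer(question, top_k=2):
    # Retrieve the most similar articles by cosine similarity.
    q = embed([question])[0]
    scores = article_vectors @ q / (
        np.linalg.norm(article_vectors, axis=1) * np.linalg.norm(q)
    )
    context = "\n".join(HELP_ARTICLES[i] for i in np.argsort(scores)[::-1][:top_k])

    # Send the retrieved context along with the user's question and a system prompt.
    chat = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Answer using only the provided help articles."},
            {"role": "user", "content": f"Help articles:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return chat.choices[0].message.content

print(answer("How do I reset my password?"))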


Choi said that by using methods like RAG, language models can personalize the information in company FAQs and tailor it to a particular customer’s use case.

“It's a great way of how we can make chatbots a much more fluid conversational experience for customers,” he said.

One piece of advice he offered was that companies should limit the amount of text a user can input into a chatbot. Choi explained that some users may try to break a bot by pasting thousands of words into it.

Companies should instead limit inputs to just a few sentences, he said, as that not only stops a chatbot from breaking but also reduces compute costs, since the model doesn’t have to process as much text.
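A guardrail like that can be enforced before a message ever reaches the model. The snippet below is a minimal sketch of such a check; the 500-character cap is an assumed figure, not one Choi gave.

# Illustrative input guardrail: cap how much text a user can send in one turn,
# which both protects the bot and limits the tokens you pay to process.
MAX_CHARS = 500  # roughly a few sentences; tune to your own traffic

def sanitize_user_message(message: str) -> str:
    message = message.strip()
    if not message:
        raise ValueError("Empty message")
    # Either reject or truncate overlong input; truncating keeps the chat going.
    return message[:MAX_CHARS]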

Another best practice he highlighted was ensuring the quality of the data being fed into the model, including FAQs. Choi said if you put rubbish into a chatbot, you’ll get rubbish out.

He also encouraged rigorous testing to make sure bots cannot be broken.

“I guarantee you, if you launch a generative AI chatbot today, someone out there will be trying to break it,” he said.


“Make sure you don't leave any test case uncovered,” he said. “You also want to regularly test your chatbots, ideally in an automated way, because sometimes models can change a bit, and as you’re updating over and over again, performance can change.”
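One way to make that kind of regular, automated testing concrete is a small regression suite that replays known questions and checks the answers still contain the key facts. The pytest sketch below assumes a hypothetical chatbot module exposing an answer() function, in the spirit of the RAG sketch above; the test cases are illustrative.

# Illustrative automated regression tests: replay known questions against the
# chatbot and assert the answers still contain the expected information.
import pytest

from chatbot import answer  # hypothetical module exposing the bot's entry point

TEST_CASES = [
    ("How do I reset my password?", "Settings"),
    ("Where can I see my data allowance?", "My Vodafone"),
]

@pytest.mark.parametrize("question,must_contain", TEST_CASES)
def test_answer_contains_expected_info(question, must_contain):
    reply = answer(question)
    assert must_contain.lower() in reply.lower()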

On costs, Choi said there is no one set formula; at Vodafone, the telco found that costs varied when moving from proof of concept to deployment.

Finally, he emphasized the importance of ensuring that customer-facing chatbots communicate in a manner consistent with the brand's identity to avoid sounding "robotic."

For example, Vodafone’s chatbot on its Voxi platform, which is tailored toward younger mobile users, employs a more “edgy” response style.

“A lot of large language model chatbots that I've seen out there simply sound like ChatGPT,” he said. “The last thing we want is for the chatbots to sound like a restricted ChatGPT.”
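In practice, that brand voice usually lives in the chatbot’s system prompt. The snippet below is a hypothetical illustration of a tone-setting prompt; the wording is invented for this example and is not Vodafone’s or Voxi’s actual prompt.

# Hypothetical tone-setting system prompt for a younger-skewing brand.
BRAND_VOICE_PROMPT = (
    "You are the support assistant for a youth-focused mobile brand. Keep answers "
    "short, casual and friendly, use everyday language, and avoid sounding like a "
    "generic AI assistant. Only answer questions about the customer's plan and account."
)

messages = [
    {"role": "system", "content": BRAND_VOICE_PROMPT},
    {"role": "user", "content": "My data's gone, what gives?"},
]
# `messages` would then be passed to the chat-completion call shown earlier.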

Read more about:

AI Summit London 2024

About the Author(s)

Ben Wodecki

Jr. Editor

Ben Wodecki is the Jr. Editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to Jr. Editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others. He holds an MSc in Digital Journalism from Middlesex University.
