Choosing the right UIs for your healthcare chatbot

From choice of development platforms to examples of code

by Joe Tuan, TopFlight Apps 14 February 2020

A user interface is the meeting point between a person and a computer; the point where a user interacts with a design. Depending on the type of chatbot, developers can use graphical user interfaces (GUIs), voice-controlled interfaces, or gesture-based interfaces, each of which relies on different machine learning models, from speech recognition to motion tracking, to interpret human input and generate appropriate responses.

Chatbots are revolutionizing social interactions on a large scale, with business owners, media companies, automotive firms, and customer service teams employing these AI applications to communicate efficiently with their clients. But humans rate their interactions not only by the outcome, but also by how simple and comfortable the process is; similarly, a conversation between human and machine is judged by how easily the interaction flows, and this is where a good user interface (UI) comes in.

Choosing a platform

Common platforms for developing chatbot UIs include Alexa API, Facebook Messenger, Skype, Slack, Google Assistant, and Telegram.

These platforms offer varying elements that developers can use to create the best UI for their chatbots. Almost all of them provide rich graphical cards, for instance, which present information as text, buttons, and imagery to make navigation and interaction effortless.

Skype, for example, supports several types of cards, including sign-in cards, video cards, thumbnail cards, and adaptive cards, each with a different function.
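
To make this concrete, here is a minimal sketch of an adaptive card payload; it follows the public Adaptive Cards JSON schema, and the card text and action shown are purely illustrative:

{
  "$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
  "type": "AdaptiveCard",
  "version": "1.2",
  "body": [
    { "type": "TextBlock", "text": "Refill reminder", "weight": "Bolder" },
    { "type": "TextBlock", "text": "Your amoxicillin prescription can be refilled today." }
  ],
  "actions": [
    { "type": "Action.Submit", "title": "Request refill" }
  ]
}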

All of these platforms, except Slack, provide “quick replies” as suggested actions or keyboard (callback) buttons, which disappear once clicked. A bot can use quick replies to ask the user for a location, contact details, or an email address, or simply to offer a way to end the conversation. Once a button is clicked, its label is immediately posted to the conversation as a message.

Telegram also provides custom keyboards, which pop up any time the Telegram app receives a message. These keyboards come with predefined reply buttons to make the conversation seamless.
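
As a rough illustration, a bot attaches such a keyboard to a message through the Telegram Bot API’s sendMessage method; the chat_id and button labels below are placeholders:

{
  "chat_id": 123456789,
  "text": "Did you take your medication today?",
  "reply_markup": {
    "keyboard": [[{ "text": "Yes" }, { "text": "No" }]],
    "one_time_keyboard": true,
    "resize_keyboard": true
  }
}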

Let these simple rules guide you in selecting the best UI for your chatbot:

  1. Make UI elements perform predictably, so users can easily navigate through the platform.

  2. Let elements be clearly labelled and indicated to improve usability.

  3. Design the layout to improve readability: ways to do this include avoiding excessive colors and buttons, and using fonts, capital letters, and italics appropriately.

  4. Avoid numerous tasks on a single page. This can wear the user out and cause a lot of confusion. Restrict the number of tasks to one per page. Furthermore, complex tasks should be divided into sub-tasks to improve the usability of the bot.

  5. Finally, make the design simple to use.

The goal of an effective UI is to make chatbot interactions as close to a natural conversation as possible, and this involves using design elements in simple patterns.

In addition to UI considerations, you also have to consider privacy closely. While we won’t dive deeply into it in this guide, you should still review what information is shared during the conversation and whether the chosen channel supports the level of privacy you and your patients are comfortable with.

Fusing the best of man and AI – hybrid healthcare chatbots

When customers interact with businesses or navigate through websites, they want quick responses to queries and an agent to interact with in real time. Inarguably, this is one of the key factors that influence customer satisfaction and a company’s brand image. With standalone chatbots, businesses have been able to drive their customer support experiences, but not without issues.

For example, it may be almost impossible for a healthcare chatbot to give an accurate diagnosis of a user’s symptoms, especially for complex conditions. While healthcare chatbots that serve as symptom checkers can generate differential diagnoses from an array of symptoms, in many cases it will take a doctor investigating or querying further to reach an accurate diagnosis.

In emergency situations, after checking the reported symptoms against the large database of information it was trained on, the bot will immediately advise the user to see a healthcare professional for treatment.

This is why hybrid chatbots – combining artificial intelligence and human intellect – can achieve better results than standalone bots.

GYANT, HealthTap, Babylon Health and several other medical apps use hybrid chatbots that provide an interface for patients to speak with doctors. The users may engage in a live video or text consultation on the platform, bypassing the barriers of hospital visits.

Furthermore, hospitals and private clinics use medical chatbots to triage and clerk patients even before they come into the consulting room. These bots ask relevant questions about the patients’ symptoms, with automated responses that are aimed at producing a sufficient history for the doctor. This information is sent via a messaging interface to the doctor, who triages to determine which patients need to be seen first and which patients require the shortest consultation time.

Talk about efficiency

Florence, a popular healthcare chatbot, sends reminders to patients about their medications, tracks their body weight, activity levels, and mood, and also sends these reports to the user’s doctor. This enables doctors to monitor their patients' progress, offer further health advice when necessary, and make dose adjustments if needed.

The advantages of using hybrid chatbots in healthcare are enormous – and all stakeholders share the benefits.

For one, these chatbots reduce the workload of healthcare professionals: hospital visits, admissions, and readmissions fall as treatment compliance and patients’ knowledge of their own conditions improve, and the number of unnecessary treatments and procedures drops.

For patients, this comes with a lot of benefits: less time spent commuting to the doctor’s office, less money spent on unnecessary treatments and tests, and easy access to the doctor at the push of a button.

Chatbots cannot replace a doctor’s expertise, nor can they take over patient care; however, combining the best of both worlds improves the efficiency of patient care delivery, simplifying and speeding up care without compromising quality.

Building intelligent chatbots: using Rasa NLU for intent classification and entity extraction

For an effective chatbot application and good user experience, chatbots must be designed to make interactions as natural as possible; and this requires machine learning models that can enable the bot to understand the intent and context of conversations. This is where natural language processing and understanding tools come in.

Rasa NLU is an open-source natural language understanding library used for intent classification, entity extraction, and response retrieval when designing chatbot conversations. The NLU component of Rasa used to ship separately but has been merged with Rasa Core into a single framework.

The NLU component handles intent classification and entity extraction from the user input, breaking the input down so the chatbot can understand the user’s intent and context. Rasa Core is the dialogue engine that predicts the next best action using a deep learning model.

In this article, we shall focus on the NLU component and how you can use Rasa NLU to build contextual chatbots.

Before going further, you must understand a few keywords.

Intent: This describes exactly what the user wants.

Take this example:

“Where can I buy Insulin in Denver, Colorado?”

In this statement, the “intent” will be: buy_insulin

Entity: An entity is a useful unit of data that provides more information about the user’s intent, answering questions such as “where”, “when”, or “which” about that intent.

In that example, one entity is “location”, with the value “Denver”; the drug name “Insulin” could be extracted as another entity.
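
In Rasa’s Markdown training format (used in the examples later in this article), that intent and its entities could be annotated roughly as follows; the entity names “drug” and “location” are illustrative choices:

## intent:buy_insulin
- where can i buy [insulin](drug) in [denver](location)?
- where can i get [insulin](drug) near [colorado springs](location)?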

Installation and Setup

The first step is to set up the virtual environment for your chatbot; for this, you need to install the Rasa Python package, as shown below. Once this has been done, you can proceed with creating the structure for the chatbot.

Start by deciding on the pipeline through which the data will flow and in which intent classification and entity extraction are done. Rasa recommends a spaCy-based pipeline, but several others, such as the supervised_embeddings pipeline, can also be used.
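
The pipeline is declared in the project’s config.yml once the project has been created; a minimal sketch, assuming the Rasa 1.x pipeline templates, looks like this:

# config.yml
language: en
pipeline: supervised_embeddings
# or, for a spaCy-based pipeline:
# pipeline: pretrained_embeddings_spacy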

To install Rasa, activate the virtual environment and run:

pip install rasa

Once this is completed, run the following command in your desired directory:

rasa init --no-prompt

This will generate several files including your training data, stories data, initial models, and endpoint files, using default data.
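
The exact layout depends on your Rasa version; for Rasa 1.x it looks roughly like this:

.
├── __init__.py
├── actions.py
├── config.yml
├── credentials.yml
├── data
│   ├── nlu.md
│   └── stories.md
├── domain.yml
├── endpoints.yml
└── models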

You now have an NLU training file where you can prepare data to train your bot. You may use your own custom data in Markdown or JSON format. Open up the NLU training file and modify the default data appropriately for your chatbot.

Let’s create a contextual chatbot called E-Pharm, which will provide a user – let’s say a doctor – with drug information, drug reactions, and local pharmacy stores where drugs can be purchased. The first step is to create an NLU training file that contains various user inputs mapped with the appropriate intents and entities. The more data is contained in the training file, the more “intelligent” the bot will be.

Here are a few examples:

## intent:greet
- Hello
- Hey
- Hi there
- Good day

## intent:ask_Amoxicillin_dosage
- How is Amoxicillin taken?
- What’s the correct dose of Amoxicillin?
- How should I use Amoxicillin?

## intent:Amoxicillin_interactions
- Is Amoxicillin safe to use with Insulin?
- What are the likely interactions with Metformin?
- Which drugs react with Amoxicillin?

This data trains the chatbot to understand variants of a user input, since the file contains multiple examples for a single user intent.

To define entities and values, let’s use a previous example:

“Where can I buy Insulin in Colorado?”

The name of the entity here is “location” and its value is “colorado”. You need to provide many examples of “location” for the model to capture the entity adequately. Furthermore, to avoid inaccuracies caused by inconsistent capitalization, it is advisable to provide the training data in lower case.

You may design a lookup table containing a list of values for a single entity. This is preferable to creating sample sentences for all values.

For example, if a chatbot is designed for users residing in the United States, a lookup table for “location” should contain all 50 states and the District of Columbia.
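
In the Markdown training format, a lookup table for the “location” entity can be written inline, or it can point to a text file with one value per line; the snippet below is truncated for brevity:

## lookup:location
- alabama
- alaska
- arizona
- colorado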

Once you have all your training data, move the files to the data folder. Be sure to remove any unnecessary or default files from this folder before proceeding to the next stage of training your bot.

Training and Testing

To train the NLU model, run this command:

rasa train nlu

This command looks for training files in your data folder and creates a trained model, which it then saves in the models folder. The model name carries the prefix nlu-, which indicates that it is an NLU-only model.

You can test a model by running this command:

rasa shell nlu

If you want to test a single model out of multiple models, run this command:

rasa shell -m models/nlu-20190515-144445.tar.gz

This interactive shell acts as the NLU interpreter: for each message you type, it returns structured output showing how accurately the bot classifies the intent and extracts entities.

The output will look something like this:

{'entities': [{'confidence': 0.7756870940230542,
               'end': 39,
               'entity': 'location',
               'extractor': 'ner_crf',
               'start': 34,
               'value': 'New York'}],
 'intent': {'confidence': 0.7036955584872587, 'name': 'buy_drug'},
 'intent_ranking': [{'confidence': 0.7036955584872587, 'name': 'buy_drug'},
                    {'confidence': 0.08354613362606624, 'name': 'bye'},
                    {'confidence': 0.07291869896872455, 'name': 'fine_ask'}],
 'text': 'Where can I buy Insulin in New York?'}

After training your chatbot on these data, you may choose to create and run an NLU server with Rasa.

To do this, run this command:

rasa run --enable-api -m models/nlu-20190515-144445.tar.gz

You can adjust the server’s behavior with additional command-line parameters. Once your server is running, you may test it using curl; the response shows the intent, entities, and confidence scores the model assigns to your message.
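
With a Rasa 1.x server listening on its default port 5005, such a request might look like the following; the exact endpoint can vary between versions:

curl -s -X POST localhost:5005/model/parse -d '{"text": "Where can I buy Insulin in New York?"}'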

That sums up our module on training a conversational model for classifying intent and extracting entities using Rasa NLU. Your next step is to train your chatbot to respond to stories in a dialogue platform using Rasa Core.

Building HIPAA-compliant chatbots with the Rasa stack

Rasa stack provides you with an open-source framework with which to build highly intelligent contextual models giving you full control over the process flow. Conversely, closed-source tools are third-party frameworks that provide custom-built models through which you run your data files. With these third-party tools, you have little control over the design of the software and how your data files are processed; thus, you have little control over the confidential and potentially sensitive data your model receives.

This is why an open-source tool such as Rasa stack is best for building AI assistants and models that comply with data privacy rules, especially HIPAA.

The Health Insurance Portability and Accountability Act (HIPAA) of 1996 is a US regulation that sets the standards for using, handling, and storing sensitive healthcare data. The act outlines regulations for the use of protected health information (PHI), which it defines as any data that can be used to identify a patient and that is created or collected in the course of providing a healthcare service.

HIPAA considers the following data protected health information:

  • Patient’s name, address, date of birth, and Social Security number;

  • A patient’s health status, including physical or mental health conditions;

  • Any health service the patient has received or is currently receiving;

  • Information regarding the payment for healthcare services that could be used to identify the patient.

Note: The HIPAA Privacy Rule does not consider employment and educational details as PHI. Furthermore, de-identified data – since it is not traceable to the owner of the data – does not fall under the HIPAA Privacy Rule.

Consequently, under HIPAA, every person involved in developing or managing your AI assistants who can access, handle, or store PHI at any given time must be HIPAA-compliant.

Benefits of Rasa

If your chatbot collects PHI and shares it with covered entities, such as healthcare providers, insurance companies, and HMOs, it must be HIPAA-compliant.

This means that the AI conversations, entities, and patient personal identifiers must be encrypted and stored in a safe environment.

This involves all the pipelines and channels for intent recognition, entity extraction, and dialogue management.

Rasa offers a transparent system of handling and storing patient data since the software developers at Rasa do not have access to the PHI at any time. All the tools you use on Rasa are hosted in your HIPAA-compliant on-premises system or private cloud, which guarantees a high level of data privacy.

Rasa also allows for encryption and safeguarding of all data in transit between its NLU and dialogue management engines, optimizing data security. As you build your HIPAA-compliant chatbot, it will be important to have third parties audit your setup and advise you on risks and vulnerabilities.

Rasa is also available in Docker containers, so it is easy to integrate into your infrastructure. If you need help with this, we can gladly help set up your Rasa chatbot.

Joe is the founder of TopFlight Apps, an award-winning app team of healthcare app developers and designers. TopFlight Apps has been recognized by B2B ratings and reviews platform Clutch as one of California’s best mobile app development companies. You can find Joe on Twitter @JoeCTuan.
