The Empathy Gap: Designing Intelligent Healthcare Systems
March 6, 2018
An 'empathy gap' prevails between users and healthcare systems. AI-assisted intelligent healthcare must address this challenge.
Experience designers need to better understand the patient's journey through a system - and the lifecycle of their condition - in order to accomplish this.
By involving experience designers, intelligent healthcare can focus on delivering better patient outcomes with AI technology.
By Jane Vance and Tim Caynes
LONDON, UK - The volume of data created and analysed in today’s world has allowed artificial intelligence (AI) to develop and learn to the point of nearly replicating human interaction. AI-assisted intelligent healthcare presents a significant commercial opportunity for practitioners and patients alike.
Venrock partners Bob Kocher and Bryan Roberts publish an annual list of healthcare predictions. Their predictions for 2018 were big on the impact of intelligent healthcare, with AI ‘making a difference'. Given that 8 out of 10 of their 2017 predictions proved correct, it’s not a stretch to think their 2018 predictions will be accurate too.
For AI specifically, they predicted that current products will mature in 2018. For this to happen, healthcare providers need to design better, more intelligent systems that enable patients to get the help they need, but with care and compassion in potentially difficult times. We need to bring systems and humans together for better outcomes. To achieve this, we must address the biggest challenge and opportunity for learning systems in healthcare: closing the gap between AI and human empathy.
Working through the gaps
As experience designers, we tend to live and work in the space between people and technology. We bridge the gap between healthcare and technology providers, and by working together, we can create better outcomes for patients. This means that we keep a holistic view of the patient experience in mind - we don’t get bogged down in technological capability alone.
Leading healthcare specialists and service providers are exploring the potential of AI and learning systems for intelligent healthcare. DeepMind Health, working with the NHS in the UK, is mining patient data to optimise radiotherapy planning for complex tumour treatments and to support the early detection of breast cancer. IBM Watson's WatsonPaths, meanwhile, is looking at ways to provide interfaces that help practitioners make better decisions and get deeper insights, using structured and unstructured data from millions of electronic medical records (EMRs).
Before healthcare providers can apply intelligent healthcare systems at scale, they need to understand the structure of how to relate technologies to experiences; intelligence to outcomes. When it comes to any system, the most important thing for us to understand is the human journey through that system, because if we can understand that, we may be able to identify the opportunities for AI to enhance that experience for the patient.
Traditionally, we have architected systems based on relatively predictable, linear pathways through those systems. The journey is set from the outset. But AI and learning systems open up new possibilities. We can create dynamic pathways through the system, based on human behaviours and interactions. An intelligent system can infer outcomes. It can create the most desirable path through the system.
Context is crucial in intelligent healthcare
To achieve this, it is crucial for us to understand a patient’s context when they are interacting with systems, as context allows us to infer better outcomes. But understanding context is difficult.
We tend to ask people three questions at any point in a journey to try and understand their context - where are people in their journey, what did they do to get there, and where can they go next? Based on the answers to those questions, an intelligent system can make inferences. It can determine outcomes and deliver outputs. But it’s not real, it’s artificial by definition.
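The three questions above can be sketched as a simple journey model. This is a hypothetical illustration only - the phase names, class, and linear next-step logic are invented for this sketch, not drawn from any product described here:

```python
from dataclasses import dataclass, field

# Hypothetical journey phases for a newly diagnosed patient.
PHASES = ["diagnosis", "education", "treatment", "monitoring"]

@dataclass
class PatientJourney:
    """Tracks where a patient is, how they got there, and where they can go next."""
    history: list = field(default_factory=list)

    def record(self, phase: str) -> None:
        self.history.append(phase)

    @property
    def current(self) -> str:
        # Question 1: where are they in their journey?
        return self.history[-1] if self.history else "diagnosis"

    def path_so_far(self) -> list:
        # Question 2: what did they do to get here?
        return list(self.history)

    def next_options(self) -> list:
        # Question 3: where can they go next? This sketch is linear;
        # a learning system would rank options dynamically from behaviour data.
        i = PHASES.index(self.current)
        return PHASES[i + 1:i + 2] or ["monitoring"]

journey = PatientJourney()
journey.record("diagnosis")
journey.record("education")
print(journey.current)         # education
print(journey.next_options())  # ['treatment']
```

The point of the sketch is the shape of the inference, not the logic: an intelligent system replaces the hard-coded `next_options` with a model learned from how real patients move through the system.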
Human outcomes are complex. We often don’t have all the answers or even know all of our questions. The decisions we make are influenced by human relationships and connections - the stuff of human empathy. Our suggestion is that the provision of many healthcare products and services suffers from a digital empathy gap between AI and human empathy. So, the goal becomes supporting the emotional needs of the patient and individual in intelligent systems.
Every minute somebody is diagnosed with a condition - one which they potentially know nothing about. Suddenly, they are in a situation where they need to learn a lot about a new condition. Whether a condition is easily curable or lifelong, people need to know what will happen next; what medication they need, what tests they will undergo and what they need to do going forward.
This information will be delivered when they are experiencing a great deal of stress and emotional difficulty, which makes it hard to understand the information, navigate the complexity, and retain what they have been told.
Redesigning patient support and addressing patient trust: a fine balance
In this situation, patients currently rely on healthcare practitioners to tell them everything they need to know about how to live with the condition, and the tests they will have now and in the future. But, with healthcare practitioners under ever-increasing pressure, they often don't have the time to answer and explain everything. People are increasingly reliant on sourcing information for themselves, largely online, to educate themselves about their new condition.
However, people often don’t know whether they can trust information found online, so they look for reputable sources to be confident the information is accurate. When it comes to health, people are warier about where they place their trust.
We need to design new ways of supporting people, which understand the experience people go through with different medical conditions, which can see the journey individual patients go through, which can define the phases of that journey, and which can also determine the sorts of information and support people need across those phases.
Learning systems which provide a frictionless experience and bridge the empathy gap are already being created by healthcare providers. But designing this journey is far from easy and is far from complete. Simplyhealth’s Care for Life is one example of a developing success story striving to bridge the empathy gap.
Simplyhealth have released an innovative new product, designed with experience design agency Foolproof, to provide information, advice, and comfort to ageing loved ones and their carers. User groups are segmented, which means the system can give the right advice to the right individual at the right time to meet their healthcare needs.
The system achieves this because it uses intelligence to understand where an individual has been, where they are now, and what they need to know next and then adapts to the individual accordingly – at the level of content, copy, and design. The system does all of this by seamlessly transitioning between the elements of the user’s journey with the product.
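The adaptation described above - tailoring content, copy, and design to a patient's position in their journey - can be illustrated with a minimal sketch. Everything here is invented for illustration (the phase names, the copy, and the fallback behaviour); it does not represent how Care for Life or any real product works:

```python
# Illustrative only: selecting support content by journey phase.
# Phase names, tones, and copy are hypothetical.
CONTENT = {
    "diagnosis": {"tone": "reassuring", "detail": "low",
                  "copy": "Here's what your diagnosis means, step by step."},
    "treatment": {"tone": "practical", "detail": "medium",
                  "copy": "What to expect from your next appointment."},
    "monitoring": {"tone": "encouraging", "detail": "high",
                   "copy": "Track your progress and spot early warning signs."},
}

def content_for(phase: str) -> dict:
    """Return phase-appropriate content, falling back to gentle defaults
    when the patient's position in the journey is unknown."""
    return CONTENT.get(phase, {"tone": "reassuring", "detail": "low",
                               "copy": "We're here to help you take the next step."})

print(content_for("treatment")["tone"])  # practical
```

In a real system the lookup table would be replaced by a model that infers the phase from behaviour, but the design principle is the same: the empathy lives in matching detail and tone to where the person actually is.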
The healthcare sector, as a whole, needs to address ‘how’ the outputs of these design systems are delivered if the empathy gap is to be closed and for AI to bring true value to healthcare providers and users. By involving experience designers, intelligent healthcare can focus on delivering better patient outcomes with AI technology.
Tim Caynes is Principal Designer at experience design agency Foolproof. He works to create evidence-based designs for brands such as Sage, Santander, Coke, Apple, Sky, HSBC, Eli Lilly and Simplyhealth. As principal designer, he is focused on understanding user needs and behaviours, contexts of use, and interactions with information systems, to do smart things for good people.
As a principal designer at Foolproof, Jane Vance is responsible for the creative output of user experience design projects. She’s interested in using insight to solve problems through defining information architecture and designing services that best support people.