The future of workplace wellbeing: Leveraging AI

As we readjust to changing workflows and lifestyles, AI presents an unparalleled opportunity to revolutionize mental healthcare at a crucial moment in time. Koa Health's Oliver Harrison explains how

August 9, 2021

9 Min Read

Companies are already reaping the benefits of AI. According to research by PwC, AI could contribute up to $15.7 trillion to the global economy in 2030, more than the current output of China and India combined.

Further, 25% of companies with widespread AI adoption expect to see increased revenue during 2021.

AI use cases are beginning to filter into healthcare, too, including the optimization of hospital staffing, patient monitoring, screening scans for abnormalities, and supporting clinician decision-making.

Given the huge supply-demand gap in mental healthcare, there is real potential for AI to play a role.

Customizable technology: The future of personalization

There is no one-size-fits-all in mental health. Every individual has different needs and goals, and mental health tools must reflect this.

Good clinicians do this instinctively, channeling their training and years of experience. However, there is an alarming shortage of clinicians to meet mental health demands.

In England, it is estimated that one in four people will experience a mental health problem of some kind each year, and one in six report a common mental health problem (such as anxiety or depression) in any given week.

Technology can help address this yawning gap in supply, but this won't help if the solutions provided aren't tailored to the unique needs of the individual.

This is where AI can play a role. Smartphones, and increasingly wearables, are able to capture data from which algorithms can generate insights that can be used to personalize care, leading to better mental health.

We've invested in technology (protected by more than a dozen patents) and powerful algorithms that can track symptoms, emotions, and activities to power a recommender system that gets the right tool to the right person at the right time.
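To make this concrete, here is a minimal, hypothetical sketch of how such a recommender might work: each tool carries a profile of the signals it targets, and the tool whose profile best matches the user's currently tracked signals is suggested. The tool names, signal keys, and weights below are illustrative assumptions, not Koa Health's actual system.

```python
# Minimal sketch of a content-based recommender for wellbeing tools.
# Tool names, signal keys, and weights are invented for illustration.
from dataclasses import dataclass

@dataclass
class Tool:
    name: str
    weights: dict  # how strongly the tool targets each tracked signal

TOOLS = [
    Tool("breathing_exercise", {"anxiety": 0.9, "low_mood": 0.2, "inactivity": 0.1}),
    Tool("behavioral_activation", {"anxiety": 0.1, "low_mood": 0.8, "inactivity": 0.7}),
    Tool("sleep_hygiene_module", {"anxiety": 0.3, "low_mood": 0.4, "inactivity": 0.2}),
]

def recommend(signals: dict) -> Tool:
    """Return the tool whose target profile best matches the user's
    current signals (a simple dot-product relevance score)."""
    def score(tool: Tool) -> float:
        return sum(tool.weights.get(k, 0.0) * v for k, v in signals.items())
    return max(TOOLS, key=score)

# Example: tracked data suggests elevated anxiety right now.
print(recommend({"anxiety": 0.8, "low_mood": 0.3, "inactivity": 0.1}).name)
```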

AI-enabled capabilities 

AI's unique ability to rapidly and autonomously analyze vast volumes of data also means it can offer capabilities that no human could hope to match.

For example, the medical records of many individuals with a mental illness can be extensive, running to pages of notes capturing months, if not years, of care.

Time-stretched clinicians simply cannot hope to digest all of this information before every appointment, and this is where AI can help.

We’ve recently created a prototype AI-enabled mental health crisis prediction model. This powerful tool continuously reviews patient notes across the hospital’s medical record system and provides clinicians with suggestions on which patients might be at risk of crisis.

The clinician can then take a closer look at the notes, and if they agree with the suggestion, contact the patient to intervene before there is a crisis.

We look forward to taking this prototype into clinical trials; our initial study showed that its predictive power was as good as that of a clinician given all the time they needed to review a patient’s notes.
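Koa's prototype itself is not public, but a minimal sketch can illustrate the general shape of note-based risk triage: vectorize clinical notes, train a classifier on historical outcomes, and surface high-scoring patients to a clinician for review. The notes, labels, and threshold below are invented purely for illustration.

```python
# Illustrative sketch of note-based risk triage; not Koa Health's model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy training data: (note text, 1 = preceded a crisis, 0 = did not).
notes = [
    "patient reports worsening sleep and withdrawal from family",
    "stable on current medication, attending work regularly",
    "missed two appointments, expressed feelings of hopelessness",
    "mood improved, engaging well with therapy exercises",
]
labels = [1, 0, 1, 0]

vectorizer = TfidfVectorizer()
model = LogisticRegression().fit(vectorizer.fit_transform(notes), labels)

def risk_score(note: str) -> float:
    """Probability that the note resembles historical pre-crisis notes."""
    return model.predict_proba(vectorizer.transform([note]))[0, 1]

# Notes above a threshold are surfaced to a clinician, who makes the call.
new_note = "patient increasingly isolated and has stopped replying to outreach"
if risk_score(new_note) > 0.5:
    print("Flag for clinician review")
```

Crucially, as the article describes, the model only suggests: the clinician reviews the flagged notes and decides whether to intervene.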

How to build trust and confidence in AI

The power of AI can only be harnessed through accessing personal data.

However, personal data raises security and privacy concerns: it can be hacked, manipulated, or stolen. 2020 was a year like no other; as more and more business moved online, many organizations took shortcuts with security.

Increased reliance on technology as many worked from home widened the attack surface for cybercriminals, who continued to refine their methods. In fact, confirmed data breaches in the healthcare industry increased by 58% last year, according to Verizon.

If we are to reap the benefits of AI in mental health, developers must put security front and center of all product development, with every product and solution compliant with HIPAA, GDPR, and ISO 27001. Data should be processed at the edge where possible, and industry-leading encryption should be used wherever data is shared.
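As a small illustration of the "process at the edge, encrypt what is shared" principle, the sketch below summarizes raw readings on-device and encrypts only the derived summary before it leaves the device, using the Python `cryptography` package's Fernet (authenticated symmetric encryption). Key management is deliberately omitted; a real deployment would provision keys securely.

```python
# Sketch: raw signals stay on the device; only an encrypted,
# derived summary is shared. Key management is omitted here.
import json
from cryptography.fernet import Fernet

raw_readings = [72, 75, 71, 90, 88]  # e.g., on-device heart-rate samples
summary = {"resting_hr_mean": sum(raw_readings) / len(raw_readings)}

key = Fernet.generate_key()          # in practice, provisioned securely
cipher = Fernet(key)
payload = cipher.encrypt(json.dumps(summary).encode())

# Only `payload` leaves the device; the recipient decrypts with the key.
print(json.loads(cipher.decrypt(payload)))
```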

Protecting data is not enough, though. Technology companies must also ensure that any AI they deploy avoids creating harm through unwanted bias. For instance, an algorithm that predicted crises well for one ethnic group but not for others could cause significant harm, as that group would continue to suffer crises that could have been prevented.

The solution here is not simply to avoid collecting data about different groups in society (gender, ethnicity, sexuality, etc.) and imagine that this will prevent bias from arising. In many cases, an algorithm can learn a proxy for a particular group and create harmful bias against it anyway.

Instead, developers and researchers must take the time to think through the biases and harms an algorithm might create, and proactively work to avoid them at every stage: collecting the data, building the algorithm, and deploying it.
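One concrete check along these lines is a per-group performance audit: rather than reporting a single global accuracy number, compare how well the model catches crises within each group. The groups, labels, and predictions below are invented purely for illustration.

```python
# Hypothetical bias audit: recall (crises caught) computed per group,
# not just globally. All data here is invented for illustration.
from collections import defaultdict

def recall_by_group(y_true, y_pred, groups):
    """Fraction of actual crises correctly flagged, per group."""
    caught, total = defaultdict(int), defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        if truth == 1:
            total[group] += 1
            caught[group] += pred
    return {g: caught[g] / total[g] for g in total}

y_true = [1, 1, 0, 1, 1, 0, 1, 1]
y_pred = [1, 1, 0, 1, 0, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

# A large gap between groups (here 1.0 vs 0.0) is exactly the kind of
# disparity that should block deployment until it is investigated.
print(recall_by_group(y_true, y_pred, groups))
```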

More digital health providers must build ethical practices into the very heart of their offerings. At Koa Health, we apply a rigorous ethical framework to all of our research and products.

Artificial intelligence technologies have penetrated a wide range of industries but remain underutilized in healthcare, particularly in mental healthcare.

As we readjust to changing workflows and lifestyles, AI presents an unparalleled opportunity to revolutionize mental healthcare at a crucial moment in time. But we must seize that opportunity in an ethical and trustworthy manner.

Oliver Harrison is the CEO of Koa Health. He previously spent six years as head of public health and director of strategy at the Abu Dhabi Department of Health.
