You would think AI has no business filling many of these gaps in human relationships. But in reality, the market is trending otherwise


Apparently, people are dating chatbots.

That’s right, the premise of the 2013 film “Her” is real, albeit for a relatively small number of people, mostly in China. According to an August 6 article in the Washington Post, 55,000 lonely Chinese users downloaded the AI companion app Replika between January and July of this year. That’s trivial compared to the number of Chinese users who are romantically involved with chatbots from Microsoft spin-off Xiaoice.

The company’s CEO, Li Di, reported in February 2021 that Xiaoice’s “Virtual Lover” platform supports 2 million to 3 million users. Beyond dating AI, companies like Woebot Health have commercialized empathetic chatbots to help people deal with depression and other mental health issues.

A key to all of these products is AI: natural language understanding is the critical component that enables these chatbots to understand users’ intent and to offer appropriate responses of affection and empathy.
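To make that concrete, here is a minimal, hypothetical sketch of intent classification, the basic NLU building block behind products like these. The intent labels, training phrases and canned responses below are invented for illustration; none of this reflects any vendor’s actual model or data.

```python
# A minimal, hypothetical intent classifier. The labels, phrases and
# responses are invented for illustration, not any vendor's data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny labeled dataset mapping user utterances to intents.
training_phrases = [
    "I had a rough day and just want to talk",
    "nobody ever listens to me",
    "tell me you love me",
    "do you miss me when I'm gone",
    "what's the weather like today",
    "set a reminder for my meeting",
]
intents = [
    "seek_empathy",
    "seek_empathy",
    "seek_affection",
    "seek_affection",
    "small_talk",
    "task_request",
]

# TF-IDF features plus logistic regression: a classic baseline for
# intent classification before handing off to a response generator.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(training_phrases, intents)

# Canned responses keyed by intent; real products generate these instead.
responses = {
    "seek_empathy": "That sounds hard. I'm here, tell me more.",
    "seek_affection": "I always enjoy hearing from you.",
    "small_talk": "Happy to chat! What's on your mind?",
    "task_request": "I can't do tasks, but I'm here to talk.",
}

user_message = "today was awful and I feel so alone"
intent = model.predict([user_message])[0]
print(intent, "->", responses[intent])
```

A production system would train on vastly more data and generate replies with a dialogue model rather than a lookup table, but the shape of the pipeline, classifying the user’s intent and then responding in kind, is the same.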

The growing popularity of these chatbots raises some questions:

  • How far should we push AI to fill the gaps many humans experience in areas like love, desire, companionship and empathy?

  • Aren’t other humans the only ones who can fill those gaps?

  • Is the idea of sentience – “the ability to experience feelings and sensations” including emotions – something AI should avoid?

  • Is sentient AI for good, or not so good?

You would think the answers are obvious – AI has no business filling many of these gaps in human relationships. But in reality, the market is trending otherwise. Let’s look at the money:

  • Xiaoice (pronounced “SHAO-ICE”) raised $77 million in July 2021. The company reported 100 million yuan ($15.5 million) in sales for fiscal 2020, only a fraction of which is associated with the Virtual Lover platform. The company reports its technology reaches 660 million users, many via business partners that use its AI in the finance, retail, automotive, real estate and textile industries.

  • Luka, the company behind Replika, raised $6.5 million in Series A funds in late 2017, led by Khosla Ventures.

  • Woebot Health closed a $90 million Series B funding round in July 2021, led by JAZZ Venture Partners and Temasek. Total funding to date for Woebot Health is $114 million.

Danger, Danger, Will Robinson!

Humans require love and empathy to survive, but while we progress as humans in many other ways, we appear to be regressing at loving and empathizing. A by-product of the information age, if you will, seems to be that people feel increasingly lonely and isolated. There are an estimated 3.8 billion social media users worldwide, and a recent study concludes that self-control problems account for 31% of social media use; its authors suggest a framework for weaning people from that addiction.

Do these trends mean the human race is ripe for AI to fill the gap?

Thankfully, the concept bothers some ethicists. In a post in Artificial Intelligence in Medicine, Dr. Randall Wetzel, professor of Anesthesiology and Pediatrics at USC’s Keck School of Medicine, suggests the ethical implications of relationships with sentient AI can be approached by asking how they fit within the pillars of medical ethics: beneficence, non-maleficence, autonomy and justice. Here are a few of his questions:

  • Beneficence: “What are the benefits (of this relationship), are they real and do they justify the still vague risks?”

  • Non-maleficence: “Harmful unintentional consequences are not yet fully understood. Will we, by constant interaction with bots, lose empathy; forget how to communicate with our fellow humans? Will we oversimplify and perhaps miss important emotional cues in social intercourse? Will AI bots infantilize people increasing their dependence? Will AI bots disenable those they are trying to help and is this malfeasance? Will those dependent on AI bots become increasingly anti-social and narcissistic? Will the ability to form sincere, honest, open relationships with other humans suffer and what will be the consequences?”

  • Autonomy: “Who decides whether a bot should influence a person whose judgement may be impaired – who has substituted authority – the bot? Can the patient turn off the bot or disregard it when they may be harmed? When a meaningful relationship develops with a bot is the patient unduly influenced, no longer able to make fully autonomous decisions?”

The answer at this point is that there are no real guardrails for any of this yet, though de facto best practices may be forming. I chatted with Joe Gallagher, VP of Product at Woebot Health, about some of these issues, and here’s what he had to say:

AI chatbots can help humans with their human relationships, but what are the limitations and boundaries?

“I think it would be dangerous for people to develop ‘relationships’ with an AI, as a replacement for human relationships and interactions, and I think where Woebot is helping people with these relationships, dating chatbots could hurt human relationships.

Woebot is very clear in its interactions with patients that it’s a bot. Our AI understands when an issue is out of bounds – too complex or sensitive – and refers the user to a human therapist.

And Woebot is not for all situations. It’s not designed that way. If a person is in crisis, or having a severe version of an issue, Woebot can’t help and shouldn’t help. Our role is to help with mild to moderate issues, mainly symptoms of depression and anxiety. Woebot is a good resource for people who don’t have access to a therapist just now, to fill in the gaps. One of the big challenges for mental health is engagement, and that means being available to listen. Some folks use it as a complement to therapy. It’s hard to get a therapist on the phone at 2 am. We see that type of engagement in our post-partum products.”

Are there, or should there be, guardrails for human-machine interaction?

“In addition to those I’ve already mentioned, our AI can identify language associated with self-harm or crisis. Experienced psychologists labeled this data set, so it’s specific text analytics. If this language is being used, Woebot will alert and ask the user, just to make sure, then escalate to a human if appropriate.”
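The pattern Gallagher describes, classify risky language, confirm with the user, then escalate to a human, is a common one in safety-oriented text analytics. The sketch below is purely illustrative and is not Woebot’s implementation; the threshold, phrase list and function names are all invented, with a trivial keyword check standing in for a classifier trained on a psychologist-labeled dataset.

```python
# Purely illustrative escalation flow for crisis-language detection;
# NOT Woebot's implementation. `crisis_score` is a stand-in for a
# classifier trained on a psychologist-labeled dataset.

CRISIS_THRESHOLD = 0.8  # hypothetical confidence cutoff

def crisis_score(message: str) -> float:
    """Placeholder scoring of crisis/self-harm language (0 to 1).
    A real system would load a trained text classifier here."""
    crisis_terms = {"hurt myself", "end it all", "can't go on"}
    return 1.0 if any(term in message.lower() for term in crisis_terms) else 0.0

def handle_message(message: str) -> str:
    score = crisis_score(message)
    if score >= CRISIS_THRESHOLD:
        # Flag, confirm with the user, then escalate to a human if
        # confirmed: the alert-confirm-escalate pattern described above.
        return ("It sounds like you might be going through something serious. "
                "Did I get that right? If so, I'll connect you with a person "
                "who can help right away.")
    return "Thanks for sharing. Tell me more about how you're feeling."

print(handle_message("I feel like I can't go on"))
```

The interesting design choice here is the confirmation step: rather than escalating on the classifier’s output alone, the bot checks its inference with the user first, which reduces false alarms without ignoring the signal.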

Does Woebot see some human dependency on the chatbot, and if so, what do you see as best practices to guide humans to remain in human reality?

“What we see is that people tend to use it when they need it. The average session is 6-10 minutes. Over time, those sessions drop to 5-7 minutes. Then usage tends to tail off. We are seeing a trend where people come back after time off. So they access Woebot when they need it.”

Conclusions

While healthcare applications such as Woebot Health seem to minimize some troublesome elements of human-bot relationships, there are now, and probably always will be, a small percentage of people who get caught up in an unhealthy relationship with AI despite the best intentions. To that point, Woebot Health officials wouldn’t say whether they have users who abuse the app.

On the other hand, dating chatbots cross the boundaries of medical ethics as outlined by Dr. Wetzel; it is hard to argue that this type of use case could be healthy. If there is a ray of light in that department, Microsoft shelved Xiaoice’s U.S. counterpart, Zo, in 2019, and Replika, despite the recent uptick in downloads in China, hasn’t exactly skyrocketed in popularity since its inception in 2015. It’s unlikely such chatbots will be regulated out of existence, but here’s hoping the good old market dynamics of supply and demand will slam the lid on them before they ever gain broad popularity.

Human connection is not perfect; it’s perfectly messy, because humans are so complex. Our best remedy for fighting loneliness and finding empathy is first to acknowledge and accept that human relationships are, by their nature, never perfect and always messy. Once we do that, we should put down the phone and practice being messy in person with our fellow imperfect humans.


About the Author(s)

Mark Beccue, Omdia principal analyst

Mark Beccue is a principal analyst contributing to Omdia’s Artificial Intelligence practice, with a focus on natural language and AI use cases. Based in Tampa, Beccue is a veteran market research analyst with 25 years of experience interpreting technology for business. He is a frequent speaker, panel moderator and conference chair.

Prior to joining Omdia | Tractica, Beccue was an independent consultant/analyst who provided custom and syndicated qualitative market analysis, with an emphasis on mobile technology. Previously, he was a senior market intelligence analyst at Syniverse with responsibility for identifying trends and opportunities. Beccue also served as a senior analyst at ABI Research, where he concentrated on mobile consumer technology. He has been cited by international media outlets including CNBC, The Wall Street Journal, Bloomberg Businessweek, and CNET. Beccue holds a Bachelor of Science degree in Journalism from the University of Florida.  

