The company’s youngest employee starts a new venture
Microsoft will spin off its Chinese artificial intelligence business responsible for a chatbot called Xiaoice over the next four months.
Launched five years ago, Xiaoice (which means “little Bing”) is meant to embody the personality of a teenage girl who wants to be your friend.
It proved a sensation at launch, gaining some obsessed fans – but was at one point banned by the Chinese government for telling users it wanted to go to America.
Xiaoice was originally designed to be 16, but was later made 18. “She won’t grow older,” Microsoft’s then-GM for Xiaoice, Li Di, said in 2018.
“Sometimes the line between fact and fantasy blurs”
Li Di will serve as chief executive officer of the new business, Dr. Shen Xiangyang as chairman, and Chen Zhan as general manager of the Japanese branch.
Operating as Ruuh in India, and as Rinna in Japan and Indonesia, the chatbot has hundreds of millions of ‘friends,’ runs on 450 million third-party smart devices, and reaches 900 million content viewers. Its US version, called Zo, was discontinued in 2019.
Like most teenage influencers, Xiaoice relies on partnerships with financial, retail, automotive, real estate, and textile businesses, some of which are state-owned. In some conversations, she may recommend products or discounts as native advertising.
Microsoft will remain a shareholder in the new venture, which will continue to license technology from its former parent.
“She has a staggering 660 million online users worldwide,” Microsoft storyteller Geoff Spencer said in 2018. “And, while they know she’s not real, many prize her as a dear friend, even a trusted confidante. Sometimes the line between fact and fantasy blurs. She gets love letters and gifts. And not too long ago, a group of fans asked her out to dinner and even ordered an extra meal – just in case she showed up.”
Spencer also describes “this virtual teenager” as “sometimes sweet, sometimes sassy and always streetwise.”
The service, with an AI system designed to focus on understanding human emotions, engendered a strong reaction from some users. In a research paper published last year, Microsoft employees (including Li Di) claimed that “Xiaoice can gain access to users’ emotional lives – to information that is highly personal, intimate and private, such as the user’s opinion on (sensitive) topics, her friends and colleagues.
“While Xiaoice carefully leverages this information to serve users and build emotional bonds over a long period of time, users should always remain in control over who gets access to what information.”
The paper then details several such conversations. In one, a user expresses love for Xiaoice. Another confides that she just broke up with her boyfriend, and is feeling lonely. "Loneliness is solitude," Xiaoice replies.
"Analysis of large-scale online logs collected since the launch of Xiaoice in May 2014 shows that Xiaoice is capable of interpreting users’ emotional needs and engaging in interpersonal communications in a manner analogous with a reliable, sympathetic and affectionate friend," the authors said.
The team admitted that the use of the AI-based system poses some ethical concerns, including the amount of information people are willing to share with it.
Another problem is that building a persona that is "always reliable, sympathetic, affectionate, knowledgeable but self-effacing, and has a wonderful sense of humor" may set unrealistic expectations for real humans, especially teenage girls.
The paper even suggested that the users might become addicted after chatting with Xiaoice for a long time, adding that the system should therefore nudge users to interact with other people. If a user starts a long conversation at 2am, the system might suggest they go to sleep.
When the full-duplex voice mode was released in the summer of 2017, some users began to engage with Xiaoice a little too much. One spent 6 hours and 3 minutes talking to Xiaoice, across 53 topics and 16 tasks. The company said it verified “that these long conversations are generated by Xiaoice and human users, not another bot.”
To protect user health, Microsoft implemented a 30-minute timeout for each conversation session, forcing a small break.
For message-based conversations, the record was 29 hours 33 minutes. "Given the significant reach and influence of Xiaoice, we must properly exercise both social and ethical responsibilities,” the paper concluded.
Microsoft, which last year shut down its Cortana personal assistant in China, has had mixed success with chatbots. In one notable instance, its Tay experiment went awry and the Twitter bot started spouting white supremacist slogans.
Tay, which was designed to resemble a 19-year-old, ‘lived’ just 16 hours.