AI Business is part of the Informa Tech Division of Informa PLC


The case against bragging about AI

by Chuck Martin
Drawing a lesson from the misfortunes of online insurance specialist Lemonade

While many businesses are using artificial intelligence to one degree or another, their customers typically care about the value delivered, not the details of the mechanisms that deliver it.

Internet insurance startup Lemonade, which actively promotes that it uses advanced technology, advertises that its AI “will craft a personalized policy for you.”

However, it found itself in a bit of hot water following a “poorly worded tweet” in which the company detailed how it used AI to handle insurance claims.

Lemonade’s tweet stated: “When a user files a claim, they record a video on their phone and explain what happened. Our AI carefully analyzes these videos for signs of fraud.

“It can pick up non-verbal cues that traditional insurers can't since they don’t use a digital claims process.”

The tweet led to comments suggesting that the company was automatically denying claims based on characteristics such as race or other personal features picked up in customer videos.

Here’s a sample of the many tweets in response to Lemonade:

“Thanks for explaining why I should never do business with you in such vivid detail!”

“This is discriminatory on so many levels.”

“We’re denying more claims than ever! Use our service!”

“That just sounds like an even more overtly pseudo-scientific version of a traditional lie detector test.”

“This is truly despicable.”

“No human or software can detect if people are lying by looking at them.”

“You've built a machine that can only produce false positives.”

Lemonade responded in a lengthy blog post with a subheading: “TL;DR: We do not use, and we’re not trying to build, AI that uses physical or personal features to deny claims.”

“The term non-verbal cues was a bad choice of words to describe the facial recognition technology we’re using to flag claims submitted by the same person under different identities,” Lemonade stated. “These flagged claims then get reviewed by our human investigators.

“This confusion led to a spread of falsehoods and incorrect assumptions, so we’re writing this to clarify and unequivocally confirm that our users aren’t treated differently based on their appearance, behavior or any personal/physical characteristic.”

The online insurer says it asks for a claim video because it’s easier for people to verbally describe what happened.

“We do not believe that it is possible, nor is it ethical (or legal), to deduce anything about a person’s character, quality or fraudulent intentions based on facial features, accents, emotions, skin-tone or any other personal attribute,” Lemonade said.

Enterprise AI strategy should focus more on the use of AI, and less on the marketing of it.
