The case against bragging about AI

Drawing a lesson from misfortunes of online insurance specialist Lemonade

Chuck Martin, Editorial Director AI & IoT

June 2, 2021

2 Min Read

While many businesses use artificial intelligence to one degree or another, their customers typically look for the value delivered, rather than the details of the mechanisms that deliver it.

Internet insurance startup Lemonade, which actively promotes that it uses advanced technology, advertises that its AI “will craft a personalized policy for you.”

However, it found itself in hot water following a “poorly worded tweet” by the company that detailed how it used AI to handle insurance claims.

Lemonade’s tweet stated: “When a user files a claim, they record a video on their phone and explain what happened. Our AI carefully analyzes these videos for signs of fraud.

“It can pick up non-verbal cues that traditional insurers can't since they don’t use a digital claims process.”

The tweet drew comments suggesting that the company was automatically denying claims based on personal characteristics, such as race or physical features picked up in customer videos.

Here’s a sample of the many tweets posted in response to Lemonade:

“Thanks for explaining why I should never do business with you in such vivid detail!”

“This is discriminatory on so many levels.”

"We're denying more claims than ever! Use our service!"

“That just sounds like an even more overtly pseudo-scientific version of a traditional lie detector test.”

“This is truly despicable.”

“No human or software can detect if people are lying by looking at them.”

“You've built a machine that can only produce false positives.”

Lemonade responded in a lengthy blog post with the subheading: “TL;DR: We do not use, and we’re not trying to build, AI that uses physical or personal features to deny claims.”

“The term non-verbal cues was a bad choice of words to describe the facial recognition technology we’re using to flag claims submitted by the same person under different identities,” Lemonade stated. “These flagged claims then get reviewed by our human investigators.

“This confusion led to a spread of falsehoods and incorrect assumptions, so we’re writing this to clarify and unequivocally confirm that our users aren’t treated differently based on their appearance, behavior or any personal/physical characteristic.”

The online insurer says it asks for a claim video because it’s easier for people to verbally describe what happened.

“We do not believe that it is possible, nor is it ethical (or legal), to deduce anything about a person’s character, quality or fraudulent intentions based on facial features, accents, emotions, skin-tone or any other personal attribute,” Lemonade said.

Enterprise AI strategy should focus more on the use of AI and less on the marketing of it.

About the Author(s)

Chuck Martin

Editorial Director AI & IoT

Chuck Martin, a New York Times Business Bestselling author, futurist and columnist, is Editorial Director at Informa Tech, home of AI Business, IoT World Today and Enter Quantum. Martin has been a leader in emerging digital technologies for more than two decades. He is considered one of the foremost Internet of Things (IoT) experts in the world and his latest book is titled "Digital Transformation 3.0" (The New Business-to-Consumer Connections of The Internet of Things). He hosts a worldwide podcast titled “The Voices of the Internet of Things with Chuck Martin,” where he converses with top executives from the companies driving the Internet of Things.
