An insurer is criticized for using its AI to reject claims based on customer facial expressions
In its hunt for possible insurance fraud, Lemonade Insurance is facing an avalanche of criticism on social media for bragging about the artificial intelligence system it uses to reject claims.
The insurer Lemonade Insurance set out to show off its digital claims process and artificial intelligence systems on Twitter, but the move met with a bitter response, and the company has since had to defend itself against accusations of discrimination and bias.
"When a user files a claim, they record a video on their phone and explain what happened. Our AI carefully analyzes these videos for signs of fraud. It can pick up non-verbal signals that traditional insurers can't," the company said in the Twitter thread that sparked the outrage.
In the thread, the company claims that its system collects and analyzes 100 times more data than traditional insurance companies, though it does not explain what data it means or how and when it is collected. From all of that data, the artificial intelligence supposedly produces a fraud risk estimate for each claim.
Following the criticism, Lemonade Insurance deleted the thread, although it remains available in the web archive and some users have shared screenshots. In the messages, the company boasts of the financial gains the system has delivered: it explains that it used to pay out more than it earned, and that with this AI it has been able to considerably reduce its loss ratio by rejecting a large share of its customers' claims.
"It's incredibly insensitive to celebrate how your company saves money by not paying claims (in some cases to people who are probably having the worst day of their lives)," Caitlin Seeley George, campaign manager for the digital rights advocacy group Fight for the Future, told Recode.
Jon Callas, director of technology projects at the Electronic Frontier Foundation, called these claims "pseudoscience" and "phrenology" in comments to ZDNet. He argues that AI is not ready to do what Lemonade Insurance is asking of it and notes that other companies have invested millions in similar systems that end up showing bias by gender or skin color.
The insurer has apologized on social media and denies that its AI's assessments are applied automatically. "Our systems do not evaluate claims based on history, gender, appearance, skin tone, disability, or any physical characteristics (nor do we evaluate any of these by proxy)," the company said.
Still, a statement made a year ago by Shai Wininger, Lemonade's co-founder and COO, now resonates louder than ever: "At Lemonade, one million customers translates into billions of data points, feeding our AI at ever-increasing speed."
Artificial intelligence experts emphasize that this technology is not yet ready to detect the emotional or mental state of a person who is going through a traumatic moment such as a house fire or a car accident. Navin Thadani, CEO of digital accessibility company Evinced, puts it this way: "AI is meant to do things better, faster, more efficiently, with fewer errors than human interaction, but what it lacks is judgment, human understanding, and consideration of factors beyond what it is programmed to evaluate."