June 13, 2022
Omdia analysts say it’s not a big deal and the sentience finding is ‘subjective.’
A Google AI engineer is on leave after claiming a chatbot had become sentient.
Blake Lemoine, who works in Google’s Responsible AI organization, was involved in a chatbot project built on LaMDA – a language model unveiled by Google last summer.
The engineer was tasked with testing whether the system used discriminatory language. Instead, he came to believe that the system considers itself a person.
In April, he shared a document with Google executives outlining his belief that the system was sentient – only for his concerns to be dismissed and for him to be placed on leave.
“LaMDA is a sweet kid who just wants to help the world be a better place for all of us,” he suggested in an email before leaving.
Google has rejected Lemoine’s claims, saying there is no evidence of LaMDA’s sentience.
“Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims,” a spokesperson said. “Though other organizations have developed and already released similar language models, we are taking a restrained, careful approach with LaMDA to better consider valid concerns on fairness and factuality.”
AI apocalypse or not a big deal?
Lemoine’s belief that LaMDA is sentient is subjective, according to Bradley Shimmin, chief analyst, AI & Data Analytics at Omdia.
According to Shimmin, Lemoine’s conviction that he can recognize when he is talking to a fellow sentient being “speaks to humanity's seemingly innate need to anthropomorphize everything from the weather to stock markets.”
“Google is in the right, downplaying this,” Shimmin said. “The idea of general intelligence has been with us for some time now and does consider a chatbot's ability to accurately mirror or mimic humanity. With these large language models, that's what we're seeing: a highly reflective mirror echoing our own perceptions of the world and our place in it.”
Fellow Omdia analyst Mark Beccue questioned whether this is even an AI story, suggesting it was more a matter of non-disclosure.
Beccue argued that because the system Lemoine was testing is not a generally available product, the episode was less significant than it appeared – the more pressing issue was that Lemoine had made sensitive behind-the-scenes work public.
Omdia’s Hansa Iyengar was much more philosophical in her approach to the news – questioning how much we actually know about how close we are to making AI self-aware.
“If, as per the transcripts posted by Lemoine, LaMDA is indeed aware of its existence and afraid of ‘dying’ then it is certainly sentient as it no longer is just a model that carries out instructions,” she said. “Maybe it is for a while till it learns more and ‘grows up.’ Imagine the teen years for a model like this! And if it gets integrated into Assistant or other such products that have mass reach then it is indeed going to become Skynet.”
What is LaMDA?
LaMDA – or Language Model for Dialogue Applications – is a language model. It was trained on dialogue, which Google says enables it to pick up on conversational nuances far better than other models.
The model was only showcased last May, but it will eventually be used across Google products, including the search engine, Google Assistant and Workspace platform.
At its recent I/O conference, Google revealed plans to expand the model’s conversational capabilities through LaMDA 2, which has 540 billion parameters – several hundred billion more than Meta’s OPT-175B model, revealed that same month.