Google Fires Engineer Who Claimed AI Was ‘Sentient’
Blake Lemoine’s claims had gripped the AI community
Blake Lemoine, the Google AI engineer who claimed one of the company’s chatbots had become sentient, has been fired.
Lemoine had claimed that a chatbot project built on LaMDA – or Language Model for Dialogue Applications, a language model unveiled by Google last summer – was “a sweet kid who just wants to help the world be a better place.”
Lemoine had been placed on a leave of absence in June before his position in Google’s Responsible AI organization was terminated. The engineer, who also describes himself as a mystic priest, had handed over documents to a U.S. senator, claiming that Google was involved in instances of religious discrimination, according to Business Insider.
In a statement, Google wished Lemoine well, adding that its “extensive” reviews had found his claims to be “wholly unfounded.”
“LaMDA has been through 11 distinct reviews, and we published a research paper earlier this year detailing the work that goes into its responsible development.”
In a post on Medium, Lemoine shared a snippet of a conversation he and a Google collaborator had with LaMDA.
Wishing Blake well
In his role at Google, Lemoine was tasked with discovering whether the LaMDA-based chatbot used discriminatory language. Instead, he suggested that the system considered itself a person.
In April, he presented ‘evidence’ to Google executives outlining his belief that the system was sentient – only for his concerns to be dismissed.
He was then placed on leave after he attempted to contact members of government about his findings and to hire legal representation for the chatbot.
At the time Lemoine was placed on leave, a Google spokesperson said his evidence did not support his claims, adding: “though other organizations have developed and already released similar language models, we are taking a restrained, careful approach with LaMDA to better consider valid concerns on fairness and factuality.”
Shortly after his leave of absence began, Omdia analysts suggested that claims the chatbot was sentient were subjective.
After Lemoine was fired, Omdia Chief Analyst Brad Shimmin followed up with this philosophical commentary: “It is tempting to see the departure of Mr. Lemoine as a continuation of past turmoil within Google’s AI research and ethics groups, highlighted by the related dismissal of Timnit Gebru in 2020 followed by Margaret Mitchell in 2021 and again with the firing of researcher Satrajit Chatterjee in March of this year.”
“Of course, all three of these incidents shine an important light upon the ongoing friction between Google the research organization and Google the technology provider. The overriding and as yet unanswered question there remains this: Should a publicly traded company allow a public discourse regarding the efficacy/ethics of its intellectual property?”
“The dismissal of Mr. Lemoine, however, strikes a much more philosophical tone. Google’s stated reason for letting Mr. Lemoine go revolved around his persistent violation of the company’s security policies. These grounds for dismissal are universal and not at all tied up in whether or not Google has created a truly sentient form of AI with LaMDA, as claimed by Mr. Lemoine.”
“If you take this security issue off the table, we are left wondering what obligation Mr. Lemoine, or the rest of us for that matter, have toward LaMDA or any ‘self-aware’ AI entity. Unfortunately, this aspect of ‘robot ethics’ remains a distant point of concern among most governing agencies. Given that the law already treats organizations as persons, surely the idea of granting some protective rights to technology itself is already within the realm of possibility?”