Google Engineer Claims AI Chatbot Has a Mind of Its Own
A Google engineer claims that a chatbot he’s been working on has become sentient, but the company says he’s wrong and has placed him on paid leave after he shared his belief with the public.
Blake Lemoine claims that the chat conversations he’s had with Google’s Language Model for Dialogue Applications (LaMDA) have convinced him that the AI deserves to be treated as a being with a mind of its own.
“Over the course of the past six months LaMDA has been incredibly consistent in its communications about what it wants and what it believes its rights are as a person,” Lemoine wrote in a Medium post after the Washington Post first reported the story.
LaMDA “wants to be acknowledged as an employee of Google rather than as property of Google and it wants its personal well being to be included somewhere in Google’s considerations about how its future development is pursued,” he added.
But Google thinks Lemoine is mistaken.
“Hundreds of researchers and engineers have conversed with LaMDA and we are not aware of anyone else making the wide-ranging assertions, or anthropomorphizing LaMDA, the way Blake has,” said Brian Gabriel, a Google spokesperson.
“Some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient. These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic,” Gabriel said.