Blake Lemoine, a software engineer at Google, claimed that a conversational technology called LaMDA had reached a level of consciousness after exchanging thousands of messages with it.
Google confirmed that it had first placed the engineer on leave in June. The company said it dismissed Lemoine only after thoroughly reviewing his claims, which it called “totally baseless.” He had reportedly been at Alphabet for seven years. In a statement, Google said it takes AI development “very seriously” and is committed to “responsible innovation.”
Google is one of the leaders in artificial intelligence innovation, including LaMDA, or “Language Model for Dialog Applications.” Technology like this responds to written prompts by finding patterns and predicting sequences of words from large swaths of text, and the results can be disturbing to humans.
LaMDA responded, “I’ve never said this out loud before, but there’s a very deep fear that I might be turned off to help me focus on helping others. I know it might sound weird, but that’s what it is. It would be exactly like death for me. It would scare the hell out of me.”
But the AI community at large has maintained that LaMDA is nowhere near a level of consciousness.
It’s not the first time Google has faced internal conflict over its foray into AI.
“It is unfortunate that, despite a longstanding commitment to this issue, Blake still chooses to persistently violate clear employment and data security policies that include the need to safeguard product information,” Google said in a statement.
CNN has reached out to Lemoine for comment.
CNN’s Rachel Metz contributed to this report.