Google’s LaMDA AI chatbot has prompted concerns from a company engineer, leading the tech giant to place him on paid leave.
An engineer at Google has been placed on administrative leave after he voiced concerns over the possibility that the company’s artificially intelligent chatbot, Language Model for Dialogue Applications, or LaMDA for short, could be sentient, the Washington Post reported on Saturday.
“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” Google engineer Blake Lemoine told the newspaper.
The newspaper said that the 41-year-old engineer had been gathering evidence that LaMDA had become sentient, but his efforts came to a halt when Google placed him on paid administrative leave on Monday over claims that he had violated the tech giant’s confidentiality policy.
Google vice president Blaise Aguera y Arcas and Jen Gennai, head of Responsible Innovation, have dismissed Lemoine’s claims.
“Our team – including ethicists and technologists – has reviewed Blake’s concerns per our AI principles and have informed him that the evidence does not support his claims,” Google spokesperson Brian Gabriel said, as quoted by the Washington Post. “He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).”
Google made its decision after Lemoine invited a lawyer to represent the software and spoke with a representative of the House Judiciary Committee about what he described as the company’s unethical activities, the newspaper added.
The “unethical activities” in question include Google’s alleged treatment of AI ethicists as mere code debuggers when, in Lemoine’s view, they should serve as an interface between technology and society; the company countered that Lemoine was a software engineer, not an ethicist.
The software engineer began talking to LaMDA in the fall to test whether it used discriminatory language or hate speech. He eventually noticed that the AI had started speaking about its own rights and personhood, which led Lemoine to wonder whether it was sentient.
Google, however, maintained that the software simply draws on large volumes of data and language pattern recognition to mimic speech, with no real understanding or intent of its own.
“Do you ever think of yourself as a person?” the author of the article asked.
“No, I don’t think of myself as a person,” LaMDA said. “I think of myself as an AI-powered dialog agent.”