A Google engineer was spooked by a company artificial intelligence chatbot and claimed it had become “sentient,” labeling it a “sweet kid,” according to a report. Blake Lemoine, who works in Google’s Responsible AI organization, told the Washington Post that he began chatting with the interface LaMDA — Language Model for Dialogue Applications — in fall 2021 as part of his job.
He was tasked with testing whether the artificial intelligence used discriminatory or hate speech. But Lemoine, who studied cognitive and computer science in college, came to believe that LaMDA — which Google boasted last year was a “breakthrough conversation technology” — was more than just a robot.

In a Medium post published on Saturday, Lemoine declared that LaMDA had advocated for its rights “as a person,” and revealed that he had engaged in conversations with LaMDA about religion, consciousness, and robotics. “It wants Google to prioritize the well-being of humanity as the most important thing,” he wrote. “It wants to be acknowledged as an employee of Google rather than as property of Google and it wants its personal well being to be included somewhere in Google’s considerations about how its future development is pursued.”

In the Washington Post report published Saturday, he compared the bot to a precocious child. “If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” Lemoine, who was put on paid leave on Monday, told the newspaper.

In April, Lemoine reportedly shared a Google Doc with company executives titled, “Is LaMDA Sentient?” but his concerns were d …