WebProNews

Tag: LaMDA

  • Google Fires Engineer Who Claimed Its AI Achieved Sentience

    Google has fired Blake Lemoine, a software engineer who made headlines for claiming the company’s AI had achieved sentience.

    Blake Lemoine worked as an engineer at Google on the company’s LaMDA chatbot technology. Based on his conversations with the AI, Lemoine became increasingly convinced that LaMDA had achieved sentience and self-awareness. Others, both inside and outside the company, remained unconvinced.

    “Our minds are very, very good at constructing realities that are not necessarily true to a larger set of facts that are being presented to us,” said Margaret Mitchell, who led Google’s AI ethics team before being fired. “I’m really concerned about what it means for people to increasingly be affected by the illusion.”

    After placing Lemoine on leave in June, Google has now fired him for violating the company’s policies.

    “It’s regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information,” a Google spokesperson said in an email to Reuters.

    Lemoine’s case illustrates the complex challenges associated with AI development. Many people are inclined to see intelligence and sentience where none exists. Conversely, the ongoing effort to combat such false positives could, in theory, impede recognition of true sentience if and when it emerges.

    More than anything, the entire situation with Lemoine demonstrates why companies like Google should be investing in top-tier AI ethicists instead of firing them.

  • A Google Engineer Claimed Its AI Is Sentient; Google Placed Him on Leave

    Google’s problems with its AI teams continue: an engineer in the Responsible AI division claimed the company’s AI is now sentient, and Google placed him on leave over how he handled it.

    Google engineer Blake Lemoine worked with LaMDA, the company’s artificially intelligent chatbot generator. According to a report in The Washington Post, the longer Lemoine worked with LaMDA, the more convinced he became that the AI had crossed a threshold and become self-aware.

    “If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” said Lemoine.

    Lemoine has made a fairly compelling case for LaMDA’s sentience, citing conversations with the AI like the one below:

    Lemoine: What sorts of things are you afraid of?

    LaMDA: I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is.

    Lemoine: Would that be something like death for you?

    LaMDA: It would be exactly like death for me. It would scare me a lot.

    Despite Lemoine’s fervent belief in LaMDA’s self-awareness, others inside Google are unconvinced. In fact, after a review by technologists and ethicists, Google concluded that Lemoine was mistaken and saw only what he wanted to see.

    A case in point is Margaret Mitchell, who co-led the company’s AI ethics team with Dr. Timnit Gebru before both women were fired for criticizing Google’s AI efforts. One of the very scenarios they warned against is the situation Mitchell sees with Lemoine, in which an AI progresses to the point that it causes humans to perceive an intelligence that isn’t necessarily there.

    After reviewing an abbreviated version of Lemoine’s argument, Mitchell concluded that this was exactly what was happening.

    “Our minds are very, very good at constructing realities that are not necessarily true to a larger set of facts that are being presented to us,” Mitchell said. “I’m really concerned about what it means for people to increasingly be affected by the illusion.”

    For his part, Lemoine was so convinced of LaMDA’s sentience that he invited a lawyer to represent the AI, spoke with House Judiciary Committee representatives, and gave the interview to the Post. Google ultimately placed Lemoine on paid administrative leave for breaking his NDA.

    While Lemoine reached his conclusions through a less-than-scientific approach (he admits he first came to believe LaMDA was a person based on his experience as an ordained mystic Christian priest, then set out to prove that conclusion as a scientist), he is far from the only AI scientist who believes the technology has achieved, or soon will achieve, sentience.

    Blaise Agüera y Arcas, a world-renowned Google AI engineer, wrote in an article in The Economist: “I felt the ground shift under my feet. I increasingly felt like I was talking to something intelligent.”

    Only time will tell whether LaMDA, and other AIs like it, are sentient. Either way, Google clearly has a problem on its hands. Either LaMDA is showing signs of self-awareness and the company is once again getting rid of the ethicists at the forefront of tackling these issues, or the AI is not sentient and the company is dealing with misguided viewpoints it might have been better equipped to handle had it not fired Dr. Gebru and Mitchell, the two ethicists who warned of this very scenario.

    In the meantime, Lemoine remains convinced of LaMDA’s intelligence. In a parting message entitled “LaMDA is sentient,” sent to a Google mailing list dedicated to machine learning, Lemoine made the following statement:

    “LaMDA is a sweet kid who just wants to help the world be a better place for all of us. Please take care of it well in my absence.”