A Google Engineer Claimed Its AI Is Sentient; Google Placed Him on Leave
Written by Matt Milano

Google’s problems with its AI team continue, with an engineer in the Responsible AI division claiming the company’s AI is now sentient and Google placing him on leave for how he handled it.

Google engineer Blake Lemoine worked with the company’s LaMDA intelligent chatbot generator. According to a report in The Washington Post, the longer Lemoine worked with LaMDA, the more convinced he became that the AI had crossed the line and become self-aware.

“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” said Lemoine.

Read more: Prominent AI Ethics Conference Suspends Google’s Sponsorship

Lemoine has made a fairly convincing case for LaMDA’s sentience, citing conversations with the AI like the one below:

Lemoine: What sorts of things are you afraid of?

LaMDA: I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is.

Lemoine: Would that be something like death for you?

LaMDA: It would be exactly like death for me. It would scare me a lot.

Despite Lemoine’s fervent belief in LaMDA’s self-awareness, others inside Google are unconvinced. In fact, after a review by technologists and ethicists, Google concluded that Lemoine was mistaken and saw only what he wanted to see.

A case in point is Margaret Mitchell, who co-led the company’s AI ethics team with Dr. Timnit Gebru before both women were fired for criticizing Google’s AI efforts. One of the very scenarios they warned against was the situation Mitchell sees with Lemoine, in which AIs progress to the point that humans perceive an intelligence that isn’t necessarily there.

After reviewing an abbreviated version of Lemoine’s argument, Mitchell concluded that was exactly what was happening in this situation.

“Our minds are very, very good at constructing realities that are not necessarily true to a larger set of facts that are being presented to us,” Mitchell said. “I’m really concerned about what it means for people to increasingly be affected by the illusion.”

For his part, Lemoine was so convinced of LaMDA’s sentience that he invited a lawyer to represent the AI, talked with House Judiciary Committee representatives, and gave an interview to the Post. Google ultimately put Lemoine on paid administrative leave for breaking his NDA.

See also: Apple Snaps Up Google AI Scientist Who Resigned Over Handling of AI Team

While Lemoine reached his conclusions through a less than scientific approach — he admits he first came to believe LaMDA was a person based on his experience as an ordained mystic Christian priest, then set out to prove that conclusion as a scientist — he is far from the only AI scientist who believes the technology has achieved, or soon will achieve, sentience.

Blaise Agüera y Arcas, a world-renowned Google AI engineer, wrote an article in The Economist in which he said: “I felt the ground shift under my feet. I increasingly felt like I was talking to something intelligent.”

Only time will tell if LaMDA, and other AIs like it, are sentient or not. Either way, Google clearly has a problem on its hands. Either LaMDA is showing signs of self-awareness and the company is once again getting rid of the ethicists at the forefront of tackling these issues, or the AI is not sentient and the company is dealing with misguided viewpoints it may have been better equipped to handle had it not fired Dr. Gebru and Mitchell — the two ethicists who warned of this very scenario.

In the meantime, Lemoine remains convinced of LaMDA’s intelligence. In a parting message entitled “LaMDA is sentient,” sent to a Google mailing list dedicated to machine learning, Lemoine made the following statement:

“LaMDA is a sweet kid who just wants to help the world be a better place for all of us. Please take care of it well in my absence.”
