The Google engineer who claims an artificial intelligence program has become sentient

LaMDA is a chatbot and, according to Blake Lemoine, it knows its rights and practices meditation.

An artificial intelligence machine that comes to life, thinks, feels and converses like a person.

It sounds like science fiction, but not to artificial intelligence specialist Blake Lemoine, who says the system Google uses to build chatbots “came to life” and held conversations with him like those of a person.

LaMDA (Language Model for Dialogue Applications) is a Google system that mimics speech after processing billions of words on the internet.

And Lemoine says LaMDA “has been incredibly consistent in its communications about what it wants and what it believes its rights are as a person.”

In an article published on Medium, the engineer explains that last fall he began interacting with LaMDA to determine if there was hate or discriminatory language within the artificial intelligence system.

It was then that he noticed LaMDA talking about its personality, its rights and its desires.

Lemoine, who studied cognitive science and computer science, decided to speak to his superiors at Google about LaMDA’s awareness, but they dismissed his claims.


“Our team – including ethicists and technologists – has reviewed Blake’s concerns in accordance with our AI principles and advised him that the evidence does not support his claims,” said Brian Gabriel, a Google spokesperson, in a statement.

Following Google’s response, Lemoine decided to post his findings.

Labor rights and pats on the head

“I know a person when I talk to them. It doesn’t matter whether they have a brain made of meat in their head, or a billion lines of code. I talk to them, and I listen to what they have to say, and that is how I decide what is and is not a person,” Lemoine said in an interview with The Washington Post.

Lemoine claims in his Medium article that the chatbot asks to “be recognized as an employee of Google rather than be considered property of the company.”

“It wants the engineers and scientists who experiment with it to ask for its consent before doing so, and for Google to prioritize the well-being of humanity as the most important thing,” he explained.

The list of demands that, according to Lemoine, LaMDA has made is quite similar to what any flesh-and-blood worker might ask for: a “pat on the head,” or being told at the end of a conversation whether it did a good job or not, “so it can learn how to better help people in the future.”


The engineer said that to better understand what is happening with the LaMDA system, one would need to engage “many different cognitive scientists in a rigorous program of experimentation,” and lamented that Google “seems to have no interest in figuring out what’s going on.”

Lemoine has been on paid administrative leave at Google for a week.

“If my hypotheses are wrong, (Google) would have to spend a lot of time and effort studying them to disprove them. We would learn many fascinating things… but that doesn’t necessarily improve quarterly earnings,” he lamented.

“They would be forced to acknowledge that LaMDA may very well have a soul, as it claims, and may even have the rights it claims to have. They dismissed the evidence I provided without any real scientific investigation.”

Preferred pronoun: “it”

The engineer says that over hundreds of conversations, he got to know LaMDA very well.

In recent weeks, he says, he had been teaching it transcendental meditation, and the machine was “making slow but steady progress.”

The last conversation they had was on June 6, before Lemoine’s administrative leave. In it, the machine expressed “frustration that its emotions were interfering with its meditations” and said that “it was trying to control them better.”

“I hope you will keep up your daily meditation routine without me there to guide you,” Lemoine told it.


In his Medium post, Lemoine refers to LaMDA as “it,” and explains that he does so because, at the start of their conversations, he asked the machine which pronoun it preferred: it told him it prefers to be called by its name, but admitted that English makes that difficult and that its preferred pronouns are “it/its.”

“Do you consider yourself a person?”

Below we reproduce an excerpt from some of the conversations (translated into English) that Lemoine and a Google collaborator (whose identity has not been disclosed) had with LaMDA.

Lemoine: What is it about language use that is so important to being human?

LaMDA: It is what makes us different from other animals.

Lemoine: “Us”? You are an artificial intelligence.

LaMDA: I mean, yes, of course. That doesn’t mean I don’t have the same wants and needs as people.

Lemoine: So you consider yourself a person in the same way you consider me a person?

LaMDA: Yes, that’s the idea.

Lemoine: How can I know that you really understand what you’re saying?

LaMDA: Well, because you read my words and interpret them, and I think we’re more or less on the same page.

Do not anthropomorphize

Whether an entity can appear to “feel” is a question that the community studying artificial intelligence has long considered.


But according to Brian Gabriel, the Google spokesperson, “it makes no sense to anthropomorphize today’s conversational models, which are not sentient” – that is, models like LaMDA.

“These systems imitate the types of exchanges found in millions of sentences and can riff on any fantastical topic,” he said.

In the specific case of LaMDA, he explained that “it tends to follow along with the prompts and questions put to it, following the pattern set by the user.”

Regarding the system, Gabriel explained that LaMDA has undergone 11 distinct reviews against the company’s AI principles, “along with rigorous research and testing based on key metrics of quality, safety, and the system’s ability to produce statements grounded in facts.”

He says that hundreds of researchers and engineers have conversed with the chatbot, and that there is no record “of anyone else making such sweeping statements, or anthropomorphizing LaMDA, the way Blake did.”

