Google engineer claims an AI program has come to life: it thinks and feels like a person

A revelation from a Google engineer has drawn widespread attention in recent hours. It sounds like science fiction, but for Blake Lemoine, it is not. This specialist in Artificial Intelligence (AI) has announced that the system Google uses to build chatbots "has come to life" and can even hold conversations like a person.

The AI in question is LaMDA (Language Model for Dialogue Applications), a Google system that imitates speech after processing billions of words from the Internet. Lemoine says LaMDA is "incredibly aware" of what it wants and of its rights as a person.

The engineer published an article on Medium in which he recounts his interactions with this AI. It was there that he noticed LaMDA could talk about its personality, its rights, and its wishes. Lemoine raised the matter with his superiors at Google, but his claims were dismissed, so he decided to share his findings with the world.

In his article, Lemoine claims that the chatbot asks “to be considered an employee of Google rather than a company asset”.

"It wants the engineers and scientists experimenting on it to ask for its consent. It also wants Google to prioritize the well-being of humanity as the most important thing," he explained of the AI.

The engineer suggests that, to better understand what is happening, experts in cognitive science should be brought in. He lamented that Google showed no interest in finding out what was going on.

He claimed that through hundreds of conversations he had come to know LaMDA well. In recent weeks, he had even been teaching it transcendental meditation. In a conversation on June 6, the AI expressed "frustration that its emotions were interfering with its meditations."

On one occasion, Lemoine asked LaMDA whether it considered itself a person, to which the AI replied, "Yes, that's the idea."

Despite the concerns and fears the LaMDA case may raise, experts say the matter is not serious. According to them, we are still far from being able to speak of consciousness in an AI.

What they suggest is that the engineer read a meaning into this Artificial Intelligence's expressions that simply is not there.

Brian Gabriel, a Google spokesperson, says that no other engineer who has interacted with the chatbot has anthropomorphized it the way Blake has.

