LaMDA, Google's AI that can hold conversations and apparently confused its own creators

(CNN Business) — Companies constantly tout the ever-improving capabilities of their artificial intelligence (AI). But Google quickly dismissed claims that one of its programs had advanced so far that it had become sentient.

According to a revealing article published Saturday by The Washington Post, a Google engineer said that after hundreds of interactions with a cutting-edge AI system called LaMDA, he believed the program had achieved a level of consciousness.

In interviews and public statements, many in the AI community dismissed the engineer's claims, while some noted that his account highlights how the technology can lead people to attribute human qualities to it. But the belief that Google's AI could be sentient highlights both our fears and our expectations about what this technology can do.


LaMDA, which stands for "Language Model for Dialogue Applications," is one of several large-scale AI systems that has been trained on large swaths of text from the internet and can respond to written prompts.

These systems are essentially tasked with finding patterns and predicting which word or words should come next. They have become increasingly good at answering questions and writing in ways that can seem convincingly human, and Google itself presented LaMDA last May in a blog post as a system that can "engage in a free-flowing way about a seemingly endless number of topics."

But the results can also be goofy, bizarre, disturbing, and prone to rambling.


The engineer who made the claims, Blake Lemoine, reportedly told The Washington Post that he shared evidence with Google that LaMDA was sentient, but the company did not agree.

In a statement, Google said Monday that its team, which includes ethicists and technologists, "reviewed Blake's concerns per our AI Principles and have informed him that the evidence does not support his claims."

On June 6, Lemoine posted on Medium that Google had placed him on paid administrative leave "as part of an investigation into the AI ethics issues he was raising within the company" and that he might be fired "soon." He cited the experience of Margaret Mitchell, who led Google's Ethical AI team until the company fired her in early 2021 after she spoke out about the late-2020 departure of then co-lead Timnit Gebru.

Gebru was ousted after internal clashes, including one over a research paper that the company's AI leadership told her to withdraw from presentation at a conference, or to remove her name from.


A Google spokesperson confirmed that Lemoine remains on administrative leave. According to The Washington Post, he was suspended for violating the company's confidentiality policy.

Lemoine was unavailable for comment on Monday.

The continued emergence of powerful computing programs trained on massive troves of data has also raised concerns about the ethics governing the development and use of this technology. And sometimes progress is viewed through the lens of what may be possible, rather than what is currently possible.

Responses from members of the AI community to Lemoine's experience ricocheted around social media over the weekend and generally arrived at the same conclusion: Google's AI is nowhere close to consciousness. Abeba Birhane, a senior fellow in Trustworthy AI at Mozilla, tweeted on Sunday: "We have entered a new era of 'this neural network is conscious,' and this time it's going to drain so much energy to refute."

Gary Marcus, founder and CEO of Geometric Intelligence, which was sold to Uber, and author of books including "Rebooting AI: Building Artificial Intelligence We Can Trust," called the idea of a sentient LaMDA "nonsense on stilts" in a tweet. He quickly followed with a blog post pointing out that all such AI systems do is match patterns by pulling from enormous databases of language.

Blake Lemoine poses for a portrait at Golden Gate Park in San Francisco, Calif., Thursday, June 9, 2022.

In an interview Monday with CNN Business, Marcus said the best way to think about systems such as LaMDA is as a "glorified version" of the autocomplete software you may use to predict the next word in a text message. If you type "I'm really hungry so I want to go to a," it might suggest "restaurant" as the next word. But that is a prediction made using statistics.

"Nobody should think autocomplete, even on steroids, is conscious," he said.
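To make Marcus's analogy concrete, here is a minimal sketch of next-word prediction from raw word-pair counts. It is a toy illustration only, not how LaMDA actually works: real systems use enormous neural networks rather than simple bigram tables, but the underlying task, predicting a likely next word from statistics over text, is the same in spirit.

```python
from collections import Counter, defaultdict

# Toy "autocomplete": count, for every word in a tiny corpus,
# which words follow it and how often.
corpus = (
    "i am really hungry so i want to go to a restaurant "
    "i want to go to a movie i want to eat at a restaurant"
).split()

successors = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    successors[current_word][next_word] += 1

def predict_next(word):
    """Return the word seen most often after `word`, or None if unseen."""
    counts = successors.get(word)
    return counts.most_common(1)[0][0] if counts else None

# 'restaurant' follows 'a' twice in the corpus, 'movie' only once.
print(predict_next("a"))  # -> 'restaurant'
```

Scale the corpus up to a large slice of the internet and replace the counting with a neural network, and you get the "steroids" Marcus describes: far more fluent output, but still statistical prediction.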

In an interview, Gebru, who is the founder and CEO of the Distributed AI Research Institute, or DAIR, said Lemoine is a victim of numerous companies claiming that sentient AI or artificial general intelligence (an idea that refers to AI capable of performing human-like tasks and interacting with us in meaningful ways) is not far off.


For example, she noted, Ilya Sutskever, co-founder and chief scientist of OpenAI, tweeted in February that "it may be that today's large neural networks are slightly conscious."

For his part, Google Research vice president and fellow Blaise Agüera y Arcas wrote in a piece for The Economist that when he began using LaMDA last year, he "increasingly felt like I was talking to something intelligent." That piece now includes an editor's note pointing out that Lemoine has since "reportedly been placed on leave after claiming in an interview with The Washington Post that LaMDA, Google's chatbot, had become 'sentient.'"

"What's happening is there's a race to use more data, more compute, to say you've created this general thing that knows everything, answers all your questions or whatever, and that's the drum you've been beating," Gebru said. "So how are you surprised when this person takes it to the extreme?"


In its statement, Google noted that LaMDA has undergone 11 "distinct AI Principles reviews" as well as "rigorous research and testing" related to quality, safety, and the ability to make statements grounded in facts. "Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn't make sense to do so by anthropomorphizing today's conversational models, which are not sentient," the company said.

“Hundreds of researchers and engineers have spoken with LaMDA and we don’t know of anyone else making sweeping claims, or anthropomorphizing LaMDA, as Blake did,” Google said.
