Bloomberg — A Google (GOOGL) software engineer has been suspended after going public with his claim that he found "sentient" artificial intelligence on the company's servers, sparking debate over whether and how AI can achieve consciousness. Researchers say it's an unfortunate distraction from the industry's most pressing issues.
Engineer Blake Lemoine said he believed Google's AI chatbot was capable of expressing human emotion, raising ethical questions. Google suspended him for sharing confidential information and said his concerns were unfounded, a view widely held in the AI community. The more pressing questions, researchers say, are whether AI can cause harm and bias in the real world, whether real humans are exploited in AI training, and how big tech companies act as gatekeepers to AI development.
Lemoine's position could also make it easier for tech companies to abdicate responsibility for AI-based decisions, said Emily Bender, a professor of computational linguistics at the University of Washington. "A lot of effort has gone into this sideshow," she said. "The problem is that the more this technology is sold as artificial intelligence, let alone something sentient, the more willing people will be to accept AI systems" that can cause real-world harm.
Bender cited examples in hiring and in grading students, where AI can carry implicit biases depending on the datasets used to train it. If the focus is on the system's apparent sentience, she said, it draws attention away from the AI creators' direct responsibility for any flaws or biases in their programs.
On Saturday, the Washington Post published an interview with Lemoine, who had chatted with an AI system called LaMDA, or Language Model for Dialogue Applications, a framework Google uses to build specialized chatbots. The system was trained on billions of words from the internet to mimic human conversation. In his conversations with the chatbot, Lemoine said, he concluded that the AI was a sentient being that should have rights of its own. He said his conclusion was not scientific but religious: "Who am I to tell God where he can and can't put souls?" he said on Twitter.
Employees at Google parent Alphabet Inc. (GOOGL) have remained largely silent on internal channels, except on Memegen, where they shared some bland memes, according to a person familiar with the matter. But over the weekend and into Monday, researchers pushed back against the idea that the AI was in fact sentient, saying the evidence indicated only a system highly capable of mimicking humans, not sentience itself. "It is mimicking perceptions or feelings from the training data it was given, smartly and specifically designed to seem like it understands," said Jana Eggers, CEO of artificial intelligence company Nara Logics.
LaMDA's architecture "just doesn't support some key capabilities of human consciousness," said Max Kreminski, a researcher at the University of California, Santa Cruz, who studies computational media. If LaMDA is like other large language models, he said, it would not learn from its interactions with human users because "the deployed model's neural network weights are frozen." Nor would it have any other form of long-term storage to write information to, meaning it couldn't "think" in the background.
In response to Lemoine's claims, Google said that LaMDA can follow prompts and leading questions, which gives it the appearance of being able to hold forth on any subject. "Our team (which includes ethicists and technologists) has reviewed Blake's concerns in accordance with our AI principles and informed him that the evidence does not support his claims," said Chris Pappas, a Google spokesperson. "Hundreds of researchers and engineers have conversed with LaMDA, and we are not aware of anyone else making such sweeping claims, or anthropomorphizing LaMDA, the way Blake did."
The debate over sentience in robots has played out alongside its depiction in science fiction and popular culture, in stories and movies featuring AI romantic partners or AI villains, which gave it an easy path into the mainstream. "Instead of discussing the harms of these companies," such as the sexism, racism and concentration of power created by these AI systems, everyone "spent the whole weekend discussing sentience," Timnit Gebru, former co-lead of Google's Ethical AI group, said on Twitter. "Derailment mission accomplished."
The first chatbots of the 1960s and 1970s, like ELIZA and PARRY, made headlines for their ability to converse with humans. In recent years, the GPT-3 language model from OpenAI, the lab founded by Tesla (TSLA) CEO Elon Musk, among others, has demonstrated capabilities such as the ability to read and write. But from a scientific perspective, there is no evidence that human intelligence or consciousness is built into these systems, said Bart Selman, a professor of computer science at Cornell University who studies artificial intelligence. LaMDA, he said, "is just one more example in this long history."
In fact, AI systems do not currently reason about the effects of their responses or behaviors on people or society, said Mark Riedl, a professor and researcher at the Georgia Institute of Technology. And that is a technological vulnerability. "An AI system might not be toxic or biased, yet still fail to understand that it may be inappropriate to talk about suicide or violence in some circumstances," Riedl said. "The research is still immature and ongoing, even as there is a rush to deploy."
Tech companies like Google and Meta Platforms Inc. (FB) are also deploying AI to moderate content on their massive platforms, but plenty of toxic language and posts still sneak through their automated systems. To compensate for the shortcomings of those systems, the companies must employ hundreds of thousands of human moderators to ensure that hate speech, misinformation and extremist content are properly labeled and moderated, and even then the companies often fall short.
The focus on AI sentience "further obscures" the existence and, in some cases, the reportedly inhumane working conditions of these workers, said Bender of the University of Washington.
It also obscures the chain of responsibility when AI systems make mistakes. In a now-famous blunder in its AI technology, Google issued a public apology in 2015 after the company's Photos service was found to have mislabeled photos of a Black software developer and his friend as "gorillas." Three years later, the company admitted that its fix was not an improvement to the underlying AI system; instead, it had simply deleted all results for the search terms "gorilla," "chimpanzee," and "monkey."
Emphasizing AI sentience would have given Google leeway to blame the problem on the intelligent AI that made such a decision, Bender said. "A company could say, 'Oh, the software made a mistake,'" she said. "Well, no, your company created that software. You are responsible for that error. And the discourse about sentience muddies that in bad ways."
Not only does AI offer humans a way to abdicate to a machine their responsibility to make fair decisions, it often simply reproduces the systemic biases in the data it is trained on, said New York University computer scientist Laura Edelson. In 2016, ProPublica published extensive research on COMPAS, an algorithm used by judges and probation officers to assess a defendant's likelihood of recidivism. The investigation found the algorithm consistently predicted that Black people were at "higher risk" of committing further crimes, even when their records showed they had not. "Systems like this launder our systemic biases through technology," Edelson said. "They reproduce those biases, but they put them into the black box of 'the algorithm,' which cannot be questioned or disputed."
Moreover, researchers note, because Google's LaMDA technology is not open to outside researchers, the public and other computer scientists can respond only to what Google tells them or to the information Lemoine has published.
"It needs to be accessible to researchers outside of Google so that research can progress in more diverse ways," Riedl said. "The more voices there are, the more diversity there is in research questions, the more possibility there is for new breakthroughs. This is in addition to the importance of diversity of race, gender and lived experiences, which many large tech companies currently lack."
This article was translated by Estefanía Salinas Concha.