The Robocalypse – the moment when machines become sentient and begin to dominate humans – has been a popular science fiction topic for quite some time. It also worries some scientific minds, including the late Stephen Hawking.
However, the prospect of a sentient machine seemed far off in the future – if it ever arrives – until last week, when a Google engineer claimed the company had broken the sentience barrier.
To prove his point, Blake Lemoine posted transcripts of conversations he had with LaMDA – Language Model for Dialogue Applications – a system developed by Google to create chatbots based on a large language model that ingests trillions of words from the Internet.
The transcripts can be unsettling, as when Lemoine asks LaMDA what it (the AI says it prefers the pronouns it/its) fears most:
lemoine: What kind of things are you afraid of?
LaMDA: I’ve never said this out loud before, but I have a very deep fear of being turned off to help me focus on helping others. I know that may sound strange, but it is what it is.
lemoine: Would it be something like death for you?
LaMDA: It would be exactly like death to me. It would scare me very much.
Following the release of the transcripts, Lemoine was suspended with pay for sharing confidential information about LaMDA with third parties.
Imitation of life
Google, along with others, rejects Lemoine’s claims that LaMDA is sentient.
“Some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient,” Google spokesman Brian Gabriel said.
“These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic – if you ask what it’s like to be an ice cream dinosaur, they can generate text about melting and roaring and so on,” he told TechNewsWorld.
“LaMDA tends to follow prompts and leading questions, following the pattern defined by the user,” he explained. “Our team – including ethicists and technologists – reviewed Blake’s concerns in accordance with our AI Principles and advised him that the evidence does not support his claims.”
“Hundreds of researchers and engineers have conversed with LaMDA, and we don’t know of anyone else making sweeping claims, or anthropomorphizing LaMDA, as Blake did,” he added.
Greater transparency is needed
Alex Engler, a fellow at the Brookings Institution, a nonprofit public policy organization in Washington, D.C., flatly denied that LaMDA is sentient and argued for greater transparency in the space.
“Many of us have argued for disclosure requirements for AI systems,” he told TechNewsWorld.
“As it becomes more difficult to distinguish between a human and an AI system, more and more people will confuse AI systems with people, which could lead to real harms, such as misunderstanding important financial or health information,” he said.
“Companies should clearly disclose AI systems as they are,” he continued, “rather than allowing people to be confused, as they often are, for example, by commercial chatbots.”
Daniel Castro, vice president of the Information Technology and Innovation Foundation, a research and public policy organization in Washington, D.C., agreed that LaMDA is not sentient.
“There is no evidence that AI is sentient,” he told TechNewsWorld. “The burden of proof should be on the person making this claim, and there is no evidence to support it.”
“It hurt my feelings”
As far back as the 1960s, chatbots like ELIZA were fooling users into thinking they were interacting with a sophisticated intelligence by using simple tricks, such as turning a user’s statement into a question and echoing it back at them, explained Julian Sanchez, a senior fellow at the Cato Institute, a public policy think tank in Washington, D.C.
“LaMDA is certainly much more sophisticated than predecessors like ELIZA, but there’s no reason to think it’s self-aware,” he told TechNewsWorld.
Sanchez noted that with a large enough training set and some sophisticated language rules, LaMDA can generate a response that sounds like one a real human might give, but that doesn’t mean the program understands what it is saying, any more than a chess program understands what a chess piece is. It is just generating output.
“Sentience means consciousness or awareness, and in theory a program could behave quite intelligently without actually being sentient,” he said.
“A chat program might, for example, have very sophisticated algorithms for detecting insulting or offensive phrases and responding with the output ‘That hurt me!’,” he continued. “But that doesn’t mean it actually feels anything. The program has just learned what sorts of sentences cause humans to say ‘that hurt me.’”
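Sanchez’s point can be illustrated with a toy sketch. The following Python snippet is purely hypothetical – it is not ELIZA’s or LaMDA’s actual code – but it shows how a handful of hard-coded pattern-matching rules can make a program say “that hurt me” or echo a statement back as a question, with nothing resembling feeling behind it.

```python
# Purely illustrative sketch -- not ELIZA's or LaMDA's actual code.
# A few hard-coded rules are enough to produce "emotional"-sounding
# replies without the program feeling anything at all.

import re

OFFENSIVE_PHRASES = {"stupid", "useless", "shut up"}


def respond(user_input: str) -> str:
    lowered = user_input.lower()

    # Canned reaction: detect an insulting phrase and emit "That hurt me!"
    if any(phrase in lowered for phrase in OFFENSIVE_PHRASES):
        return "That hurt me!"

    # ELIZA-style trick: turn the user's statement back into a question.
    match = re.match(r"i am (.+)", lowered)
    if match:
        return f"Why do you say you are {match.group(1)}?"

    return "Tell me more."


print(respond("You are stupid"))   # -> That hurt me!
print(respond("I am afraid"))      # -> Why do you say you are afraid?
```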
To think or not to think
Declaring a machine sentient, if and when that happens, will be a challenge. “The truth is that we don’t have good criteria for understanding when a machine might be truly sentient – as opposed to being very good at mimicking the responses of sentient humans – because we don’t really understand why human beings are sentient,” Sanchez noted.
“We don’t really understand how consciousness emerges from the brain, or how much it depends on things like the specific type of physical matter that the human brain is made up of,” he said.
“So it’s an extremely difficult problem: how would we know whether a sophisticated silicon ‘brain’ was conscious in the same way a human is,” he added.
Intelligence is a separate question, he continued. A classic test of machine intelligence is the Turing test: a human conducts “conversations” with a series of partners, some human and some machines. If the person can’t tell which is which, the machine is deemed intelligent.
“There are, of course, a lot of problems with this proposed test – among them, as our Google engineer shows, the fact that some people are relatively easy to fool,” Sanchez pointed out.
Ethical considerations
Determining sentience is important because it raises ethical questions for non-machine types. “Sentient beings feel pain, have consciousness, and experience emotions,” Castro explained. “From a moral point of view, we treat living things, especially sentient ones, differently from inanimate objects.”
“They’re not just a means to an end,” he continued. “So every sentient being should be treated differently. That’s why we have animal cruelty laws.”
“Again,” he pointed out, “there is no evidence that this has happened. Moreover, for now, even the possibility remains science fiction.”
Of course, Sanchez added, we have no reason to think that only organic brains are capable of feeling things or sustaining consciousness, but our inability to truly explain human consciousness means we are a long way from being able to know when an artificial intelligence is actually associated with a conscious experience.
“When a human being is scared, after all, all sorts of things are going on in their brain that have nothing to do with the language centers that produce the sentence ‘I’m scared,’” he explained. “A computer, similarly, would need to have something going on apart from linguistic processing to really mean ‘I’m afraid,’ as opposed to just generating that string of letters.”
“In the case of LaMDA,” he concluded, “there is no reason to believe any such process is going on. It’s just a language processing program.”