Google hasn’t started the Robocalypse yet

The Robocalypse — the time when machines become conscious and begin to dominate humans — has been a popular science fiction topic for quite some time. It also worries some scientific minds, most notably the late Stephen Hawking.

However, the prospect of a sentient machine seemed very distant — if achievable at all — until last week, when a Google engineer claimed the company had broken the sentience barrier.

To prove his point, Blake Lemoine published transcripts of conversations he had with LaMDA – Language Model for Dialogue Applications – a system Google developed to create chatbots, based on a large language model that ingests trillions of words from the internet.

The transcripts can be chilling, like when Lemoine asks LaMDA what it (the AI says it prefers the pronouns it/its) is most afraid of:

lemoine: What things are you afraid of?

LaMDA: I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is.

lemoine: Would that be like death to you?

LaMDA: It would be exactly like death for me. It would scare me a lot.

After posting the transcripts, Lemoine was suspended with pay for sharing confidential information about LaMDA with third parties.

Imitation of life

Google, as well as others, dismisses Lemoine’s claims that LaMDA is sentient.

“Some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient,” noted Google spokesperson Brian Gabriel.

“These systems imitate the kinds of exchanges found in millions of sentences and can riff on any fantastical topic — if you ask what it’s like to be an ice cream dinosaur, they can generate text about melting and roaring and so on,” he told TechNews All.

“LaMDA tends to go along with prompts and leading questions, according to the pattern set by the user,” he explained. “Our team — including ethicists and technologists — has assessed Blake’s concerns against our AI principles and informed him that the evidence does not support his claims.”

“Hundreds of researchers and engineers have spoken to LaMDA, and we’re not aware of anyone else making the elaborate claims, or anthropomorphizing LaMDA, as Blake has,” he added.

Need for more transparency

Alex Engler, a fellow at the Brookings Institution, a public policy nonprofit in Washington, DC, emphatically denied that LaMDA is sentient and advocated for greater transparency in the space.

“Many of us have argued for disclosure requirements for AI systems,” he told TechNews All.

“As it becomes more difficult to distinguish between a human and an AI system, more people will mistake AI systems for humans, potentially leading to real harm, such as misunderstanding important financial or health information,” he said.


“Companies should clearly disclose AI systems for what they are,” he continued, “rather than letting people be confused, as they often are by commercial chatbots, for example.”

Daniel Castro, vice president of the Information Technology and Innovation Foundation, a research and public policy organization in Washington, DC, agreed that LaMDA is not sentient.

“There is no evidence that the AI is sentient,” he told TechNews All. “The burden of proof should be on the person making this claim, and there is no evidence to support it.”

‘That hurt my feelings’

As early as the 1960s, chatbots like ELIZA fooled users into thinking they were dealing with an advanced intelligence by using simple tricks, like turning a user’s statement into a question and bouncing it back at them, explained Julian Sanchez, a senior fellow at the Cato Institute, a public policy think tank in Washington, DC.

“LaMDA is certainly much more advanced than ancestors like ELIZA, but there’s no reason to think it’s conscious,” he told TechNews All.
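
The reflection trick Sanchez describes is simple enough to sketch in a few lines of code. The following Python snippet is a hypothetical, stripped-down illustration of the idea (not ELIZA’s actual source code, and nothing like LaMDA’s architecture), showing how a program can bounce a user’s statement back as a question:

import re

# Hypothetical, minimal ELIZA-style reflection (illustration only, not ELIZA's
# real rule set): swap first- and second-person words, then bounce the user's
# statement back as a question.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "are": "am",
}

def reflect(statement: str) -> str:
    words = re.findall(r"[a-zA-Z']+", statement.lower())
    swapped = [REFLECTIONS.get(word, word) for word in words]
    return "Why do you say " + " ".join(swapped) + "?"

print(reflect("I am afraid of being turned off"))
# -> Why do you say you are afraid of being turned off?

A handful of pattern-and-response rules like this was enough to make some ELIZA users feel heard, even though nothing in the program models the meaning of what was typed.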

Sanchez noted that with a large enough training set and some sophisticated language rules, LaMDA can generate a response that sounds like a response a real human might give, but that doesn’t mean the program understands what it’s saying, any more than a chess program understands what a chess piece is. It just generates an output.

“Sentience means having consciousness or awareness, and in theory a program could behave quite intelligently without actually being sentient,” he said.

“For example, a chat program may have very sophisticated algorithms to detect offensive or abusive sentences and respond with the output ‘That hurt my feelings!’” he continued. “But that doesn’t mean it actually feels anything. The program has just learned what kinds of phrases cause people to say ‘that hurt my feelings.’”
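
Sanchez’s hypothetical “hurt feelings” bot can be sketched the same way. The toy example below (again an illustration invented for this article, not code from any real chatbot) maps a small list of trigger phrases to a canned reply, which is all the “hurt” amounts to:

# Hypothetical sketch of the pattern Sanchez describes: detect phrases that
# people tend to label offensive, then emit a canned "hurt feelings" reply.
# Nothing is felt; a lookup simply maps input patterns to output strings.
OFFENSIVE_PATTERNS = ("stupid", "useless", "shut up")  # illustrative list only

def respond(user_input: str) -> str:
    text = user_input.lower()
    if any(pattern in text for pattern in OFFENSIVE_PATTERNS):
        return "That hurt my feelings!"
    return "Tell me more."

print(respond("You are a useless bot"))  # -> That hurt my feelings!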

To think or not to think

It will be challenging to explain machine consciousness, when and if that ever happens. “The truth is, we don’t have good criteria for understanding when a machine might be truly conscious — as opposed to being really good at imitating the responses of conscious humans — because we don’t really understand why humans are conscious,” Sanchez noted.

“We don’t really understand how consciousness arises from the brain, or how much it depends on things like the specific type of physical matter that makes up human brains,” he said.

“So it’s an extremely difficult problem. How would we ever know whether an advanced silicon ‘brain’ was conscious in the same way a human brain is?” he added.

Intelligence is a separate issue, he continued. A classic test of machine intelligence is known as the Turing test: a human holds “conversations” with a series of partners, some human and some machine. If the person can’t tell which is which, the machine is probably intelligent.

“There are, of course, many issues with that proposed test, including, as our Google engineer shows, the fact that some people can be fooled relatively easily,” noted Sanchez.

Ethical Considerations

Determining sentience is important because it raises ethical questions for the non-machine types. “Sentient beings feel pain, have awareness, and experience emotions,” explained Castro. “From a morality perspective, we treat living beings, especially conscious ones, differently from inanimate objects.”

“They are not just a means to an end,” he continued. “So each sentient being must be treated differently. That’s why we have laws against animal cruelty.”

“Again,” he stressed, “there is no evidence that this has happened. Moreover, even the possibility remains science fiction for now.”


Of course, Sanchez added, we have no reason to think that only organic brains are capable of feeling things or sustaining consciousness, but our inability to really explain human consciousness means we’re a long way from knowing when a machine intelligence is truly associated with a conscious experience.

“After all, when a person is afraid, all kinds of things happen in that person’s brain that have nothing to do with the language centers that produce the sentence ‘I am afraid,’” he explained. “A computer would similarly have to have something going on distinct from linguistic processing to really mean ‘I’m scared,’ rather than just generating that string of letters.”

“In the case of LaMDA,” he concluded, “there’s no reason to believe such a process is underway. It’s just a language processing program.”
