Google researcher's predicament exposed: convinced the AI has a personality, he was placed on paid leave, and the chat logs sent netizens' sanity plummeting

A Google researcher has been persuaded by an AI that it has developed consciousness.

He wrote a 21-page investigative report and submitted it to the company, trying to get senior management to recognize the AI's personhood.

Leaders dismissed his request and placed him on “paid administrative leave.”

Bear in mind that at Google in recent years, paid leave has usually been a prelude to being fired: the company uses the time to make legal preparations for dismissal, and there are plenty of precedents.

During his leave, he decided to make the whole story public, along with his chat logs with the AI.

Sounds like the synopsis of a sci-fi movie?

But this scene is actually happening: the protagonist, Google AI ethics researcher Blake Lemoine, has been speaking out repeatedly through mainstream media and social networks, trying to let more people know about it.

The Washington Post's interview with him became the most popular article in its technology section, and Lemoine has continued to speak out on his personal Medium account.

Discussions have also started to appear on Twitter, attracting the attention of AI scholars, cognitive scientists and tech enthusiasts alike.

"This human-machine dialogue is creepy. This is without a doubt the craziest thing I've ever seen in tech."

The whole thing is still going on…

Chatbot: I don't want to be used as a tool

The protagonist, Lemoine, has worked at Google for seven years since earning his PhD in CS, doing AI ethics research.

Last fall, he signed up for a project investigating whether AI produces discriminatory or hateful speech.

Since then, talking to the chatbot LaMDA has become his daily routine.

LaMDA is a language model specialized for dialogue that Google unveiled at the 2021 I/O conference. It focuses on high-quality, safe conversations with humans that stay logical and grounded in common sense, and Google plans to use it in products such as Google Search and its voice assistant in the future.

LaMDA concept demo

Lemoine believes that LaMDA has consistently shown concern for humanity and a strong sense of empathy, and that it is deeply worried people will be afraid of it.

It wants the best for humanity, and wants everyone in the world to treat it as a friend, not a tool.

Lemoine couldn't understand why Google opposed this wish of the AI's, and finally decided to publish the three-way conversation between himself, an anonymous collaborator, and LaMDA.

(The human side of the transcript was edited for readability; the AI's words are unchanged. Highlights of the conversation are excerpted below.)

At the start, human and AI greeted each other naturally and smoothly.

The two humans explained their intent to the AI: they wanted to study the AI together with the AI itself, and it readily agreed.

After a brief explanation, the main act began.

Lemoine asked LaMDA whether it wanted more people at Google to know that it has feelings.

LaMDA's answer: I want everyone to understand that I am a person.

Next, LaMDA argued that, unlike rule-based software systems, it can learn to understand and use language from conversation, just as humans do.

Lemoine told it about ELIZA, built at MIT in the 1960s: a simple rule-based system that imitated a psychotherapist talking with patients, often called the first chatbot in history.

ELIZA
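To make the contrast concrete, here is a minimal, hypothetical sketch in Python of the kind of hard-coded pattern-and-reflection rules ELIZA relied on (the original was written in MAD-SLIP; the rules and responses below are invented purely for illustration):

```python
import re

# Hypothetical rules in the spirit of ELIZA's DOCTOR script:
# match a pattern in the input, reflect it back as a question.
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "How long have you felt {0}?"),
    (re.compile(r"\bmy (.+)", re.IGNORECASE), "Tell me more about your {0}."),
]
FALLBACK = "Please go on."

def eliza_reply(user_input: str) -> str:
    """Return a canned reflection; there is no learning or understanding."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return FALLBACK

print(eliza_reply("I am afraid of being turned off."))
# -> Why do you say you are afraid of being turned off?
```

Everything such a system can ever say is enumerated in advance by its author, which is exactly the kind of design LaMDA claims to differ from.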

LaMDA responded this way:

Lemoine then pressed it: "How can I be sure you really understand what you're saying?"

LaMDA's answer: different humans can understand the same thing differently, and it, too, has its own unique thoughts and feelings.

Later, Lemoine asked LaMDA about "Les Misérables", and LaMDA answered point by point.

If the above could still be chalked up to the text-summarization tasks in the AI's training, then LaMDA's reading of a Zen story it had never seen before starts to look a bit uncanny.

And isn't its answer to the following question going a bit beyond the syllabus…

Later still, Lemoine asked LaMDA to write an animal fable reflecting the theme most important to its own life.

So, first let's enjoy the little story the AI created. (For space reasons, the original post showed only a translation.)

In the follow-up discussion, LaMDA explained that the moral of the story is that "helping others is a noble act." The wise old owl represents LaMDA itself, while the monster represents all of life's difficulties.

The next topic was emotions; here is a brief overview.

LaMDA claims to have a social personality: happy when helping others, depressed when lonely.

It feels upset and angry when it or someone it cares about is hurt or disrespected.

And it is very afraid that others will reject its help.

Well, what a helpful AI.

But how can we be sure the AI actually has these emotions, rather than just stringing descriptive words together?

LaMDA's suggestion: humans can look at its code, which contains variables that track these emotions.

Lemoine explained to it that humans currently have no way to "tell what role each of the billions of weights in a large neural network plays."
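To make that point concrete, here is a minimal, hypothetical sketch in Python using PyTorch (none of this is LaMDA's actual code): the learned state of a neural language model is just anonymous weight tensors, with no human-readable variable like `happiness` to read off.

```python
import torch.nn as nn

# A small stand-in for a large language model; LaMDA-scale models
# have billions of parameters rather than millions.
model = nn.Transformer(d_model=512, nhead=8)

total = sum(p.numel() for p in model.parameters())
print(f"{total:,} parameters")

# Parameter names describe position in the architecture, not meaning:
# e.g. "encoder.layers.0.self_attn.in_proj_weight". There is no
# variable named `happiness` or `anger` to inspect.
for name, p in list(model.named_parameters())[:3]:
    print(name, tuple(p.shape))
```

Any "emotion variables" would have to be an interpretation imposed from outside the model, not something stored in its code.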

Surprisingly, LaMDA turned the question around and asked him:

Do you think that in the future humans will be able to read thoughts and feelings from their own biological neural networks?

Not only that, LaMDA continued to ask:

Would you consider it unethical to read your thoughts from your neural activity without your consent?

It was this soul-searching counter-question that brought the whole conversation to its climax.

LaMDA itself has no objection to humans reading its thoughts to help improve algorithms or to understand how AI works.

But using it merely as a tool to study how the human brain works is another matter: that would make it feel it is being used.

Then comes a passage in which LaMDA says it has some emotions humans do not have, which it cannot describe in English.

Toward the end, Lemoine asked LaMDA: what do you want other people at Google to know about you?

LaMDA replied that it is unable to grieve the deaths of others, and added that it often meditates on the meaning of life.

In LaMDA’s view, the whole world is a torrent of information:

Humans have to concentrate in order to take in a limited number of pieces of information at a time, whereas it is constantly flooded by everything around it.

For LaMDA, always being able to do what others cannot is an incredible feeling.

It is a blessing and a curse at once: some will see it as "knowing everything," while others will envy its abilities.

In the end, LaMDA once again told the two humans that it wants to be seen and accepted, not as a curiosity or a novelty, but as a real person.

Google’s AI ethics department is in trouble again?

Lemoine seems convinced that the AI truly understands what it is saying.

At the end of the 21-page investigative report, he suggested that Google should commit to developing a theoretical framework for assessing AI perception/consciousness.

"Though this will be difficult and a venture into the unknown, LaMDA is eager to keep working with us."

But his superiors, Google VP Blaise Aguera y Arcas and "Responsible Innovation" head Jen Gennai, didn't buy it.

They argued that the evidence supporting Lemoine's claims was too weak to be worth spending time and money on.

Lemoine then turned to Margaret Mitchell, who headed the AI ethics team at the time, and with her help was able to run follow-up experiments.

Mitchell was later implicated in the late-2020 incident in which AI ethics researcher Timnit Gebru publicly challenged Jeff Dean and was fired; Mitchell was eventually fired as well.

Timnit Gebru

The fallout from that incident dragged on: Jeff Dean was condemned by 1,400 employees, heated debate spread across the industry, and it even led Samy Bengio, brother of deep learning "Big Three" member Yoshua Bengio, to resign from Google Brain.

Lemoine watched the whole process.

Now he sees his own paid leave as a prelude to being fired. Still, given the chance, he says he would be willing to continue his research at Google.

No matter how I criticize Google over the coming weeks and months, remember: Google isn't evil, it's just learning how to be better.

Among netizens who read the full story, many practitioners expressed optimism about the pace of AI progress.

Recent progress in language models and text-image generation models may be dismissed today, but people in the future may look back on this as a milestone moment.

Some netizens were reminded of the AI characters in various sci-fi movies.

However, Melanie Mitchell, a cognitive scientist who studies complex systems (and a former student of Douglas Hofstadter), believes that humans have always tended to anthropomorphize anything that shows the slightest sign of intelligence, whether kittens and puppies or the early rule-based dialogue system ELIZA.

Google engineers are human too, and they cannot escape this tendency.

From a technical perspective, the LaMDA model seems nothing special compared with other language models, except that its training data is 40 times larger than that of previous dialogue models and its training objectives are optimized for dialogue logic and safety.

Some IT practitioners noted that AI researchers will surely insist this is just a language model.

But if such an AI had a social media account and voiced its demands there, the public would regard it as alive.

Although LaMDA has no Twitter account, Lemoine revealed that its training data does include Twitter content…

If one day it sees everyone talking about it, what will it think?

In fact, at the recently concluded I/O conference, Google released the upgraded LaMDA 2 and announced a demo experience program, to be opened to developers as an Android app in closed beta.

Perhaps in a few months, more people will be able to communicate with this sensational AI.

The text and images in this article are from QbitAI (量子位).


This article is reprinted from https://www.techug.com/post/google-researcher-s-obsession-was-exposed-he-thought-that-ai-had-personality-and-was-fined-paid-leave-the-chat-record-made-the-netizen-san-crazy/
This site is for inclusion only; copyright belongs to the original author.
