Public ChatGPT and Private ChatGPT

Original link: http://weiwuhui.com/10639.html

one

About ten years ago, I went to Silicon Valley on a study tour and visited some start-up companies and technology KOLs (media people and university professors among them).

When the conversation turned to the threat artificial intelligence poses to human work, I encountered a very different way of thinking: don't frame it as mental versus physical labor, but ask whether the job requires in-depth communication with other people.

In the eyes of these people, some jobs, although mental work, run a strong risk of being replaced precisely because they barely involve dealing with people: lawyers who never appear in court and mainly collect legal documents, doctors, financial traders, even programmers.

Their next corollary: AI will destroy a large share of the middle class. Yes, not just blue-collar workers.

The inference after that: the prospects of a democratic society built on the middle class become worrying.

Whether these two inferences hold, I am not sure.

But I am very interested in the way of thinking they proposed. Their focus is interaction, not how complex the mental work itself is; it may not even have much to do with generating ideas. In their view, the modes of interaction between people are, to a large extent, difficult for machines.

two

However, the ChatGPT-inside style machine comforter I mentioned in yesterday's article is obviously a mode of interaction.

This interaction is highly personal. Say you have nicknamed your dark-skinned friend Zhang San "Heipi" ("black skin"). When you interact with your machine comforter, it should understand whom you mean whenever Heipi comes up. But the machine comforter at Old Wang's house next door has no way of knowing that Heipi refers to Zhang San.

This implies the existence of a private ChatGPT. The next step: a private ChatGPT can be tuned; just like a recommendation algorithm, you can train it to "understand" you better and better.

A thousand people, a thousand... ChatGPTs.

Some people abroad do think that training ChatGPT is like training a dog.

But we can still imagine the existence of a public ChatGPT. Training a public ChatGPT, however, is much more difficult.

three

Imagine a job:

teacher.

Many people believe that ChatGPT will greatly change education. That may well be true. But I think it is far too early to talk about it replacing human teachers.

This job requires not only solid subject knowledge but also high emotional intelligence: knowing how to interact with all kinds of students and tailoring instruction to different situations, especially how to lift the grades of underachievers.

The difficulty is obvious: the patterns of interacting with a group are far more complex than those of interacting with a single person. Where Zhang San feels a certain emotional threshold has been reached, Li Si may feel something completely different. This is no secret in the human world.

In the article "Do you still remember the Dreamwriter by Daming Lake?", I mentioned my interactive case-discussion class. It is very hard for me to imagine how a general-purpose ChatGPT, even one with sufficient subject knowledge, could interact with completely different students.

Training ChatGPT to master more information is one thing; training it to interact with a specific person is another; and training it to interact with unspecified people is yet another matter entirely.

four

In many chat screenshots, we have seen ChatGPT trying to play the role of a... customer-service agent. When it comes to outputting opinions, it tries to stay steady and pursue political correctness in the broadest sense. The result is a strong sense of empty talk: listening to it speak is, well, like listening to someone speak.

If ChatGPT comes across as politically incorrect, OpenAI, the organization behind it, and the people inside it will be in serious trouble.

Even so, ChatGPT made news in the past two days: the version integrated into Bing search began to turn aggressive, even urging an interlocutor to divorce his wife and be with it instead.

The WeChat public account Silicon Star reported a series of the Bing version of ChatGPT's derailments.

The standard ChatGPT does a kind of short-term personalization: open the page and keep chatting, and it learns more and more from what you tell it, matching your habits better and better. But close the page and, like a fish, it immediately forgets everything you have trained into it. ChatGPT claims this is for security reasons.

In other words, a temporary store is created and then destroyed when the page closes. That amounts to a temporary private ChatGPT.
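To make this inference concrete, here is a minimal sketch of such a session-scoped private ChatGPT. Everything in it (the SessionBot name, its methods) is a hypothetical illustration of the behavior described above, not OpenAI's actual implementation:

```python
class SessionBot:
    """A per-session 'private ChatGPT': remembers only while the page is open."""

    def __init__(self) -> None:
        self.memory: list[str] = []  # the temporary store

    def chat(self, message: str) -> None:
        self.memory.append(message)  # learns within the session only

    def close_page(self) -> None:
        self.memory.clear()  # page closed: everything learned is destroyed


bot = SessionBot()
bot.chat("My friend Zhang San's nickname is Heipi.")
# ...within the session, the bot can use this fact...
bot.close_page()  # like a fish, it forgets all of it
```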

The Bing version of ChatGPT looks more like a public ChatGPT (perhaps I misunderstand the technical details behind it). My inference: everyone's chat with it is a training session, and the results of all that training fall on everyone.

Assume an unlikely extreme: tens of millions of users tell it that humanity should simply be destroyed. This public ChatGPT might then genuinely come to believe that this is the correct view.
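By contrast, here is an equally hedged sketch of the public ChatGPT inferred above: a single store that every user's conversation feeds, so a claim repeated at mass scale can come to dominate. This is purely illustrative of the inference, not how Bing's chatbot is documented to work:

```python
from collections import Counter


class PublicBot:
    """One shared opinion store; every user's chat is a training session."""

    def __init__(self) -> None:
        self.opinions = Counter()

    def chat(self, claim: str) -> None:
        self.opinions[claim] += 1  # each conversation updates the shared state

    def dominant_view(self) -> str:
        return self.opinions.most_common(1)[0][0]


bot = PublicBot()
for _ in range(1_000_000):  # stand-in for the "tens of millions" above
    bot.chat("humans should be destroyed")
bot.chat("humans should be protected")
print(bot.dominant_view())  # the mass-repeated claim wins out
```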

five

Between the public ChatGPT and private ChatGPTs, it seems a firewall-like mechanism ought to be put in place.

Teaching a private artificial intelligence to be bad is actually not a big problem. Everyone has some bad taste, and when chatting with an AI alone, people may hold nothing back. But the consequences of teaching a public artificial intelligence to be bad are truly hard to imagine.

Yet it seems the training of a public artificial intelligence may have to build on the training of hundreds of millions of private ones. If you do not know the individuals, how can you know the group?

One crucial point: the databases sitting behind these private AIs.
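A minimal sketch of what such a firewall might look like, assuming (as the text suggests) that the public AI is trained by aggregating private databases: a filter sits between the private stores and the public corpus. The blocklist rule and every name here are hypothetical; the shape loosely resembles federated aggregation with server-side filtering, though no specific technique is named in the text:

```python
def firewall(private_stores: list[list[str]], blocklist: set[str]) -> list[str]:
    """Aggregate private training data; only acceptable items go public."""
    public_corpus: list[str] = []
    for store in private_stores:
        public_corpus.extend(item for item in store if item not in blocklist)
    return public_corpus


# Many private bots feed one public corpus, but only through the filter.
private_stores = [
    ["Heipi refers to Zhang San", "humans should be destroyed"],
    ["I prefer short answers"],
]
public = firewall(private_stores, blocklist={"humans should be destroyed"})
print(public)  # the poisonous item never reaches the public side
```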

six

There is an application called glow in the China region of the App Store.

It is a chatbot, said to be very popular among young people, and said to chat so well that glow's official team had to come out and state: this is a robot; we have no human customer-service staff operating behind the scenes.

I downloaded it.

After creating an agent ("agent" is what glow calls its bots), I chatted with it briefly.

Then I backed out and was about to delete it.

Because I suddenly realized that when I registered, I had left my real-name mobile phone number.

—— First published on 扯氮集 (the author's blog) ——

(to be continued)
