
Original link: https://blog.yitianshijie.net/2023/04/24/3709/

Lawrence Li


“Wired” reports that Sam Altman said the future of large language models is no longer a contest of scale:

Although OpenAI keeps GPT-4’s size and inner workings secret, we can still infer that its intelligence does not come entirely from scale. It likely relies on a mechanism known as reinforcement learning from human feedback to improve ChatGPT: answers produced by the model are rated by human judges, and the model is iteratively refined so that it generates answers readers are more likely to regard as high-quality.

As long as it has to be judged by humans, AI will remain uninteresting (whether it is useful and whether it is interesting are two different questions).
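For readers curious about the mechanism the quoted passage alludes to, here is a minimal, purely illustrative sketch of a human-feedback loop. It is not OpenAI’s implementation: the toy “model”, the scoring rule, and every function name are hypothetical stand-ins, and real RLHF trains a separate reward model and updates the policy with reinforcement learning rather than the naive score-nudging shown here.

```python
import random

# Toy "model": a fixed set of candidate answers per prompt, plus one
# adjustable score per answer that stands in for model parameters.
# Everything here is a hypothetical illustration, not OpenAI's system.
CANDIDATES = {
    "What is RLHF?": [
        "idk",
        "It is a way to tune a model using preference ratings.",
        "Reinforcement learning from human feedback refines answers people rate highly.",
    ]
}
scores = {prompt: [0.0] * len(answers) for prompt, answers in CANDIDATES.items()}

def generate(prompt: str) -> int:
    """Sample an answer index, favoring candidates with higher scores."""
    weights = [2.0 ** s for s in scores[prompt]]
    return random.choices(range(len(weights)), weights=weights)[0]

def human_judgment(answer: str) -> float:
    """Stand-in for a live human rater: +1 if judged high-quality, -1 otherwise."""
    return 1.0 if "human feedback" in answer.lower() else -1.0

def feedback_loop(prompt: str, rounds: int = 300) -> None:
    """Generate an answer, collect a judgment, and nudge that answer's score."""
    for _ in range(rounds):
        idx = generate(prompt)
        rating = human_judgment(CANDIDATES[prompt][idx])
        scores[prompt][idx] += 0.1 * rating  # gradual improvement from feedback

if __name__ == "__main__":
    prompt = "What is RLHF?"
    feedback_loop(prompt)
    best = max(range(len(CANDIDATES[prompt])), key=lambda i: scores[prompt][i])
    print("Preferred answer after feedback:", CANDIDATES[prompt][best])
```

The point of the sketch is only the shape of the loop the quote describes: generate, have a person judge, adjust, and over many rounds the system drifts toward answers that raters consider high-quality.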
