After being unfollowed by OpenAI's CEO, Yann LeCun strikes again: ChatGPT's grasp of reality is very superficial

Heart of the Machine Report

Editors: Egg Jam, Du Wei

Although Yann LeCun and some other scholars do not rate ChatGPT highly, its commercial success appears unstoppable.

Relationships between tech leaders can be genuinely puzzling at times.

Yesterday, it was discovered that OpenAI CEO Sam Altman had unfollowed Meta’s chief artificial intelligence scientist Yann LeCun on Twitter.

It is hard to pin down exactly when this happened, but the cause seems clear enough. A few days ago, Yann LeCun shared his views on ChatGPT at a small online gathering of media and executives:

"As far as the underlying technology goes, ChatGPT is not particularly innovative, nor is it revolutionary. Many research labs are using the same techniques to do the same kind of work."

ZDNet's report, "ChatGPT is 'not particularly innovative,' and 'nothing revolutionary,' says Meta's chief AI scientist," revealed further details of LeCun's remarks, including some striking assessments:

  • "Compared with other labs, OpenAI is not particularly advanced."

  • "The Transformer architecture ChatGPT uses is pre-trained in this self-supervised way. Self-supervised learning is something I have been advocating for a long time, even before OpenAI came along."

  • "Transformers are a Google invention, and work on this kind of language model goes back decades."
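The "self-supervised" pretraining LeCun refers to is, at its core, next-token prediction: the model is trained on raw text, with the text itself supplying the training targets and no human labels needed. Below is a minimal sketch of that objective in PyTorch; the toy embedding-plus-linear model is a stand-in for a real Transformer stack, and nothing here reflects OpenAI's actual code:

```python
import torch
import torch.nn as nn

# Toy causal language model. "Self-supervised" means the raw text
# supplies its own training targets: each token's target is simply
# the next token, so no human labeling is required.
vocab_size, d_model = 100, 32
model = nn.Sequential(
    nn.Embedding(vocab_size, d_model),  # token ids -> vectors
    nn.Linear(d_model, vocab_size),     # vectors -> next-token logits
)

tokens = torch.randint(0, vocab_size, (1, 16))   # a toy "sentence"
inputs, targets = tokens[:, :-1], tokens[:, 1:]  # shift by one position

logits = model(inputs)  # shape: (1, 15, vocab_size)
loss = nn.functional.cross_entropy(
    logits.reshape(-1, vocab_size), targets.reshape(-1)
)
loss.backward()  # gradients flow; the only "labels" were the text itself
```

A real LLM replaces the toy model with a deep Transformer and scales this same objective to vast text corpora, which is the sense in which the underlying technique predates ChatGPT.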

Seen in this light, Sam Altman's unfollowing seems understandable.

Four hours after the unfollowing was discovered, Yann LeCun updated his status, reposting an article that takes a sarcastic dig at ChatGPT:

Why do large language models like ChatGPT spout nonsense? Because their grasp of reality is very superficial.

Some disagreed: "ChatGPT is a source of extensive knowledge and enormous creativity, having been trained on numerous books and other sources of information."

To this, LeCun responded: "No one is saying LLMs are useless. I said so myself during the short-lived launch of FAIR's Galactica. It was crucified because it could generate nonsense. ChatGPT does the same thing. But again, that doesn't mean they aren't useful."

In fact, the Atlantic article LeCun shared reviews a paper by cognitive scientists at MIT and elsewhere. Let's look at the research itself.

What does this paper say?

The title of this paper is “Dissociating Language and Thought in Large Language Models: a Cognitive Perspective”, and the authors are from the University of Texas at Austin, MIT and UCLA.

Paper address: https://ift.tt/5LnrpJq

We know that today’s large language models (LLMs) are often able to generate coherent, grammatical, and meaningful-looking passages of text. This achievement has fueled speculation that these networks are already, or will soon be, ‘thinking machines’, performing tasks that require abstract knowledge and reasoning.

In the paper, the authors assess LLMs' abilities along two distinct aspects of language use:

  • Formal linguistic competence: knowledge of the rules and patterns of a given language;

  • Functional linguistic competence: the set of cognitive abilities required for understanding and using language in the real world.

Drawing on evidence from cognitive neuroscience, the authors show that formal competence in humans depends on specialized language-processing mechanisms, while functional competence draws on multiple capacities beyond language: formal reasoning, world knowledge, situation modeling, and social cognition. Mirroring this distinction in humans, LLMs perform well (albeit imperfectly) on tasks that require formal linguistic competence, but tend to fail on many tests that require functional competence.

Based on this evidence, the authors argue, first, that modern LLMs should be taken seriously as models of formal linguistic skill, and second, that models aiming to master real-life language use will need to incorporate or develop not only a core language module but also a diverse set of non-language-specific cognitive capacities.

In conclusion, they argue that distinguishing formal from functional linguistic competence helps clarify the discourse around LLMs' potential and points the way toward building models that understand and use language in a human-like manner. The failure of LLMs on many non-linguistic tasks does not diminish them as good models of language processing, and if the human mind and brain are any guide, future progress toward AGI may depend on combining language models with models that represent abstract knowledge and support complex reasoning.

ChatGPT's math abilities still need improvement

LLMs fall short on functional capabilities beyond language (reasoning, for example), and OpenAI's ChatGPT is a case in point. Although OpenAI previously announced an upgrade to its math abilities, netizens complained that it is only reliably proficient at addition and subtraction with numbers up to ten.

Recently, in the paper "Mathematical Capabilities of ChatGPT," researchers from the University of Oxford, the University of Cambridge, and other institutions tested ChatGPT's mathematical abilities on publicly available and hand-crafted datasets, and measured its performance against models trained on mathematical corpora, such as Minerva. They also tested whether ChatGPT can serve as a useful assistant to professional mathematicians by simulating use cases that arise in mathematicians' daily work (question answering, theorem search).

Paper address: https://ift.tt/YQnKpH9

The researchers introduced and released GHOSTS, the first natural-language dataset produced and curated by working mathematics researchers; it covers graduate-level mathematics and provides a comprehensive overview of the mathematical capabilities of language models. They benchmarked ChatGPT on GHOSTS and evaluated its performance against fine-grained criteria.

The test results show that ChatGPT's mathematical ability is significantly below that of an average mathematics graduate student: it often understands a question but fails to give a correct answer.
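To make the mechanics of such a benchmark concrete, here is a minimal evaluation-loop sketch in Python. It is not the paper's actual harness: the query_model stub, the JSON-lines record format, and the substring grading rule are all simplifying assumptions made for illustration.

```python
import json

def query_model(prompt: str) -> str:
    """Hypothetical stand-in for a call to a chat model's API."""
    raise NotImplementedError("plug in a real model client here")

def evaluate(dataset_path: str) -> float:
    """Score a model on a JSON-lines file of question/answer pairs.

    Assumed record format: {"question": ..., "answer": ...}. This is
    an illustrative schema, not the actual structure of GHOSTS.
    """
    correct = total = 0
    with open(dataset_path, encoding="utf-8") as f:
        for line in f:
            item = json.loads(line)
            reply = query_model(item["question"])
            # Naive grading: count a hit if the reference answer appears
            # verbatim in the reply. The paper instead rated free-form
            # answers against fine-grained criteria.
            if item["answer"].strip() in reply:
                correct += 1
            total += 1
    return correct / total if total else 0.0
```

The substring check is the weakest link in a setup like this, which is why the authors relied on fine-grained grading of free-form answers rather than exact matching.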

$20 per month: ChatGPT Plus memberships go live

In any case, the commercial success of ChatGPT is obvious to all.

Just now, OpenAI announced "ChatGPT Plus," a new paid subscription service priced at $20 per month.

Subscribers get several benefits:

  • General access to ChatGPT, even during peak times;

  • Faster response times;

  • Priority access to new features and improvements.

OpenAI said it will send invites for the service "in the coming weeks" to people in the United States on its waitlist, and that it plans to expand the service to other countries.

A little over a week ago, it was reported that OpenAI would launch a Plus or Pro version of ChatGPT at $42 per month; the final price of $20 per month clearly makes the service accessible to a much wider group, including students and businesses.

In a way, this sets the pricing bar for any AI chatbot looking to enter the market. Given that OpenAI is the pioneer in this field, any company trying to release a chatbot that charges more than $20 per month will first have to answer one question: why is their chatbot worth more than ChatGPT Plus?

