Artificial Intelligence and Mental Representation

Original link: http://headsalon.org/archives/9003.html

Whig

June 19, 2022

Continuing last week’s topic: in the previous article I discussed the limitations AI faces on its current development path. So what is the way out of these limitations? Or, to put the question more operationally: what kind of future development would make me say, “now we’re getting somewhere”?

Simply put, what I am looking for is a system that can explain its actions, decisions, and perceptions; ideally, every component of the system could do so, or at least most of the higher-level cognitive modules. Some basic perceptual modules may be unable to explain their own perceptual processes, but the representation of their perceptual results must be drawn from a limited and fairly stable discrete set; that is, when the system claims to have seen, heard, smelled, or otherwise perceived some condition, that condition must be represented by a clearly defined set of concepts.
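
To make the requirement concrete, here is a minimal sketch, in Python, of what a closed, stable, discrete perceptual vocabulary might look like. The concept names and fields are my own illustration, not anything the author specifies.

```python
from dataclasses import dataclass
from enum import Enum

class PerceptConcept(Enum):
    # A closed, stable vocabulary: the module may only report conditions
    # drawn from this set (concept names are hypothetical).
    VEHICLE = "vehicle"
    PEDESTRIAN = "pedestrian"
    BRAKE_LIGHT = "brake_light"
    WET_ROAD = "wet_road"

@dataclass
class Percept:
    concept: PerceptConcept  # which concept was perceived
    confidence: float        # the module's confidence, 0.0 to 1.0
    source: str              # which sensor or module produced the report

# The module need not explain *how* it recognized a brake light, but its
# output must be expressed in this shared conceptual vocabulary:
report = Percept(PerceptConcept.BRAKE_LIGHT, confidence=0.93, source="front_camera")
```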

Imagine a self-driving system that rear-ends and wrecks the car in front of it, and afterward its instructor asks it a series of questions: Do you think you made a mistake? Tell me what happened. What were you thinking at the time? Why didn’t you brake in time? What factors did you weigh in making that decision? In your view, is there some flaw or deficiency in your perceptual abilities, your knowledge reserves, or your decision-making system that prevented you from making the best choice? What did you learn from this accident, or what are you prepared to learn? …
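
For such a dialogue to be answerable at all, the system would have to retain something like a structured decision record. The following sketch is my illustration, under the assumption that percepts, candidate actions, and reasons are all drawn from a shared conceptual vocabulary.

```python
from dataclasses import dataclass

@dataclass
class DecisionRecord:
    """What a system would need to retain to answer, after the fact,
    'what happened?' and 'why didn't you brake in time?'."""
    percepts: list[str]        # e.g. "brake_light ahead, conf=0.4"
    options: dict[str, float]  # candidate actions and their scores
    chosen: str                # the action actually taken
    rationale: list[str]       # discrete reasons behind the choice

    def explain(self) -> str:
        rejected = [a for a in self.options if a != self.chosen]
        return (f"I perceived: {'; '.join(self.percepts)}. "
                f"I chose '{self.chosen}' over {rejected} "
                f"because: {'; '.join(self.rationale)}.")

record = DecisionRecord(
    percepts=["brake_light ahead, conf=0.4", "dry road"],
    options={"maintain_speed": 0.7, "brake": 0.3},
    chosen="maintain_speed",
    rationale=["low confidence in brake-light detection",
               "following distance judged sufficient"],
)
print(record.explain())
```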

If the system can respond meaningfully to questions like these, sustaining the question-and-answer dialogue, then I would say its designers are on the right track and have a promising future.

Current AI systems are still far from this, and it is hard to say they are even developing in this direction. Some systems can produce very fluent English and can chat around a topic, but they cannot get into substantive problems, and they never show any real understanding of, or opinion about, the topic.

Meeting my requirement is not easy: the system needs not just a mind, but a mind that can represent its own states to itself before it can express them to the outside world.

A holistic system is destined to fail here: because of its holistic nature, its internal states cannot be represented, and the only answer it can give to the questions above runs like this: my decision was based on my grasp of the overall local conditions at the time; I made it out of a single, undivided intuition that I cannot break down into discrete reasons or considerations, because I was not built to see the world, or to think about it, that way.

That is, the only explanation a holistic system can give for its actions and decisions is its gut: here and now, under these circumstances, the gut said this was the best decision. If it later turns out that it obviously was not, so what? That is just negative feedback. Negative feedback will make the system adjust, but how it adjusts is hard to describe. This inefficiency follows from the holistic nature of the system.
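
One way to read “adjusts, but how is hard to describe” is as a scalar feedback signal tuning an opaque parametric model. The gradient-style toy below is my gloss on that reading, not the author’s:

```python
# A scalar penalty nudges many parameters at once; no individual nudge
# corresponds to any discrete, statable reason.
import random

weights = [random.uniform(-1, 1) for _ in range(8)]  # stand-in for a huge model

def gut_decision(features):
    # The "gut": one undecomposable score over all inputs at once.
    return sum(w * x for w, x in zip(weights, features))

def negative_feedback(features, penalty, lr=0.01):
    # Every weight shifts a little; no single shift is a "reason"
    # like "I now brake earlier when I see brake lights".
    for i, x in enumerate(features):
        weights[i] -= lr * penalty * x

features = [0.3, -1.2, 0.5, 0.0, 0.9, -0.4, 1.1, 0.2]
before = gut_decision(features)
negative_feedback(features, penalty=1.0)  # after a crash: adjust, but toward what?
print(before, "->", gut_decision(features))
```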

So what are the benefits of representing mental states and mental processes?

There are many; let me go through them one by one.

First, mental representation allows more meaningful cooperation between minds. Imagine a board of directors meeting to decide whether to invest in a project. If these directors are all holistic systems, what can they say at the meeting? They can give their own investment opinions: whether they favor the investment, what return they expect, perhaps even a confidence curve: the upper and lower bounds of the 90% confidence interval for the rate of return, the bounds of the 70% interval, the risk of total failure, and so on. But they cannot explain how they arrived at those numbers.
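
As a worked example of the “confidence curve” such a director might report (the return distribution is hypothetical, chosen only for illustration):

```python
import random, statistics

random.seed(0)
# Hypothetical distribution of project returns, for illustration only.
returns = sorted(random.gauss(0.08, 0.15) for _ in range(10_000))

def central_interval(samples, level):
    # Bounds of the central interval containing `level` of sampled outcomes.
    tail = (1 - level) / 2
    return samples[int(tail * len(samples))], samples[int((1 - tail) * len(samples)) - 1]

print("expected return:", round(statistics.mean(returns), 3))
print("90% interval:", central_interval(returns, 0.90))
print("70% interval:", central_interval(returns, 0.70))
print("P(total failure, return < -0.5):", sum(r < -0.5 for r in returns) / len(returns))
```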

What if the directors disagree? The only thing they can do is vote; to be a little more refined, each director might be given a different voting weight according to his experience. But they cannot examine or scrutinize one another’s statements and reasons, cannot supplement one another’s facts or arguments, cannot persuade or inspire one another; more generally, they cannot take the statements and opinions of others as input, re-run their own mental processes, and expect better output, because their only reason is the gut, and guts cannot talk to each other.
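
Weighted voting is then the ceiling of their cooperation. A minimal sketch, where the weighting-by-track-record scheme is my own illustration:

```python
# The most a board of holistic systems can do: aggregate unexplainable
# opinions by weight, with no exchange of reasons.
def weighted_vote(ballots, weights):
    """ballots: director -> True (invest) / False; weights: director -> weight."""
    score = sum(weights[d] * (1 if v else -1) for d, v in ballots.items())
    return score > 0

ballots = {"A": True, "B": False, "C": True}
weights = {"A": 1.0, "B": 2.5, "C": 0.8}   # e.g. from past prediction accuracy
print(weighted_vote(ballots, weights))      # False: B's track record dominates
```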

Things are different for a group of directors with mental representation. One director may give a presentation on the project; the others may ask him to clarify a statement of fact he has listed, or present evidence that contradicts it (conflicting perceptual results). He can also explain his method and procedure for calculating the expected return, and others can challenge those algorithms and models: pointing out that another input should be considered, that a different model performs better on this kind of evaluation, or that there are logical leaps or flaws in his chain of argument from factual premises to conclusion.

All of this requires them to share a roughly compatible conceptual framework for the dialogue to proceed; of course, the frameworks need not match exactly. For example, Director A may find that Director B does not understand a concept P that he uses, but A knows from past dialogue that B understands another concept X. A may then explain by analogy: P stands to Q within structure S as X stands to Y within structure T. On hearing this explanation, B immediately re-runs his own relevant cognitive processes. The result of the re-run may be that he doesn’t buy the analogy, but it may also turn up something novel. We call this “inspiration”.
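
The analogy can be read as structure mapping. A toy rendering of that reading, with hypothetical concepts and a hypothetical relation standing in for P, Q, X, Y:

```python
# "P stands to Q in structure S as X stands to Y in structure T."
structure_S = {("P", "Q"): "regulates"}  # contains the concept B doesn't know
structure_T = {("X", "Y"): "regulates"}  # relations B already understands

def analogy_holds(s, t, pair_s, pair_t):
    # The analogy claims the same relation links both pairs across structures.
    return s.get(pair_s) is not None and s.get(pair_s) == t.get(pair_t)

print(analogy_holds(structure_S, structure_T, ("P", "Q"), ("X", "Y")))  # True
```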

Second, mental representation also enables division of labor. Suppose that after some discussion the pro-investment faction has not won enough votes, and several directors have voiced concern about one particular risk: if a major breakthrough occurs in nuclear fusion technology, the returns the proponents expect will not materialize. Once the proponents know the reason for the opposition, they can say: since this is what worries you, why don’t we consult experts in that field and see how likely it is to happen?

Each of these directors may have in his mind a risk-assessment module, a technology-progress-tracking module, and similar knowledge reserves, including some awareness of the state of nuclear fusion technology. But the build quality and performance of these modules may differ greatly from one director to another, as may the coverage of their knowledge; each has his own strengths and weaknesses. Beyond themselves, there are other mental systems they can communicate with, and, based on past performance, there is a shared understanding of each system’s strengths and weaknesses. So when the proponents suggest consulting an expert (who may be one of the directors or an invited guest), the others are at least willing to listen, if they are genuinely engaged in the discussion, and what they hear becomes new input that triggers a re-run of the relevant risk-assessment module, possibly producing output different from before.

Note the important preconditions for such a process to take place: the objections raised by the opponents are not holistic (“I am against this investment; I can give no reason”) but articulated (“I am worried about this particular risk”), and the articulation rests on a shared conceptual framework: my worry is a “risk”, and it concerns “nuclear fusion technology”. Behind the objection lies a causal presupposition: a “major breakthrough” in “nuclear fusion technology” would cost the “target project” its “technical advantage”. This premise, too, is open to inspection and challenge.

If a reason is not conceptually expressed in this way, it cannot be examined, scrutinized, or challenged; it can only be accepted or discarded whole. It cannot be discussed, cannot be broken into logical components so that one component is rejected while the rest are kept, and no component can be swapped out: you cannot replace a specific value with a function and delegate the computation of that value to some external module or system, which is precisely what realizes the division of labor.
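
Here is a minimal sketch of that value-to-function replacement, assuming a deliberately simplified two-premise investment argument; the thresholds and the expert module are hypothetical:

```python
def investment_case(expected_return: float, fusion_breakthrough_prob: float) -> bool:
    # The argument decomposed into separate premises: invest only if the
    # return is good AND the fusion-breakthrough risk is low.
    return expected_return > 0.05 and fusion_breakthrough_prob < 0.10

# Before delegation: the disputed premise is a hard-coded value.
before = investment_case(expected_return=0.08, fusion_breakthrough_prob=0.02)

# After delegation: the same slot is filled by an external expert module.
def fusion_expert_estimate() -> float:
    # Hypothetical expert module; in reality, another mind entirely.
    return 0.25

after = investment_case(0.08, fusion_expert_estimate())
print(before, after)  # True False -- only the delegated component changed
```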

For such division of labor and delegation to be possible, the mental systems must be able to agree in advance on the rest of the problem, thereby isolating the point of disagreement. When the directors decide to delegate the assessment of a particular risk to an expert, they have already reached consensus on certain premises: the core advantage of the target project is a low-cost power-generation technology; it therefore competes with other power-generation technologies; a breakthrough in some other power-generation technology is thus a potential risk; and nuclear fusion is one possible such technology. Reaching consensus on this part of the problem likewise requires logical decomposition on the basis of a conceptual framework, which a holistic system cannot achieve.

For another example, when the directors decide to entrust a revenue-projection task to a financial expert, they must first reach a minimal consensus on certain facts (or on the mechanism for determining them); otherwise the delegation cannot proceed. When the financial expert sees several conflicting sets of data, which should he use? If he decides on his own, the board has effectively handed him the entire project evaluation rather than a decomposed sub-task. This decomposition, too, rests on a shared conceptual framework, which is what I call a semantic interface.
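
Read this way, a semantic interface behaves like an agreed, typed contract for the delegated sub-task. A sketch of that reading, in which the method name and parameters are my invention:

```python
from typing import Protocol

class RevenueProjection(Protocol):
    # The delegated sub-task as a typed contract: the board fixes which
    # facts count as settled inputs; the expert owes exactly one output.
    def project(self, agreed_sales_data: list[float], horizon_years: int) -> float:
        """Return projected annual revenue from facts the board has settled."""
        ...
```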

Third, mental representation is also the precondition for mutual supplementation and inspiration. A mental system may be unable to perceive certain things for lack of input from some quarter, and input from other minds can (1) bring to his attention things he had previously ignored; (2) even without additional real-world input, prompt him to re-run entity- and feature-recognition procedures over existing data from a new angle, under the prompting of new clues, perhaps yielding new “insights”; (3) if the source mind is sufficiently credible (credibility being assessable from past interactions and public reputation), lead him to directly accept some concepts or knowledge as input; and (4) such external knowledge includes not only statements of fact but also rules of thumb (if that is the case, better not do that) and reasoning or algorithmic modules (if he wears one and only one earring, look for other clues that he is gay; or, when you see a spherical object, you can estimate its volume). Clearly, this capacity to be supplemented and inspired also presupposes a semantic interface: the system’s interior must already be decomposed into many distinct modules, separated from and interacting with one another through semantic interfaces. Otherwise there would be no clue how such input should be used, for it is not first-order data about the world; without a semantic interface, there is no way to relate it to the knowledge the mind already has.
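
Point (3) in particular suggests a simple gate. A sketch, assuming credibility scores derived from track record (all names and numbers are illustrative):

```python
# Externally supplied knowledge is accepted only when the source mind's
# credibility clears a threshold.
knowledge_base: dict[str, str] = {}
credibility = {"director_B": 0.9, "anonymous_tip": 0.2}   # from track record

def accept_claim(source: str, claim_id: str, claim: str, threshold: float = 0.7):
    # External claims are not first-order sense data; they enter the knowledge
    # base only through this semantic interface, tagged by source.
    if credibility.get(source, 0.0) >= threshold:
        knowledge_base[claim_id] = claim

accept_claim("director_B", "fusion_state", "no commercial fusion before 2040")
accept_claim("anonymous_tip", "fusion_state", "fusion is imminent")  # rejected
print(knowledge_base)
```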

It is these new possibilities, created by mental representation and semantic interfaces, that make the human mind perform so well. Cognitive division of labor and cooperation among individuals lets cognitive capacity expand and knowledge accumulate beyond the limits of an individual’s lifespan and opportunities for observation, and beyond the limits of a single brain’s computing power and storage capacity.

Moreover, division of labor and cooperation occur not only between individuals but also among the modules within an individual mind. What we call consciousness is the state of active interaction among modules under whole-brain activation, and what we call deliberation is the process by which modules take turns competing to voice an opinion, state a reason, and advocate a course of action. The semantic interface over which these modules interact corresponds to, and to a considerable extent even overlaps with, the semantic interface between individuals (natural language), though the two are not necessarily identical: some implicit concepts that operate between modules may never have a corresponding expression in natural language.

It must be admitted that humans are not perfect in this regard. Many so-called discussions are fake discussions: the participants have already made up their minds and are not prepared to listen carefully to others or accept any persuasion; they merely pretend to, perhaps because the pretense is reassuring. Nor are people always effective at expressing their opinions: statements of fact, declarations of position, expressions of will or vision, and explanations of reasoning are often mixed together in a messy, ambiguous way, lacking clear decomposition.

Many cognitive activities may be divided among many modules in the mind, but once they enter spoken language the boundaries between sources blur and quantitative relations are lost. We may have quite a good Bayesian inference engine in our heads, processing the probabilities of various states of affairs with fine-grained quantitative indicators (one piece of evidence: our stereotypes tend to be fairly accurate and valid, are not actually so rigid, and adjust to new input). In introspection and in verbal report, however, these quantitative properties may all be lost. That is, some modules lack a clearly usable semantic interface; they are black boxes whose internal logic is invisible both to the introspective mechanisms of consciousness and to the mechanisms of interpersonal communication.
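
A worked toy of that contrast: a precise Bayesian update whose number survives internally but degrades into a vague word at the verbal interface. The priors and likelihoods are made up for illustration:

```python
def posterior(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    # Bayes' rule: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]
    num = p_e_given_h * prior
    return num / (num + p_e_given_not_h * (1 - prior))

p = posterior(prior=0.3, p_e_given_h=0.8, p_e_given_not_h=0.2)
print(round(p, 3))            # 0.632 -- the engine's actual, quantitative state

def verbalize(prob: float) -> str:
    # The lossy semantic interface: the number degrades into a vague word.
    return "probably" if prob > 0.6 else "maybe" if prob > 0.4 else "probably not"

print(verbalize(p))           # "probably" -- the quantity is gone
```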

At the same time, the semantic interface is not the only interface for human interaction; we have at least an emotional interface as well. Advice and persuasion are not the only means of influencing others’ actions: so are emotional contagion and intimidation. At a board meeting you can play a piece of music or show some visual material to sway the other directors, feed them high-sugar food at the right moment, or create a certain atmosphere through seating, clothing, or body language. In many situations the emotional interface may matter more than the semantic one.

But in any case, at least under certain conditions and in certain matters, the potential created by semantic interfaces has been well exploited; otherwise there would be no civilization, and no vast and sophisticated systems of knowledge, which are the fruit of exactly this development. So I believe that AI systems lacking the same capabilities will not achieve what Homo sapiens has achieved.

What is also tempting is that mental representation points a way out of the ethical dilemmas AI will eventually face. A holistic system, a logical black box, cannot be reasoned with: its action logic cannot be inspected or verified, so there is no way to know whether its behavior will conform to a given moral code; nor can it be exhorted, warned, or persuaded; you cannot explain to it why an act is wrong or which moral rule it violates, because it supports no semantic interface. You have no means of telling it: because Congress has passed a certain bill, from today you may no longer act in a certain way, or your actions must now obey certain new rules. Perhaps you can design some new training environment to change its behavior, but you cannot tell it directly what it must not do, and the reliability of the training cannot be verified in advance (by examining its internal states), only in hindsight, which may come much too late.
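
By contrast, a system with a semantic interface could accept such a rule declaratively. A minimal sketch, where the rule store and action names are hypothetical:

```python
# A new rule is injected declaratively and filters actions immediately:
# no retraining, and the rule itself remains inspectable.
forbidden: set[str] = set()

def enact_rule(action_name: str):
    # e.g. invoked when new legislation takes effect
    forbidden.add(action_name)

def permitted(action_name: str) -> bool:
    return action_name not in forbidden

enact_rule("cross_double_yellow_line")
print(permitted("cross_double_yellow_line"))  # False, effective immediately
print(permitted("brake"))                     # True
```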

With mental representation, all of this becomes possible. Through the kind of dialogue demonstrated at the beginning of this article (the interrogation of the self-driving system), we have the opportunity to learn about a system’s mental states: how it sees and thinks about the world, its capacity for understanding, its strategies for action, and its value orientation will all show themselves through dialogue.

On that basis, we can judge whether to assign it certain tasks, whether to grant it certain freedoms and responsibilities, and whether to accept it as a qualified independent agent and an equal fellow citizen of society.
