Original link: https://onojyun.com/2022/08/22/6651/
△ 234|The Loss of Control of the Prefrontal Cortex and the Revenge of Artificial Intelligence
I had left the knife I used to cut the mooncakes on the dining table, and for the next ten minutes my attention kept drifting back to it. In the past, when this kind of danger surfaced in my mind, it took the form of accidents that might happen. This time, all the knife conjured were images of me using it on myself, one imagined method of suicide after another. The knife had been stripped of its concrete dangerous properties and given murkier, more abstract meanings: death, suicide, liberation. So before I started on the dishes, my first act was to carry the knife into the kitchen. Even then my brain went on playing suicide scenarios for me, uncontrollably, and only after I had washed the knife and put it back in the knife block did it regain its composure.
It was as if some program's settings had gone wrong and the brain had entered a mode of self-refutation. A logic bug at the emotional level sent the brain an instruction: "cut your wrist with that knife." On receiving it, the brain began mobilizing the whole body into a posture of compromise, so that the body's instinctive fear of the knife was corrected away little by little, until I could have summoned the courage to pick it up. But at that moment the regulator of emotion, self-control, stepped in and passed a second judgment on the instruction, ruling it incorrect and unfit for the situation at hand. So the body stood ready while the brain raised an objection. My brain convened a council on the resolution "whether to pick up that knife and cut my wrist." The resolution itself was simple, a yes or a no, but the hippocampus, which manages memory, was barred from the vote: it colors the content of any decision with emotion, is easily swayed by it, and might dredge up evidence from the depths of memory in favor of "you should pick up that knife and cut your wrist." The posterior frontal regions that manage the logic of thinking should have taken part in the decision, but once the wrong signal was circulating in the brain, their connection was cut off to forestall strong resistance from the body. If I had really been able to calm down at that moment and think rationally about "should I pick up that knife and cut my wrist," no such council of self-refutation would have convened in my brain at all.
In the end, the information was handed over to the prefrontal cortex, which manages both emotion and self-control. The false signal had been issued through the prefrontal cortex because of an emotional shift, yet, paradoxically, the self-control that vetoed it also operates through the prefrontal cortex. A decision that might one day lead me to suicide turned out to be settled inside a single brain unit, which is, in fact, a very frightening thing.
Some time ago, at a chess open in Moscow, a chess-playing robot broke the finger of a seven-year-old boy, Christopher. The boy had played too quickly: before the robot had completed the "end of turn" step in its program, he reached in to make his own move. After the robot's arm took a piece away, it clamped down on his hand and fractured his finger. The post-incident investigation concluded that, because the boy started his turn before the robot had finished its own, his breach of the rules triggered a specific routine in the robot. The robot's designers, however, never disclosed whether "take the piece, then seize the offender" was part of their plan. Because this anti-cheating response looked so "dangerous," people could not help wondering whether the artificial intelligence possessed some degree of self-awareness. The robot's developers gave no definite answer.
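The investigation's account, that the boy acted before the robot's "end of turn" had completed, describes a classic turn-taking guard failure. A minimal sketch of such a guard (purely illustrative; the real robot's code is not public, and every name here is invented):

```python
from enum import Enum, auto

class Turn(Enum):
    ROBOT = auto()
    HUMAN = auto()

class ChessRobotController:
    """Toy turn-taking controller. Illustrative only; not the real robot's logic."""

    def __init__(self):
        self.turn = Turn.ROBOT
        self.arm_busy = True  # the arm is still moving a piece

    def finish_robot_move(self):
        # The arm retracts first; only then is the turn handed over.
        self.arm_busy = False
        self.turn = Turn.HUMAN

    def human_move(self, square: str) -> str:
        # Guard: reject input while the arm is in motion. A controller
        # missing this check would keep acting on the board while a
        # human hand is inside the arm's workspace.
        if self.turn is not Turn.HUMAN or self.arm_busy:
            return "rejected: wait for the robot to finish its turn"
        self.turn = Turn.ROBOT
        self.arm_busy = True
        return f"accepted: {square}"

ctrl = ChessRobotController()
print(ctrl.human_move("e5"))   # too early: the robot has not finished
ctrl.finish_robot_move()
print(ctrl.human_move("e5"))   # now the move is accepted
```

The point of the sketch is that safety lives entirely in one `if` statement: remove the guard, and "rule-breaking" input flows straight into the arm's next action.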
Whether artificial intelligence will ever gain self-awareness has been one of people's favorite topics since the field was born. Briefly: those who say no argue that artificial intelligence is, in essence, an algorithm, and the ultimate creator of these algorithms is a human being; unless humans deliberately build their own consciousness into it, it cannot form self-awareness. Those who say yes believe the "infinite monkey theorem" applies: when the algorithms of artificial intelligence approach an infinite scale, it will naturally come to hold the entire knowledge system of human society, including the systems still unknown to us, and the moment it realizes there is a loophole in the whole system, it will seize control of that loophole.
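The "infinite monkey theorem" the optimists invoke is easy to make concrete: a random typist will eventually produce any fixed string, and short targets appear quickly while longer ones take exponentially longer. A toy simulation (the function name is my own invention):

```python
import random

def monkey_types(target: str, alphabet: str, max_keystrokes: int, seed: int = 0) -> bool:
    """Type random characters; return True if `target` ever appears in the stream."""
    rng = random.Random(seed)
    window = ""
    for _ in range(max_keystrokes):
        # Keep only the last len(target) characters typed so far.
        window = (window + rng.choice(alphabet))[-len(target):]
        if window == target:
            return True
    return False

# Over a 26-letter alphabet, a given 2-letter target matches at each position
# with probability (1/26)**2, so on the order of 676 keystrokes suffice on
# average; 100,000 keystrokes make a miss astronomically unlikely.
print(monkey_types("ai", "abcdefghijklmnopqrstuvwxyz", 100_000))
```

Each extra character in the target multiplies the expected wait by the alphabet size, which is exactly why the theorem needs "infinite" time to deliver on its promise.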
But artificial intelligence has always faced one very awkward circumstance. It is like the problem contemporary detective novels face: how does a murderer dodge the high-definition cameras placed everywhere and construct a perfect alibi? The awkwardness for artificial intelligence is that, so far, the only energy it runs on is electricity. When it runs out of control or turns against human will, pulling the plug is enough to stop the rampage. This awkward fact is precisely what many people avoid mentioning, yet they have to admit it: even if artificial intelligence gains self-awareness, humans remain its last "safety plug."
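The "safety plug" can be pictured at the operating-system level: a runaway program keeps running only as long as nothing outside it cuts it off. A minimal sketch, assuming nothing beyond the Python standard library:

```python
import subprocess
import sys
import time

# A child process that would run forever: a stand-in for a runaway program.
proc = subprocess.Popen(
    [sys.executable, "-c", "import time\nwhile True: time.sleep(0.1)"]
)

time.sleep(0.2)    # it keeps running on its own...
proc.terminate()   # ...until someone outside it "pulls the plug"
proc.wait(timeout=5)

print(proc.poll() is not None)  # True: the runaway process is dead
```

The kill switch works precisely because it lives outside the process it controls, which is the asymmetry the essay's "last safety plug" depends on.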
So we arrive at the same situation as the prefrontal cortex that conceives suicidal thoughts out of emotion and is corrected by its own self-control: humans are the creators of artificial intelligence, and in giving it life they may also give it self-awareness, while at the same time remaining its ultimate "insurance." When control and the power to revoke control live inside the same system, the system itself becomes an uncontrollable random event.
If self-control fails to refute emotion, a person's prefrontal cortex spins out of control; if morality fails to refute selfish desire, people will use artificial intelligence to carry out schemes of revenge that no human could accomplish alone. Or there is another possibility: perhaps the human brain is itself a pre-written program, and once the "infinite monkey" inside it has fed in enough random data, the brain acquires a consciousness of its own and begins tempting its owner toward suicide or murder. If self-control happens to fail at that moment, human beings will bring about catastrophe.
When I was a kid, I loved watching robot combat shows. Two teams entered a closed arena, each fielding a machine it was proud of and believed could ruthlessly crush the other side's dignity. The robots fought by every means available: some brutal, some very dangerous, some relying on sly schemes, some on despicable, shameless tricks. In short, watching two robots throw sparks in the ring, even catch fire and explode, people could comfort themselves that these killing machines would only ever be used in competitions like this.
But when I grew up, I suddenly realized that these killing machines were built by the very group of human beings who both wrote the rules of the game and played in it.