Original link: https://topbook.cc/overview?selectedArticle=1945
Before the video starts, let me ask a question: which of the following photos show real people?
The answer is, none of them are real.
All the photos you just saw were randomly generated by AI, yet the people in them look as real as we do. Studies have shown that humans struggle to tell AI-created faces from real ones, and even perceive them as more trustworthy than the real faces in other photos.
A website called Generated Photos has built a face generator. Open it, and we get a randomly generated face.
The menu on the left offers a series of options that let us change the face at will: gender, head position, age, expression, skin color, hair color (even hair loss), whether the person wears glasses, and eye and lip makeup. With a few clicks and a few slider drags, we can get any realistic but non-existent face.
If we need a lot of portrait photos, Generated Photos even offers a face library with more than 2.68 million AI-generated faces. Users can filter by facial attributes and buy in bulk, with pricing ranging from $2.99 apiece to $1,250 for 1,000 photos. Importing these photos into a tool called Reallusion, we can then quickly turn the 2D photos into 3D animations.
The technique behind these AI-generated portraits is the Generative Adversarial Network (GAN), proposed in 2014 by Ian J. Goodfellow et al.
We can think of it as two networks: one called the generator and the other called the discriminator. First, we give the generator a set of portrait photos to learn from, and it produces new photos; the discriminator then judges whether each photo is real or fake. The generator regenerates, the discriminator judges again, and the cycle repeats until the discriminator can no longer tell real from fake. Through this game between the two networks, the generated photos become nearly indistinguishable from real ones.
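The two-network game described above can be sketched in a few dozen lines. The toy example below is our own illustration, not Generated Photos' model: instead of photos, a one-parameter linear generator learns to mimic a 1-D Gaussian, while a logistic-regression discriminator tries to tell its samples from real ones. All names and numbers here are made up for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data the generator must learn to imitate: samples from N(4, 1).
def real_batch(n):
    return rng.normal(4.0, 1.0, size=(n, 1))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Tiny linear models:
#   generator      g(z) = w_g * z + b_g       (noise -> sample)
#   discriminator  d(x) = sigmoid(w_d * x + b_d)  (sample -> P(real))
w_g, b_g = 1.0, 0.0
w_d, b_d = 0.1, 0.0
lr = 0.05

for step in range(2000):
    n = 32
    z = rng.normal(size=(n, 1))     # noise fed to the generator
    fake = z * w_g + b_g            # generated ("fake") samples
    real = real_batch(n)

    # Discriminator step: ascend  E[log d(real)] + E[log(1 - d(fake))]
    p_real = sigmoid(real * w_d + b_d)
    p_fake = sigmoid(fake * w_d + b_d)
    w_d += lr * (np.mean((1 - p_real) * real) + np.mean(-p_fake * fake))
    b_d += lr * (np.mean(1 - p_real) + np.mean(-p_fake))

    # Generator step: ascend  E[log d(fake)]  (the "non-saturating" objective)
    p_fake = sigmoid(fake * w_d + b_d)
    w_g += lr * np.mean((1 - p_fake) * w_d * z)
    b_g += lr * np.mean((1 - p_fake) * w_d)

# Draw samples from the trained generator.
samples = rng.normal(size=(1000, 1)) * w_g + b_g
print(samples.shape)
```

In a real image GAN the generator and discriminator are deep convolutional networks and the "samples" are pixel arrays, but the alternating ascend/descend game is the same.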
Generative adversarial networks made deepfakes possible. The word "deepfake" is a portmanteau of "deep learning" and "fake": using artificial intelligence to generate, manipulate, or modify media, which is then disseminated. In other words, AI face generation and face swapping are both deepfake technologies.
Deepfake techniques are commonly divided into four categories: reenactment, replacement, editing, and synthesis.
Reenactment makes A's expressions and movements appear on B's body. For example, a narrator's movements and expressions can be mapped onto the Mona Lisa, letting the Mona Lisa tell her own story.
Replacement swaps A's face onto B's face — the face-swapping we often see. Face swapping is used not only in spoof videos but also to replace disgraced actors in film and television; for example, Lu Deguang's face was replaced in "Breakthrough".
Editing changes the target person by adding, altering, or removing attributes — for example, changing the target's hairstyle, clothes, beard, age, weight, or race. Altering facial features, such as excessive beautification, can easily create a false impression.
Synthesis generates faces directly, with no underlying person. The randomly generated faces shown earlier in the video are synthesized, with editing techniques used to modify their facial features.
Deepfakes make seeing no longer believing. Reenactment can be used for impersonation and false propaganda; replacement is in fact heavily used in pornography; editing can attach false attributes to mislead viewers, such as changing a person's apparent race; and synthesis lets scammers defraud victims from behind fabricated avatars.
Before we knew it, deepfake technology had crept into our lives. In May 2018, netizens used deepfake technology to make a fake video of Trump announcing the Belgian government's withdrawal from the global climate change agreement, causing public outrage in Belgium. In January 2021, a scammer used AI face swapping to pose as a woman's classmate and defrauded her of 3,000 yuan under the pretext of borrowing money. In May 2021, a man was extorted out of 110,000 yuan with an indecent video forged by a fraudster using AI face swapping.
These incidents all show that deepfakes have reached the point where we cannot detect them with the naked eye. We may even wonder: can such clever deepfake technology fool face recognition?
In fact, at the 2017 CCTV "3·15" consumer rights gala, the host used an animated photo to pass a face-recognition check that required blinking, then used AI face swapping to unlock other people's phones. Criminals went further: they bought personally identifiable information and used other people's photos to forge 3D faces that fooled Alipay's face-recognition system at the time.
Since the core of face recognition is converting facial features into data and then matching them, a deepfake that reproduces those features well enough can, in principle, deceive a face-recognition system.
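The "convert features to data, then match" step can be illustrated with a tiny sketch. This is a hypothetical example of our own: real systems extract learned embeddings of 128+ dimensions from a photo, but the matching logic — accept if the probe vector is close enough to the enrolled one — is the same, which is exactly why a forgery that reproduces the features also passes.

```python
import numpy as np

def cosine_similarity(a, b):
    # Similarity of two feature vectors, in [-1, 1]; 1 means identical direction.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_match(enrolled, probe, threshold=0.9):
    """Accept the probe if its embedding is close enough to the enrolled one."""
    return cosine_similarity(enrolled, probe) >= threshold

# Hypothetical 4-D embeddings (real systems use much larger learned vectors).
enrolled = np.array([0.2, 0.8, 0.1, 0.5])                  # stored at enrollment
genuine  = enrolled + np.array([0.01, -0.02, 0.0, 0.01])   # same person, new photo
deepfake = enrolled + np.array([0.02, -0.01, 0.01, 0.0])   # forged to mimic features

print(is_match(enrolled, genuine))   # a genuine re-capture matches
print(is_match(enrolled, deepfake))  # a good enough forgery matches too
```

The system never sees the face itself at match time, only the feature vector — so any input that lands inside the threshold, real or forged, is accepted.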
But deepfakes are not foolproof. Broadly, there are two ways to detect them. The first, artifact detection, uses machine learning and forensic analysis to find artifacts in fake images — for example, the artifacts produced when generated content is blended into video frames, which are often invisible to the human eye. The second, the undirected approach, lets a neural network decide for itself which features to analyze.
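To make the artifact idea concrete, here is a deliberately crude toy of our own invention (not a real forensic method): a face pasted into a frame leaves a blending seam whose intensity jump differs from the smooth statistics of a natural image, and even a one-line statistic can pick it up. Production detectors learn far subtler forensic features.

```python
import numpy as np

rng = np.random.default_rng(1)

def seam_score(img):
    # Largest column-to-column jump in mean intensity — a crude seam signal.
    col_means = img.mean(axis=0)
    return float(np.max(np.abs(np.diff(col_means))))

# Toy 64x64 grayscale "frames" (synthetic data, for illustration only).
natural = rng.normal(0.5, 0.02, size=(64, 64))  # smooth, unmodified frame
blended = natural.copy()
blended[:, 32:] += 0.2                          # pasted region shifts intensity

print(seam_score(natural) < seam_score(blended))  # the seam stands out
```

Real blended deepfakes are color-corrected and feathered at the boundary, so practical artifact detectors rely on learned features rather than a single hand-picked statistic — but the principle is the same: the fusion step leaves traces.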
Clever deepfake technology may break through face recognition, and advances in face recognition will in turn thwart deepfakes. Deepfakes and face recognition are like a spear and a shield, forever competing against each other.
Behind the contest between these two technologies lies a bigger worry: as deepfakes grow popular, our face data can leak. You might ask, what does deepfake technology have to do with our own real faces?
What you may not realize is that we may have already handed over our face data voluntarily, without noticing. A while back, a face-swapping app called Zao suddenly became popular. After large numbers of users poured in to play the face-swapping game, people discovered that its terms — which users must click to confirm they have "read and understood" — stated that uploaded portraits were granted to the app permanently, free of charge, indefinitely, and irrevocably; at the same time, users had to guarantee that they held the portrait rights of the person whose face was being replaced.
That is to say, if we swapped in Liu Haoran's face, we would not only sign away our own portrait rights but also have to guarantee that we held Liu Haoran's portrait rights — otherwise we would bear all the consequences.
Although this app was eventually summoned by regulators and its one-sided terms were revised, other apps still do the same. And how many of us actually read the terms an app presents at registration? Those leaked faces may be traded, reused, reworked by deepfake technology, and used for illegal acts in some corner of the world.
But this is by no means unpreventable. Face leaks can be curbed by mandatory deepfake watermarks, restrictions on public access to the technology, and so on. What each of us can do is stay more vigilant and refuse unfair terms, protecting our own portrait rights.
The term "deepfake technology" may sound unfamiliar, but AI face generation and face swapping are no strangers to us. Since the generative adversarial network was proposed in 2014, it has taken less than eight years for face generation and face swapping to become indistinguishable from the real thing.
On the one hand, we worry about a future in which truth becomes indiscernible. On the other, the synthetic and edited faces we saw at the beginning of the video are already widely used in machine learning, games, film and television, and even academic and medical research. Beyond that, deepfakes may help us "resurrect" historical figures, giving us a chance, through replacement and reenactment, to speak again with the departed.
Perhaps we should view deepfake technology selectively: actively draw the line between it and reality, while also combining it with other technologies to create more possibilities for society's future. Through the contest between technologies, deepfakes can become a handy tool whose side effects are controlled and prevented. What do you think?