In the AI era, we should talk more about "safety" | The first Artificial Intelligence Safety Competition concludes

In the digital-economy era, digital technologies represented by artificial intelligence are empowering industrial transformation and pushing the whole economy into a new stage of intelligence. At the same time, security issues arising from AI applications, such as attacks on network systems, privacy leaks, and data-ownership disputes, have drawn attention across the industry.

On September 16, the first AISC Artificial Intelligence Safety Competition, themed "Building AI Safely and Enjoying an Intelligent Future Together", came to a successful conclusion. The competition decided champions in three core tracks: face recognition security, autonomous driving security, and deepfake security. It was jointly organized by the National Industrial Information Security Development Research Center, the Institute for Artificial Intelligence at Tsinghua University, and Beijing Ruilai Smart Technology Co., Ltd., and is the first national AI security competition.

AI security risks are not future challenges, but immediate threats

Just imagine: someone puts a "magic sticker" on their face, and the face-recognition access-control system mistakes them for you, opening the door with ease; placed on a pair of glasses, the same sticker can unlock your phone's face recognition in one second and browse your private data unnoticed. This is not a scene from a sci-fi blockbuster, but a real attack-and-defense demonstration staged at the award ceremony of the first Artificial Intelligence Safety Competition.

Competition staff demonstrate an adversarial-example attack on a face recognition access control system

Tian Tian, CEO of Beijing Ruilai Smart Technology Co., Ltd., believes that the scope of AI technology risks is expanding as application scenarios broaden, and that the likelihood of risks occurring rises with the frequency of use. In his view, the current security risks of artificial intelligence can be analyzed mainly from two perspectives: "people" and "systems".

Evaluating AI security from the human perspective, the foremost problem is the dual-use nature of the technology: AI can be abused or even "weaponized". The most typical example is deepfake technology, whose malicious applications continue to grow and have already caused substantial harm.

The on-site face recognition cracking demonstration reveals the system-level risk, which stems from the vulnerability of the deep learning algorithm itself. Second-generation AI, with deep learning at its core, is a "black box": it is unexplainable, which means the system has structural loopholes and may be exposed to unpredictable risks. The "magic stickers" demonstrated on site are in fact an "adversarial example attack", which causes the system to make wrong judgments by adding carefully crafted perturbations to the input data.
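The adversarial-example mechanism described above can be sketched with a toy model. The snippet below is a hypothetical illustration, not the competition's actual systems: a frozen logistic "verifier" is fooled by the Fast Gradient Sign Method (FGSM), which perturbs the input along the sign of the loss gradient. The weights, input, and epsilon are all made up for the demonstration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, w, eps):
    """Fast Gradient Sign Method against a toy logistic 'verifier'.

    For the loss -log(sigmoid(w @ x)) of the 'match' label, the input
    gradient is (p - 1) * w; stepping along sign(grad) lowers the score.
    """
    p = sigmoid(w @ x)
    grad = (p - 1.0) * w               # d(loss)/dx for the 'match' label
    return x + eps * np.sign(grad)     # bounded perturbation per component

w = np.array([2.0, -1.0, 0.5])         # frozen, made-up model weights
x = np.array([1.0, 0.2, 0.4])          # clean input: accepted as a match

x_adv = fgsm_attack(x, w, eps=0.7)
print(round(sigmoid(w @ x), 2))        # 0.88 -> accepted
print(round(sigmoid(w @ x_adv), 2))    # 0.39 -> rejected
```

The same principle scales to deep networks: the "magic sticker" is an adversarial perturbation optimized against the recognition model, just confined to a physical patch instead of spread across the whole input.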

This vulnerability also exists in autonomous-driving perception systems, and Ruilai Smart Technology demonstrated adversarial examples attacking autonomous vehicles. Normally, after recognizing targets such as roadblocks, signs, or pedestrians, a self-driving vehicle stops immediately; but once an interference pattern is added to the target object, the vehicle's perception system fails and the car drives straight into it.

In terms of the specific competition problems, the autonomous driving security and face recognition security tracks approach the problem from the attacker's perspective: contestants attack target models to uncover algorithmic vulnerabilities, with the goal of finding more stable attack algorithms that enable accurate evaluation of model security. The deepfake security track asks contestants to analyze the similarity of forged audio and video to trace whether different forged content comes from the same (or the same type of) generation software, which promotes the development of deepfake detection technology and has important practical significance.
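The source-tracing idea in the deepfake track can be illustrated with a small sketch. Assume a detector network maps each forged clip to an embedding vector; clips produced by the same generator tend to share fingerprint-like artifacts, so their embeddings are close under cosine similarity. The embeddings and threshold below are invented for illustration only.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embeddings (in practice produced by a trained detector).
clip_a = np.array([0.9, 0.1, 0.2])
clip_b = np.array([0.8, 0.15, 0.25])   # forged by the same generator as clip_a
clip_c = np.array([-0.2, 0.9, -0.3])   # forged by a different generator

SAME_SOURCE_THRESHOLD = 0.9            # illustrative cutoff, not a real value

print(cosine_similarity(clip_a, clip_b) > SAME_SOURCE_THRESHOLD)  # True
print(cosine_similarity(clip_a, clip_c) > SAME_SOURCE_THRESHOLD)  # False
```

Real tracing systems replace the toy vectors with learned features and calibrate the threshold on labeled forgeries, but the pairwise-similarity decision is the same shape.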


Tian Tian, CEO of Beijing Ruilai Smart Technology Co., Ltd.

The essence of security lies in adversarial escalation: building security requires a process of continuous offense-defense evolution. Tian Tian said the competition focuses on typical vulnerabilities and risks in real-world AI application scenarios, promotes development and research through competition, and explores new security-demand scenarios by assessing the vulnerability-discovery and exploitation capabilities of the participating teams. This drives innovation in AI attack-and-defense technology and supports the strengthening of the AI governance system and security-evaluation capabilities.

Building a safe AI ecosystem requires both the continuous evolution of technology and the cultivation of specialized technical talent. Tian Tian noted that because AI security research is still an emerging field, specialized talent is scarce and systematic research teams are lacking. By testing and improving contestants' practical skills through live competition, this event provides a "fast track" for cultivating a new cohort of high-level AI security talent.

Since registration opened in July, the competition attracted more than 400 teams and over 600 contestants from more than 70 universities, research institutes, and enterprises across the country. After three months of fierce competition, Shanghai Jiao Tong University's team "AreYouFake" and Beijing Jiaotong University's team "BJTU-ADaM" won the deepfake security and autonomous driving security track championships respectively, while Beijing Institute of Technology's team "DeepDream" and CCB Jinke's team "Tian Quan & LianYi" tied for first in the face recognition track.

Top three in the deepfake security track

Top three in the autonomous driving security track

Top three in the face recognition security track

In addition, the "White Paper on the Security Development of Artificial Intelligence Computing Power Infrastructure", jointly written by the National Industrial Information Security Development Research Center, Huawei Technologies Co., Ltd., and Beijing Ruilai Smart Technology Co., Ltd., was officially released at the event.

The white paper examines in depth the significance, connotation, and system architecture of the secure development of AI computing infrastructure, the current state of its security management, and development recommendations. It points out that AI computing infrastructure differs from traditional computing infrastructure: it is at once "infrastructure", "AI computing power", and a "public facility", carrying infrastructural, technical, and public attributes. Accordingly, promoting its secure development should focus on strengthening its own security, ensuring operational security, and assisting security compliance: strengthening its own reliability, availability, and stability; ensuring the confidentiality and integrity of algorithms during operation; and improving users' security control, awareness, and compliance across eight areas. The aim is to build a solid AI security defense line, create a trustworthy, usable, and easy-to-use AI computing base, and foster a safe, healthy, and compliant AI industry ecosystem.

At the event, keynote talks were given by Chen Kai, deputy director of the State Key Laboratory of Information Security at the Chinese Academy of Sciences; Liu Xianglong, deputy director of the State Key Laboratory of Software Development Environment at Beihang University; and Su Jianming, expert in charge of the Security Attack and Defense Laboratory at the ICBC Financial Research Institute. In the roundtable session, Jing Liping, professor and deputy dean of the School of Computer and Information Technology at Beijing Jiaotong University; Zhu Jun, director of the Basic Theory Research Center of the Institute for Artificial Intelligence at Tsinghua University; Deng Weihong, deputy dean of the School of Cyberspace Security at Beijing University of Posts and Telecommunications; Chen Jiansheng, professor at the School of Communication Engineering, University of Science and Technology Beijing; and Tang Wen, Huawei's trusted-AI security solution expert, held in-depth exchanges on topics such as AI security technology development and AI security governance.

Experts believe that, in the long run, AI security problems still require breakthroughs at the level of algorithm-model principles, and that only by continuing to strengthen basic research can the core scientific problems be solved. At the same time, they emphasized that the future development of AI must ensure effectiveness and positive contribution to society and national development, which requires coordination among government, industry, academia, and research. (Leifeng Network)
