Ant Group’s three trusted AI practice solutions selected as benchmark cases by the China Academy of Information and Communications Technology

On August 16, the “2022 Trusted AI Summit”, hosted by the China Academy of Information and Communications Technology, was held in Beijing. The selection results of the “2022 Trusted AI Practice Cases” were announced at the meeting: three of Ant Group’s trusted AI practice programs were selected as benchmark cases.

The three Ant Group cases selected as benchmarks, all built on trusted AI technology, are the “Ant Explainable Machine Learning Platform FinINTP”, the “Ant Explainable AI Security Risk Control Practice”, and the “Ant AI Security Detection Platform”.

Picture: Ant Group’s three cases were selected as benchmark cases in the “2022 Trusted Artificial Intelligence Practice Cases” of the China Academy of Information and Communications Technology

In recent years, safety and trustworthiness have become bottlenecks restricting the next stage of artificial intelligence development. As early as 2017, the State Council’s “New Generation Artificial Intelligence Development Plan” highlighted the safety and trustworthiness of artificial intelligence. Since May this year, the China Academy of Information and Communications Technology has solicited trusted AI practice cases for 2022 from across society, ultimately selecting 35 excellent cases and 13 benchmark cases that provide demonstration guidelines for the industry’s trusted AI application work.

At present, AI is increasingly involved in important decisions, and its reasoning cannot remain a “black box”. The “Ant Explainable Machine Learning Platform FinINTP”, independently developed by Ant Group, combines rule learning, machine learning, and other technologies. It not only supports building interpretable models and rules for scenarios such as intelligent risk control, but also provides a variety of perturbation-based and gradient-based post-hoc interpreters that enable early detection of batches of anomalous risk factors.
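The article does not describe FinINTP’s internals, but the general idea of a perturbation-based post-hoc interpreter can be sketched as permutation importance: shuffle one input feature at a time and measure how much the model’s accuracy drops. The model and data below are toy placeholders for illustration, not Ant’s implementation.

```python
import numpy as np

def permutation_importance(model_fn, X, y, n_repeats=5, seed=0):
    """Perturbation-based post-hoc interpreter: shuffle one feature at a
    time and record the resulting drop in accuracy."""
    rng = np.random.default_rng(seed)
    baseline = np.mean(model_fn(X) == y)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])  # perturb feature j only
            drops.append(baseline - np.mean(model_fn(Xp) == y))
        importances[j] = np.mean(drops)
    return importances

# toy "risk model": flags a sample when feature 0 exceeds 0.5; feature 1 is noise
model = lambda X: (X[:, 0] > 0.5).astype(int)
rng = np.random.default_rng(1)
X = rng.random((200, 2))
y = model(X)
imp = permutation_importance(model, X, y)
```

Shuffling the decisive feature destroys roughly half the predictions, while shuffling the unused noise feature changes nothing, so `imp[0]` dominates `imp[1]`; a gradient-based interpreter would instead rank features by the model’s sensitivity to each input.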

Ant Group’s “Explainable AI Security Risk Control Practice” is an explainable AI system built on its security risk-control work. The system adopts a distributed interpretability framework spanning feature-based, model-based, and logic-based methods, speeding up interpretability computation over millions of prediction samples by more than 100 times; at the same time, it combines human experience with interactive machine learning to ensure the natural interpretability of AI decision results. The system has already been applied in scenarios such as Ant’s intelligent risk control, complaint adjudication, and anti-money laundering.

Another important topic of trusted AI is how to improve the robustness of AI, that is, its ability to resist attacks and remain stable, in order to cope with complex and changeable data environments. The “Ant AI Security Detection Platform”, selected as a benchmark case this time, is the industry’s first AI security detection platform covering all data types in industrial scenarios. The platform can automatically assess AI security risks across different data and attack types and provide security-enhancement solutions. Its robustness assessment will be opened free of charge in September, and detection capabilities for interpretability, algorithmic fairness, and privacy protection will be added over time.
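The platform’s actual methodology is not described in the article, but one simple robustness metric of the kind such assessments compute is the fraction of samples whose prediction survives random bounded input perturbations. The threshold classifier below is a hypothetical stand-in for a model under test.

```python
import numpy as np

def robustness_rate(model_fn, X, epsilon=0.1, n_trials=20, seed=0):
    """Fraction of samples whose prediction is unchanged by every one of
    n_trials random perturbations bounded by +/- epsilon per feature."""
    rng = np.random.default_rng(seed)
    base = model_fn(X)
    stable = np.ones(len(X), dtype=bool)
    for _ in range(n_trials):
        noise = rng.uniform(-epsilon, epsilon, size=X.shape)
        stable &= (model_fn(X + noise) == base)
    return stable.mean()

# hypothetical classifier with a decision boundary at feature 0 == 0.5
model = lambda X: (X[:, 0] > 0.5).astype(int)
rng = np.random.default_rng(1)
X = rng.random((300, 2))
rate = robustness_rate(model, X, epsilon=0.1)
```

Only samples lying close to the decision boundary can be flipped, so the rate falls as `epsilon` grows; a production assessment would add adversarial (worst-case) perturbations rather than purely random ones.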

Li Qi, an associate professor at Tsinghua University, said that AI trustworthiness has attracted widespread attention from academia and industry. AI explainability is a key technology for achieving trustworthy AI, but current industry exploration remains mostly at the theoretical level and lacks a general interpretability framework. Ant Group’s innovative applications in the field of trusted AI provide broader ideas for the technology’s development.

According to public information, Ant Group has been investing in trusted AI research since 2015 and launched a comprehensive AI risk-control defense strategy in 2016. The technology has since been applied in scenarios including anti-fraud, anti-money-laundering, anti-theft, joint enterprise risk control, and data privacy protection, forming the new-generation intelligent risk control system “IMAGE”, which supports prevention and control across all risk areas.

