Alibaba Cloud releases CIPU as cloud computing enters its third stage. On June 13, Zhang Jianfeng, president of Alibaba Cloud Intelligence, officially unveiled the CIPU (Cloud Infrastructure Processing Unit), a cloud infrastructure processor, at the summit.

The CIPU is a cloud processor that connects a server's internal hardware with the virtualized resources deployed on the cloud.

Over the past decade, cloud computing has gone through two major technical innovations: distributed technology and resource pooling. Distributed and virtualization technologies replaced mainframes and met the scale of computing power enterprises required at the time.

Resource pooling technology then pooled computing, storage, and network resources separately through a compute-storage separation architecture, laying the foundation for data centers to deliver ultra-large-scale cloud computing services.

But as data centers have grown, customer needs have begun to change again.

With the spread of data-intensive computing scenarios, users' demand for low latency and high bandwidth keeps rising, and traditional CPU-centric computing architectures cannot keep up with this trend.

The DPU (Data Processing Unit), widely discussed in the industry in recent years, emerged in response: it offloads part of the work from the CPU so the CPU can focus on more important computation, improving data-center efficiency.

Functionally, the CIPU Alibaba released this time is no different from a DPU. In naming, the CIPU is closer to the IPU (Infrastructure Processing Unit) that Intel released last year. Guido Appenzeller, chief technology officer of Intel's Data Platform Division, has said: "There is no fundamental difference between a DPU and an IPU in terms of function; only the names differ."


Guido Appenzeller believes the IPU brings three significant advantages. First, adding an IPU to the architecture clearly separates the tenant area from the cloud service provider area. Second, infrastructure functions can be moved onto specially optimized IPUs for significant performance gains. Finally, the IPU enables a diskless data-center architecture, eliminating the need to equip every server with its own disks.

The CIPU not only manages virtualized resources but also addresses the bandwidth bottleneck of data migration.

Unlike DPUs and IPUs supplied by third-party providers, the CIPU not only offers software-defined functionality and hardware-accelerated modules but, more importantly, can be tightly integrated with Alibaba Cloud's own Apsara system to form a complete cloud architecture.

Alibaba points out that the CIPU is a dedicated processor designed for next-generation data centers and tailored specifically to the Apsara system. In Alibaba's plan, the CIPU will eventually replace the CPU as the control core of cloud computing.

It is understood that Alibaba Cloud's R&D team began this technical research as early as 2015 and launched the industry's first Shenlong (X-Dragon) cloud server with zero virtualization overhead in 2017. After years of in-house development and iteration, core technologies such as Shenlong and elastic RDMA have been vertically integrated, evolving into a new CIPU-centered architecture and taking cloud computing into its third stage.

Alibaba believes that with the addition of the CIPU, data-center architecture will change again: the new architecture is no longer CPU-centric but instead puts the CIPU at the center, connecting SSD, RDMA, CPU, GPU, and other heterogeneous computing resources.

Under this new architecture, the CIPU cloudifies and accelerates the data center's computing, storage, and network resources downward; upward, it connects to the Feitian (Apsara) cloud operating system, linking millions of servers around the world.

This new CIPU-based computing architecture shows better performance in general computing, big data, artificial intelligence, and other scenarios. In general distributed computing, Redis performance increased by 68%, MySQL by 60%, and Nginx by 30%. Applied to high-throughput Internet services, throughput rose by 30% compared with self-built physical machines, while peak latency fell by 90%. For data centers, this is undoubtedly a huge improvement.


740 believes that Alibaba Cloud's self-developed CIPU also demonstrates that system vendors can further sharpen their competitiveness through in-house chips built on their own understanding of business needs.

In fact, in recent years Alibaba Cloud has been continuously building the core technologies underpinning its products, establishing an integrated software-and-hardware infrastructure of self-developed chips, servers, and the Feitian operating system. It currently rests on four cores: Shenlong computing, Pangu storage, Luoshen networking, and a security kernel.

