Computing power has become an economic indicator much like electricity, and building compute infrastructure has been the dominant theme of recent years. But computing comes in many forms, each handling different workloads. For data centers, energy-hungry approaches are hard to sustain, so methods optimized for energy efficiency look like the way out.
The industry has set its sights on the human brain. The brain consists of about 85 billion neurons connected by roughly a quadrillion synapses and, by one common estimate, performs a billion operations per second, yet it consumes only about 20 W on everyday tasks. Building chips that work the way the brain does is called neuromorphic computing, and Intel has long been a front-line leader in pushing this technology toward industrialization.
The benefits of imitating the human brain
Neuromorphic computing refers to an architecture modeled on the neuron structure and information-processing style of the biological brain. It is an advanced form of computing that breaks out of the traditional von Neumann architecture, and a chip designed to this architecture is a neuromorphic chip.
According to Cao Lu, a senior researcher at Intel China Research Institute, “neuromorphic computing” and “brain-like computing” have similar meanings. In the word “neuromorphic”, “morphic” means “form” and “neuro” means “neuron”; put together, they give “neuromorphic”, and Intel sticks with this term to define its research direction. “Brain-like” covers a much wider range: whether in structure or in any other feature, anything that resembles the brain in some way counts as “brain-like”.
The commercial value of neuromorphic computing lies in continuous self-learning with low power consumption and little training data; ideally, a neuromorphic chip consumes more than a thousand times less energy than a traditional CPU or GPU on the same AI task. That gives it the potential to be a savior for the three major problems facing the industry: first, data volumes are enormous; second, data is taking ever more diverse forms, much of which can no longer be handled by manual entry or manual processing and instead demands intelligent processing; third, applications place increasingly strict demands on latency, and the traditional single computing architecture runs into bottlenecks in both performance and power consumption.
Neuromorphic computing has four characteristics. First, it borrows the structure of the human brain, integrating storage with computation and adopting very fine-grained parallelism: many very small computing units work in parallel on one big problem. Second, it is event-driven: rather than running all the time, it starts computing, and consuming energy, only when an event arrives and a task needs completing, which cuts power consumption (a sketch of this idea follows below). Third, it computes in a low-precision mode. Fourth, it is adaptive, with the capacity for self-correction, continuous learning, and change.
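To make the event-driven idea concrete, here is a minimal Python sketch, an illustration of the principle rather than Intel code: a neuron-like unit sits idle between events and performs work, the proxy for energy here, only when a spike arrives.

```python
import heapq

def run_event_driven(events, weights, threshold=1.0):
    """Integrate spike events in time order; the unit idles between events.

    events  : list of (time, source) spike tuples
    weights : dict mapping source -> synaptic weight
    """
    heapq.heapify(events)           # process events by timestamp
    potential = 0.0
    updates = 0                     # proxy for energy: work happens only on events
    while events:
        t, src = heapq.heappop(events)
        potential += weights[src]   # compute only when a spike arrives
        updates += 1
        if potential >= threshold:  # threshold crossed: fire and reset
            print(f"t={t}: output spike")
            potential = 0.0
    return updates

# Three updates in total, no matter how much idle time lies between events.
run_event_driven([(2, "a"), (7, "b"), (40, "a")], {"a": 0.6, "b": 0.5})
```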
It should be emphasized that neuromorphic computing differs from traditional standard computing, such as parallel computing on von Neumann-style CPU or GPU architectures. First, neuromorphic computing can learn continuously from small amounts of data and is highly plastic. Second, its circuit architecture is asynchronous and event-based, built around spike processing. Third, its parallelism is sparse: events arrive in the moment, are sometimes absent, and occur at frequencies that cannot be estimated in advance.
Due to the complexity of living organisms, there are as yet no definitive research results on the human brain, or even on the brains of lower organisms. Starting from the simplest basic structures and features is therefore the way to open up new learning methods and computing paradigms.
Algorithmically, neuromorphic computing is based on the spiking neural network (SNN), which differs from the artificial and deep neural networks (ANN/DNN) that dedicated deep-learning processors target. The former approaches the biological brain at the structural level, designing the chip around the brain's neuron model and its organization; the latter does not mimic neuronal organization, but is designed around mature cognitive-computing algorithms.
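To ground the SNN idea, the basic building block of an SNN, a discrete-time leaky integrate-and-fire (LIF) neuron, can be sketched in a few lines of Python. Parameters here are illustrative, chosen for readability rather than taken from Loihi:

```python
import numpy as np

def lif_simulate(input_current, leak=0.9, threshold=1.0):
    """Discrete-time leaky integrate-and-fire (LIF) neuron.

    Each step, the membrane potential decays by `leak`, integrates the
    input, and emits a binary spike (then resets) on crossing `threshold`.
    """
    v = 0.0
    spikes = []
    for i_t in input_current:
        v = leak * v + i_t          # leak, then integrate the input
        if v >= threshold:          # threshold crossing -> spike and reset
            spikes.append(1)
            v = 0.0
        else:
            spikes.append(0)
    return spikes

rng = np.random.default_rng(0)
print(lif_simulate(rng.uniform(0.0, 0.4, size=20)))  # sparse 0/1 spike train
```

The output is a sparse binary spike train, the “language” a neuromorphic chip processes, as opposed to the dense floating-point activations of a DNN.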
“Compared with DNNs, SNNs can show a better energy-efficiency ratio. In some resource-constrained situations, especially when a battery supplies the energy, the lower the consumption, the longer the standby time that can be maintained. Under limited resources, the SNN approach may work better,” Cao Lu said.
Cao Lu emphasized that SNNs do not yet have a recognized training method or framework. Intel starts from approaches that can serve as references: one is to convert a trained DNN into an SNN, or to borrow DNN training methods for the SNN; the other is to train from neural-dynamics theory, grounding SNN training in biological spatiotemporal plasticity.
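For the second route, a common biologically inspired rule is spike-timing-dependent plasticity (STDP), one plausible reading of the “biological spatiotemporal plasticity” mentioned above. A minimal pairwise STDP update might look like the sketch below; the constants are illustrative, not Intel's learning rule:

```python
import math

def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pairwise STDP weight change for one pre/post spike pair.

    A presynaptic spike shortly *before* a postsynaptic spike strengthens
    the synapse; one shortly *after* weakens it. Both effects decay
    exponentially with the timing gap.
    """
    dt = t_post - t_pre
    if dt > 0:
        return a_plus * math.exp(-dt / tau)   # causal pair: potentiation
    return -a_minus * math.exp(dt / tau)      # anti-causal pair: depression

print(stdp_dw(t_pre=10.0, t_post=15.0))  # > 0: strengthen the synapse
print(stdp_dw(t_pre=15.0, t_post=10.0))  # < 0: weaken the synapse
```

Because the rule depends only on the timing of local spikes, it needs no global backpropagated error signal, which is what makes it attractive for on-chip learning.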
How to make this a reality? For Intel, the answer is its research chip, Loihi.
Intel’s neuromorphic chip progress
At the end of 2017, Intel Labs launched the first-generation Loihi chip, pairing conventional CMOS semiconductor technology with an innovative architecture as a first attempt and breakthrough. The chip is built by tiling neuron cores: the first generation has 128 of them, each holding 1,000 neurons, on a die of about 60 square millimeters.
By September 2021, after more than three years of accumulated experience and problem-solving, Loihi was iterated into Loihi 2.
The most striking thing about Loihi 2 is the change in the chip itself. Compared with the previous generation, its area shrank from 60 square millimeters to 31 square millimeters while fitting in more resources. A single chip still carries 128 neuron cores, but the neurons per core grew from 1,000 to 8,000, lifting the total per chip from 128,000 to about one million (128 × 8,000 = 1,024,000). In addition, the on-chip network inside Loihi 2 has been substantially optimized, so overall bandwidth is markedly higher.
In terms of hardware design, Loihi 2 uses the Intel 4 process, which raises density, shrinks size, and increases integration. The number of embedded low-power CPU cores has doubled. The chip-to-chip interconnect has been redesigned and optimized, greatly improving bandwidth. Finally, memory resources that used to be separate are now shared within a core, so reads, writes, and resource utilization are all better than before.
In terms of function, Loihi 2 brings three updates. First, it supports a generalized form of spike: no longer limited to two values, a spike can carry a precise integer payload, which significantly improves accuracy over the first-generation Loihi on deep-learning tasks. Second, it introduces more programmability: neuron structure can be programmed in microcode, supporting more kinds of neuron abstraction. Where the previous generation supported only the leaky integrate-and-fire (LIF) model, Loihi 2 supports more models, such as resonate-and-fire, LIF++, and adaptive LIF (ALIF); a behavioral sketch of ALIF follows below. Third, it introduces modulatory factors into the learning machinery, enabling better online learning.
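As an illustration of the added programmability, an adaptive LIF (ALIF) neuron extends plain LIF with a threshold that jumps after each spike and relaxes back toward baseline, so a strongly driven neuron fires less eagerly over time. The Python below is a behavioral sketch with made-up constants, not actual Loihi 2 microcode:

```python
def alif_step(v, th, i_t, leak=0.9, th_base=1.0, th_decay=0.95, th_jump=0.5):
    """One timestep of an adaptive LIF (ALIF) neuron.

    Like LIF, but the firing threshold jumps after every spike and then
    relaxes back toward its baseline, producing spike-frequency adaptation.
    """
    v = leak * v + i_t                          # leak and integrate
    th = th_base + th_decay * (th - th_base)    # threshold relaxes to baseline
    spike = v >= th
    if spike:
        v = 0.0                                 # reset the potential
        th += th_jump                           # raise the bar for next time
    return v, th, spike

v, th = 0.0, 1.0
for step in range(8):
    v, th, spike = alif_step(v, th, i_t=0.5)    # constant input drive
    print(step, round(v, 3), round(th, 3), spike)
```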
In terms of scaling, expansion is no longer limited to 2D: a vertical dimension enables 3D stacking, allowing higher density than before. And although asynchronous circuit design has many advantages, it can face a “language barrier” when interconnecting with other devices; Loihi 2 can now also talk to other computing devices directly through a network port.
Making neuromorphic computing available
Since neuromorphic computing is so capable, promoting its industrialization means getting more people to actually use it and study it in depth.
To make Loihi 2 practical, Intel prepared two things: the Kapoho Point development board and the Lava software framework.
Kapoho Point, a development board based on Loihi 2, was released in September 2022. It carries eight chips, four each on the front and the back, in a compact, stackable, scalable design: multiple Kapoho Point boards can be joined directly through connectors when more resources are needed for more complex jobs. A single board reaches 8 million neurons (8 chips × 1 million each), can run AI models with up to 1 billion parameters, and can solve optimization problems spanning up to 8 million variables. It is now gradually being delivered to community members for experimentation.
In fact, putting a chip to practical use is the hard part. Software is the soul of a chip, and neuromorphic computing is no exception. Lava, launched alongside Loihi 2, is a modular, open-source, cross-platform software development framework that can interoperate with other software.
Lava is layered, with Magma at the bottom. Magma's lowest level talks to the hardware; above it sits a hardware-abstraction layer which, through a standard process-scheduling library, maps upper-level libraries and applications onto the underlying hardware. The framework supports not only neuromorphic chips but also simulation of neuromorphic applications on CPUs and GPUs.
It is worth mentioning that Lava lets developers write in Python as well as in lower-level, higher-efficiency C or C++, which makes it friendlier to users and lowers the barrier to research (see the sketch below).
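For a feel of what Lava code looks like, the sketch below follows the style of the public Lava tutorials (github.com/lava-nc/lava): two LIF populations joined by a Dense synapse process, run on the CPU simulation backend. Module paths and signatures match the open-source repository at the time of writing and may change between versions:

```python
import numpy as np

from lava.proc.lif.process import LIF
from lava.proc.dense.process import Dense
from lava.magma.core.run_conditions import RunSteps
from lava.magma.core.run_configs import Loihi1SimCfg

# Two LIF populations joined by a dense synaptic weight matrix.
lif1 = LIF(shape=(3,), bias_mant=100, vth=120)   # biased so it spikes
dense = Dense(weights=np.eye(3))                 # one-to-one connections
lif2 = LIF(shape=(3,), vth=120)

# Spikes flow out of lif1, through the weights, into lif2.
lif1.s_out.connect(dense.s_in)
dense.a_out.connect(lif2.a_in)

# Simulate 10 timesteps on the CPU backend; a different run_cfg can,
# in principle, retarget the same process graph at Loihi silicon.
lif2.run(condition=RunSteps(num_steps=10), run_cfg=Loihi1SimCfg())
lif2.stop()
```

That retargeting, the same process graph running on a CPU simulation or on neuromorphic hardware, is exactly the portability Lava's hardware-abstraction layer is meant to provide.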
On the relationship between Kapoho Point, Lava, and Loihi 2, Cao Lu offered an analogy: Kapoho Point with Lava is like a laptop that comes with an operating system installed, as opposed to handing someone a bare Intel CPU.
The future of neuromorphic computing
“Years of research show that Loihi's neuromorphic computing has clear advantages in certain application fields,” Cao Lu said. One is pairing with nearby sensors to handle sensing and perception, including gesture recognition and odor analysis, where it delivers better, more energy-efficient results than a CNN or DNN; the relevant results have been published in the journal Nature Machine Intelligence. Another is optimization, where it can outdo CPU-based approaches in both speed and quality of result. In addition, in scenarios like intelligent robots, it enables continuous perception of, and continuous learning from, the environment.
The HERO platform was a configurable heterogeneous computing platform developed at the time for robotics research, and Intel combined it with Loihi for perception experiments. Currently, Intel China Research Institute is cooperating with the Institute of Automation of the Chinese Academy of Sciences on a tactile-perception experiment that detects whether an object slips while a robotic arm manipulates it.
The ultimate goal of any research is to reach industry. From an industrialization standpoint, these finished products from Intel's research are delivered only to members of the Intel Neuromorphic Research Community (INRC) for preliminary development or exploratory research; real commercial mass production is still some way off. Loihi 1 and Loihi 2 included, these are not yet Intel product-grade chips. In essence, they are research chips.
Industrialization has always been neuromorphic computing's sticking point. Its energy efficiency beats current CPUs and GPUs by several orders of magnitude, but the remaining challenge sits upstream: the optimal hardware architecture and algorithms are still being researched, and the field has not yet found a breakthrough like the convolutional neural network AlexNet provided in 2012. It still lacks a “killer application”.
Therefore, Intel chooses to drive the industrialization of neuromorphic computing through a community, tackling questions such as which algorithms work best and which hardware designs fit best, letting applications pull the research along and weighing the overall direction of development.
In addition, there is another key reason Intel is so optimistic about neuromorphic computing: green computing. Neuromorphic computing can save a great deal of energy, extending Intel's current plans around carbon neutrality and green computing, and green computing may be a very important thread in Intel's future investment in frontier computing. Even if neuromorphic computing turns out to suit only a single application or one large-scale scenario, that scenario could save enormous amounts of energy, which matters a great deal for the environment and for sustainable development.
Text/Fu Bin
This article is reproduced from: http://www.guokr.com/article/462776/