AI startup SiMa.ai unveils ‘dedicated’ AI chips for edge computing

Pictured: The MLSoC, shown in its package, is billed as the first dedicated chip that can handle not only the matrix multiplication operations for AI in embedded use cases but also the traditional computer vision functions that need to run in the same application. (Source: SiMa.ai)

Amid the broad promise of AI computer chips, products serving “edge” markets, including drones, IoT devices, phones, and low-power server environments, offer suppliers fertile ground in a part of the market that remains far less developed than data-center technology.

Dozens of startups have secured tens of millions of dollars in venture capital to make artificial intelligence chips for mobile and other embedded computing uses, as reported earlier. Because the edge market is less settled, vendors are taking many different approaches to it.

On August 30, 2022, artificial intelligence chip startup SiMa.ai officially launched what it calls the MLSoC, a system-on-chip for accelerating neural networks at lower power consumption. The company said the new chips, which have begun shipping to customers, are the only “purpose-built” parts for tasks that are heavily computer vision-focused, such as detecting the presence of objects in a scene.

“Everyone is building machine learning accelerators, that’s all,” Krishna Rangasayee, co-founder and CEO of SiMa.ai, said in an interview.

“The difference between the embedded edge market and cloud computing is that people are looking for end-to-end application problem solvers,” rather than just chips for machine learning capabilities, Rangasayee said.

“They’re looking for a system-on-a-chip experience that can run an entire application on a chip.”

Rangasayee argues that competitors can only “handle a small subset of the problem” by performing the neural network function of machine learning.

“Everybody needs machine learning, but it’s part of the whole problem, not the whole problem,” Rangasayee said.

The SiMa.ai chips are fabricated using TSMC’s 16-nanometer manufacturing process and combine multiple components on a single die. These include a machine learning accelerator, code-named “Mosaic,” dedicated to the matrix multiplication that underlies neural network processing.

There is also an ARM A65 processor core onboard, a part typically found in automobiles, along with various functional units for specific tasks in vision applications: a stand-alone computer vision processor, a video encoder and decoder, 4 MB of on-chip memory, and numerous communication and memory-access blocks, including an interface to 32-bit LPDDR4 memory circuitry.

The chip hardware comes with SiMa.ai software, making it easier to tune performance and handle a wide range of workloads.

SiMa.ai’s products target a variety of markets, including robotics, drones, autonomous vehicles, industrial automation, and applications in healthcare and government.

“It’s a multi-trillion-dollar market that’s still using decades-old technology,” Rangasayee said of the range of civilian and government applications.

Many of today’s computer vision systems for autonomous vehicles and other applications use “traditional load-store architectures, the von Neumann architecture,” said Rangasayee, referring to the basic design of most computer chips on the market.

Chips for machine learning and computing, he said, have not advanced in how they handle computational bandwidth or in how data is moved and combined.

“We have a unique ML SoC, which is the first system-on-a-chip that includes ML, so people can do classic computer vision in a single architecture and solve legacy problems beyond ML,” Rangasayee said.

The name “sima” is a transliteration of the Sanskrit word for “edge.”

The term edge AI has become an umbrella term for everything outside the data center, although it may include servers located at the edge of the data center. It spans everything from smartphones down to embedded devices that consume microwatts of power using TinyML frameworks such as Google’s TensorFlow Lite for Microcontrollers.

SiMa.ai faces numerous mobile and embedded rivals. In the edge market, competitors include AMD, ARM, Qualcomm, Intel and Nvidia. However, those companies have traditionally focused on larger chips that draw more power, on the order of tens of watts.

The SiMa.ai chip has what its creators say is one of the lowest power budgets on the market for typical tasks such as running ResNet-50, the most common neural network for ImageNet labeling tasks.

Pictured: SiMa provides an MLSoC on an evaluation board for application testing. (Source: SiMa)

The company says the part can perform 50 trillion operations per second, or 50 “teraoperations” (TOPS), at 10 teraoperations per second per watt. That implies the part consumes about 5 watts when performing neural network tasks, although power may be higher when other functions are in use.
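As a quick sanity check, the implied power figure follows directly from the two quoted numbers. A minimal sketch in Python, using only the figures from the announcement:

```python
# Back-of-the-envelope check of the quoted figures: 50 TOPS of peak
# throughput at 10 TOPS per watt implies roughly 5 W for the
# neural-network workload alone.
peak_tops = 50.0                 # quoted peak: trillions of ops per second
efficiency_tops_per_watt = 10.0  # quoted efficiency

implied_watts = peak_tops / efficiency_tops_per_watt
print(f"Implied ML power draw: {implied_watts:.1f} W")  # -> 5.0 W
```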

A chip running at a few watts puts SiMa.ai in the company of many startups, including Hailo Technologies, Mythic, AlphaICs, Recogni, EdgeCortix, Flex Logix, Roviero, BrainChip, Syntiant, Untether AI, Expedera, Deep AI, Andes, and Plumerai.

Rangasayee said the only companies “on our radar” were Hailo and Mythic. But, “Our biggest difference is that they’re only building ML accelerators. We’re building full ML SoCs.”

Thanks to SiMa.ai’s built-in ARM cores, dedicated vision circuitry, and Mosaic neural network accelerator, customers will be able to run existing programs with greater capability while adding code from popular ML frameworks such as PyTorch and TensorFlow.

“What’s interesting to me is that the pent-up demand from legacy, purpose-built platforms is very high,” Rangasayee said. “They can run their applications almost from day one — that’s a huge advantage for us.”

“We were the first company to crack the code to solve any computer vision problem, because we don’t care about the codebase, it can be C++, Python, or any ML framework,” Rangasayee explained.

That broad support, he argues, has led the company to see itself as the “Ellis Island” of chips. “Give us your poor, give us your tired…we’ll take them all!”

Rangasayee asserts that this broad support gives the company a customer base far wider than a single niche market.

Another advantage of the chip, according to Rangasayee, is that it has ten times the performance of any comparable part.

“Our customers care about frames per second per watt,” Rangasayee said, meaning the number of image frames processed per second for each watt of power consumed. “We are at least 10 [times] better than anyone. We show that to every client day in and day out.”
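For readers unfamiliar with the metric, here is a minimal sketch of how frames per second per watt is computed; the numbers are illustrative placeholders, not SiMa.ai measurements:

```python
# Minimal sketch of the frames-per-second-per-watt efficiency metric.
# All numbers below are illustrative placeholders, not vendor data.

def fps_per_watt(frames: int, seconds: float, avg_watts: float) -> float:
    """Image frames processed per second, per watt of average power."""
    return (frames / seconds) / avg_watts

# Example: 3,000 frames in 10 s at an average draw of 5 W -> 60 FPS/W.
print(fps_per_watt(frames=3_000, seconds=10.0, avg_watts=5.0))
```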

The company has yet to publish scores for the widely cited MLPerf benchmark, but Rangasayee said it intends to do so in the future.

“Right now, the priority is to make money,” Rangasayee said. “We are a very small company” with 120 employees. “We can’t have one team do MLPerf alone.”

“You can tweak a lot around benchmarks, but people care about end-to-end performance, not just MLPerf benchmarks.”

“Yes, we have numbers, yes, we do it better than anyone, but at the same time, we don’t want to spend our time building benchmarks. We just want to solve customer problems.”

Although the August 30 announcement was about the chip, SiMa.ai specifically emphasized its software capabilities, including what it called “new compiler optimization techniques.” The software can support “a wide range of frameworks,” including TensorFlow and PyTorch, machine learning’s main programming libraries for developing and training neural networks, as well as the ONNX model-interchange format.
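To make the hand-off concrete: toolchains that accept ONNX models typically start from a standard export step like the one below. This is a generic sketch of that step, not SiMa.ai’s own toolchain API, which the announcement does not document; the file name and input shape are assumptions.

```python
# Generic PyTorch-to-ONNX export, the usual hand-off point for edge
# compilers that accept ONNX. This is not SiMa.ai's documented API;
# the downstream compile/deploy step is omitted here.
import torch
import torchvision

# ResNet-50 (mentioned above as a typical workload), in inference mode.
model = torchvision.models.resnet50(weights="IMAGENET1K_V1").eval()

# One RGB frame at ResNet-50's standard 224x224 input size.
dummy_input = torch.randn(1, 3, 224, 224)

torch.onnx.export(
    model,
    dummy_input,
    "resnet50.onnx",           # hypothetical output path
    input_names=["image"],
    output_names=["logits"],
)
```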

The company says its software allows users to “run any computer vision application, any network, any model, any framework, any sensor, any resolution.”

“You can spend a lot of time on an app, but how do you get thousands of customers across the finish line? It’s really a harder problem,” Rangasayee said.

To achieve this, Rangasayee said, the company’s software work consists of two things: compiler innovation on the “front end” and automation on the “back end.”

The compiler will support “more than 120 interactions,” which provides “flexibility and scalability” to bring a wider variety of applications to the chip than is usually the case.

The backend part of the software uses automation to allow more applications to “map to your performance” rather than “waiting months for results”.

“Most companies are getting people involved to get the performance right,” Rangasayee said.

“We knew we had to automate in a smart way to get a better experience in minutes.”

The software work, he said, is designed to make the MLSoC a push-button experience, because “everyone wants ML; nobody wants a learning curve.” It is an approach that Rangasayee’s former employer, Xilinx, has also taken in trying to make its embedded AI chips more user-friendly.

“I learned the importance of software at a previous company. It really comes down to the strength of our software,” Rangasayee said. “Yes, our silicon is great and we are very proud of it, and without silicon you are not a company.”

“But to me, it’s a necessary feature, not a sufficient feature; a sufficient feature is to provide an effortless ML experience.”
