Enabling Edge Machine Learning Applications with SiMa.ai

Stelios Diamantidis

Sep 14, 2022 / 4 min read

Industrial IoT systems with the intelligence to sort goods on the production line based on their size and quality. Autonomous vehicles that passengers can summon for rides. Drones that survey crops to optimize water consumption and yield. Machine learning (ML) at the embedded edge is blossoming and new applications are certain to emerge as the underlying ML technologies become easier to implement.

SiMa.ai is one of the companies at the forefront of ushering in an age of effortless ML for the embedded edge. With its team of software, semiconductor design, and machine learning experts, SiMa.ai aims to disrupt the $10T+ embedded edge market by replacing decades-old technology with a purpose-built, software-first platform that scales ML at the embedded edge. The company recently achieved first-silicon success for its Machine Learning System-on-Chip (MLSoC) platform, which uses Synopsys design, verification, IP, and design services solutions.

Read on to learn more about how SiMa.ai is transforming how intelligence can be integrated into an array of ML devices that are used in computer vision applications for the embedded edge.


Software-Driven Machine Learning SoC with Industry’s Best Performance Per Watt

Bringing ML and AI to the edge helps address various challenges that stem from centralized computing: capacity, energy use, and cost, to name a few. It also brings significant benefits: meeting real-time latency requirements, reducing the amount of energy it takes to execute a model, alleviating the need for a network connection, and keeping data local for better security and compliance. However, many of today’s ML solutions use existing SoC architectures to accommodate ML, “which enable only limited evolution from existing technology when what is really needed is a revolutionary approach that seamlessly integrates ML into the chip, enabling the application needs to be addressed without compromise,” noted Srivi Dhruvanarayan, VP of Silicon Engineering at SiMa.ai. In many of these other approaches, he said, processing for a given application takes place on a host device while the ML function, such as inference, is handled via an ML accelerator.

Headquartered in San Jose, California, SiMa.ai has made it its mission to deliver effortless ML for the embedded edge. Its platform runs any computer vision application, network, model, framework, and sensor at any resolution. One of the advantages of this approach is that it can operate without a separate host: the entire application runs on the MLSoC platform, which handles pre-processing, inference, and post-processing. This managed heterogeneous compute approach can bring advantages to a variety of end products. Drones using the MLSoC platform, for example, can achieve longer flight times, shorter charging times, and higher frames per second per watt than competitive offerings. Aerospace and government systems can benefit from size, weight, and cost advantages.

“Think about a host that has to do half the workload, sending data back and forth, which greatly increases the latency,” said Dhruvanarayan. “If all of this can be done as part of one chip, in a heterogeneous compute approach where the application is managed among three processing partitions—the application, computer vision, and ML—then you can generate superior performance and also the lowest power.”
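To make that contrast concrete, here is a minimal, purely illustrative Python sketch of the host-less model described above. The function names (`preprocess`, `run_inference`, `postprocess`) and the dummy "model" are hypothetical placeholders, not SiMa.ai's toolchain or APIs; the point is simply that in the single-chip model every stage runs locally, with no frames or tensors crossing a host-to-accelerator link.

```python
# Illustrative sketch only -- placeholder stubs, not SiMa.ai's actual toolchain or APIs.
import numpy as np

def preprocess(frame: np.ndarray) -> np.ndarray:
    """Hypothetical pre-processing step (e.g., normalize), run on-chip."""
    return frame.astype(np.float32) / 255.0

def run_inference(tensor: np.ndarray) -> np.ndarray:
    """Hypothetical inference step; stands in for the on-chip ML accelerator."""
    return tensor.mean(axis=(0, 1), keepdims=True)  # dummy "model" output

def postprocess(outputs: np.ndarray) -> list:
    """Hypothetical post-processing step (e.g., decode results), also on-chip."""
    return outputs.flatten().tolist()

def single_chip_pipeline(frame: np.ndarray) -> list:
    # Host-less model: pre-processing, inference, and post-processing all stay
    # on one device, so nothing is shipped back and forth to a separate host.
    return postprocess(run_inference(preprocess(frame)))

# In the host + ML-accelerator model criticized above, these same stages would be
# split across two devices, with data copied out and results copied back --
# each transfer adding latency and power.

if __name__ == "__main__":
    frame = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in camera frame
    print(single_chip_pipeline(frame))
```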

Indeed, low power and high performance are both fundamental requirements for ML at the edge, as is having a small footprint given the space constraints of many of the end products. These are also the characteristics that make ML challenging. “By seamlessly integrating ML into its SoC from the beginning, SiMa.ai’s MLSoC platform, including its toolchain, can easily handle computer vision ML workloads,” Dhruvanarayan said.

Co-Optimizing Software and Hardware

Bringing a fresh approach to ML at the edge calls for a robust set of design, verification, and IP solutions. Dhruvanarayan saw working with Synopsys as an easy choice, given that many of his team members want to work with a single, trusted vendor and are familiar with Synopsys technologies.

One of the key elements of the MLSoC platform is the Synopsys ARC® EV74 Embedded Vision Processor for real-time vision processing. Fully programmable and configurable, this processor IP brings together the flexibility of software solutions with the low cost and low power consumption of hardware. “We did our due diligence and evaluated a few computer vision processors,” said Dhruvanarayan. “We looked at who could handle pre- and post-processing most effectively, with optimal performance and low power, as well as functions that would complement our ML offering. The ARC EV74 Processor met all of our criteria and, as a result, there’s a lot of handshaking between our home-grown ML IP and the ARC EV74 Processor.”
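The sketch below illustrates that kind of handshaking in the abstract: one compute block pre- and post-processes frames while another runs inference, with queues standing in for the on-chip handoff. The "vision unit" and "ML unit" here are generic stand-ins, not the ARC EV74 or SiMa.ai's ML IP, and the arithmetic is a dummy model.

```python
# Illustrative sketch only -- generic stand-ins for on-chip compute blocks,
# not the ARC EV74 processor or SiMa.ai's ML IP.
import queue
import threading

to_ml = queue.Queue()    # pre-processed tensors handed to the ML block
from_ml = queue.Queue()  # inference results handed back for post-processing
STOP = object()

def vision_unit(frames):
    """Stand-in for a vision processor: pre-process frames, then post-process results."""
    for frame in frames:
        to_ml.put([px / 255.0 for px in frame])   # hand off to the ML block
    to_ml.put(STOP)
    while (result := from_ml.get()) is not STOP:
        print("result:", result)                  # post-processing stand-in

def ml_unit():
    """Stand-in for an ML accelerator: run 'inference' on each handed-off tensor."""
    while (tensor := to_ml.get()) is not STOP:
        from_ml.put([sum(tensor) / len(tensor)])  # dummy model output
    from_ml.put(STOP)

if __name__ == "__main__":
    frames = [[0, 128, 255], [64, 64, 64]]        # toy "frames"
    worker = threading.Thread(target=ml_unit)
    worker.start()
    vision_unit(frames)
    worker.join()
```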

SiMa.ai is using several other cores from the Synopsys IP portfolio. “As a lean startup, we don’t have the bandwidth to design all the IP ourselves or to entertain multiple vendors,” Dhruvanarayan said. “We needed a single supplier for proven IP, from computer vision to PCI Express, Ethernet, memory interfaces, I2C, UARTs, and security. Synopsys has it all.”

For design and verification of its SoC, the company uses several key products in the Synopsys Digital Design Family, including the Synopsys Design Compiler RTL synthesis solution, Synopsys PrimeTime® static timing analysis solution, Synopsys PrimePower RTL power estimation solution, and Synopsys Formality® equivalence checking solution. From the Synopsys Verification Family, SiMa.ai uses the Synopsys Virtualizer™ virtual prototyping solution and Synopsys VCS® functional verification solution. For fast system verification, software bring-up, and power analysis of its large design, SiMa.ai uses the Synopsys ZeBu® Server 4 emulation system. Synopsys Design Services helped the company close timing on the critical blocks in its design.

“In our solution, a lot of the smarts are in the software, so when software does so much heavy lifting, we rely on emulation to ensure that the software can use all of the hooks provided by the hardware,” said Dhruvanarayan. “Synopsys’ ZeBu Server 4 gets us as close to the real thing as you can get. Not only does it verify the hardware, it also serves as a very good platform for software to meet hardware, for first-time-right results. So, we were able to solidify a lot of things on the platform before we got the actual chip.”

Summary

Adoption of AI continues to explode, with new applications and markets driving tens of thousands of AI SoC opportunities. Capitalizing on trusted collaborations, innovators like SiMa.ai are using technologies such as Synopsys design, verification, and IP solutions to optimize SoCs for this sector. The expertise available from companies like Synopsys can help AI hardware pioneers like SiMa.ai bring their revolutionary concepts to life and make an impact in the ML-driven embedded edge space.
