To USB or Not to USB

 

Artificial Intelligence and USB, mostly AI

Artificial Intelligence embedded in a car in an Embedded Vision (EV) system or running in the cloud (on a server rack) has common requirements:

  • Rapid access to memory for data processing
  • Rapid access to on-chip and off-chip resources
  • Rapid access to sensors to gather raw data information
  • Local processing power

For AI, decisions need to be made rapidly, often from a lot of data arriving at once. This requires low latency and high throughput.

 

For the purposes of this blog, we will focus on one example in the car, but as you can see, there are at least six areas of application for AI.

6 places AI chips exist

 

Here’s an example: a car making semi-autonomous decisions, like staying in its lane or keeping a constant following distance, needs to do the following (sketched in code after the list):

  • Take in sensor data: lots of optical, radar, or infrared data
  • Move that data into a buffer (usually some sort of RAM)
  • Process it in an Embedded Vision processor, and
  • Send either the data or a decision to an ADAS system
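Here’s a minimal C sketch of that flow, purely for illustration: the Frame type and the read_sensor_frame(), ev_process(), and send_to_adas() functions are hypothetical stand-ins for the real MIPI CSI capture, EV-processor, and PCI Express interfaces.

```c
/* Illustrative sketch of the sensor -> buffer -> EV processor -> ADAS flow.
 * All types and functions here are hypothetical placeholders, not real APIs. */
#include <stdint.h>
#include <stdbool.h>
#include <string.h>

#define FRAME_BYTES (1280 * 720 * 2)   /* e.g., one 720p frame at 16 bits/pixel */

typedef struct {
    uint8_t pixels[FRAME_BYTES];
} Frame;

typedef struct {
    bool  lane_keep_ok;     /* stay-in-lane decision */
    float target_distance;  /* following-distance target, meters */
} Decision;

/* Stubs standing in for sensor capture, EV processing, and the PCIe link. */
static bool read_sensor_frame(Frame *f)     { memset(f, 0, sizeof(*f)); return true; }
static Decision ev_process(const Frame *f)  { (void)f; Decision d = { true, 30.0f }; return d; }
static void send_to_adas(const Decision *d) { (void)d; /* e.g., over PCI Express */ }

int main(void) {
    static Frame buffer;                       /* the RAM buffer the sensor data lands in */
    for (int i = 0; i < 10; i++) {             /* a few iterations of the loop     */
        if (read_sensor_frame(&buffer)) {      /* 1. take in sensor data           */
            Decision d = ev_process(&buffer);  /* 2-3. buffer, then EV processing  */
            send_to_adas(&d);                  /* 4. send the decision to ADAS     */
        }
    }
    return 0;
}
```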

Sensor Data

Sensor data comes in from the sensors over MIPI CSI interfaces and is transferred to an embedded vision unit via Ethernet or some other longer-reach cable.

 

Memories

During any of these transfers, some sort of buffer is needed, which requires fast-access memory. The buffer can be small, which is where our memory components for small FIFOs come in. (Our memories are the fastest and lowest power in most situations.) The buffer can be a 1-port, 2-port, or multi-port memory, and these are embedded in the chip.
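As a rough software analogy for what such a FIFO does (the real buffer is on-chip SRAM controlled by hardware, not C code), here’s a minimal ring-buffer sketch; the depth and names are arbitrary assumptions.

```c
/* Software model of a small FIFO between a producer (sensor interface)
 * and a consumer (EV processor). Conceptual only; real FIFOs are hardware. */
#include <stdint.h>
#include <stdbool.h>

#define FIFO_DEPTH 64   /* arbitrary depth, for illustration */

typedef struct {
    uint32_t data[FIFO_DEPTH];
    unsigned head;   /* next slot to write */
    unsigned tail;   /* next slot to read  */
    unsigned count;  /* number of valid entries */
} Fifo;

static bool fifo_push(Fifo *f, uint32_t word) {
    if (f->count == FIFO_DEPTH) return false;   /* full: producer must stall */
    f->data[f->head] = word;
    f->head = (f->head + 1) % FIFO_DEPTH;
    f->count++;
    return true;
}

static bool fifo_pop(Fifo *f, uint32_t *word) {
    if (f->count == 0) return false;            /* empty: consumer must wait */
    *word = f->data[f->tail];
    f->tail = (f->tail + 1) % FIFO_DEPTH;
    f->count--;
    return true;
}

int main(void) {
    Fifo f = {0};
    for (uint32_t i = 0; i < 8; i++) fifo_push(&f, i);   /* producer side */
    uint32_t w;
    while (fifo_pop(&f, &w)) { /* consumer drains the buffer */ }
    return 0;
}
```

If the FIFO fills, the producer (sensor side) has to stall; if it empties, the consumer (EV processor side) waits, which is exactly why fast, low-latency memory matters here.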

Processing Power in Embedded Vision
For local processing power, our configurable, customizable ARC cores provide a great solution for embedding in a car or edge device. For general AI uses, they can be configured and targeted for a specific use. For Embedded Vision, our EV products can be optimized for use with different graphs.

Much of the logic/circuitry involves dot-product computation, so it’s important to have special building blocks for it. This is where our libraries have special cells that make these dot-product blocks lower power and smaller area.
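To make that concrete, here’s the multiply-accumulate (MAC) loop at the heart of a dot product, written as plain C with 8-bit inputs accumulating into a 32-bit result (a common arrangement for inference). The function name and data are just illustrative; dedicated cells implement this same pattern in far less area and power than general-purpose logic.

```c
/* The multiply-accumulate (MAC) loop at the heart of most neural-network math.
 * 8-bit weights/activations accumulating into 32 bits is a common choice. */
#include <stdint.h>
#include <stdio.h>

static int32_t dot_product_i8(const int8_t *a, const int8_t *b, int n) {
    int32_t acc = 0;
    for (int i = 0; i < n; i++) {
        acc += (int32_t)a[i] * (int32_t)b[i];   /* one MAC per element */
    }
    return acc;
}

int main(void) {
    /* Illustrative data only: a few activations times a few weights. */
    const int8_t activations[4] = { 10, -3, 7, 2 };
    const int8_t weights[4]     = {  1,  4, -2, 8 };
    printf("dot product = %d\n", dot_product_i8(activations, weights, 4));
    /* 10*1 + (-3)*4 + 7*(-2) + 2*8 = 10 - 12 - 14 + 16 = 0 */
    return 0;
}
```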

Moving data around the chip and off the chip

For moving data around the chip and off the chip, AXI or PCI Express buses are needed. Our on-chip AXI interfaces can be optimized for routing to increase speed, reduce energy consumption, and reduce heat generation. PCI Express helps move data off the chip quickly for other uses, or for decision-making by a different processor.

PHYs required

Because AI requires processing power at low energy, these designs go into FinFET processes, which means FinFET PHYs. Our PCI Express, LPDDR4, and LPDDR4X PHYs have been proven in multiple process nodes down to 7nm.

To move the data to an ADAS system

Once processed, the data needs to move to on-chip memory via AXI, and eventually off-chip to an ADAS system, probably via PCI Express.

For the ADAS system

Sensor data or processed EV data moves via PCI Express into the ADAS chip. The chip will likely need to access both software and data in LPDDR4 or LPDDR4X memory. This again requires fast, low-latency IP.

NOTE: In a server, the AI needs to access large amounts of memory rapidly, so HBM2 is needed. PCI Express again can be used to move data around the server, and of course, super-fast Ethernet to move data around the data center.

 

Debug, Firmware Programming

Most ADAS or EV processors will need a fast interface for debugging. This will often be USB or PCIe. USB, in particular, is a fast, easy way to debug a platform or update the firmware, because every laptop has a USB port and it’s designed for external connections.

 

In short, there’s lots of need for IP, and Synopsys has the IP. As with all designs, with Synopsys providing all the IP blocks, it’s possible for ADAS or EV chip makers to focus on the value-added components and get lots of stuff for free.

 

To detect FinFET-specific transistor defects that manifest in the memory bit cells used in ADAS SoCs, the STAR Memory System (SMS) includes targeted memory BIST algorithms. This ensures low DPPM and high reliability for designs targeting the most stringent automotive ASIL-D requirements. The STAR Hierarchical System (SHS) enables “soft-monitoring” of safety-critical clocks/PLLs and provides a unified IEEE 1500 infrastructure to include observability and control for analog and mixed-signal IP blocks, including USB. All the test modes of the USB PHY, including the manufacturing test patterns, are modeled in SHS and automatically ported from the IP level to the block level to the top level. Ultimately, the SMS- and SHS-generated test infrastructure enables the full automotive test lifecycle requirements through design, early prototype bring-up, production/manufacturing test, and finally out in the field.

It’s crazy to me that everyone doesn’t use this for every design.

My thanks to Brett Murdock, Prasad Saggurti, and Faisal Goriawalla for reading through the word salad that was this document and not scoffing at it. Thanks to Camille Espiritu for proofing the previous two blog posts.

For more information, check out our AI IP offering:

https://www.synopsys.com/designware-ip/ip-market-segments/artificial-intelligence.html

And here’s an informative video from Navraj Nandra on IP with Near Zero Energy Budget Targeting Machine Learning.

 

https://imgs.xkcd.com/comics/machine_learning.png

 
