From Silicon To Software


Smart IoT Edge Applications Require Lower Power Processing

Smart Edge IoT

By Pieter van der Wolf, Principal R&D Engineer, and Dmitry Zakharov, Senior Software Engineer

We’ve grown accustomed to our devices becoming more intelligent, recognizing and interpreting voice and movement through advanced audio and video processing techniques as well as sophisticated sensors. Say “Hey Google” or wave a hand and our devices not only respond, but often serve up preferences they have been trained to offer. Welcome to the era of Smart IoT Edge devices.

These smart devices have become ubiquitous and their capabilities expected: speakers with voice control that utilize highly accurate speech recognition from an extensive vocabulary of trained voice commands; wearable activity trackers that recognize human activity such as sitting, standing, walking and running based on input data from sensors like gyroscopes, accelerometers and magnetometers; smart camera-equipped doorbells, performing facial recognition and triggering an alert that can be sent to the owner’s mobile device with an image or video; even self-driving cars, applying advanced computer vision techniques to detect vehicles, pedestrians and hazardous driving conditions.

At the core of this evolution are increasingly powerful and sophisticated machine learning techniques that have become more widely adopted to make our systems more contextually aware and responsive. Machine learning technology trained to recognize certain complex patterns (e.g., voice commands, human activity, a face, pedestrians) from data captured by one or more sensors (e.g., a microphone, a gyroscope, a camera) brings new levels of safety and convenience to our lives. When a pattern the device is trained to recognize is sensed, the device can respond accordingly. For example, when the voice command “play music” is recognized, a smart speaker can initiate playback of a preferred song.

The advent of more powerful neural networks and algorithms has allowed the evolution of machine learning-powered devices that learn without being explicitly programmed. However, the promise of greater automation and intelligence that machine learning enables, particularly in consumer devices or other applications that operate at the edge, is limited by power consumption.

The Low Power Challenge

While small in size, a modern IoT edge device must support a complex range of sensing, communications and processing tasks. The challenge is that many IoT edge devices are battery-operated and have a tight power budget, or have other constraints that limit power consumption, making low power design a very important element to consider.

This demands a processor that is both power-efficient and cycle-efficient, so that it can run at a low clock frequency. Low power consumption is particularly important for IoT edge devices that perform always-on functions, such as smart speakers, smartphones, or home entertainment systems with “always listening” voice command functions. The same is true for camera-based devices performing facial detection or gesture recognition, which are “always watching.” And our health and fitness monitoring devices must be “always sensing.”

Such devices typically apply smart techniques to reduce power consumption. For example, an “always listening” device may sample the microphone signal and use simple voice detection techniques to check if anyone is speaking at all. It then applies the more compute-intensive machine learning inference for recognizing voice commands only when voice activity is detected. A processor must limit power consumption in each of these different states — in this case, voice detection and voice command recognition. As a result, various power management features, including effective sleep modes and power-down modes, must be utilized to meet energy consumption requirements.
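The two-state scheme described above can be sketched in a few lines. This is an illustrative sketch only, not code from the article: the function names, the energy-based detector, and the threshold are all assumptions standing in for a real voice-activity detector and a real trained recognizer.

```python
# Two-state "always listening" pipeline: a cheap voice-activity gate runs
# continuously; the expensive ML inference runs only when speech is present.

def voice_activity_detected(frame, energy_threshold=0.01):
    """Cheap gate: mean-square energy of the audio frame vs. a threshold."""
    energy = sum(s * s for s in frame) / len(frame)
    return energy > energy_threshold

def recognize_command(frame):
    """Placeholder for the compute-intensive ML inference stage."""
    return "play music"  # a trained model would classify the utterance here

def process_frame(frame):
    # Stay in the low-power detection state unless voice activity is present.
    if not voice_activity_detected(frame):
        return None  # the processor can drop back into a sleep/power-down mode
    return recognize_command(frame)

silence = [0.0] * 160
speech = [0.5] * 160
print(process_frame(silence))  # -> None (no inference performed)
print(process_frame(speech))   # -> play music
```

The design point is that `voice_activity_detected` costs a handful of operations per frame, so the device spends most of its time in the cheap state and wakes the recognizer only rarely.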

Machine Learning: Training vs. Inference

In machine learning, two main capabilities are important for our smart devices: training and inference. Training starts with an untrained model, such as a multi-layered neural network with a chosen graph structure. In these neural networks, each layer transforms input data into output data while applying sets of coefficients or weights. Using a machine learning framework like Caffe or TensorFlow, the model is trained using a large training dataset. The result is a trained model, for example, a neural network with its weights tuned for classifying input data into certain categories such as the different types of human activity in the wearable activity tracker mentioned above.
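To make the training step concrete, here is a deliberately tiny sketch: starting from untrained (zero) weights, a gradient-descent loop tunes them on labeled samples. Frameworks such as TensorFlow or Caffe automate this at scale for multi-layered networks; the single linear unit, toy dataset, and learning-rate value below are invented purely for illustration.

```python
# Training tunes model weights so that predictions match the labels.

def predict(weights, bias, x):
    """One linear unit: weighted sum of the input features plus a bias."""
    return sum(w * xi for w, xi in zip(weights, x)) + bias

def train(samples, labels, lr=0.1, epochs=100):
    weights, bias = [0.0, 0.0], 0.0          # the untrained model
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            error = predict(weights, bias, x) - y
            # Gradient-descent update: nudge each weight against the error.
            weights = [w - lr * error * xi for w, xi in zip(weights, x)]
            bias -= lr * error
    return weights, bias

# Toy dataset: label 1.0 when the second feature dominates, else 0.0.
samples = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]]
labels = [0.0, 0.0, 1.0, 1.0]
weights, bias = train(samples, labels)
print(predict(weights, bias, [0.0, 1.0]))  # close to 1
```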

Inference uses the trained model to process input data captured by sensors and infer the complex patterns the model has been trained to recognize. For example, it can check whether the input data matches one of the categories that a neural network has been trained for, like “walking” or “sitting” in the activity tracker device. During inference the trained model is applied to new data, and this typically happens in the field. This is where low power consumption becomes especially critical, making it an important consideration when designing IoT devices that operate at the edge.
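The inference step can be sketched as follows, using the activity-tracker categories from the text. Note the scoring function here is a hypothetical stand-in for a trained network; only the category names come from the article.

```python
# Inference: apply the trained model to a new sensor sample and report the
# best-matching category.

CATEGORIES = ["sitting", "standing", "walking", "running"]

def trained_model(sensor_sample):
    """Stand-in for a trained network: returns one score per category."""
    motion = sum(abs(v) for v in sensor_sample) / len(sensor_sample)
    # More motion yields higher scores for the more dynamic activities.
    return [1.0 - motion, 0.5, motion, motion - 0.3]

def infer(sensor_sample):
    scores = trained_model(sensor_sample)
    return CATEGORIES[scores.index(max(scores))]

print(infer([0.02, -0.01, 0.03]))  # low motion -> sitting
print(infer([0.9, -0.8, 1.1]))     # high motion -> walking
```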

Depending on the application, input data rates and model complexities for inference can vary significantly in an IoT device. For example, a simple motion detection capability requires less input data than an audio recognition feature, which in turn requires less than a sophisticated machine-vision system. The input data rate can range from tens of samples per second for human activity recognition with a small number of sensors up to hundreds of millions of samples per second for advanced computer vision with a high-resolution camera capturing images at a high frame rate. As a result, the compute requirements for machine learning inference can differ by several orders of magnitude.
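A back-of-the-envelope calculation makes the range concrete. The sensor configurations below (a 3-axis IMU at 50 Hz, a 1080p camera at 30 frames per second) are illustrative assumptions, not figures from the article.

```python
# Input data rates at the two ends of the range discussed above.

imu_rate = 3 * 50               # 3-axis IMU at 50 Hz: 150 samples/s
camera_rate = 1920 * 1080 * 30  # 1080p at 30 fps: ~62 million pixel samples/s

print(imu_rate)                  # -> 150
print(camera_rate)               # -> 62208000
print(camera_rate // imu_rate)   # the ratio exceeds five orders of magnitude
```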

Machine Learning: Low Power Design

For machine learning inference with low to medium compute requirements (a large portion of consumer IoT devices), selecting the right processor is key to an efficient implementation. Specifically, having the right processor capabilities for neural network processing can be the difference between meeting low MHz requirements (and thus low power consumption) or not.

For more details on how low power operation can be achieved in smart IoT design, download our free low-power machine learning whitepaper, which describes the efficient implementation of machine learning inference on a programmable processor. We also present a programmable processor and an associated software library for the efficient implementation of low/mid-end machine learning inference.