What is Edge Computing and How is AI Driving its Growth?

Ron Lowman

Apr 12, 2022 / 6 min read

In today’s hyperconnected era, it comes as no surprise that data has become one of the world’s most valuable commodities. Companies of all sizes and industries rely on large volumes of data to extract valuable insights that in turn fuel business growth. From the informed decision-making behind Tesla’s automated driving system and the diagnosis of operational deficiencies in manufacturing facilities to the use of artificial intelligence (AI) to design connected gaming worlds, the combination of big data, the internet of things (IoT), and AI has driven massive growth. And with worldwide data generation expected to reach an estimated 175 zettabytes by 2025, this trend shows no signs of slowing down.

However, with the ever-increasing influx of data being produced, collected, and stored, it has become nearly impossible for IT teams to sort, process, and analyze every incoming byte. This constant, never-ending stream of new information is restructuring the traditional computing landscape as we know it. Businesses can no longer rely solely on large, centralized data centers to process their data in a timely and cost-effective manner, as bandwidth, network, and latency disruptions pose a risk to business-critical operations.

That’s where edge computing comes into play. By no means a new concept, edge computing has evolved significantly to reduce information storage and communication costs, while facilitating new applications with lower latency. But what is edge computing, really? And how does AI impact its effectiveness?

Read on to learn more about the building blocks of edge computing, its different architectures, the many applications fueling its growth, some of the hurdles chip designers need to overcome, and how AI will play an impactful role going forward.


Edge Computing at its Core

As the name suggests, edge computing refers to the processing and analysis of data closer to its source. This concept, also referred to as “edge cloud computing” and “fog computing,” transfers some of the many compute and storage resources needed for data processing closer to where it is generated – the end or edge device.

In traditional enterprise computing, networking capabilities take center stage: endpoint devices collect data from their locations and send it to a data center or the cloud, where it is stored and analyzed before responses are sent back to the source. This time-consuming round trip, coupled with the exponentially growing volume of data generated daily, creates latency and bandwidth issues. Mobile edge computing (MEC) is another term gaining momentum as 5G telecom providers look to a distributed computing environment to provide new real-time services and more diverse connectivity capabilities.

Edge computing focuses on relocating compute resources to collect and process data locally. Whether via on-premise servers, small aggregators, or micro data centers, companies can benefit greatly from moving workloads closer to where data is generated and consumed. This not only adds capacity and reliability, but also reduces transmission costs and energy consumption.
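To make this concrete, here is a minimal sketch of the kind of local pre-processing an edge node might perform. It is written in Python, and the read_sensor and send_upstream functions are hypothetical stand-ins for a real sensor driver and network call; the point is simply that a window of raw samples is reduced on-site to a compact summary, so only a handful of fields ever cross the network.

```python
import statistics
import time

def read_sensor() -> float:
    """Stand-in for a real sensor driver; returns one temperature sample."""
    return 21.5  # placeholder reading

def summarize(samples: list[float]) -> dict:
    """Reduce a window of raw samples to the few numbers the cloud needs."""
    return {
        "mean": statistics.mean(samples),
        "min": min(samples),
        "max": max(samples),
        "count": len(samples),
        "ts": time.time(),
    }

def send_upstream(summary: dict) -> None:
    """Stand-in for the network call to a regional data center or cloud."""
    print(f"uploading {summary}")

window = [read_sensor() for _ in range(60)]  # one minute of 1 Hz samples
send_upstream(summarize(window))             # 60 readings shrink to 5 fields
```

In a fully centralized model, all 60 raw readings would travel upstream every minute; here, only the five summary fields do, which is where the bandwidth and energy savings come from.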

Types of Edge Computing Segments

As important as location is when it comes to edge computing, how does one determine proximity? Does that mean 500 miles, 5 miles, or 500 feet? In theory, the closer cloud computing resources are to the end device or application, the less storage, memory, and compute is needed to process the data.

Edge computing deployments generally fall into three segments, each built around a system-on-chip (SoC) architecture:

  • Regional data centers are miniature versions of cloud computing farms that can host and serve a significant population. While they help reduce latency, of the three segments they remain the closest to centralized data centers and still require vast compute, storage, and memory capabilities to function.
  • Aggregators and IoT gateways are used across edge computing infrastructures to perform limited functions, usually only running one or a few applications at a time with low latency.
  • On-premise or local servers address the particular connectivity and power consumption needs of edge computing. As they are deployed on-site, they are configured for specific functions as close to end devices and users as possible, resulting in ultra-low latency and application-specific optimizations.

[Figure: Edge computing use cases flow chart]

All three edge computing segments share a similar SoC architecture, consisting of a networking SoC, a server SoC, storage, and an AI accelerator, all serving a common objective: enabling lower latency to support new and existing consumer services.

Applications Driving Edge Computing Growth

To put the rise and significance of edge computing into perspective, let’s look at how it impacts our everyday lives. Companies like Netflix and YouTube use edge computing to provide video on demand that can be accessed at the viewers’ convenience. To deliver quick, readily available videos, streaming companies host content close to their target users in each region rather than streaming it from a data center located thousands of miles away. By pre-positioning, or “caching,” the most-watched videos and movies in the regions where they are most popular, they reduce both the delivery time for viewers and the load on the network in between.
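A regional edge cache of this kind boils down to a familiar data structure. The Python sketch below is illustrative only (EdgeCache and fetch_from_origin are made-up names, not any streaming provider’s actual API): it keeps the most recently requested titles local and reaches back to the distant origin server only on a miss.

```python
from collections import OrderedDict

def fetch_from_origin(title: str) -> bytes:
    """Stand-in for the slow, expensive request to a distant origin server."""
    return f"<video bytes for {title}>".encode()

class EdgeCache:
    """A least-recently-used (LRU) cache for content at a regional edge node."""

    def __init__(self, capacity: int = 100):
        self.capacity = capacity
        self.store = OrderedDict()

    def get(self, title: str) -> bytes:
        if title in self.store:
            self.store.move_to_end(title)   # mark as most recently used
            return self.store[title]        # cache hit: served locally
        video = fetch_from_origin(title)    # cache miss: one trip to origin
        self.store[title] = video
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict the least recently used
        return video

cache = EdgeCache(capacity=2)
cache.get("popular-show-s01e01")  # miss: fetched from origin, then cached
cache.get("popular-show-s01e01")  # hit: served from the edge, low latency
```

The second request for a popular title never leaves the region, which is exactly the latency and bandwidth win the streaming use case depends on.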

These are far from the only use cases fueling the advancement of edge computing today. One of the market’s leading drivers is the advent of 5G and its convergence with AI capabilities as telecom providers increasingly look to offer additional services over their infrastructure beyond data and voice connectivity. To achieve this, 5G telecom providers are building an advanced ecosystem to host unique, local applications with integrated AI capabilities. Added servers open up their network to third parties’ applications, thus creating new business opportunities to deliver high-traffic content closer to users.

Web-scale, gaming, mixed-reality, industrial, and enterprise conglomerates like Google and Microsoft are also contributing to the market’s growth as they move toward AI-powered edge computing hardware, software, and services. By embracing more automation across the communication channel, companies can increase operational efficiency and ensure customer experiences meet expectations.

What Do Chip Designers Need to Overcome?

To support these market advancements, chip design teams need to consider several critical aspects throughout the design process, one of the top priorities being the reduction of energy use while increasing efficiency. One of the main reasons companies want to move from the data center to the edge is to reduce their power consumption – and with it the related costs. To address this, chip design teams need to produce SoCs that draw far less power than a traditional data center SoC, without compromising effectiveness.

As the adoption of AI for new market applications continues to grow, chip designers and engineers will also need to find the most effective location to place AI algorithms with respect to their specific functions and tasks. The demand for lower energy use also shapes how AI capabilities are adopted within edge computing, as it means reducing the compute and memory resources an AI algorithm requires across the application design.
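One common way to shrink an AI algorithm’s compute and memory footprint for the edge is quantization. The sketch below uses Python with NumPy and a deliberately simplified symmetric INT8 scheme (an illustration of the general technique, not any particular production flow) to convert a layer’s 32-bit weights to 8-bit integers, cutting their memory four-fold at the cost of a small, measurable approximation error.

```python
import numpy as np

def quantize_int8(weights: np.ndarray) -> tuple[np.ndarray, float]:
    """Map float32 weights onto int8 symmetrically; return tensor and scale."""
    scale = np.abs(weights).max() / 127.0  # largest magnitude maps to 127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights to measure the quantization error."""
    return q.astype(np.float32) * scale

w = np.random.randn(256, 256).astype(np.float32)  # a toy layer's weight matrix
q, scale = quantize_int8(w)

print(f"memory: {w.nbytes} bytes -> {q.nbytes} bytes (4x smaller)")
print(f"max abs error: {np.abs(w - dequantize(q, scale)).max():.4f}")
```

Smaller weights mean less on-chip memory and lower-energy arithmetic, which is precisely the trade-off edge SoC designers are weighing.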

Despite the many challenges it presents, the addition of AI acceleration capabilities via server chips to all these edge computing segments and applications is a trend we expect will only continue to grow as bandwidth requirements evolve.

AI: Shaping Tomorrow’s Edge Computing Market

AI-powered edge computing is already proving its value in today’s market. As our lives become increasingly connected, the demand for fast and reliable services will continue to grow exponentially. Be it mobile or AI, applications are pushing toward advanced process nodes.

To keep up with demand, teams will need to continue integrating powerful AI capabilities into their infrastructure to ensure effective processing, enhance memory performance, and provide seamless connectivity. At Synopsys, our silicon-proven DesignWare IP® portfolio tackles these requirements through an array of solutions specifically designed to support specialized processing, power, memory performance, and real-time data connectivity.

Synopsys ARC® EV Processors allow for complete AI processing with scalar and vector capabilities, providing high-performance, flexible processing capabilities for embedded applications.

Our IP solutions also help support efficient architectures for varying memory constraints, including DesignWare Multi-Port Memories and our Embedded MRAM Compiler IP. Synopsys’ HBM3 and LPDDR5 IP solutions, for instance, directly address the bandwidth bottleneck, enabling designers to meet their memory requirements with low latency and minimal power consumption.

Power consumption also plays an important role when establishing a performance-efficient foundation for SoCs. Synopsys provides a broad portfolio of DesignWare Foundation IP that includes memory compilers and non-volatile memory (NVM), logic libraries, and general-purpose I/O (GPIO), enabling SoC designers to lower integration risk, achieve maximum performance at the lowest possible power, and speed time-to-market.

As AI continues to push the boundaries of edge computing and becomes more closely embedded across applications, we are excited to continue building innovative deep learning solutions and nurture AI SoCs that will address emerging power, performance, and area (PPA) and time-to-market requirements.

Summary

The explosion of data is here to stay. With the unique ability to bring cloud services closer to edge devices while lowering latency, the convergence of edge computing and AI is poised to reshape traditional computing processes and pave the way for new applications and services in the years to come.
