Edge Computing seems to be a hot topic these days, but what does it really mean and why is it important? Well, as it turns out, most data center and cloud architectures today are highly centralized. For example, if we look at public cloud providers like Amazon AWS or Microsoft Azure, they place gargantuan data centers in locations with cheap real estate and in close proximity to power generation facilities. This allows them to secure the power they need and create economies of scale by building thousands of highly automated compute, storage, and networking pods in a cookie-cutter approach, allowing for a lean IT staff at each location. This architecture is typically referred to as Public Cloud or Cloud Computing.
Microsoft's data center campus covers more than 270 acres in Quincy, Washington (image courtesy of ZDNet).
To better understand the meaning of the edge and its importance, let's define Edge Computing. In basic terms, Edge Computing means deploying computing power closer to users and things.
Edge Computing vs. Cloud Computing – What Are The Differences?
So what is the difference between Cloud Computing and Edge Computing? Well, as it turns out, there is an emerging set of applications with requirements that cannot be met by a centralized cloud computing architecture.
To better understand edge computing and its applications and capabilities, let’s start with a few example applications that require edge computing:
Example 1:
Let's assume you are in Los Angeles running an IoT public traffic safety application. This sort of mission-critical, real-time application cannot tolerate the latency imposed by the speed of light, plus the delays added by network routers, as traffic travels all the way from L.A. to a cloud data center in Iowa for processing and then all the way back to the end devices in L.A. Instead, this application needs the compute processing function deployed closer to where the application is being consumed. This might be in a service provider central office or at the base of a 4G or 5G base station, for example, and this location is referred to as "the edge".
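To put rough numbers on this, here is a back-of-the-envelope latency sketch. The fiber distance, hop count, and per-hop delay figures are illustrative assumptions, not measurements:

```python
# Back-of-the-envelope round-trip latency: L.A. to an Iowa data center vs. a local edge site.
# All figures are rough, illustrative assumptions.

FIBER_SPEED_KM_PER_MS = 200.0    # light in optical fiber covers roughly 200 km per millisecond
DISTANCE_LA_TO_IOWA_KM = 2500    # assumed fiber-path distance (longer than the straight line)
ROUTER_HOPS_EACH_WAY = 10        # assumed routers traversed each way
DELAY_PER_HOP_MS = 0.5           # assumed queuing/forwarding delay per router hop

def round_trip_ms(distance_km: float, hops_each_way: int) -> float:
    propagation = 2 * distance_km / FIBER_SPEED_KM_PER_MS   # out and back
    forwarding = 2 * hops_each_way * DELAY_PER_HOP_MS
    return propagation + forwarding

print(f"Centralized (Iowa): ~{round_trip_ms(DISTANCE_LA_TO_IOWA_KM, ROUTER_HOPS_EACH_WAY):.1f} ms RTT")
print(f"Metro edge (50 km): ~{round_trip_ms(50, 3):.1f} ms RTT")
```

Even before any processing time, the centralized path consumes tens of milliseconds, while a metro edge site keeps the network round trip to a few milliseconds.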
Example 2:
Another example of edge computing is a virtual reality application. To avoid nausea in a virtual reality environment, latency should be less than 20 ms in the worst case, and lower than 10 ms is optimal. Traveling from New York to Iowa and back again, with all of the network hops required, cannot keep latency under 20 ms. This requires computing power to be deployed closer to the user. The solution in use today from companies like Oculus is to place the compute function within the headset itself, resulting in a large, heavy headset that is unwieldy. If the processing for VR could instead happen in a nearby edge computing site where latency stays within tolerance, the user could wear a lightweight display, making VR much more practical and likely more widely deployed.
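Another way to look at the same budget is to ask how far away the rendering compute can sit for a given motion-to-photon target. The non-network time (render, encode, display) used below is an illustrative assumption, not a measured value:

```python
# How far away can VR rendering live and still fit a motion-to-photon budget?
# The non-network budget (render + encode + display) is an illustrative assumption.

FIBER_SPEED_KM_PER_MS = 200.0   # light in fiber covers roughly 200 km per millisecond

def max_one_way_distance_km(budget_ms: float, non_network_ms: float = 8.0) -> float:
    """Largest one-way fiber distance whose round trip still fits in the budget."""
    network_budget_ms = budget_ms - non_network_ms
    if network_budget_ms <= 0:
        return 0.0
    return (network_budget_ms / 2) * FIBER_SPEED_KM_PER_MS  # halve for the round trip

for budget in (20, 10):
    print(f"{budget} ms budget -> compute within ~{max_one_way_distance_km(budget):.0f} km")
```

With these assumptions the optimal 10 ms target confines the compute to within a couple of hundred kilometers – a nearby edge site – while New York to Iowa is far outside that radius.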

4 New Application Requirements Driving Edge Compute
There are many application examples like these two that depend on low latency and have triggered the need for Edge Computing – deploying compute, storage, and networking closer to users and things. However, there are additional drivers beyond latency; the four key categories of requirements are listed below.
- Latency – real-time applications such as public safety, AR/VR, drone control, and autonomous vehicles, to name a few, cannot tolerate high latencies that would degrade application performance and the delivery of mission-critical functions.
- Bandwidth – with IoT, there is an increasing amount of data moving from things up toward the cloud. Imagine 1,000 video cameras at a campus location where only five minutes of video on two of the cameras has any value. It is more cost-effective to perform AI video analytics at the edge than to pay for the bandwidth to send all that data to a central data center (a rough sketch of the math follows this list).
- Autonomy – in public safety applications, for example refinery safety, actuators and sensors may need rapid local interaction that not only requires low latency but also cannot stop working if the connection to the central cloud is lost; even when disconnected, the application must still perform its responsibility.
- Privacy – data sovereignty regulations such as GDPR impose requirements on where personal data resides, and this will only become more important. Imagine a theme park using facial recognition to provide a superior customer experience. Once the customer leaves the theme park, they do not want that data leaking out – it must stay local to the theme park.
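Here is the rough bandwidth sketch promised above for the 1,000-camera example. The per-camera bitrate is an illustrative assumption:

```python
# Backhauling every camera stream vs. analyzing at the edge and uploading only what matters.
# The per-camera bitrate is an illustrative assumption.

CAMERAS = 1000
BITRATE_MBPS = 4              # assumed bitrate of each camera stream
SECONDS_PER_DAY = 24 * 3600

def gigabytes(mbps: float, seconds: float) -> float:
    return mbps * seconds / 8 / 1000   # megabits -> gigabytes

send_everything_gb = gigabytes(CAMERAS * BITRATE_MBPS, SECONDS_PER_DAY)

# Edge analytics flags only five minutes of footage from two cameras as worth uploading.
send_flagged_gb = gigabytes(2 * BITRATE_MBPS, 5 * 60)

print(f"Backhaul everything:     ~{send_everything_gb:,.0f} GB/day")
print(f"Send only flagged clips: ~{send_flagged_gb:.1f} GB/day")
```

The ratio, not the exact numbers, is the point: filtering at the edge cuts the WAN traffic by several orders of magnitude.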
These four categories of requirements have driven the emergence of Edge Computing. The deployment locations are varied and can be environmentally controlled or uncontrolled; examples include telco central offices, cable operator headends and distribution hubs (D-hubs), base stations, factory floors, stadiums, and remote field locations such as wind farms or oil rigs.
The Importance of Edge Computing for 5G
With regard to Edge Computing, one difference between 5G and 4G is that 5G has a more highly distributed architecture. In fact, 5G builds on something called CUPS, which stands for "Control and User Plane Separation". In effect, this says that the user plane – the data processing for the application that the user or the machine is consuming – should be distributed into edge computing sites for better throughput and scalability. In other words, edge computing is important for 5G; service providers who are deploying 5G will need an edge computing architecture to support those deployments.
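As a toy illustration of the idea (not a 3GPP reference model – the site names and function groupings below are hypothetical), control-plane functions stay centralized while user-plane packet processing is pushed out to edge sites:

```python
# Toy sketch of Control and User Plane Separation (CUPS): control-plane functions
# stay in a centralized site while user-plane functions (UPF) run at edge sites.
# Site names and groupings are hypothetical, for illustration only.

from dataclasses import dataclass, field

@dataclass
class Site:
    name: str
    functions: list = field(default_factory=list)

regional_core = Site("regional-core", ["AMF", "SMF", "PCF", "UDM"])  # control plane

edge_sites = [                                                       # distributed user plane
    Site("central-office-la", ["UPF"]),
    Site("cell-site-la-001", ["UPF"]),
    Site("stadium-venue-edge", ["UPF"]),
]

for site in [regional_core] + edge_sites:
    print(f"{site.name}: {', '.join(site.functions)}")
```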
What is MEC and How Does It Relate to Edge Computing?
MEC is an acronym for Mobile Edge Computing and, more recently, Multi-access Edge Computing. It has taken on many different definitions over the last few years as a general term for Edge Computing, but it is also a standards initiative by ETSI called the ISG on Multi-access Edge Computing (ETSI ISG MEC). So when you hear "MEC" you can think "Edge Computing".
What is Distributed Cloud?
You might notice that Pluribus and other industry leaders like Ericsson use the term "distributed cloud". In this blog, we are defining edge computing as different from cloud computing, yet now we are using the term distributed cloud, which might be a bit confusing.
However, when we talk about edge computing, it is important to be clear that the edge is a location. The consumption model for edge compute will still be cloud-based – spin up containers or virtual machines (VMs), place the application at a particular edge location, spin them down when done, and pay for what you used. In this new world there will not be one edge but many edges, hence the term "distributed". This set of distributed edge computing sites plus the cloud consumption model equals distributed cloud. Ultimately we will see a set of edge application orchestration providers who dynamically determine where to place each workload to satisfy the application's requirements at the lowest cost.
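A minimal sketch of that placement decision might look like the following; the site data, costs, and requirement fields are hypothetical and only illustrate the "cheapest site that still meets the requirements" logic:

```python
# Sketch of a distributed-cloud placement decision: pick the lowest-cost site
# that satisfies the application's latency and data-locality requirements.
# Sites, costs, and requirement fields are hypothetical.

sites = [
    {"name": "central-cloud-iowa", "latency_ms": 40, "cost_per_hour": 0.05, "region": "us"},
    {"name": "metro-edge-la",      "latency_ms": 8,  "cost_per_hour": 0.12, "region": "us"},
    {"name": "cell-site-la-001",   "latency_ms": 2,  "cost_per_hour": 0.30, "region": "us"},
]

def place(requirements: dict):
    """Return the cheapest site that meets the latency and locality requirements."""
    candidates = [
        s for s in sites
        if s["latency_ms"] <= requirements["max_latency_ms"]
        and s["region"] == requirements["data_region"]
    ]
    return min(candidates, key=lambda s: s["cost_per_hour"], default=None)

# A VR workload needing sub-10 ms latency lands on the metro edge rather than the
# cell site, because the metro edge is the cheaper of the two sites that qualify.
print(place({"max_latency_ms": 10, "data_region": "us"}))
```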
What are the implications?
The main implication is the move from a massive, centralized, environmentally controlled data center facility to a collection of highly distributed and space-, power-, and cost-constrained locations, many of which will not be environmentally controlled and may even be lights-out with no co-resident staff. You can think of these as mini and micro edge data centers distributed throughout cities, towns, and even rural areas.
This requires a different approach to designing these micro edge data centers and, in particular, a radically different approach to networking. The network must be completely automated so it can be remotely monitored and controlled, and this automation needs to come with minimal overhead. If the edge data center has a rack of 10 servers, there should not be another 10 servers required for software-defined networking (SDN) automation and visibility. In a centralized data center, 10 servers for SDN automation are in the noise among thousands of servers.
However, in a small, constrained micro edge data center with 10 revenue-producing servers, those 10 non-revenue-producing servers become significant power, space, capital cost, and operational cost overhead. Solving this problem is a focus area for Pluribus Networks. Our Adaptive Cloud Fabric™ provides a network underlay and overlay fabric designed for distributed edge computing environments and is fully automated with a distributed, controllerless SDN approach.
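The overhead math behind this design point is simple; the 2,000-server figure for the centralized case is an assumed round number used only for illustration:

```python
# Share of servers consumed by SDN automation, centralized vs. micro edge data center.
# Server counts are illustrative assumptions based on the discussion above.

def automation_overhead_pct(automation_servers: int, revenue_servers: int) -> float:
    return 100 * automation_servers / (automation_servers + revenue_servers)

print(f"Centralized DC: {automation_overhead_pct(10, 2000):.1f}% of servers run automation")
print(f"Micro edge DC:  {automation_overhead_pct(10, 10):.1f}% of servers run automation")
```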
To learn more about our edge computing solution see our Distributed Cloud Networking solutions page or request a demo here.