Edge, fog and cloud computing: Where you process data matters

by Mike Rosam
| May 31, 2022

Dig into an explainer on the differences between edge, fog and cloud

In 1996, a group of technology executives at Compaq Computer began using the term “cloud computing.” They described a future in which services would move offsite and files would be stored on the web, accessible from any internet connection. Their vision proved to be true and the term cloud computing stuck.

Today most folks know what we mean by “in the cloud.” But newer terms, such as fog computing, cloudlets and the edge, might have you wondering just where that data is going. We’ve got you covered in this simple explainer.

 

What is edge computing?

“The terms cloud, fog and edge computing all refer to where the computing process takes place.”

 

To understand the extended cloud analogy, it helps to know that the terms cloud, fog and edge computing all refer to where the computing process takes place. When processing happens on or beyond the periphery of the cloud, we start to talk about it as edge computing, with the farthest edge being closest to where the data is generated.

Edge computing can store, process, or send data from the edge to the cloud, so nothing is lost. It simply reduces the time needed to turn that data into action, making it useful in cases where low latency is critical or a back-and-forth exchange with the cloud is unnecessary. Think of self-driving cars, industrial IoT sensors, and even some of the apps on your wearable device.
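As a rough sketch of that idea (the function and field names here are invented for illustration), an edge node can act on raw sensor readings locally and forward only a compact summary to the cloud, so nothing is lost but far less data makes the round trip:

```python
from statistics import mean

def process_at_edge(readings, alert_threshold):
    """Act on raw sensor readings locally, then summarize for the cloud."""
    # Immediate local action: flag any reading over the threshold
    # without waiting for a round trip to the cloud.
    alerts = [r for r in readings if r > alert_threshold]

    # Only a compact summary travels upstream; the raw stream
    # never has to leave the edge.
    summary = {
        "count": len(readings),
        "mean": mean(readings),
        "max": max(readings),
        "alerts": len(alerts),
    }
    return alerts, summary

# A vibration sensor streaming to a local edge gateway:
alerts, summary = process_at_edge([0.2, 0.3, 1.7, 0.4], alert_threshold=1.0)
print(alerts)    # readings needing immediate local action
print(summary)   # what gets forwarded to the cloud
```

The local action happens in microseconds on the device, while the summary can be shipped to the cloud on whatever schedule the application needs.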

Speed and efficiency gains make edge computing attractive for both consumer and industrial markets. According to Equinix's 2021 State of the Edge report, 37% of the global infrastructure edge footprint is forecast to serve mobile and residential consumer use cases, with the remaining 63% supporting markets such as healthcare, manufacturing, energy, logistics, smart cities, retail and transportation.

 

What is fog computing?

As I explained earlier, cloud, fog and edge all refer to where the processing takes place. Following that logic, fog computing takes place between the edge and the cloud. It could arguably be described as near-edge computing. The terminology is still evolving as the technology advances.

 

“Computing in the cloud, in the fog or at the farthest edge … can make a significant difference in technical applications that are processing large volumes of data at high speeds.”

 

The differences between computing in the cloud, in the fog or at the farthest edge might be hard for humans to detect, but they make a significant difference in technical applications that are processing large volumes of data at high speeds.

 

Understanding cloud, fog and edge terminology

The language around edge and fog computing is still evolving, with a lot of overlap between terms. This graphic helps break things down by showing what each common term refers to.

 

Original graphic source: California Technical Academy

 

Near edge, far edge, outer edge: These terms refer to proximity as measured from the central data center out. So the near edge is close to the cloud while the far or outer edge is on the periphery, close to the user interface or data collection point.

Client layer, user edge: This is where events happen and data is collected — a user interface, smartphone, IoT sensor, tech-enabled machinery, etc. It includes devices and applications at the user’s location.

Edge layer, service provider edge: This is computing that happens on the service provider's network. It might be on the same premises as the user edge, and it can be used to manage data to and from the user edge. The service provider edge is sometimes broken down further, based on how close each data center is to the cloud, with the far edge being closest to the user edge.

Internet or network edge: This is the boundary between a user interface or data collection device and the network or internet.

Cloudlets: These are small-scale cloud data centers that operate on the edge of the cloud. They provide low-latency computing to devices within close geographical proximity.


 

Stream processing at the edge

While the location and underlying infrastructure might change based on where the actual computing takes place, the types of processing taking place can be the same as those happening in the cloud. In fact, some experts argue that the edge is less about location and more about data processing capabilities.

 

“Some experts argue that the edge is less about location and more about data processing capabilities.”

 

“Edge Computing is definitely a thing, but the computing need not occur at the edge. Instead what is needed is an ability to compute (anywhere) on streaming data from large numbers of dynamically changing devices, in the edge environment,” writes Simon Crosby, CTO of Swim.ai.

Stream processing transforms data into action as it is received. It cuts out the traditional back-and-forth of batch processing for increased speed and efficiency. This enables computing to happen outside the cloud and trigger immediate action, such as detecting a fault on machinery and shutting it down to prevent an accident, alerting a doctor that a patient is at risk, or making sure that a mobile device is using the fastest network available in real time.
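A minimal sketch of that pattern (the names here are invented; production systems would use a streaming framework rather than a bare loop): each event is handled the moment it arrives, so an out-of-range reading triggers a shutdown immediately instead of waiting for the next batch window.

```python
def stream_monitor(events, fault_threshold, on_fault):
    """Process each event as it arrives, acting immediately on faults."""
    for event in events:
        if event["temperature"] > fault_threshold:
            # No batch window to wait for: act on this event right away.
            on_fault(event)
            return event  # stop consuming once the machine is shut down
    return None

shutdowns = []
fault = stream_monitor(
    iter([
        {"machine": "press-1", "temperature": 71.0},
        {"machine": "press-1", "temperature": 98.5},  # fault condition
        {"machine": "press-1", "temperature": 70.2},
    ]),
    fault_threshold=90.0,
    on_fault=lambda e: shutdowns.append(e["machine"]),
)
print(fault, shutdowns)
```

The key property is that the fault handler fires on the second event, before the third is even read; a batch job would only have discovered the fault after collecting and processing the whole window.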

Placing stream processing closer to the edge amplifies the speed and efficiency gains. Up until recently, this technology was prohibitively hard for most companies to manage. But Quix is changing that for companies across industries.

See how Quix's no-code workflow and open source library deliver the benefits of stream processing to teams of all sizes.

Join The Stream community, where you’ll find developers, engineers and scientists supporting each other while working on streaming projects.

by Mike Rosam

Mike Rosam is cofounder and CEO at Quix, where he works at the intersection of business and technology to pioneer the world's first streaming data development platform. He was previously Head of Innovation at McLaren Applied, where he led the data analytics product line. Mike has a degree in Mechanical Engineering and an MBA from Imperial College London.
