Author: James Butcher, Product Manager | IOTech
It is already clear that centralized cloud-based architectures cannot exclusively manage the billions of IoT devices predicted to exist in the coming years. Edge computing is now accepted as a critical part of managing and making sense of the huge amount of raw data being produced. However, this isn’t the end of the story! To create true end-to-end intelligent systems, a hybrid edge-and-cloud approach is needed. These hybrid architectures are emerging as the key basis for successfully deploying advanced IoT systems.
Benefits of the Edge
The sheer number of already-existing IoT devices and sensors means that cloud computing cannot be the sole mechanism for collecting and processing the increasing volume of machine data. The cost of sending huge amounts of data to the cloud – along with issues related to latency, connectivity and security – means it is simply not feasible to architect a scalable system in this way.
Edge computing provides a decentralized approach for device connectivity, data collection and intelligence that allows for real scalability. Edge computing focuses on processing data where it is produced, local to the devices at the edge of the network.
As an example, consider the utilities industry and the large-scale monitoring of an electricity power grid. Modern companies are deploying telemetry systems where local substations can collect data from sensors and devices positioned along the network. It is far more efficient to route the sensor data to the substations than to a centralized cloud. The substations, which are often equipped with powerful computing facilities, can make sense of that local data and apply edge intelligence. That intelligence might be to automatically throttle the network to avoid a dangerous power surge or suggest that an engineer inspect a section of the network where readings are abnormal. In addition, many other sectors use edge computing, including manufacturing, retail, building automation, oil and gas, process control and fleet management.
Understanding the Bigger Picture
Edge computing is beneficial for many reasons, including faster decisions, reduced bandwidth costs and better scalability, but cloud computing also continues to advance. Cloud vendors provide a vast array of dedicated data services such as large-scale storage, big data trend analysis, machine learning, advanced visibility suites and so on. These services are convenient, easy to use and integrate well with back-end IT systems. There is also no doubt that accessing computing resources via an “as a service” model can be more cost effective than buying and maintaining your own computing infrastructure.
Best of Both – a Hybrid Approach
While edge computing has clear advantages for scalable autonomy, it is also clear that full end-to-end industrial systems are rarely being deployed entirely at the edge.
First, there is almost always the need for at least some of the edge data to be pushed to a centralized or cloud-based system for overall monitoring and management. Cloud computing can provide intensive data processing and machine-learning capabilities that the edge cannot. Human operators overseeing the safe running of the system are likely to be positioned back at headquarters, rather than at each remote site.
Consider the benefits of a hybrid edge-and-cloud approach for our electricity monitoring example. The remote substations can make their own decisions about how their local infrastructure is running, but crucially, each substation can also send a simplified representation of that state up to the centralized system. The exact use case dictates whether the shared information is the inferred intelligence itself or a subset of the original data (perhaps reduced by volume, timing or filtered content). The key point is that only some of the edge data goes to the centralized system, which simplifies data flow and reduces both cost and complexity. The substation acts as the aggregator of local knowledge, communicating with the central system on an as-needed basis.
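This aggregator pattern can be illustrated with a minimal Python sketch. All names, field values and the voltage band are hypothetical assumptions for illustration; a real substation would apply far richer logic:

```python
from statistics import mean

# Hypothetical sketch: a substation reduces a batch of raw voltage readings
# to a compact summary, forwarding full detail only for abnormal values.
VOLTAGE_MIN, VOLTAGE_MAX = 220.0, 240.0  # assumed normal operating band


def summarize_readings(readings):
    """Reduce raw readings to the small payload sent to the central system."""
    abnormal = [r for r in readings if not (VOLTAGE_MIN <= r <= VOLTAGE_MAX)]
    return {
        "count": len(readings),
        "mean_voltage": round(mean(readings), 1),
        "abnormal": abnormal,            # raw detail kept only for outliers
        "needs_inspection": bool(abnormal),
    }


payload = summarize_readings([229.8, 231.2, 230.5, 251.3, 230.1])
# Only this summary, not the raw sensor stream, travels upstream.
```

The design choice is that the edge decides locally what matters (the out-of-band reading) and the central system receives just enough context to act on it.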
Just as importantly, the centralized system will also need to communicate back to the edge. Even the most autonomous edge systems will be monitored for correct operation and have means to be adapted dynamically – either by human or machine-based intervention. Also, the insights gained from the processing power of the cloud, such as an updated or refined computer vision model, will be pushed back down to optimize the edge operations. Two-way data flow, therefore, is required to support a seamless hybrid edge-and-cloud architecture.
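The downstream direction can be sketched in the same spirit. Here the central system pushes an updated setting (a refined anomaly threshold standing in for a retrained model) and the edge node applies it without redeployment; the class and field names are assumptions, not part of any real product API:

```python
# Hypothetical sketch of centre-to-edge flow: the central system pushes a
# configuration update and the edge node adapts its behavior dynamically.


class EdgeNode:
    def __init__(self, threshold):
        self.threshold = threshold

    def apply_update(self, update):
        """Apply a configuration update pushed down from the central system."""
        if "threshold" in update:
            self.threshold = update["threshold"]

    def is_abnormal(self, reading):
        return reading > self.threshold


node = EdgeNode(threshold=240.0)
before = node.is_abnormal(245.0)   # abnormal under the original threshold

# The central system's analysis refines the threshold and pushes it down.
node.apply_update({"threshold": 250.0})
after = node.is_abnormal(245.0)    # no longer abnormal after the update
```

In a deployed system the same reading is judged differently before and after the update, which is exactly the two-way adaptability the hybrid model requires.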
Openness and Choice
In this hybrid model, interoperability between edge architectures and cloud environments is absolutely key, but the question is, how do you best develop a hybrid edge-and-cloud architecture? The edge is a complex environment consisting of many different types of sensors and devices that communicate via a multitude of different OT protocols. An organization must be able to acquire the data in the first place, normalize it from disparate sources and apply intelligence wherever it generates the most value, either locally or in the cloud. This is not an easy task.
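The normalization step can be made concrete with a small sketch. The protocol names are real (Modbus and OPC UA are common OT protocols), but the payload fields, scaling and common format here are purely illustrative assumptions:

```python
# Hypothetical sketch: map payloads arriving via different OT protocols
# onto one common reading format before intelligence is applied.


def normalize(source, raw):
    """Translate a protocol-specific payload into a common reading."""
    if source == "modbus":
        # Modbus registers often carry scaled integers,
        # e.g. tenths of a volt (an assumption for this sketch).
        return {"device": raw["unit_id"], "voltage": raw["register"] / 10.0}
    if source == "opcua":
        return {"device": raw["node_id"], "voltage": raw["value"]}
    raise ValueError(f"unknown source: {source}")


a = normalize("modbus", {"unit_id": "sub-7", "register": 2305})
b = normalize("opcua", {"node_id": "sub-9", "value": 229.8})
```

Once every source speaks the same internal format, the same analytics can run against it whether the processing happens locally or in the cloud.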
Edge platform software that supports plug-and-play of both edge and cloud components is playing an increasingly important role in accommodating this complexity without imposing significant dependencies on the physical infrastructure.
An example of a widely adopted edge software platform is the Linux Foundation’s open source EdgeX Foundry project, which, together with its ecosystem of partners, provides a vendor-neutral solution to these requirements. EdgeX and associated products, such as IOTech’s Edge Xpert, also have strong bi-directional integration with the main cloud vendors, giving users the best features of both edge and cloud computing.
There is an accelerating shift from fully centralized cloud-based systems to distributed architectures driven by edge computing. Cloud computing alone cannot handle the vast amounts of data created by the billions of connected devices predicted, nor can it deliver the local, real-time insights that latency-sensitive applications depend on. What is also clear is that most systems will not rely 100 percent on edge computing, either; fully autonomous edge applications are quite rare. Most systems require a hybrid solution consisting of both edge and cloud components. To be successful, users need the ability to utilize cloud resources for heavy-duty applications while using edge computing for lighter-weight processing and local, real-time insights.