To properly evaluate edge tools and applications, IT professionals need to examine what distinguishes edge computing from other types of computing. The edge places compute activity close to the point of action. This runs against the modern trend of consolidating computing power in the data center or the cloud to create economies of scale.
The justification for departing from the standard consolidation model is latency. Edge computing targets real-time missions in which the application must be tightly coupled to real-world events and actions – so it must be physically close to them.
The most critical term in edge applications is the control loop – the workflow from the point where a real-time event signals the need for action, through the determination of the appropriate action to take, and back to the control elements that carry out the necessary steps. This loop is synchronized with a real-world process, such as an assembly line or a warehouse, in which any delay between an event and the control reaction is unacceptable. The control loop should be short and introduce as little latency as possible – a goal met partly through network optimization and partly through the combination of application design and tool selection.
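To make the control loop concrete, here is a minimal sketch (not from the source) of a single pass through such a loop, checked against a latency budget. The sensor, decision rule, actuator and 10 ms budget are all hypothetical stand-ins.

```python
import time

LATENCY_BUDGET_S = 0.010  # hypothetical 10 ms budget for the full loop


def read_sensor():
    """Stand-in for polling a real-time event source, e.g., a conveyor sensor."""
    return {"position": 42.0}


def decide(event):
    """Determine the control action for the observed event (hypothetical rule)."""
    return "stop" if event["position"] > 40.0 else "continue"


def actuate(action):
    """Stand-in for driving the control element (motor, valve, relay)."""
    pass


def run_loop_once():
    """One pass through the control loop: event -> decision -> actuation.

    Returns the action taken and whether the pass stayed within budget.
    """
    start = time.monotonic()
    event = read_sensor()
    action = decide(event)
    actuate(action)
    elapsed = time.monotonic() - start
    return action, elapsed <= LATENCY_BUDGET_S
```

In a real edge system the loop would run continuously and the budget check would feed a monitoring system rather than a return value, but the event-decide-actuate shape is the same.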
Balancing latency and urgency
Traditional transactional applications are almost always limited in performance and response time by human reaction time or database performance. Edge computing's real-time applications have no such buffer. Response time is determined by the performance of the network connections and the edge application itself; anything that introduces latency into the system must be optimized out, or the application's mission is at risk.
This is a major change in the way developers think about applications. Breaking large, monolithic applications into dozens of microservices, connecting them through a service mesh and similar modern application design strategies could prove fatal in a real-time application.
The application practices and tools appropriate for edge computing reflect the fact that the edge is likely to be housed in a rack or cabinet close to the processes being controlled, rather than in a data center or in the cloud. Edge platforms are not designed for general-purpose computing, so traditional OSes and middleware are not optimal. An embedded-control or real-time OS and process-specific middleware form the basis of the typical edge system. The hardware is usually specialized as well, so the edge is in most respects the opposite of both the data center and the cloud.
Restraints on edge adoption
The need for point-of-activity hosting makes it likely that a given edge location cannot be backed up by resources located elsewhere without introducing more latency than the application can accept. That fact alone significantly reduces the benefits of virtualization – in any form. Rather than gaining availability through fast redeployment after a failure – or scaling when load changes sharply – an edge system should rely on redundancy to improve availability and be sized for peak load rather than designed for scalability. This has a significant impact on the tools and platforms used.
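The redundancy trade-off can be quantified with the standard parallel-availability formula: if any one of N identical units can carry the load, the system fails only when all N fail. A short sketch (illustrative numbers, not from the source):

```python
def redundant_availability(unit_availability, units):
    """Availability of N redundant units where any single unit can carry
    the full load: the system is down only if every unit is down."""
    return 1.0 - (1.0 - unit_availability) ** units


# A single edge node at 99% availability vs. two redundant nodes:
single = redundant_availability(0.99, 1)  # ~0.99
paired = redundant_availability(0.99, 2)  # ~0.9999
```

Two modestly reliable local nodes thus deliver "four nines" without any remote failover, which is why redundancy, not redeployment, is the natural availability strategy at the edge.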
Containers, which facilitate software portability as well as easy scaling and redeployment, are less valuable at the edge and can introduce unnecessary – or prohibitive – latency. Orchestration tools such as Kubernetes are likewise superfluous where there are neither containers to deploy nor resource pools on which to deploy them.
Are more applications moving to the edge?
The edge is, in a process sense, an island. Expect to design programs differently for edge operation, but traditional programming languages work if the platform software supports them.
Edge applications monitor the control loop and enforce a latency budget, but they still require maintenance and management in parallel with cloud and data center applications. Container strategies may not be applicable – much less helpful – where there are no containers or shared resource pools. Monitoring tools specific to edge OSes are better at managing latency in edge control loops, but tracking the latency budget across a complete transaction may require integrating data from multiple sources.
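One way to track a latency budget across the stages of a transaction is to time each stage separately and compare the total against the budget. A minimal sketch, assuming hypothetical stage names and a 50 ms end-to-end budget:

```python
import time
from contextlib import contextmanager

timings = {}  # per-stage timings, possibly collected from multiple sources

BUDGET_S = 0.050  # assumed 50 ms end-to-end latency budget


@contextmanager
def timed(stage):
    """Record the wall-clock duration of one stage of the transaction."""
    start = time.monotonic()
    try:
        yield
    finally:
        timings[stage] = time.monotonic() - start


# Simulated stages of one control-loop transaction (sleeps stand in for work):
with timed("sensor_read"):
    time.sleep(0.001)
with timed("decision"):
    time.sleep(0.002)
with timed("actuation"):
    time.sleep(0.001)

total = sum(timings.values())
within_budget = total <= BUDGET_S
```

In practice the three stages might run on different devices, so the per-stage numbers would be gathered from separate monitoring sources and merged before the budget check – which is exactly the data-integration burden the text describes.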
Where businesses want instrumentation and automation across the entire IoT flow, from edge through cloud to data center, there are versions of Kubernetes designed to support bare-metal clusters. These can be combined with multidomain Kubernetes tools, such as Anthos or Istio, to unify operations. Alternatively, businesses could use a non-container-centric DevOps tool, such as Ansible, Chef or Puppet. However, because edge applications are likely to be less dynamic than cloud applications, it may be easier to manage them separately with the available OS tools.
The edge does not take over
A major shift to edge computing is far from certain. While online GUI-driven processes are increasingly latency-sensitive, they still operate at human reaction speeds and are so strongly tied to the internet and public cloud services that it is unlikely the majority will move to the edge.
IoT applications are the main driver of edge computing, but the industrial, manufacturing and transportation applications that most easily justify the edge make up a small part of most businesses' application inventories. The edge is different and will probably prove challenging in terms of tools and practices, but it is neither so big nor growing so fast that it will disrupt IT operations. Think of edge computing as an extension of real-time performance rather than as just another place to host applications.