Intel Engineer: Much Left to Solve Before Cloud and Edge Can Become One

Why the dream of convergence between core and edge data centers is further off than it may seem.


We’re familiar with the phrase “cloud-native,” perhaps enough now to consider it stretched about as thinly as streusel batter. It doesn’t mean any application that lives in the cloud. It means a workload designed to run on a cloud platform, often created on the same platform where it will run. Kubernetes is a project of the Cloud Native Computing Foundation, which is the part of the Linux Foundation working on issues around such workloads.

So, if a workload is “edge-native,” what should this entail? If the edge is another cloud (indeed, vendors in the space do refer to an “edge cloud”), then at least theoretically, distributing workloads to edge data centers should be no more difficult than a cloud deployment. In a perfect world, any service mesh (a virtual network that connects Kubernetes-orchestrated services to the functions that will consume them) should work just as well for services hosted in edge environments.

But edge data centers, as Data Center Knowledge has been explaining, will have somewhat different purposes and missions than cloud data centers. They’ll host small handfuls of mission-critical workloads that require minimal latency and, in many cases, maximum reliability. Because they’re not being housed in centralized facilities, their function will often determine their location. There probably won’t be full-time IT personnel onsite, so the entire operations cycle will need to be administered remotely.

The Basic Constraints

Edge data centers are much more resource-constrained than “core” cloud data centers, Srinivasa Rao Addepalli, Intel’s chief principal cloud and edge software security architect, explained during a recent panel discussion at KubeCon in San Diego. “We are talking about one single server load [per edge site]. So, any service mesh technology, we’re thinking, should work in constrained environments like edges.

“Secondly, edges are physically insecure,” Addepalli continued. “You don’t have security guards manning edge locations.

“The third thing is they just do not have, many times, static, public IP addresses.” Without static IP addresses, microservices distributed across multiple edge locations cannot talk to each other. “You have to consider, in my mind, these three characteristics in figuring out what kinds of challenges and solutions you require… in service meshes working for edge locations.”

Addepalli painted a verbal image of a real-world, distributed data center architecture where servers at the edge do not behave like parts of a broader cloud. From a networking standpoint, edge servers rely on so-called edge gateways that negotiate for IP addresses using protocols like DHCP. Network virtualization creates a necessary abstraction between virtual and physical IP addresses, but that abstraction becomes far more difficult to reconcile when the underlying physical addresses are themselves dynamic.

The first edge-based micro data centers with any significant chance of market success are those designed to be deployable almost automatically. These self-provisioning kits will set up everything needed to become addressable and remotely administrable. This probably means their Kubernetes clusters will be self-contained environments by default, with the option to become segments of broader, cloud-based clusters.

It may also mean they’ll need to establish their network presence as temporary addresses. That could affect how IT central offices provision these remote units.
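
For illustration only, here is a minimal sketch in Go, not drawn from any vendor’s product, of the bookkeeping a dynamic address forces on such a unit: poll for the current public address, then re-register with the central office whenever DHCP hands out a new one. The discovery and registry URLs are hypothetical placeholders.

```go
package main

import (
	"io"
	"log"
	"net/http"
	"strings"
	"time"
)

// Hypothetical endpoints; a real deployment would use whatever discovery
// and registration services the central office provides.
const (
	ipCheckURL  = "https://checkip.example.com"
	registryURL = "https://registry.example.com/edge/site-42"
)

// currentPublicIP asks an external service what address this site's
// edge gateway currently presents to the outside world.
func currentPublicIP() (string, error) {
	resp, err := http.Get(ipCheckURL)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	b, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(b)), nil
}

func main() {
	var last string
	for range time.Tick(5 * time.Minute) {
		ip, err := currentPublicIP()
		if err != nil || ip == "" || ip == last {
			continue
		}
		last = ip
		// Tell the central office where this edge site lives now.
		if _, err := http.Post(registryURL+"?ip="+ip, "text/plain", nil); err != nil {
			log.Printf("re-registration failed: %v", err)
		}
	}
}
```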

This is a very different picture from that being painted by edge server vendors such as Dell, which implies that a common cloud platform and a common network virtualization layer – Dell-owned VMware’s NSX, for instance – could seamlessly stitch together edge-based resources. In that view, remote administrators would see one cohesive network by way of a “single pane of glass.”

As VMware’s own open source lead technologist Ramki Krishnan described it, the degree of resource convergence needed for a single pane of glass to make sense simply hasn’t happened yet.

Today, “network functions (as a simple example, firewalls of any kind)… are thought of as completely separate from applications,” he said. “They’re considered so separate that the whole deployment and management is a big nightmare.”

Suppose, for instance, that you have an edge rack with three to five servers, and you need to deploy both network functions (critical for telcos and communications service providers) and applications. “Because of the lack of a converged approach, it’s really a pain,” he said.

While the CNCF’s Telecom User Group is devoting time and energy to alleviating this pain, a divisive element remains in telecom network architecture: a mandated separation of user and network functions, for security reasons among others. That separation prevents the kind of architectural convergence that may be necessary for a real-world edge deployment to match what server vendors have already envisioned for it.

“But I think there’s a lot more that needs to happen to drive towards edge realization — especially an edge-constrained realization,” concluded Krishnan.

The Problem with Service-Mesh Proxies

If any manner of edge server deployment is to be integrated with the rest of an enterprise’s Kubernetes-based infrastructure, it will need to incorporate a service mesh. One of the emerging projects in this field is Istio, which applies software-defined networking (SDN) principles to microservices traffic, separating a control plane from a data plane. To create an abstraction layer between itself and the network, Istio employs a service proxy called Envoy as a “sidecar,” intercepting network traffic while dispatching responses to requests.
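
To see what the sidecar pattern amounts to at its core, here is a deliberately minimal Go sketch of an intercepting proxy: it accepts connections bound for the application, relays them to the local service, and relays the responses back. This illustrates the pattern only, not Envoy’s implementation; Envoy layers routing, telemetry, mutual TLS, and retries on top of this basic loop, and the port numbers here are arbitrary.

```go
package main

import (
	"io"
	"log"
	"net"
)

const (
	listenAddr   = ":15001"         // port where traffic is intercepted
	upstreamAddr = "127.0.0.1:8080" // the co-located application
)

func main() {
	ln, err := net.Listen("tcp", listenAddr)
	if err != nil {
		log.Fatal(err)
	}
	for {
		conn, err := ln.Accept()
		if err != nil {
			log.Print(err)
			continue
		}
		go proxy(conn)
	}
}

// proxy relays bytes in both directions until either side closes,
// which is the bare skeleton of what a sidecar does per connection.
func proxy(client net.Conn) {
	defer client.Close()
	upstream, err := net.Dial("tcp", upstreamAddr)
	if err != nil {
		log.Print(err)
		return
	}
	defer upstream.Close()
	go io.Copy(upstream, client)
	io.Copy(client, upstream)
}
```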

That proxy proves to be convenient in a standard enterprise deployment. But edge deployments are operated by communications providers and organizations for whom latencies cannot be allowed to accumulate. For them, remarked Intel’s Addepalli, Istio and Envoy may gum up the mechanism.

Most proxy-based service meshes are meant for higher-level protocols like gRPC, he said. “But in the edge and IoT kinds of environments, you have long HTTP, long gRPC-based applications.” One of the reasons the Envoy sidecar was created was to take advantage of the information gRPC provides, to route network traffic directly to named or tagged Kubernetes services.

Created by Google engineers, gRPC enables interfacing between client-side functions and server-side services by way of an interface definition language (IDL). You may be familiar with the concept of service-oriented interfaces, particularly if you’ve ever tried to enable DCOM on a Windows network, and you’ll recall the nightmares administrators faced there in maintaining registries. gRPC does not rely on a central registry, but in one respect, that’s the problem: The entire interfacing process takes place through the exchange of service manifest files between the two parties, which together form a binding contract. Envoy recognizes the bonds in such a contract.
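
To make the contract concrete, here is a small Go client sketch that calls a gRPC service; the standard gRPC health-checking service stands in for an application API, and the target address and service name are hypothetical. The generated stub is what enforces the shared IDL contract on the wire.

```go
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	// Hypothetical edge service address; TLS omitted for brevity.
	conn, err := grpc.Dial("edge-service.example.com:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	// The stub's method names, message types, and wire format all come
	// from the shared .proto contract compiled into both parties.
	resp, err := healthpb.NewHealthClient(conn).Check(ctx,
		&healthpb.HealthCheckRequest{Service: "my.edge.Inventory"})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("status: %s", resp.GetStatus())
}
```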

This is where the length problem comes into play — those “long” applications to which Addepalli referred. Since edge-based applications may be transient for any number of reasons (their servers are portable, they’re only powered for portions of the day, etc.), these IDL connections may need to be renegotiated frequently. The negotiations take time, and that’s where the latency problems emerge. And they are significant enough for engineers at Verizon and elsewhere to consider strange, perhaps contorted, architectures that could enable some kind of proxy-based mechanism — perhaps even a proxy for the proxy.
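
Those renegotiation costs surface even in ordinary client configuration. The sketch below, assuming Go’s gRPC library and a hypothetical edge endpoint, tunes the exponential backoff a client applies each time it must re-establish a connection; a transient edge service pays some version of this delay on every reconnect.

```go
package main

import (
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/backoff"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// Hypothetical edge endpoint; the point is the reconnect policy.
	conn, err := grpc.Dial("edge-site-42.example.com:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithConnectParams(grpc.ConnectParams{
			// Each dropped connection waits at least BaseDelay before a
			// retry, with successive failures growing toward MaxDelay.
			Backoff: backoff.Config{
				BaseDelay:  200 * time.Millisecond,
				Multiplier: 1.6,
				Jitter:     0.2,
				MaxDelay:   30 * time.Second,
			},
			MinConnectTimeout: 5 * time.Second,
		}))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
}
```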

‘Proxy-Less Service Meshes’

So, processes for low-latency applications need “proxy-less service meshes,” Addepalli said. IT operators may lose the advantages of observing service traffic at a level below API calls, down into Layer 3 and Layer 4, he noted, and for some applications this may pose a problem. But the payoff will come with much lower latency and higher performance.
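
One way the industry has since pursued this idea is gRPC’s “proxyless” xDS mode, in which the client library consumes mesh configuration directly from the control plane, so no sidecar sits in the data path. Below is a minimal Go sketch, assuming an xDS-capable control plane and a bootstrap file referenced by the GRPC_XDS_BOOTSTRAP environment variable; the service name is hypothetical.

```go
package main

import (
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	// Blank import registers the xds:/// resolver and load balancer,
	// letting the client consume mesh configuration with no proxy hop.
	_ "google.golang.org/grpc/xds"
)

func main() {
	// "inventory-service" is a hypothetical mesh-registered service name.
	conn, err := grpc.Dial("xds:///inventory-service",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	// Generated stubs are used on conn exactly as in a sidecar-based mesh.
}
```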

Such proxy-less alternatives already exist. One example is AppSwitch, which is designed to integrate with Istio, replace Envoy, and serve as a layer of abstraction for the network backplane. But as AppSwitch’s deployment approach would have it, this data plane would extend throughout the entire enterprise network, not just the local network loop at the edge. As a result, the notion that you could have a proxy-less service mesh at the edge and an Envoy-based service mesh at the core flies completely out the window. It’s one or the other.

Casting a ray of hope toward a solution, Addepalli made two observations: First, perhaps services that cohabit the same node don’t need to discover one another by way of a service mesh; eliminating that local traffic could reduce the overhead on Envoy significantly. Second, sidecar-based service meshes such as Istio with Envoy consume about half the active resources in a small server cluster. “The purpose is that we should make 90 percent of our code for applications’ resources,” he told KubeCon, “and 10 percent for sidecars. That’s why we are looking for techniques for optimizing service mesh technologies.”
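
Addepalli’s first observation reduces to a simple dispatch rule, sketched below in Go. This illustrates the idea rather than any Istio feature, and the localPeers table is a hypothetical map populated at deploy time: if the target service runs on the same node, dial it over loopback and skip the sidecar; otherwise, route through the mesh as usual.

```go
package main

import (
	"fmt"
	"net"
)

// localPeers is a hypothetical table, populated at deploy time, of
// services co-located on this node and their loopback addresses.
var localPeers = map[string]string{
	"inventory": "127.0.0.1:9090",
}

// dialService bypasses the service mesh for same-node peers and falls
// back to the mesh's cluster-local DNS name for everything else.
func dialService(name string) (net.Conn, error) {
	if addr, ok := localPeers[name]; ok {
		return net.Dial("tcp", addr) // loopback: no sidecar traversal
	}
	// Standard Kubernetes in-cluster service address, routed via the mesh.
	return net.Dial("tcp", fmt.Sprintf("%s.default.svc.cluster.local:80", name))
}

func main() {
	conn, err := dialService("inventory")
	if err != nil {
		panic(err)
	}
	conn.Close()
}
```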

One such optimization could involve Intel producing a data center accelerator, he acknowledged, perhaps as an FPGA (though his statement was far from a commitment by Intel to do so). Its purpose could be to offload internal service mesh functions from CPUs onto dedicated co-processors. Perhaps, if such an accelerator were made available, an edge server manufacturer could produce an out-of-the-box, self-provisioning mini-cluster that uses Istio but incorporates a proxy-less protocol internally. It’s something that could effectively plug into the enterprise network and provide the necessary infrastructure abstractions internally, without having to re-architect the entire enterprise.

About the Author(s)

Scott Fulton III

Contributor

Scott M. Fulton, III is a 39-year veteran technology journalist, author, analyst, and content strategist, the latter of which means he thought almost too carefully about the order in which those roles should appear. Decisions like these, he’ll tell you, should be data-driven. His work has appeared in The New Stack since 2014, and in various receptacles and bins since the 1980s.

