Where the relocation of application intelligence from a client-side device to a centralized cloud sounds like such a good idea that telecommunications engineers begin to ponder: Is this something we could have been doing with every mobile or embedded device all along?
When you hear the bountiful promises of the internet of things so poetically uttered by its prospective vendors, like the nice half of a pharmaceutical commercial, your brain is probably receiving two implicit messages. One is that connectivity is a virtue unto itself, like consciousness or the acquisition of a new sense. The other is that connectivity would render devices “smart.”
“I think we are going to be surrounded by smart devices,” Internet Protocol co-inventor Vint Cerf told a Google-sponsored startups conference four years ago. “There’s something really magic, to be able to assume that any device you have that has some programmability in it could be part of a communications network, and could communicate with any other random, programmable device.”
A study of any technology over the span of history, for as long as humans have been building machines, will demonstrate quite clearly that it tends to lose its sense of magic, or nirvana, in its implementation. The one quality that 4G wireless had five years ago that it lacks today is that certain “something really magic.” What is never so obvious at the outset of a platform’s or a system’s adoption is that the shedding of false attire is for the better. Technology is, at its root, the systematic extrication of magic from a task. It is the application of practicality, logic, and inherent capability to the overcoming of obstacles and the achievement of goals.
For nothing done at scale is accomplished through the utterance of a spell or the flick of a wand. Real magic takes effort.
What 5G wireless would require to lift our world out of the trap that was unwittingly set for it by 4G — a trap of exponentially increasing costs never to be compensated by flat revenues — is something that from our perspective may seem as magical as Vint Cerf’s view of the IoT. Indeed, the force that would enable a global IoT by way of 5G may seem like magic, in that much of it has yet to be determined. This part of our story, the fourth in our ongoing series on 5G in ZDNet Scale, is about the moment in history when that goal became feasible, the technology for getting it done revealed itself, and the magic, like the opening comic act before a big concert, left the stage.
I’ve compared the journey the world’s telecommunications providers have taken to come together to make 4G wireless into 5G, to the building of the Transcontinental Railroad. In American history, it was the US government that orchestrated the affairs between railroad companies, and so far the government has stayed out of 5G’s way (and should stay out). Still, the nation’s railroad lines knew that finely tuned, scheduled cooperation was necessary to build the infrastructure everyone needed to support their respective, competitive transportation services.
But there’s also this other significant difference between history then and history now: We knew what was on the other side of the Rockies. The engineers of the few passageways through this beautiful, impossible terrain forged the routes to the west and back with tenacity, sweat, and blood. Yet what if the circumstances were different: What if all we had regarding the other side of the mountains wasn’t information and incentive, but just hope? What if we forged ahead with the eastern side of the railroad line, on the belief that whatever lay over the hills would make it worth all the effort?
And suppose when we first approached this uncharted territory, it was as if the Old World had opened up for us a second time. This is the situation with 5G: We face the prospect of forging a single path over the main obstacles before us, only to find ourselves in a new and unexplored world. Here, we discover to our shock that, even though we had acted in concert during that one great push through the mountains, we’ve never really been in agreement over which direction we’re headed and where our final destination would be, once we reached the other side.
Welcome to Septentrionalis, the uncharted territory with seemingly imaginary boundaries. Here, the search for a destination will lead us places we did not expect to go.
Smart and dumber
A psychiatrist could spot it in a moment: Our indecisiveness about our own future once the path forward is clearer can be summed up by the titles and the first paragraphs of our own 5G marketing brochures. “Imagine,” they ask the reader up front, begging her to fill in the gaps.
“Imagine the potential automakers have,” writes Nokia, “to convert that data into valuable information for applications such as preventative maintenance.”
“The search is on,” stated Cisco in a 2013 article titled “The Future of Connected Cars: What to Do With All That Data?” That search is “for ways to make cars’ data capabilities more useful to drivers . . . To speed the search for expanded applications, car companies are turning to outside programmers.”
The magic here is represented by the open-ended nature of Cisco’s value proposition. “What kind of applications might this effort produce? That’s up to the imagination of developers,” its article concluded. The final response to the question of what we do with all that data: Who knows?
There actually is a rational, beneficial, profitable purpose for connecting cars — a purpose which, if it had been applied to 4G wireless, might have prompted the creation of 5G even sooner. Cars, like every other class of machine, age. Their approaches to energy conservation, fuel consumption, and power transmission should individually evolve as their own operating conditions change. A centralized maintenance platform that communicates with its vehicles periodically could implement updated strategies for how best to provide for their drivers under changing circumstances.
The key word in the above sentence is “periodically.”
From period to period, a car’s firmware receives over-the-air upgrades, making it “smarter” and capable of performing functions it couldn’t before, as Tesla owners have already experienced first-hand. From an engineer’s perspective, how short can a “period” become, especially if the electronics of their cars have the connectivity of smartphones?
The German manufacturing firm Bosch GmbH became a kind of software company with the development of its own cloud-based IoT platform. Called Firmware Over The Air (FOTA), it’s this platform which supplies updated instructions to Tesla vehicles. Rather than receive software update packages from their service technicians, Tesla vehicles use dedicated communications units that receive updates from the Bosch IoT Cloud, by way of the 4G wireless network.
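The update flow described here can be sketched in a few lines. This is a hypothetical illustration of the general OTA pattern, not Bosch’s actual FOTA API: every name and field below (the manifest class, `check_for_update`, the version tuple) is an assumption invented for the example.

```python
# Toy model of a periodic over-the-air (OTA) firmware check.
# All names are illustrative assumptions, not Bosch's real interfaces.

from dataclasses import dataclass

@dataclass
class FirmwareManifest:
    version: tuple   # e.g. (2, 1, 0), compared lexicographically
    url: str         # where the signed update package would live
    sha256: str      # integrity check before the device flashes anything

def check_for_update(installed: tuple, manifest: FirmwareManifest):
    """Return the manifest if the cloud offers a newer build, else None."""
    return manifest if manifest.version > installed else None

# Simulated periodic check: the device compares its installed version
# against what the cloud platform currently advertises.
cloud_offer = FirmwareManifest(version=(2, 1, 0),
                               url="https://example.invalid/fw.bin",
                               sha256="deadbeef")
update = check_for_update((2, 0, 3), cloud_offer)
print(update is not None)  # True: an update is available
```

The interesting engineering question in the paragraph that follows is simply how often this loop runs.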
But Bosch’s Software Innovations division doesn’t really own its own cloud data centers, nor does it want to. Presently, it manages a cloud space on Amazon AWS. There, it manages a Cloud Foundry platform to build, deploy, and maintain all of its services in a cloud-native environment. That’s nice for 4G, but Bosch wants 5G.
One reason is that it foresees a time when edge-based computing platforms, managed over a 5G network, can run Cloud Foundry closer to devices, with far less latency. Conceivably, Bosch could move its clients’ existing applications to an edge platform with little or no change.
But even then, Bosch is looking one very large step ahead. It’s an elevation of the internet of things model to a level that even the folks who pled with their readers to “imagine” the possibilities, didn’t imagine. It’s an untested idea that, if it works, would toss the whole “smart networks” model to the curb. It would begin in factories where smart devices are now employed. And, by way of an important update, those devices would become somewhat dumber.
“What is the potential of edge computing in a factory? What we see is, we can generally shift intelligence away from end devices to the network,” announced Dr. Andreas Mueller, Bosch’s head of Communication and Network Technology, as well as chairman of the 5G Alliance for Connected Industries and Automation (5G-ACIA).
This was not a typo. The man leading the effort to promote 5G network-oriented automation in Europe’s manufacturing centers believes the edge model enables the production of dumb-as-rocks devices that fulfill the same functions as smart devices. As Dr. Mueller told the 2018 5G Summit in Brooklyn last April, nearly all the software these zombified devices require would run in the cloud, except what is vitally necessary to make parts move, maintain safety, and download the next update. And if a continuous development regimen such as CI/CD were in place, those updates could happen several times a day.
“End devices could be anything from a programmable logic controller, a robot, a sensor device, [to] a human/machine interface,” Mueller continued. “Currently most of the intelligence is in the device. We just want to shift it away to the network.”
The devices themselves would become less expensive to produce, and thus less expensive to maintain. Bosch itself is a producer of such devices, so it would know this first-hand.
“Some devices,” said the man whose company stands both to lose and to gain from this initiative, “actually may completely disappear. Think of the programmable logic controller, for example. These are nowadays hardware devices. If we can have the PLC running as a piece of software in the edge cloud, we actually don’t need the hardware PLC anymore.”
Let’s put the implications of Dr. Mueller’s projection into as clear a perspective as we can: Applications run through a cloud data center, connected to a device through an edge cloud with minimal latency, could conceivably be faster than applications running directly on an embedded device with zero latency. If such a chain of events were attempted by means of AWS, the distance between the data collection point and the cloud data center would be too great, so the latency introduced would render the application too slow. So, at least for now, Amazon is not the edge.
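The arithmetic behind that claim is easy to sketch. Assuming the standard rule of thumb that light in fiber covers roughly 200 km per millisecond, and ignoring queuing and processing delays entirely, propagation alone separates a distant cloud region from a nearby edge site:

```python
# Back-of-the-envelope latency budget: why distance to the data center
# dominates. The 200 km/ms figure (about two-thirds the vacuum speed of
# light) is a rough assumption; queuing and processing are ignored.

FIBER_KM_PER_MS = 200

def round_trip_ms(distance_km: float) -> float:
    """Propagation delay alone for one request/response over fiber."""
    return 2 * distance_km / FIBER_KM_PER_MS

print(round_trip_ms(1000))  # distant cloud region: 10.0 ms in transit alone
print(round_trip_ms(10))    # edge site near the transmitter: 0.1 ms
```

Ten milliseconds of unavoidable transit already exceeds the total budget of many industrial control loops, which is why, at least for now, Amazon is not the edge.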
What 5G would inject into this picture is the missing element: the data center stationed very close to the wireless transmission facility, or at least connected to the WTF by fiber optic cable. Remember, though, 5G began with the aim of moving something else, someplace else: Specifically, taking the intelligence of the radio access network (RAN) and, borrowing a page from the virtual Evolved Packet Core (vEPC), recasting it as a virtual network function (VNF). From there, the VNF could be deployed in a cloud data center — and we often hear that referenced as an “edge cloud” as well.
The whole point of accelerating the development of 5G in the first place was to realize cost savings, and China Mobile’s notion that it could be done by virtualizing the RAN and moving it to the cloud was the catalyst. Bosch’s plan would be to take that strategy and keep going with it: As long as we’re building a new “edge cloud” for relocating telco functions, why not leverage that same data center to relocate device functions? For example, take all the smarts of a platform that distributes programs to smart devices on a Bosch factory floor, and move them into a virtualized network space in the edge, devoted to Bosch. Then, while we’re at it, radically simplify that program, because now all it has to do is create a centralized manufacturing itinerary for a sea of drones. The devices could become cheaper, the applications would become both faster and easier to maintain, plus an extra-special bonus: Bosch could move those applications off of AWS, where they’re losing their cost-effectiveness, into a single location that’s also cheaper and easier to manage.
Even in the case of vehicle automation (self-driving cars), Dr. Mueller projects, an application that effectively automates several such vehicles at once might be easier and faster to run from a centralized location than distributing just as many individual applications that automate one vehicle each.
There’s something really magic, so to speak, to be able to assume that any device you need to manage, anywhere in the world, can be programmed in one place rather than in thousands or millions of distant locales. You no longer have to be surrounded by smart devices, to get work done.
“Then, of course, we can explore the general cloud computing benefits also,” continued Bosch’s Mueller, “for these very demanding applications. In the past, it was not possible to offload things to the usual data center cloud — for security reasons, for real-time reasons. With edge computing, it’s possible, it’s getting feasible, at least if the edge cloud is within the factory — that’s a very important requirement here. So we can have high scalability, maintainability, and make it easier to secure the whole thing.”
Here’s where things in Bosch’s very realistic, objective dream start to get murky. It sounds as though Dr. Mueller is suggesting that his company reclaim its digital assets and transplant them to its own data center. While that’s not completely out of the question, he told 5G Summit attendees that Bosch is studying this tack from a cost-saving angle as well. You may have read someplace that telcos are seeking to distribute as much as sixty times the current number of cellular transmitters, in much smaller and easier-to-cool packages, throughout the world. For a large Bosch manufacturing facility, that could mean at least one, and probably more than one, 5G WTF would have to reside near or on Bosch property.
So, if the telco facility is going to be a stone’s throw away anyway, then the edge certainly has to be there too. This raises the possibility that Bosch could claim a network slice of that facility for itself. As we introduced in our last segment of Scale, a network slice is a segment of data center resources carved out of the cloud platform and reserved for one tenant’s use.
But here, Bosch would go yet another step further: Instead of reserving a slice of Telekom’s or Vodafone’s or Telefónica’s edge cloud for all of Bosch GmbH, Mueller suggests that the 5G provider allocate several slices of the edge cloud, each for a specific Bosch application. Rather than building a highly distributed, microservices-oriented complex of apps tied together by a nerve center like Kubernetes, Bosch could continue to build apps more simply on its existing Cloud Foundry platform, and the network could help secure them by keeping their respective slices isolated from one another.
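The per-application slicing Mueller describes can be modeled as a toy resource allocator. Everything here is an illustrative assumption: no 3GPP specification defines such an API, and the class and method names are invented for the sketch.

```python
# Illustrative model of per-application slicing: each application gets
# its own isolated, reserved share of the edge cloud's resources,
# rather than one shared slice per tenant. All names are assumptions.

class EdgeCloud:
    def __init__(self, total_cpu: int):
        self.total_cpu = total_cpu
        self.slices = {}  # application name -> reserved CPUs

    def carve_slice(self, app: str, cpus: int) -> bool:
        """Reserve CPUs for one application; refuse to oversubscribe."""
        free = self.total_cpu - sum(self.slices.values())
        if cpus > free:
            return False
        self.slices[app] = cpus
        return True

edge = EdgeCloud(total_cpu=64)
assert edge.carve_slice("plc-controller", 8)       # one slice per app
assert edge.carve_slice("vision-inspection", 32)
assert not edge.carve_slice("analytics", 40)        # would exceed capacity
```

The refusal to oversubscribe is the point: reservation, not best-effort sharing, is what lets the network offer each application hard isolation from its neighbors.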
There’s a name for this technique, and the network engineers have already been using it for quite some time. But its name hasn’t been seen in public very much — indeed, it’s even hiding in this photograph. It may be a facilitator for an internet of things, or it could turn out to be its exact opposite. It is deep slicing.
Mueller admits he doesn’t have a complete idea of how deep slicing would work, because it’s not ready to be formally proposed at this stage. For example, could two slices gain secured API access to a shared database? Could active slices be live-migrated to different servers with minimal or no downtime? Can public cloud infrastructure be relied upon for backup? These seem on the surface like problems that a Kubernetes cluster would have already resolved. But Bosch, as a service provider in its own right, has hard requirements for quality of service (QoS) that are stipulated in its SLAs with its own customers. Typical, garden-variety containerized scalability may not be able to adhere to those expectations — or perhaps it could. No one really knows yet.
In Bosch’s optimal deep slicing scheme, each application or use case would have its own slice. Dr. Mueller has seen all the promotional literature and videos about the “industrial internet of things” (including from Bosch’s rival, GE). In that model, a cloud platform may provide resources that are tailored for the needs of, say, manufacturing. But in the real world, he said, such a model is not realistic. Industrial applications have their own peculiarities, and running them together on a single platform introduces an element of non-determinism that, for engineers, isn’t practical.
Maybe deep slicing could be accomplished through what Mueller called “sub-slicing” — breaking big chunks into smaller pieces. Again, he’s not sure. In any event, the sizes of slices must be variable, he said, and open to stretching or shrinking, perhaps several times per day.
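Mueller’s two requirements, that big slices break into smaller pieces and that sizes stretch or shrink perhaps several times a day, can be sketched together. Since “sub-slicing” has no formal definition yet, this is purely a speculative model with invented names:

```python
# Speculative sketch of "sub-slicing": a tenant's slice subdivides into
# per-workload sub-slices, and the parent slice can be resized, but
# never below what its sub-slices already hold. Names are assumptions.

class Slice:
    def __init__(self, name: str, cpus: int):
        self.name, self.cpus = name, cpus
        self.subslices = {}  # workload name -> committed CPUs

    def carve_sub(self, name: str, cpus: int) -> bool:
        free = self.cpus - sum(self.subslices.values())
        if cpus > free:
            return False
        self.subslices[name] = cpus
        return True

    def resize(self, new_cpus: int) -> bool:
        """Stretch or shrink, but never below committed sub-slices."""
        if new_cpus < sum(self.subslices.values()):
            return False
        self.cpus = new_cpus
        return True

bosch = Slice("bosch-factory", cpus=32)
assert bosch.carve_sub("robot-cell-1", 8)
assert bosch.resize(16)        # shrink during off-peak hours
assert not bosch.resize(4)     # can't shrink below committed capacity
```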
The 5G value proposition is based on a theory of the benefits of consolidating one class of resource (the intelligence) and highly distributing another (the transmission). The Bosch deep slicing model would take that ball and run it far past where telcos thought the end zone should be. Keep consolidating the intelligence, the model suggests, but distribute the edge even more, shrinking the distance between the industrial customer and the edge to about as far as from the residential customer’s back yard to the nearest 5G pole.
It all sounds wonderful. But if we take AT&T’s word for it, it might not be possible.
A “physical separation,” enunciated AT&T vice president for Ecosystem and Innovation Igal Elbaz, must exist between the service provider’s platform and the customer’s edge platform. This means different servers, not different slices. The reasons, he explained, are already set in stone and cannot be changed: The security of the network on which the virtual RAN and Evolved Packet Core are staged, absolutely depends upon a physical state of single-tenancy.
“How we build the orchestrator, and the layers inside the orchestrator, to allow us to run services and network functions separately, is built into the architecture,” Elbaz told ZDNet Scale. “But imagine at the edge a physical separation between the network function and the services function.
“Let’s separate the conversation between the separation of the user and customer services and the network function. That is separated!” he exclaimed, in response to our posing questions raised by engineers from outside AT&T regarding how what they call the “5G convergence” would take place. The developmental ecosystems for the platforms on both these separate layers, he said, have already matured in their own rights, and cannot be artificially merged with one another without re-architecting both.
So, there appear to be two schools of thought among telco engineers contributing to 5G. One group asserts that physical separation of customer-facing functions from network-facing functions is necessary, because these two classes of functions need to run on different hardware. The second group asserts that convergence is not only possible, and not only necessary, but already happening. 5G network slicing, they say, would solve the problem of process isolation for VNFs that telcos are only just now encountering.
“First of all, I’m much more with the first argument,” said AT&T’s Elbaz. “I think the second argument is mixing all kinds of terms together. Network slicing does not imply automatically that everything runs on the same hardware. Network slicing allows you to run services for specific customers, and get a slice of the network. How the slice is deployed and orchestrated, that’s a different conversation. It still needs to maintain some separation.”
“I think you have to think about this: What do they mean by ‘separate’?” said Nick Cadwgan, Nokia’s director of IP mobile networking.
“You need to heavily instrument and automate the network,” Cadwgan continued. “You have to do that if we’re going to move forward. We have to be able to take what’s going on in the network and, via policy, work out what the implications are. The challenge you’ve got is, if you want to keep services and applications separate, we can do that. We already virtualize our networks today; we’ve been doing it for years. We can keep the things separate. I think the point that you’ve got is, how do you talk about operations, and how do you link it to services and orchestration? That is where we in the industry are now starting to look at and get our heads around, as to what that actually means.”
What it actually means, if it eventually means anything at all, could be the reason these two points on our map are separate. One represents mobile devices, or what network engineers call user equipment (UE). The other represents a concept in the embryonic stage, one that reveals itself in the periphery of networking conversations. I’m calling it “virtual UE,” or “vUE.” It’s about the feasibility of making the “smarts” of every mobile device effectively non-mobile. Imagine a virtual desktop in your hand, and you’ll see where I’m going with this.
Network functions are evolving — even against some engineers’ wishes — toward a deployment model that looks more and more like containerized enterprise data centers, and the realm of Docker and Kubernetes. There, virtual machines are wrapped around individual applications, as though they were devised to run only those applications rather than pretend they were fully-fledged operating systems into which these applications happened to be installed. Cloud-based computing platforms, including the ones that 5G stakeholders would see supporting telco data centers, are adapting the ways they implement network functions virtualization (NFV) to apply to a broader variety of customers than just telcos. One way or the other, it’s the intention of these engineers (especially for OpenStack, an open source hybrid cloud platform) to enable the specific type of convergence that AT&T casts as impossible, but which Nokia portrays as an unavoidable, present-day fact.
“I think people are thinking about 5G, and everybody’s going, ‘Oh my god, we can do this end-to-end network slicing! How is it all going to work?’ Well, let’s take a step back here,” said Nokia’s Cadwgan in an interview with ZDNet Scale. “We can automate and instrument the network. We can do that. How is that information then exposed to other areas of that particular customer? That, I think, is the question that really needs to be thought through, because the networks themselves do need to be secure. Historically, we have had this separation. But if we are truly to get the economics that we want out of all of this, we have to create a tighter binding. We have to do it. Otherwise, it doesn’t matter what technology we do — whether it be 5G, the next generation of fixed access — we have to actually create this synergy because we cannot now run our networks just running and delivering services and service characteristics for one service and application. We can’t do that. We have to move forward on this.”
This would appear to be an impasse in the architectural development of 5G: not only a dispute about where the technology should go, but literally about where it already is. At opposite ends of this crossroads are the company that first brought telephony to the world, and the company that now owns the laboratory of the company that first brought telephony to the world.
If you think you’ve seen the final twist in the path forward for 5G, and that Bosch’s bombshell suggestion changes everything, then you haven’t yet imagined what the owner of Bell Labs is considering for the architecture of the 5G data center. It would pull the whole discussion into another new, and radically unexplored, dimension. And there is where we will pick up our journey for the next stage of our 5G adventure of Scale. Until then, hold strong.