A year ago I wrote about the rise of Edge Computing in this space. Since then a new record has been set for the most distant deployment of Edge Computing. The Mars Perseverance Rover and its sidekick drone helicopter Ingenuity operate semi-autonomously, without real-time direct control from Earth. With signal latency measured in minutes rather than milliseconds, these interplanetary exploration robots make their own real-time decisions about flight paths, landing sites, objects or locations worth studying, and more.
Mars is not the only off-world Edge Computing deployment. NASA is running a study with Hewlett Packard Enterprise on a new computer that will run artificial intelligence (AI) routines aboard the International Space Station. The goal is to make real-time insights a reality for the crew.
As a refresher, Edge Computing is a distributed computing architecture that locates computation as close as possible to remote data sources. If you make high-pressure equipment that operates in a dangerous setting, responding to leaks or pressure drops needs to happen immediately.
Immediacy is the goal, which in practice we call real-time data use. Using the data where it is created, without a round-trip to the HPC mothership, keeps behavior consistent across a dispersed ecosystem. Such data is often time-sensitive and volatile; processed immediately, it is far more valuable. If a pedestrian is about to step in front of a self-driving car, every millisecond counts.
Here’s one way the split of computation is shaping up for manufacturing and operations: local or Cloud resources for training, optimization, and analysis; Edge resources for real-time operations. Autonomous vehicles generate terabytes of data daily. Processing that much data is a challenge no matter where it is accomplished; splitting it between Edge and Cloud uses the best of both environments. One researcher calls this split “the virtuous cycle for autonomous applications.”
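The split described above can be sketched in a few lines of Python. This is a minimal, hypothetical model (the pressure limit, window size, and class names are illustrative, not from any real product): the time-critical decision happens locally on the edge node, while only compact aggregates are queued for the slow uplink to the Cloud for later training and analysis.

```python
import statistics
from collections import deque

# Illustrative values only -- real limits come from the equipment spec.
PRESSURE_LIMIT = 90.0
WINDOW = 5

class EdgeNode:
    """Handles time-critical readings locally; only compact summaries
    travel over the slow link to the Cloud for training and analysis."""

    def __init__(self):
        self.buffer = deque(maxlen=WINDOW)
        self.uplink = []          # stand-in for the Cloud connection
        self.valve_open = True

    def ingest(self, reading: float) -> None:
        # Real-time path: react immediately, no round-trip to the Cloud.
        if reading > PRESSURE_LIMIT:
            self.valve_open = False
        self.buffer.append(reading)
        # Batch path: when a window fills, ship one aggregate upstream.
        if len(self.buffer) == WINDOW:
            self.uplink.append(statistics.mean(self.buffer))
            self.buffer.clear()

node = EdgeNode()
for r in [70.0, 72.0, 71.0, 95.0, 74.0]:
    node.ingest(r)
print(node.valve_open)   # valve shut locally after the 95.0 spike
print(node.uplink)       # one 5-sample mean queued for the Cloud
```

The design choice mirrors the article's point: the spike triggers a local response in the same call that observed it, while the Cloud receives only a fraction of the raw data volume.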
Behind this bifurcated form of computation is a phenomenon data scientists are starting to call “data gravity.”
Products with heavy computational requirements — like autonomous vehicles — act as a “gravitational force” drawing services, applications, and data in a fashion analogous to a planet drawing everything toward its center.
Matt Trifiro is CMO at Vapor IO, a company working on what it calls the Kinetic Edge, a wide-scale network for solving Edge Computing issues. It takes a month to send a petabyte over today’s Internet, Trifiro notes. With data gravity, the application that processes the data is sent to the data source — an inversion of how things have been operating. “There is no one Edge,” Trifiro says. “You must be able to access the Edge everywhere as one common set of infrastructure, [one in which] companies bring their technology to the common infrastructure.”
As such new computational schemes roll out in manufacturing there are going to be bumps in the road. Operational Technology teams in the factory are device-oriented; the Information Technology people in the office are the priesthood of central processing. To succeed, OT and IT teams must come together and establish a single digital workflow. To borrow a key phrase from technology adoption guru Geoffrey Moore, they have to cross the chasm to find success.
Containers and Kubernetes
Two relatively young technologies are becoming key to this new form of computation: Containers and Kubernetes. Containers resemble small Virtual Machines defined for a narrow set of requirements, but they are far lighter weight because they share the host operating system’s kernel. A Container has its own file system and uses a share of the local CPU. It is decoupled from its underlying infrastructure, which makes it portable across Cloud and local deployments.
Kubernetes orchestrates Containers, adding declarative configuration and automation. Used together, Containers become agile hosts for deployment and use in remote Edge Computing environments, while the Kubernetes side takes care of deployment issues, service discovery, load balancing, and storage management.
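As a sketch of what “declarative configuration” means in practice, a minimal Kubernetes Deployment manifest declares the desired state — here, three running copies of an analytics Container — and Kubernetes works to keep reality matching it. The names and image below are purely illustrative, not from any real deployment:

```yaml
# Hypothetical manifest -- names, labels, and image are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: edge-analytics
spec:
  replicas: 3                    # declare three copies; Kubernetes maintains them
  selector:
    matchLabels:
      app: edge-analytics
  template:
    metadata:
      labels:
        app: edge-analytics
    spec:
      containers:
      - name: analytics
        image: example.com/edge-analytics:1.0   # illustrative image name
        resources:
          limits:
            cpu: "500m"          # a share of the local CPU, as noted above
            memory: 256Mi
```

If an edge node fails, the declared replica count is what lets Kubernetes reschedule the Container elsewhere without operator intervention.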
Don’t think of Containers and Kubernetes as a complete Platform as a Service system. It is more like a stack microcosm for the Edge, offering the pieces to build and deploy independent operations.
New opportunities on the Edge
As more Edge networks are deployed, new opportunities will follow. Michael DeNeffe is director of production development at AMD, and is keenly interested in what Edge Computing can do for manufacturing. “Virtual Reality and Augmented Reality in engineering workflows are awesome, but unless you are directly connected to the Cloud at high bandwidth it gets dicey.”
DeNeffe says the solution is using 5G networks to connect to the Edge. “Companies can now hire engineers in time zones all over the world. With Edge computing they can share data sets and take advantage of local capabilities. There is no need for centralized work.”
“When virtual reality first came out, we realized you needed a direct connection to a computer or an extremely fast network. Use cases broke down.” Now they are picking up, DeNeffe says, thanks to fast networking. Using the example of petroleum engineers onsite at a remote pipeline station, DeNeffe says, “If all they have is 5G, they can still put on a [VR] headset to do their reviews.”
Editor: 5G – the fifth generation cellular network standard – promises extremely fast communication, ultra-reliable low-latency links for real-time communication and interaction, and support for huge numbers of connected devices in small areas. Beyond just providing faster internet access to more users, 5G will enable new technologies and new ways of doing business across industries, from enhanced smart manufacturing to safe autonomous driving and remote surgery.
Check out our workshop and learn how robust electromagnetic simulation and industry workflows can be used to design and optimize 5G networks for smart, reliable and safe manufacturing environments.