Controlling the operational expenses associated with
managing the lifecycles of software services requires model-driven automation platforms. These platforms base their automation functionality on service instance models that provide centralized representations of all services under
management.
You may wonder: what is so novel about this? Don't we already
have a number of tools that do exactly this? For example, isn't this what
Kubernetes does? Kubernetes keeps track of all deployed pods, the containers in
those pods, and the scaling of those containers. Is there a need for anything
more?
The answer, as is often the case, is that it depends. While
Kubernetes is great for automating lifecycle management of container-based
deployments, not all services fit nicely into the Kubernetes paradigm.
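To make the comparison concrete, here is a minimal sketch, written as a plain Python dictionary, of the kind of instance data a Kubernetes Deployment records: the desired number of pod replicas and the containers that run in each pod. The service name and container image below are hypothetical placeholders, not taken from any real deployment.

```python
# A minimal sketch (as a Python dict) of the instance data a Kubernetes
# Deployment captures: a desired replica count plus the containers that run
# in each pod. The "web-frontend" name and the image are hypothetical.
frontend_deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "web-frontend"},
    "spec": {
        "replicas": 3,  # desired scale; Kubernetes reconciles toward this
        "selector": {"matchLabels": {"app": "web-frontend"}},
        "template": {  # pod template: what each replica looks like
            "metadata": {"labels": {"app": "web-frontend"}},
            "spec": {
                "containers": [
                    {"name": "frontend", "image": "example.org/frontend:1.0"}
                ]
            },
        },
    },
}
```

Everything in this model is expressed in terms of pods and containers, which works beautifully as long as pods and containers are all you need.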
Consider, for example, a fairly common Edge Computing use
case. Edge Computing typically involves rather complex application topologies
where some application components are installed on edge devices, other
components are hosted in the cloud, and networks need to be provisioned to
interconnect these components. The cloud components might be packaged as
virtual machine images that need to be deployed on OpenStack or AWS clouds, or
they might be constructed as cloud-native applications that are deployed using container
systems such as Docker. Network connectivity might be provided by establishing
secure tunnels over the public internet (e.g. using SD-WAN technology) or by
special-purpose networks provided by network operators. As a result, Edge
Computing invariably deals with extremely heterogeneous infrastructure
environments on top of which applications need to be deployed.
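As a rough illustration of how heterogeneous such a topology can be, the sketch below models one hypothetical Edge Computing service as plain Python data: one component installed natively on an edge device, one packaged as a VM image for an OpenStack cloud, one running as a container, and the network links that tie them together. All names and attribute values are made up for illustration.

```python
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    artifact: str   # a native package, VM image, or container image
    host: str       # where the component runs

@dataclass
class Link:
    name: str
    kind: str       # e.g. "sdwan-tunnel" or "operator-network"
    endpoints: tuple

# One hypothetical edge service spanning three infrastructure domains.
edge_service = {
    "components": [
        Component("video-analytics", artifact="analytics.deb", host="edge-device-42"),
        Component("aggregation-db", artifact="db-vm-image.qcow2", host="openstack-region-1"),
        Component("dashboard", artifact="example.org/dashboard:2.1", host="docker-on-aws"),
    ],
    "links": [
        Link("edge-uplink", kind="sdwan-tunnel",
             endpoints=("edge-device-42", "openstack-region-1")),
        Link("cloud-interconnect", kind="operator-network",
             endpoints=("openstack-region-1", "docker-on-aws")),
    ],
}
```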
In addition, Edge Computing application topologies tend to
be much more dynamic and unpredictable than pure cloud-based applications. Edge
devices can vary widely in how much compute power, memory, or storage is
provided, which means that application components may need to adapt to the devices on which they are deployed. Devices may also be mobile, in which case
application workloads may need to adapt to varying network conditions, and
workloads may need to be moved dynamically from the cloud to the edge to
satisfy latency or interactivity requirements.
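To give a flavor of the kind of decision an automation platform has to make here, the following sketch chooses between an edge device and a cloud region for a workload based on the workload's latency budget and the device's available memory. The function, thresholds, and input data are entirely hypothetical; a real platform would weigh many more factors.

```python
def choose_placement(workload, edge_device, cloud_region):
    """Pick a host for a workload: prefer the edge when the latency budget
    is tighter than what the cloud can deliver and the device has capacity;
    otherwise fall back to the cloud. Purely illustrative."""
    needs_low_latency = workload["max_latency_ms"] < cloud_region["latency_ms"]
    fits_on_device = workload["memory_mb"] <= edge_device["free_memory_mb"]
    if needs_low_latency and fits_on_device:
        return edge_device["name"]
    return cloud_region["name"]

# Hypothetical inputs.
print(choose_placement(
    {"name": "video-analytics", "max_latency_ms": 20, "memory_mb": 512},
    {"name": "edge-device-42", "free_memory_mb": 1024},
    {"name": "openstack-region-1", "latency_ms": 80},
))  # -> "edge-device-42"
```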
It should be clear that such scenarios cannot easily be
handled by Kubernetes alone, since Pods and Containers offer no support for
creating network tunnels or for deploying EC2 instances on AWS. Containers may
also not be the best technology for performance-sensitive data plane
applications running on low-end edge devices.
What is needed instead is an automation platform that can
manage services across multiple application domains. Such an automation
platform must not be tied to specific infrastructure technologies or to domain-specific deployment paradigms.
What might such a domain-independent automation platform look
like? To answer this question, let’s think about what makes an automation
platform domain specific. The answer, as might be clear from our previous
discussion about model-driven automation platforms, is the platform’s meta-model.
Key to every model-driven automation platform is a meta-model that defines the
abstractions that can be used to create and manage instance models for the
services managed by the platform. In the case of Kubernetes, the meta-model
includes Pods and Containers as first-class abstractions. This makes the
Kubernetes meta-model hard to use for automating services that do not use
Containers and are not organized in Pods.
The key to building a domain-independent automation
platform, then, is to define a meta-model that is not tied to specific
infrastructure domains or to specific deployment paradigms. At the same time,
this meta-model must be sufficiently expressive to describe service lifecycle
management functionality in a general-purpose fashion, which would allow it to cover
a broad variety of application domains. With a proper meta-model, we can build
domain-independent automation platforms that can be used for end-to-end
orchestration of the Edge Computing use case described earlier. I will
investigate later what a feature set might look like for such a meta-model.
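In the meantime, here is a very rough sketch of the direction such a meta-model could take: services as graphs of generically typed nodes connected by typed relationships, where node types declare generic lifecycle operations rather than domain-specific ones. The class names, property names, and operations below are my own illustration and should not be read as a proposal for the actual feature set.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

# Purely illustrative: a domain-independent meta-model in which a service is
# a graph of typed nodes and typed relationships, and each node type exposes
# generic lifecycle operations whose implementations are supplied per domain.

@dataclass
class NodeType:
    name: str                                        # e.g. "Compute", "Network", "SoftwareComponent"
    properties: Dict[str, type] = field(default_factory=dict)
    operations: Dict[str, Callable] = field(default_factory=dict)  # create/configure/start/stop/delete

@dataclass
class Node:
    name: str
    type: NodeType
    properties: Dict[str, object] = field(default_factory=dict)

@dataclass
class Relationship:
    kind: str          # e.g. "hosted_on" or "connects_to"
    source: Node
    target: Node

@dataclass
class ServiceModel:
    nodes: List[Node] = field(default_factory=list)
    relationships: List[Relationship] = field(default_factory=list)

# Hypothetical usage: the same abstractions can describe an edge device, a
# cloud VM, or the software hosted on either of them.
compute = NodeType("Compute", properties={"cpus": int, "memory_mb": int})
software = NodeType("SoftwareComponent", properties={"artifact": str})

edge = Node("edge-device-42", compute, {"cpus": 2, "memory_mb": 2048})
analytics = Node("video-analytics", software, {"artifact": "analytics.deb"})

model = ServiceModel(
    nodes=[edge, analytics],
    relationships=[Relationship("hosted_on", source=analytics, target=edge)],
)
```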