Early on, ETSI NFV published a Reference Architecture for NFV that focused on the life-cycle management and orchestration aspects of network services. The ETSI NFV reference architecture introduced a number of abstractions that can be organized into roughly the following three layers:
- The bottom layer holds virtual infrastructure resources managed by Virtual Infrastructure Managers (VIMs).
- A middle layer holds Virtual Network Functions (VNFs) and Virtual Links or Virtual Networks (VLs) whose life-cycle is managed by VNF Managers. VNFs and VLs are hosted on virtual infrastructure resources.
- In the top layer, VNFs and VLs are combined into Network Services using orchestration functionality provided by the Management and Orchestration (MANO) function of the reference architecture.
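To make the layering concrete, here is a minimal Python sketch of the three layers as plain data types. All class and field names are illustrative, not taken from any ETSI specification; the point is that each layer is a distinct kind of thing, built strictly bottom-up:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class VirtualResource:
    """Bottom layer: a resource managed by a Virtual Infrastructure Manager (VIM)."""
    name: str

@dataclass
class VNF:
    """Middle layer: a Virtual Network Function, life-cycle managed by a VNF Manager."""
    name: str
    hosted_on: List[VirtualResource] = field(default_factory=list)

@dataclass
class VirtualLink:
    """Middle layer: connects VNFs; also hosted on virtual infrastructure."""
    name: str
    endpoints: List[VNF] = field(default_factory=list)

@dataclass
class NetworkService:
    """Top layer: VNFs and VLs combined by MANO orchestration."""
    name: str
    vnfs: List[VNF] = field(default_factory=list)
    links: List[VirtualLink] = field(default_factory=list)

# A service is assembled strictly upward through the three layers:
compute = VirtualResource("compute-node")
fw = VNF("firewall", hosted_on=[compute])
lb = VNF("load-balancer", hosted_on=[compute])
vl = VirtualLink("fw-to-lb", endpoints=[fw, lb])
svc = NetworkService("secure-web", vnfs=[fw, lb], links=[vl])
```

Note how the type system itself enforces the layering: a `NetworkService` can contain VNFs and VLs, but never the other way around. That rigidity is exactly what the rest of this post questions.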
More recently, a number of network operators and industry bodies have published their own reference architectures for NFV and SDN-based network services. These architectures add functionality in areas that were out of scope in the initial ETSI NFV model: they focus on more complete service life-cycles, on the operationalization of services, on support for physical as well as virtual network functions, and on integration with application-level services, among other things. The architectures that seem to have received the most press are:
- AT&T’s Enhanced Control, Orchestration, Management & Policy (ECOMP) architecture.
- Verizon's SDN-NFV Reference Architecture.
- MEF's Lifecycle Service Orchestration (LSO) architecture.
While each of these architectures addresses the specific needs for which it was designed, it is not clear whether any of them is appropriate as an overall reference architecture for the network services industry as a whole. Let’s first take a look at the ETSI NFV reference architecture itself. The three-layer model proposed by ETSI NFV is appealingly simple, but if you analyze it more deeply you’ll find a number of challenges with the layering structure. Simple layering works well in a legacy world where all resources are physical and where it’s typically very clear whether a software component is intended to be used as a building block or as an end-user service constructed from such building blocks. In a physical world, it makes sense to have separate layers for infrastructure, for building blocks, and for services.
Unfortunately, this simple model becomes less useful in a virtual world where all entities are software abstractions that can be created and destroyed at will. In this world, virtual infrastructure resources are orchestrated in very much the same way as the network functions that sit on top of them and the network services that are constructed from these functions. This creates similarities between the different layers that make the boundaries between them increasingly fuzzy. As a result, it is often not immediately obvious in which layer a certain abstraction belongs. In fact, depending on your viewpoint, it might make perfect sense to categorize the same abstraction as a service, as a component, or as a virtual infrastructure resource.
Nigel Davis (of Ciena and ONF) did a nice write-up illustrating this problem using the Virtual Link abstraction as an example (if you’re a member of ETSI NFV, you can get the full document here). In his analysis, Nigel observes:
"The Virtual Link is essentially a Network Service provided by another organization (which may be either within the same overall business administration or separate) and/or layer network. It is important to recognize this recursion whilst formulating and improving the Network Service Descriptor and Virtual Link Descriptor models".
Nigel makes the point that a Virtual Link could be looked at either as a component or as an entire network service in its own right. In fact, one could go a step further and argue that in some environments a Virtual Link should be considered part of the virtual infrastructure. If I use NFV to orchestrate (virtual) networks (e.g. using software routers), shouldn’t I then be able to use these networks as infrastructure on top of which other services can be orchestrated? If so, are my Virtual Links/Virtual Networks not part of my infrastructure layer?
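The ambiguity can be made concrete with a tiny sketch (all names here are hypothetical, invented for illustration): the very same object that one orchestrator produces as a finished network service is handed to another orchestrator as a plain infrastructure resource.

```python
class VirtualNetwork:
    """A network orchestrated from, say, software routers."""
    def __init__(self, name: str):
        self.name = name

def orchestrate_network(name: str) -> VirtualNetwork:
    # Viewpoint 1: the virtual network is the end product -- a network service
    # delivered to whoever requested it.
    return VirtualNetwork(name)

def orchestrate_service_on(infrastructure: VirtualNetwork, service_name: str) -> dict:
    # Viewpoint 2: the same virtual network is merely infrastructure
    # for the next service orchestrated on top of it.
    return {"service": service_name, "runs_on": infrastructure.name}

underlay = orchestrate_network("overlay-net-1")            # a service to its provider...
vpn = orchestrate_service_on(underlay, "enterprise-vpn")   # ...infrastructure to its consumer
```

Nothing about `underlay` changes between the two calls; only the viewpoint does. A fixed layering scheme forces us to pick one label and stick with it.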
The layering challenge is further demonstrated by taking a closer look at the other reference architectures mentioned above. Each of these reference architectures seems to struggle with how best to reconcile its own architecture with the ETSI NFV reference architecture. Some of them address the issue by pushing the ETSI NFV functionality down into lower layers. This is the approach taken by the Verizon model, where a new End-to-End Orchestrator (EEO) is introduced on top of the ETSI NFV MANO functionality and NFV MANO’s role is reduced to managing the life-cycle of just the VNFs, not of end-to-end services. The Verizon architecture owns up to this reduced role of NFV MANO by stating that “EEO and NFV MANO are shown as overlapping. This is because the ETSI NFV definition of MANO includes a Network Service Orchestration (NSO) function, which is responsible for a sub-set of functions that are required for end-to-end orchestration as performed by EEO.”
The MEF LSO model goes even further. It moves NFV MANO functionality all the way down into the infrastructure layer while LSO takes responsibility for all Service Orchestration functionality. In the MEF LSO model the ETSI NFV MANO functionality is considered an example of a technology that can provide Infrastructure Control and Management.
The AT&T architecture takes a different approach. AT&T ECOMP doesn’t try to shoe-horn the ETSI NFV architecture into its own architecture but instead presents ECOMP as an extension of the ETSI NFV architecture. The AT&T whitepaper states that “ECOMP expands the scope of ETSI MANO coverage by including Controller and Policy components.” The ECOMP model largely adheres to the ETSI NFV layers for Infrastructure, VNFs, and Services but it also hints at the similarities between these layers by introducing Controller and Orchestrator components in each of these layers.
Just to be clear: I’m not suggesting that any of these reference architectures are invalid or flawed. I’m just using them to illustrate the challenges with trying to categorize abstractions into simple layers. What I’m hoping to illustrate is that in a virtual world:
- Reference architectures that prescribe a fixed number of layers may not be a great fit.
- Reference architectures that force features and functionality into specific layers may not be a great fit.
- As a corollary, reference architectures that impose layer-specific interfaces or APIs may not be a great fit.
What this shows, I hope, is that in a cloud-focused virtual world, simple layering may no longer be the most appropriate organizing construct for software architectures.
Fortunately, Nigel’s write-up hints at an alternative. His observation of a natural “recursion” in the Virtual Link concept suggests that a better way to organize systems might be to think about structures in terms of recursive decomposition rather than in terms of linear layering. Using recursion, building systems is no longer about stacking higher-level layers on top of lower-level layers. Instead, it’s about recursive decomposition of higher-level abstractions into lower-level abstractions until ultimately these abstractions can be realized on top of resources that already exist. In order for arbitrary recursion to work, all levels in the decomposition hierarchy must expose the same orchestration and management interfaces, and it should be possible to place features and functionality at different levels in the hierarchy depending on the specific use cases or the specific technology requirements.
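One way to picture this, as a sketch under assumed names rather than any standardized API: every abstraction, whether we would call it a service, a function, or a resource, implements the same interface, and orchestration simply means decomposing recursively until each node maps onto a resource that already exists.

```python
from abc import ABC, abstractmethod
from typing import List

class Orchestratable(ABC):
    """The single interface exposed at every level of the hierarchy."""
    @abstractmethod
    def decompose(self) -> List["Orchestratable"]:
        """Return lower-level abstractions, or [] if directly realizable."""

class ExistingResource(Orchestratable):
    """Base case: something that already exists and needs no further work."""
    def __init__(self, name: str):
        self.name = name
    def decompose(self) -> List[Orchestratable]:
        return []

class Composite(Orchestratable):
    """Recursive case: an abstraction defined in terms of other abstractions."""
    def __init__(self, name: str, parts: List[Orchestratable]):
        self.name, self.parts = name, parts
    def decompose(self) -> List[Orchestratable]:
        return self.parts

def orchestrate(node: Orchestratable, realized: List[Orchestratable]) -> None:
    parts = node.decompose()
    if not parts:                  # base case: an existing resource
        realized.append(node)
        return
    for part in parts:             # recursive case: same call, one level down
        orchestrate(part, realized)

# A "service", a "network", and a "resource" are all just nodes in one hierarchy:
net = Composite("virtual-net", [ExistingResource("router-a"), ExistingResource("router-b")])
svc = Composite("vpn-service", [net, ExistingResource("firewall")])

realized: List[Orchestratable] = []
orchestrate(svc, realized)
```

There is no fixed number of levels here: `virtual-net` could itself be the top of someone else’s hierarchy, and `vpn-service` could appear as a part inside a larger composite, which is exactly the recursion Nigel describes.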
By adopting recursive decomposition as the new organizing construct for software architectures, we’re no longer handcuffing virtual abstractions with the constraints of a legacy physical world. I hope to explore the implications of this in future posts.