This blog is the first in a series that describes the evolution of mobile transport for 5G services and beyond.
Unlike 4G, 5G encompasses applications that go well beyond faster downloads and fixed wireless access. Many new service offerings are available or being planned thanks to the low latency and the significant increase in speed and capacity that 5G brings. These applications fall into three areas with somewhat disparate performance criteria, as shown below and in Figure 1.
- Enhanced mobile broadband (eMBB), delivered through direct and wholesale services.
- Ultra-reliable low-latency connectivity (uRLLC) for healthcare, finance, and smart-city applications.
- Massive machine-type communications (mMTC) for enterprise services.
Figure 1 – The three categories of 5G applications
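To make the disparity concrete, the headline ITU IMT-2020 targets for each category can be sketched as data. The figures below are illustrative industry targets, not guarantees of any particular deployment, and the helper function is a hypothetical construct for this post:

```python
# Headline ITU IMT-2020 performance targets per 5G service category.
# Illustrative targets only, not figures promised by any network.
SERVICE_TARGETS = {
    "eMBB":  {"peak_downlink_gbps": 20, "user_plane_latency_ms": 4},
    "uRLLC": {"user_plane_latency_ms": 1, "reliability": 0.99999},
    "mMTC":  {"connection_density_per_km2": 1_000_000},
}

def dominant_requirement(category: str) -> str:
    """Return the metric that dominates engineering for a category."""
    dominant = {
        "eMBB": "peak_downlink_gbps",
        "uRLLC": "user_plane_latency_ms",
        "mMTC": "connection_density_per_km2",
    }
    return dominant[category]

print(dominant_requirement("uRLLC"))  # user_plane_latency_ms
```

A network engineered only for eMBB throughput would miss the uRLLC latency bound and the mMTC density target, which is why a single rigid network struggles to serve all three.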
Traditionally, these disparate service offerings would be delivered through separate parallel networks. While this may be a suitable approach in dense, high-demand urban areas, in general it is cost-prohibitive; it doesn’t scale well, requires management of multiple separate networks, and leads to inefficient, expensive over-engineering.
RAN and transport network virtualization
To overcome the challenges of parallel networks, the technology and market are evolving to virtualize the purpose-built elements of the network. The goal is a fully programmable network infrastructure. Once this infrastructure is established, virtualizing the network offers CSPs and CIPs an agile and flexible way to deploy network elements and topologies on-demand, without the need for physical configuration and truck rolls. Network functions and services are abstracted and delivered where and when they are needed. Industry organizations such as the O-RAN Alliance are evolving RAN architectures to a more cost-effective, agile model through open interfaces, open “whitebox” hardware, and an open-source approach. 5G radio access network (RAN) elements—specifically the distributed unit (DU) and central unit (CU)—are evolving to the fully virtualized vDU and vCU running on whitebox hardware.
Similarly, these same cost-effective and agile disaggregated models are driving the evolution of the transport network. Service and infrastructure providers need the flexibility to configure and reconfigure the same underlying network infrastructure and applications, based on traffic content and context, with different performance requirements for each. The virtualized approach to RAN and transport, along with machine learning and artificial intelligence, will enable dynamic operation and efficient scaling, resulting in a more flexible and automated network.
Figure 2 – The virtualized network
Whitebox transport: economical, efficient scaling, and faster innovation
The enabling technology starts with open hardware. Whitebox networking servers form the basis for open interfaces and open-source software, with the software decoupled from the hardware. This disaggregation of software and hardware offers the best available performance at reduced cost compared with purpose-built equipment. Whitebox hardware can be applied across a broader market segment, much like the personal computer (PC) market, achieving lower cost through high global volumes within a standard architecture.
From a performance standpoint, a new purpose-built network element ships with hardware that is already dated at general availability (GA), because developing a new RAN or transport platform takes, on average, 18 to 24 months. Since hardware performance improves, on average, every six months, purpose-built hardware at GA is three or four revisions behind the latest commercially available technology. Additionally, purpose-built platforms are expected to remain in commercial operation for five to ten years, while performance demand continues to increase over time.
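The lag described above follows from simple arithmetic on the averages quoted in the text; a minimal sketch:

```python
def revisions_behind(dev_months: int, hw_cycle_months: int = 6) -> int:
    """Hardware revisions released while the platform was in development."""
    return dev_months // hw_cycle_months

# Using the 18- to 24-month development window quoted above:
print(revisions_behind(18))  # 3
print(revisions_behind(24))  # 4
```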
Finally, a whitebox platform is modular, whereas a purpose-built platform is fully integrated (see Figure 3). When a significant hardware or software performance upgrade is needed, the purpose-built platform requires wholesale replacement. Taken together, these factors make purpose-built hardware a performance limitation in the network that adversely affects the service delivery business.
Figure 3 – Purpose-built vs. disaggregated whitebox approaches
The disaggregated open architecture whitebox is much more flexible than purpose-built hardware, as compared in the figure above. At GA, a whitebox platform will utilize the most up-to-date and optimal performance-to-cost ratio hardware for the application. After several years of operation, CSPs/CIPs have the flexibility to upgrade their networks in areas where high-demand performance is needed, using the latest hardware that meets specifications. Meanwhile, the performance-revised software remains fully backward compatible with existing platforms and management systems.
DCSG: Disaggregated Cell Site Gateway
One example of the whitebox approach is the DCSG (Disaggregated Cell Site Gateway), a router equipped with Ethernet, optical, and network synchronization interfaces that aggregates cell site traffic for transport over the front-, mid-, and backhaul network. The Telecom Infra Project (TIP) industry association defined the standard requirements, and many vendors offer products that have passed testing and validation.
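As a rough mental model of what such a gateway aggregates, consider the simplified sketch below. The class names, fields, and port kinds are hypothetical illustrations for this post, not taken from the TIP DCSG specification:

```python
from dataclasses import dataclass, field

# Hypothetical, simplified model of a cell site gateway's port inventory.
@dataclass
class Port:
    name: str
    kind: str         # "ethernet", "optical", or "sync" (e.g. timing)
    speed_gbps: float

@dataclass
class CellSiteGateway:
    site_id: str
    ports: list = field(default_factory=list)

    def aggregate_capacity_gbps(self, kind: str) -> float:
        """Total capacity of ports of one kind toward the xhaul network."""
        return sum(p.speed_gbps for p in self.ports if p.kind == kind)

gw = CellSiteGateway("site-001", [
    Port("eth0", "ethernet", 10),
    Port("eth1", "ethernet", 10),
    Port("xfp0", "optical", 100),
])
print(gw.aggregate_capacity_gbps("ethernet"))  # 20
```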
Network slicing and open service orchestration
RAN virtualization lays the groundwork for delivering multiple service applications from a single network. These applications use network slicing to provide independent services over the same infrastructure. Whitebox transport complements the RAN virtualization use cases, offering agile xhaul operation. This agility, along with innovative optical components, enables robust fiber relief to further improve cost-efficiency and accelerate time-to-service.
The service orchestrator (SO) provides the technology needed to stitch together the xhaul transport segments, the vRAN elements, and the core network (see Figure 4).
Figure 4 – The Service Orchestrator cuts across network and service domains
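The stitching role of the SO can be sketched as follows. This is a minimal illustration under assumed names; the domain list, API, and slice representation are inventions for this post, not a real orchestrator interface:

```python
# Minimal sketch: a service orchestrator binds per-domain slice
# segments (RAN, transport, core) into one end-to-end slice with a
# shared SLA. All names here are illustrative assumptions.
class ServiceOrchestrator:
    DOMAINS = ("ran", "transport", "core")

    def __init__(self):
        self.slices = {}

    def create_slice(self, slice_id: str, sla: dict) -> dict:
        """Provision a segment in each domain, then bind them together."""
        segments = {d: {"domain": d, "sla": sla, "status": "provisioned"}
                    for d in self.DOMAINS}
        self.slices[slice_id] = segments
        return segments

so = ServiceOrchestrator()
urllc = so.create_slice("urllc-finance",
                        {"latency_ms": 1, "availability": 0.99999})
print(sorted(urllc))  # ['core', 'ran', 'transport']
```

The key design point the sketch captures is abstraction: the consumer of the slice sees one SLA, while the SO maps it onto each underlying domain.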
The SO automates service delivery for multidomain network resources by abstracting complex networks and then managing what those resources do and when. The SO also lays the foundation for network event response, which can only be accomplished with a full-scale open-architecture implementation.
The open digital architecture provides the ability to enhance the network rapidly in response to customer needs. As a critical component of autonomous networking, the SO orchestrates closed-loop automation, with standard interfaces for network intelligence, machine learning, and artificial intelligence, as well as for domain controllers. Finally, the SO receives policy-based information to establish appropriate network resource and services optimization for efficient network slicing capabilities and management.
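One iteration of the closed loop described above can be sketched as observe, decide, act. The metric, threshold, and action names below are purely illustrative assumptions; a real SO would act through standard interfaces toward domain controllers:

```python
# Toy closed-loop automation step: compare an observed slice metric
# against a policy target and decide an action. Names are illustrative.
def closed_loop_step(observed_latency_ms: float,
                     target_latency_ms: float) -> str:
    """One observe -> decide -> act iteration for a latency policy."""
    if observed_latency_ms > target_latency_ms:
        return "scale-up-transport-capacity"
    return "no-action"

print(closed_loop_step(2.5, 1.0))  # scale-up-transport-capacity
print(closed_loop_step(0.8, 1.0))  # no-action
```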
Virtualizing the network within a single programmable infrastructure is the optimal way to address 5G's many disparate service offerings. By disaggregating hardware from software, the whitebox approach scales performance better than purpose-built equipment for RAN and transport elements. The result is a cost-efficient, highly scalable network with a single management operation that abstracts functions and maintains SLAs for services on demand. The service orchestrator stitches the network elements together using network slicing, creating an agile, programmable network capable of addressing the many new applications and services 5G technology has yet to offer.
The next blog installment will take a closer look at hard and soft network slicing including protocol variations and tradeoffs for brownfield and greenfield networks.