5G Transport: The Impact of Millimeter Wave and Sub-6 Radios

Part two in a blog series about how Fujitsu is bringing the 5G vision to life

As communications service providers (CSPs) prepare to deploy 5G, they must weigh a number of factors as they plan their radio access network (RAN) architecture. An important aspect of this planning is an understanding of the 5G New Radio (NR) specifications and spectrum options.

In transport terms, both millimeter wave (mmWave) and sub-6 GHz radio architectures include fronthaul, midhaul, and backhaul segments. However, the coverage differences between these two radio types will define the network topology.

The high frequencies of mmWave radios reduce coverage of a given area, requiring denser deployment beyond traditional cell towers. Because a large number of mmWave radios are needed to cover a given area, they will be deployed in a small-cell configuration; in urban areas, this dense deployment will most likely be on street lamps and on the sides and tops of buildings. Sub-6 radios, by contrast, enable coverage configurations similar to 4G LTE radios. Sub-6 topology could therefore resemble a C-RAN LTE fronthaul, in which dark fiber is used where available and some form of multiplexing, such as WDM or packet multiplexing, is used where fiber is lacking.
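
The coverage gap can be made concrete with the free-space path loss formula. The sketch below is a simple illustration; the 3.5 GHz and 28 GHz carriers and the 200 m distance are representative values chosen for the example, not figures from this post.

```python
# Why mmWave coverage shrinks: free-space path loss grows with frequency.
# Standard FSPL formula (d in km, f in GHz):
#   FSPL(dB) = 20*log10(d) + 20*log10(f) + 92.45
import math

def fspl_db(distance_km: float, freq_ghz: float) -> float:
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_ghz) + 92.45

for f in (3.5, 28.0):   # representative sub-6 and mmWave carriers
    print(f"{f:>5.1f} GHz @ 200 m: {fspl_db(0.2, f):.1f} dB")
# ~18 dB more loss at 28 GHz than at 3.5 GHz over the same distance,
# before foliage and building penetration widen the gap further.
```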

Initially, mmWave radios will be best suited for high-throughput applications such as fixed wireless access (FWA), while sub-6 radios will be best used for mobility. In the long term, both radio types will serve both use cases.

Since sub-6 radio coverage dynamics are similar to LTE, many CSPs will consider deploying sub-6 much like 4G LTE in a C-RAN, realizing distributed unit (DU) pooling efficiencies and offering higher performance through cell site aggregation.

An alternative to a centralized pool of DUs, for either mmWave or sub-6 radios, is an integrated DU and radio unit (RU), which eliminates the fronthaul transport and the discrete fiber connections between the two. This alternative expedites service delivery and reduces capital and operational expense, but it also eliminates pooling and cell site aggregation capabilities. Cell sites with integrated DUs have only midhaul, or what the IEEE refers to as fronthaul-II, in this section of the RAN transport.

Across the various deployment options for mmWave and sub-6 radios, where an abundance of dedicated dark fiber is not available, either WDM-based transport or a newer packet-based transport using Time-Sensitive Networking (TSN) will carry 5G eCPRI/xRAN channels, as well as legacy 4G CPRI channels, from the cell site to a central aggregation point.
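
A rough bandwidth comparison helps explain why packetized eCPRI fronthaul is attractive for 5G while legacy CPRI strains the transport. The sketch below is a back-of-the-envelope calculation; the antenna counts, quantization widths, and carrier parameters are illustrative assumptions, not specifications.

```python
# Back-of-the-envelope fronthaul bandwidth: CPRI (split option 8, continuous
# time-domain I/Q per antenna port) vs. eCPRI (split option 7.2,
# frequency-domain I/Q per spatial layer). All parameters are illustrative.

def cpri_gbps(antenna_ports, sample_rate_hz, iq_bits=15, overhead=16 / 15):
    """CPRI rate scales with antenna ports, regardless of user traffic."""
    return antenna_ports * sample_rate_hz * 2 * iq_bits * overhead / 1e9

def ecpri_gbps(layers, subcarriers, symbols_per_sec, iq_bits=9):
    """eCPRI rate scales with the spatial layers actually carrying data."""
    return layers * subcarriers * symbols_per_sec * 2 * iq_bits / 1e9

# Assumed 100 MHz NR carrier at 30 kHz SCS: 273 PRBs -> 3276 subcarriers,
# 14 symbols/slot x 2000 slots/s = 28,000 symbols/s, fs = 122.88 MHz.
print(f"CPRI,  64-port massive MIMO: ~{cpri_gbps(64, 122.88e6):.0f} Gb/s")
print(f"eCPRI, 8 spatial layers:     ~{ecpri_gbps(8, 3276, 28_000):.1f} Gb/s")
```

Under these assumptions the CPRI stream lands in the hundreds of Gb/s while the eCPRI stream stays near 13 Gb/s, which is why packetized transport from the cell site becomes practical.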

This blog is the second in a series about our vision for 5G transport. See part one here.

Networks and Vehicles Follow Similar Journey to Automation

Autonomous vehicles (let’s call them AVs) and Autonomous Networks (ANs) are road-mates; they’ve essentially traveled the same route in the quest for full automation. They share the overarching Holy Grail objective of zero-touch operation, undisturbed by human hand as they go about the full range of their respective operations.

The Society of Automotive Engineers (SAE) has defined a six-level taxonomy (Levels 0 through 5) that classifies the level and type of automation capabilities in a given vehicle. This is summarized on Wikipedia’s Self-Driving Car page and illustrated in Figure 1.

Figure 1: SAE levels of vehicle automation

Both AVs and ANs have already arrived at their third level of automation, i.e., partial automation, where most of what they do is automated but human supervision, monitoring, and even interaction are still needed. And just as AVs have relied upon an evolving set of building blocks over decades, ANs have also employed and built upon a number of tools along the way. Figure 2 illustrates this cumulative evolution.

Figure 2: Building blocks of network evolution

There are many examples of these building blocks in the network world: the availability and growing adoption of zero-touch provisioning (ZTP); YANG model-based open interfaces (NETCONF, REST APIs, gNMI/gNOI); gRPC-based streaming telemetry; and extensive, detailed logging and monitoring for rapid fault isolation and prediction.
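
As a concrete taste of one of those building blocks, here is a minimal sketch of pulling the running configuration from a YANG/NETCONF-capable element using the ncclient library. The management address, credentials, and the OpenConfig interfaces filter are placeholders, not values from this post.

```python
# Minimal NETCONF get-config sketch with ncclient; host and credentials
# below are placeholders for illustration only.
from ncclient import manager

with manager.connect(
    host="198.51.100.10",        # placeholder management address
    port=830,                    # standard NETCONF-over-SSH port
    username="admin",
    password="admin",
    hostkey_verify=False,
) as m:
    # Retrieve the <interfaces> subtree (openconfig-interfaces model assumed).
    reply = m.get_config(
        source="running",
        filter=("subtree",
                "<interfaces xmlns='http://openconfig.net/yang/interfaces'/>"),
    )
    print(reply.xml)
```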

Perhaps the most critical characteristic that AVs and ANs share is that in order for their potential to be fulfilled, diverse stakeholders need to come together and coordinate. In the AV world, massive efforts are underway at every level (governments, cities and towns, car companies, insurance companies, and technology vendors) to standardize and streamline end-to-end operations based on key principles of interoperation, openness and reliability.

For ANs, there is a similar and pressing need across the networking community for collaborative, coordinated development of an open, generic framework for a fully autonomous optical network, which could be used to set up reference use cases extensible to various network architectures. This framework should be driven by one primary requirement: zero human intervention in network operations after initial deployment, including configuration, monitoring, fault isolation, and fault resolution. It should leverage currently available tools and technologies for full-featured, automation-ready software, such as Fujitsu System Software version 2 (FSS2) for network element management, in conjunction with Fujitsu Virtuora®, an open network control solution for network element and network management.

Efforts to achieve autonomous networks and autonomous vehicles show strong similarities in both pace and trends. These similarities are driven by common objectives: addressing scale and a growing number of applications while eliminating human error. Both are enabled by an intertwined, cross-dependent set of technology advancements and adaptations.

Four Key Enablers of Automated, Multi-Domain Optical Service Delivery

New advancements in software-defined control and network automation are enabling a transformation in optical service delivery. Stitching together network connectivity across vendor-specific domains is labor-intensive; now those manual processes can be automated with emerging solutions like multi-vendor optical domain control and end-to-end service orchestration. These solutions provide centralized service control and management that can reduce operational costs and errors and speed up service delivery. While this sounds good, it is all too easy to gloss over the complexities of decades-old optical connectivity services. In this blog post, I will explore the four enabling technologies for multi-domain optical service delivery as I see them.

The first enabler, optical service orchestration (OSO), is detailed here. In the not-so-distant past, most carriers deployed their wireline systems using a single vendor’s equipment in metro, core, and regional network segments. In some cases, optical overlay domains were deployed to mitigate supply variables and ensure competitive costs. While this maximized network performance, it also created siloed networks with proprietary management systems. The OSO solution that I imagine effectively becomes a controller of controllers, abstracting the complexities of the optical domain and providing the ability to connect and monitor the inputs and outputs to deliver services. As such, an OSO solution controls any vendor’s optical domain as a device, with the domain controller routing and managing the service lifecycle between vendor-specific endpoints.

The second enabler is an open line system (OLS) consisting of multi-vendor ROADMs and amplifiers deployed in a best-fit mesh configuration. A network configured this way must be tested for alien wavelength support, which means defining the domain characteristics and performing mixed third-party optics performance testing. This testing requires considerable effort, and service operators often expect it to be complete before deployment. The question is, who takes on the burden of testing in a multi-vendor network? Testing is a massive undertaking, and operators do not have the budget or expertise; perhaps interoperability labs, like those MEF runs for Carrier Ethernet services, could help define it. Bottom line: there is no free lunch.

The third enabler is real-time network design for the deployed network. Service operators deploy optical systems with 95%+ coverage of the network and have historically been limited to vendor-specific designs. Currently, the design process requires offline tools and calculations by PhDs. A design tool that employs artificial intelligence algorithms promises to make real-time network design a reality. Longitudinal network knowledge, combined with network control and path computation, can examine the performance of optical line systems and work with the controller to optimize system design, accounting for variations in optical components, the types and quantity of optical signals on the fiber, component compatibility, fiber media properties, and system aging.
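
The kind of computation such a tool would automate can be illustrated with the classic link-budget approximation for a chain of identical amplified spans. The sketch below uses the standard OSNR estimate referenced to a 0.1 nm bandwidth; the launch power, span loss, and noise figure values are illustrative assumptions.

```python
# Approximate end-of-link OSNR for a chain of identical EDFA-amplified spans:
#   OSNR(dB) ~ 58 + Pch(dBm) - span_loss(dB) - NF(dB) - 10*log10(N_spans)
# (58 dB corresponds to the 0.1 nm reference bandwidth near 1550 nm.)
import math

def osnr_db(launch_dbm_per_ch, span_loss_db, nf_db, n_spans):
    return 58 + launch_dbm_per_ch - span_loss_db - nf_db - 10 * math.log10(n_spans)

# Illustrative metro-regional link: 80 km spans at 0.25 dB/km (20 dB loss),
# +1 dBm per-channel launch power, 5 dB amplifier noise figure.
for spans in (4, 8, 12, 16):
    print(f"{spans:>2} spans: OSNR ~ {osnr_db(1.0, 20.0, 5.0, spans):.1f} dB")
```

A real-time design engine runs this style of calculation continuously, against measured rather than assumed values, and feeds the result back to the controller.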

The final enablers are open controller APIs and network device models that support faster, more flexible allocation of network resources to meet service demands. Open device models (IETF, OpenConfig, etc.) deliver common control for device-rich functionality that supports network abstraction. This helps service operators deliver operational efficiencies, onboard new equipment faster, and build an extensible framework for revenue-producing services in new areas such as 5G and IoT applications.

Controller APIs enable standardized service lifecycle management in a multi-domain environment. The Transport Application Programming Interface (T-API), a specification developed by the Open Networking Foundation (ONF), is an example of an open API specific to optical connectivity services. T-API provides a standard northbound interface for SDN control of transport gear, and supports real-time network planning, design, and responsive automation. This improves the availability and agility of high-level, technology-independent services as well as technology- and policy-specific ones. T-API can seamlessly connect a T-API client, like a carrier’s orchestration platform or a customer’s application, to the transport network domain controller. Some of the unique benefits of T-API include:

  • Unified domain control using a technology-agnostic framework based on abstracted information models. Unified control allows the carrier to deploy SDN broadly across equipment from different vendors, with different vintages, integrating both greenfield and brownfield environments.
  • Telecom management models that remain familiar to telecom equipment vendors and network operations staff, making adoption easier and reducing disruption of network operations.
  • Faster feature validation and incorporation into vendor and carrier software and equipment using a combination of standard specification development and open source software development.
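
To make this concrete, here is a minimal sketch of what a T-API client request for a point-to-point connectivity service can look like over RESTCONF. The controller URL, credentials, UUIDs, and exact payload fields are placeholders; the precise paths and fields vary with the T-API version a given controller implements, and this shape only loosely follows TAPI 2.x.

```python
# Hypothetical T-API connectivity-service request over RESTCONF.
# Everything below (URL, credentials, UUIDs) is a placeholder.
import requests

CONTROLLER = "https://sdn-controller.example.net"  # placeholder
SERVICE_URL = (
    f"{CONTROLLER}/restconf/data/tapi-common:context/"
    "tapi-connectivity:connectivity-context"
)

payload = {
    "tapi-connectivity:connectivity-service": [{
        "uuid": "b1a7c8e0-0000-0000-0000-000000000001",  # client-chosen id
        "end-point": [
            {"local-id": "ep-a",
             "service-interface-point": {"service-interface-point-uuid": "SIP-A-UUID"}},
            {"local-id": "ep-z",
             "service-interface-point": {"service-interface-point-uuid": "SIP-Z-UUID"}},
        ],
        "requested-capacity": {"total-size": {"value": 100, "unit": "GBPS"}},
    }]
}

resp = requests.post(SERVICE_URL, json=payload,
                     auth=("admin", "admin"), verify=False, timeout=30)
resp.raise_for_status()
print("connectivity service requested:", resp.status_code)
```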

Service operators are looking for transformation solutions with a visible path to implementation, and many solutions fall far short and are not economically viable. Fujitsu is actively co-creating with service operators and other vendors to integrate these four enabling technologies into mainstream, production deployments. Delivering ubiquitous, fully automated optical service connectivity management in a multi-vendor domain environment is finally within reach.

Open and Automated: The New Optical Network

Communication service providers (CSPs) are increasingly transforming their networks with an eye toward more openness and automation. There has been a continued push to disaggregate optical networking platforms in order to drive down total cost of ownership and give network operators the flexibility to upgrade their networks while keeping up with the accelerated pace of innovation across the layers of the network framework stack. The promise of vendor interoperability and automated control through open standards, APIs, and reference platforms is the key driver enabling CSPs to make the shift to open.

There are varying degrees of openness that one can choose to adopt in this transition, from the proprietary systems of today to a fully disaggregated open optical network. The sweet spot toward which the industry seems to be converging is partial disaggregation, as in the open line system (OLS) model. OLS provides a good trade-off between interoperability and performance; however, we still have a long way to go to make these systems future-proof and deployable. Multiple industry organizations, such as the Open ROADM MSA, OpenConfig, the Telecom Infra Project (TIP) and Open Disaggregated Transport Network (ODTN), are working to bring this vision of open networking to reality. Though multiple initiatives address disaggregation in optical transport, we believe there is a strong need for harmonization among them so that the industry can truly benefit from standardization of common models and APIs.

As optical equipment vendors aggressively evolve their offerings to enable this open optical transformation, care must be taken to address the key business and technical requirements unique to each network operator, which depend on the state of its current network infrastructure. No single solution can be applied across the board, which brings both challenges and opportunities to vendors that have embraced open and disaggregated architectures. The migration to open networking requires the operator to reevaluate the way networks are architected, deployed, and operated. Enabling this shift presents multiple challenges (such as network planning and design, and multi-vendor control) in the implementation and operationalization of the various building blocks. Effectively addressing these challenges will be key to this transformation.

Fujitsu believes a collaborative process with CSPs that involves a thorough assessment of the network architecture and OSS/IT workflows, along with establishing a phased deployment plan for implementation of hardware and software solutions, will be instrumental in navigating this transition seamlessly. The enclosed white paper provides an overview of the open optical ecosystem today, identifies and describes some of the key challenges to be addressed in implementing open automated networks, and outlines some migration strategies available to network operators embracing open networking.

Time, Technology and Terabit Transport

If you measure time against technological progress, six years is a long time in optical networking. In 2012, we were congratulating ourselves for getting to 100G transport. Now we’ve officially reached 600G, as Fujitsu recently demonstrated on our new 1FINITY T600 blade, the latest in the 1FINITY transport series. Optical transport products are now available that can modulate photons to create signals with 600,000,000,000 bits of information packed into every second, and send those signals at close to the speed of light, traversing the globe almost instantaneously. To put this colossal capability into perspective: a 2 TB digital library is 16 terabits, so a single 600G wavelength could deliver it virtually anywhere on the globe in under half a minute.

It’s easy to disregard or minimize yet another technology advancement. But the implications of 600G and beyond are more significant and positive than simply an increased amount of Internet junk. For example, healthcare could become extremely collaborative across continents by combining real-time data collection, data analytics, and massive near-real-time data transfers. Universally available high-speed connections to a smartphone support the kind of data gathering and analysis needed to understand our world better and develop remedies for the many serious problems we face.

Access to information is the chief means of empowerment in both personal and business life. Consequently, it is important to deploy this 600G technology rather than, for example, hold to the false economy of continued deployments at slower rates. Being able to transmit entire libraries in seconds is an awesome power that opens up rich possibilities. The network occupies a critical role as the foundation of the connected digital economy that, one way or another, is making stakeholders out of every one of us. So, one might say our industry has an economic and moral imperative to drive the highest possible speeds and capacities as deep into communities as possible. High-speed connectivity fosters opportunity, learning and commerce. In the final analysis, more really IS better when it comes to the network.

Automation and Operations in the Modern Network: Bridging the Gap Between Legacy and Digital Infrastructure

In terms of network automation, we’re beyond removing manual tasks, speeding up production, and improving service quality. In the face of complex mesh network architectures, automated network slicing with guaranteed SLAs, and real-time spectrum and resource management, automation has become a foundational capability required for sheer survivability in basic network operations. The need for high-scale, intelligent control and management of endpoints, elastic capacity, and dynamic workloads and applications will only grow.

As the network virtualizes to accommodate connection-aware applications, the need for disaggregated, open, and programmable hardware and software also gets stronger. To deliver on-demand services to bandwidth-hungry mobile consumers, the modern network must find ways to combine legacy gear that is vendor-proprietary and domain-specific with virtual network elements and functions over merged wireline and wireless infrastructure. That requires software platforms and applications that connect and control physical and virtual network elements, automate network planning, provisioning, and management, provide real-time network topologies, and increase the efficiency of traffic management and path computation up and down the network stack. It also paves the way for communication service providers to implement service-oriented architectures that put business needs before arcane methods of network management that are required but do not necessarily drive incremental revenue.

This type of agile network requires an agile deployment model predicated on open, disaggregated, and programmable physical and virtual infrastructure, as well as SDN-enabled applications that use open models. Disaggregated NEs can deliver new capabilities without disrupting the existing production network, and SDN-enabled applications tie it all together seamlessly.

This approach has the advantage of increasing revenue velocity and speeding up the adoption of digital networking while maintaining the investment in the existing physical infrastructure.

Operationalizing the Open and Programmable Network

As closed and proprietary network segments give way to open network architectures that include Open Line Systems, Open ROADM, and the open APIs that connect them, operational gaps will emerge that require detailed integration and design considerations from a software perspective. This requires an understanding of disaggregation, service-oriented architectures, open APIs, and the ability to break all of that down into discrete datasets that can be mined by artificial intelligence so that CSPs know what levers to pull to improve the customer experience or deliver new types of services.

Microservices and container-based applications have the ability to fill those gaps without costly capital initiatives. Just as an SDN platform abstracts multi-vendor network elements from service provisioning applications to facilitate intent-based networking, container technologies abstract applications from the server environment where they actually run. Containerization provides a clean separation of “duties”; developers can focus on application logic and dependencies, while network operations can focus on management. Container-based microservices and microapplications can be deployed easily and consistently, regardless of the target environment.

This construct is ideal for operations teams that identify “holes” in the production environment. In the past, product managers and operations teams were forced to wait for lengthy development cycles to take advantage of new feature functionality. Now, with microservices and microapplications, new functionality can be developed quickly and efficiently, generating additional revenue inside the customer’s window of opportunity.

Microapplications are inherently cloud-native, and can be used to integrate newer technologies into monolithic systems without waiting for maintenance windows that may or may not include the capability. Examples of microservices include the following (a minimal sketch of one appears after the list):

  • Customer IP circuit connections
  • A virtual gateway network element
  • Multi-vendor network element backup
  • IP forwarding loop detection
  • Bandwidth optimization
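
As promised above, here is a minimal sketch of one of these: a containerizable multi-vendor network element backup microservice. The endpoint names, device inventory, and credentials are hypothetical, and NETCONF via ncclient is assumed for devices that support it; this is an illustration of the pattern, not a Fujitsu implementation.

```python
# Hypothetical multi-vendor NE config backup microservice (Flask + ncclient).
from datetime import datetime, timezone

from flask import Flask, jsonify
from ncclient import manager

app = Flask(__name__)

# Hypothetical inventory; in practice this would come from an inventory service.
DEVICES = {
    "roadm-1": {"host": "198.51.100.21", "port": 830},
    "switch-7": {"host": "198.51.100.34", "port": 830},
}

def fetch_running_config(host: str, port: int) -> str:
    """Pull the running config over NETCONF (vendor-neutral for YANG devices)."""
    with manager.connect(host=host, port=port, username="backup",
                         password="backup", hostkey_verify=False) as m:
        return m.get_config(source="running").xml

@app.route("/backup/<device>", methods=["POST"])
def backup(device: str):
    info = DEVICES.get(device)
    if info is None:
        return jsonify(error=f"unknown device {device}"), 404
    config = fetch_running_config(info["host"], info["port"])
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    path = f"/backups/{device}-{stamp}.xml"   # e.g. a mounted volume
    with open(path, "w") as f:
        f.write(config)
    return jsonify(device=device, saved_to=path), 201

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```

Packaged in a container, this single-purpose service can be dropped into a production environment without touching the monolithic OSS around it.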

These microservices can augment existing SDN-enabled applications and infrastructure to provide precision solutions that impact revenue-generating OSS/BSS applications. They can also accelerate lab testing and certification cycles so that new applications are deployed faster and more effectively.

In addition to speed and efficiency, microservices and microapplications can make the network more resilient and flexible: because each service is deployed independently, developers can enhance the performance of one service without impacting others.

All of this requires vendor adherence to, and cooperation with, open models that are streamlined for coordinated control and management across all network domains, including end-to-end connectivity services (MPLS, Carrier Ethernet, IP VPN, etc.). In the modern network, every touch point is engineered to do its job faster and more efficiently, whether it is legacy or digital network gear. Microservices and microapplications are part of that solution, providing new capabilities that are free from traditional operational constraints and bridging the gap between legacy and digital infrastructure with precision solutions that drive revenue now, rather than later.

For more information about Fujitsu’s Microapplications practice, please visit http://www.fujitsu.com/us/products/network/products/microapplications-practice/

Digitizing the Customer Experience

Digitization of the network is reshaping the telecom landscape as customer data consumption habits change thanks to new, disruptive technologies. We’ve gone from a LAN connection on a desktop in your home to a cellular device in your pocket, and regular customers expect to access content whenever and wherever they are. This means that service providers are in trouble if they can’t adjust. They must find a solution that will keep the network healthy and adopt new technologies suited to today’s demands.

Today’s Network Operations Center (NOC) monitors the entirety of a network, actively working to keep everything healthy. However, it’s fundamentally reactive, with thousands of alarms piling up each day for operators to sift through. Current operations are handled manually, creating difficulties when trying to onboard new technologies. Digitizing the NOC to meet customers’ demands requires automation that will turn its reactive nature into a proactive one.

To ensure the health of a network, service providers need a service assurance solution capable of providing fault and performance management, as well as closed-loop automation. Fault and performance management uses monitoring, root-cause analysis, and visualization to proactively identify potential problems in the network and notify operators before a customer experiences them. Through closed-loop automation, a service assurance platform continuously collects, analyzes, and acts on data gathered from the network. When combined with machine learning, a service assurance platform becomes an essential part of the NOC. Altogether, a service assurance platform can cut the number of alarms by 50%, a significant reduction considering that a provider may collect close to a million alarms each month.
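
One building block of that alarm reduction is root-cause correlation: suppressing downstream alarms that an upstream failure already explains. The sketch below illustrates the idea with an invented containment topology and alarm format; it is not the implementation of any particular product.

```python
# Toy root-cause correlation: collapse alarms whose ancestor also alarmed.
# Topology, alarm format, and values are invented for illustration.

# Hypothetical containment topology: child -> parent (failures propagate down).
PARENT = {"port-1/1": "card-1", "port-1/2": "card-1", "card-1": "shelf-1"}

def correlate(alarms: list[dict]) -> list[dict]:
    """Keep only alarms with no alarmed ancestor in the same window."""
    alarmed = {a["source"] for a in alarms}
    roots = []
    for a in alarms:
        node, shadowed = a["source"], False
        while node in PARENT:                # walk up the containment tree
            node = PARENT[node]
            if node in alarmed:              # an ancestor already explains it
                shadowed = True
                break
        if not shadowed:
            roots.append(a)
    return roots

window = [
    {"source": "card-1",   "sev": "critical", "text": "card failure"},
    {"source": "port-1/1", "sev": "major",    "text": "loss of signal"},
    {"source": "port-1/2", "sev": "major",    "text": "loss of signal"},
]
print(correlate(window))   # only the card-1 root-cause alarm survives
```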

A targeted network management solution provides an accessible path for network migration. While legacy equipment is guaranteed to work, it may not be the best fit for digitization. Integrating a targeted network management solution into the NOC helps bridge the gap between new technologies and vendors on one side and legacy equipment on the other. It supports a multivendor environment, allowing the NOC to manage both new and legacy equipment from different vendors in the same ecosystem. Targeted network management also enables service providers to bring new services to market twenty times faster, thanks to significant improvements in onboarding new technologies and vendors into the network.

An automated NOC that combines service assurance and targeted network management provides a network well suited to the changing digital landscape. Service assurance keeps the network up and running by identifying critical issues, so that no matter where or how users access the network, they get a seamless experience. Targeted network management quickly onboards the new technologies and vendors that push toward digitalization. With both combined in a 24x7x365 NOC, service providers are prepared for whenever, wherever, and however a customer chooses to interact with the network.

For customers and businesses alike, the advantages of an automated NOC are exceptional. Customers don’t have to worry about accessing data from any device, anywhere, at any time of day. For businesses, the proactive nature of service assurance and the simple network migration of targeted network management help reduce operating expenses and mean time to repair. Digitization isn’t slowing down for anyone, and an automated NOC gives service providers a way to hop on the train.

5G Transport: From Vision to Reality

Part one in a blog series about how Fujitsu is bringing the 5G vision to life

On the road to 5G, there are a number of different paths that communications service providers (CSPs) can choose. This blog is the first in a series about our vision for the 5G RAN, and how Fujitsu is working with leading CSPs to co-create these networks and bring 5G to life.

Transport is vital for building a robust and reliable network. The xHaul ecosystem consists of the backhaul, midhaul, and fronthaul transport segments. Dedicated dark fiber, WDM, and packet technologies are used within these transport segments. As CSPs evolve their networks from 4G/LTE to 5G, there are several options for how those transport networks can be designed.

In a “Split Architecture,” the distributed unit (DU) connects to many macro site radio units (RUs) over multiple fronthaul fiber paths. This architecture is similar to the 4G centralized RAN (C-RAN), where a central point (here, the DU) fans out to multiple macro sites for interconnect with the 5G radios, also known as RUs or Transmission Reception Points (TRPs). This efficient technique is referred to as RAN pooling and, along with cell site aggregation, offers mobile network operators the ability to engineer RAN capacity based on clusters of sites coming into the central-point DUs, instead of individual cell site demands.

The “Distributed DU” architecture collocates DUs with RUs at the cell site, eliminating the fronthaul transport path. The fronthaul becomes a local fiber connection between the top and bottom of the tower. This low-latency configuration also reduces costs by removing the fronthaul transport section. The tradeoff is the loss of multi-site pooling and cell site aggregation with macro cell sites; moreover, the midhaul capacity is reduced to 10GE rates.
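
The latency tradeoff can be sketched with simple arithmetic: the one-way fronthaul budget, minus equipment delay, divided by fiber propagation delay, bounds how far a centralized DU can sit from its RUs. The budget and delay figures below are commonly cited ballpark numbers used here as illustrative assumptions.

```python
# Rough fronthaul reach check for the split architecture.
FRONTHAUL_BUDGET_US = 100.0     # one-way eCPRI-class budget (ballpark)
EQUIPMENT_DELAY_US = 25.0       # muxing/switching along the path (assumed)
FIBER_DELAY_US_PER_KM = 5.0     # ~5 us/km in standard single-mode fiber

max_reach_km = (FRONTHAUL_BUDGET_US - EQUIPMENT_DELAY_US) / FIBER_DELAY_US_PER_KM
print(f"Max DU-to-RU fiber distance: ~{max_reach_km:.0f} km")   # ~15 km
# A collocated or integrated DU removes this constraint entirely, which is
# the low-latency benefit described above.
```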

Finally, there is the “Integrated DU” architecture, which integrates the DU into the RU at the cell site. This architecture offers benefits similar to the Distributed DU use case, with the additional advantage of lower CapEx and OpEx from combining the two devices. A combined DU and RU reduces the number of devices to install, manage, and maintain, resulting in expedited service turn-up and faster time to revenue.

To learn more, register for an archived webinar “New Transport Network Architectures for 5G RAN” with Fujitsu and Heavy Reading analyst Gabriel Brown: www.lightreading.com/webinar.asp?webinar_id=1227

DCI Growth Planning and the Bandwidth Amplification Effect

As more and more traffic is driven into data centers, pressure builds in turn on the links between and among data centers. This phenomenon is known as the “bandwidth amplification effect”: when X amount of user traffic passes into a data center, it generates many times that amount of traffic within the data center and between that data center and others. This is why there is an urgent need for more data center interconnect (DCI) bandwidth and higher line rates.

Operators have a couple of options for meeting DCI traffic demand: increase the fiber count, or increase the line data rate. Increasing the data rate is far more common and economical, and is accomplished with new bandwidth-variable transponders. Data rate increases may seem like the obvious remedy for boosting DCI bandwidth, but this option brings consequent issues and impairments along the optical path that must be corrected via ROADMs and amplifiers. Although the modulation scheme is the most important aspect to consider when increasing DCI bandwidth, several other factors come into play, among them dispersion compensation, error correction, link distance (reach), amplification, channel width, and spectral tilt.
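
The line-rate side of that tradeoff is simple arithmetic: bits per symbol times symbol rate times two polarizations, less FEC overhead. The sketch below is illustrative; the 64 Gbaud symbol rate and 20% FEC share are assumptions, not figures from any particular transponder.

```python
# Back-of-the-envelope line rate for a bandwidth-variable transponder:
# log2(modulation order) bits/symbol x baud rate x 2 polarizations,
# divided out for FEC overhead. Parameters are illustrative assumptions.
import math

def net_rate_gbps(mod_order: int, baud_gbd: float, fec_overhead: float = 0.20):
    bits_per_symbol = math.log2(mod_order)
    raw = bits_per_symbol * baud_gbd * 2          # dual-polarization
    return raw / (1 + fec_overhead)

for name, order in [("QPSK", 4), ("16QAM", 16), ("64QAM", 64)]:
    print(f"{name:>6} @ 64 Gbaud: ~{net_rate_gbps(order, 64):.0f} Gb/s net")
# Higher-order formats pack more bits per symbol but demand a higher OSNR,
# which is what limits reach as DCI line rates climb.
```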

My new article on Lightwave summarizes the challenges and technologies associated with growing DCI traffic through higher line rates, and discusses each of the most important factors to be considered when planning the best way forward. Moving to higher line rates for DCI is an effective and economical way to address continued DCI growth, but a variety of equipment upgrades and new techniques are needed to adequately address new optical impairments and achieve the benefits of higher line rates.

Keeping the Lights On

In rural areas, groups of towns are often connected by a telecommunications ring or rings. Schools, municipalities, hospitals and other customers are connected to communications services over these rings.

In this type of environment, high school football games are often important community events; video feeds of the games, usually on Friday nights (hence the term “Friday night lights”), are a significant source of traffic on rural communications networks.

Throughout the last decade, service providers in these communities have met communication needs with 10G networks. As demands increase from booming wireless, internet and other communications traffic, these 10G networks are being outgrown. Growth is good, but meeting growth with the right technology can be a challenge.

Service providers seeking to address this problem often ask for n × 10G DWDM networks. But is this the best technology? An n × 10G network will meet today’s traffic needs, but its ability to meet future needs, in terms of both capacity and service types, is not certain. Virtually all n × 10G networks use non-coherent technologies. We see demand for 100G services growing in the next few years, and 100G requires a coherent optical network; non-coherent DWDM systems cannot handle 100G coherent channels. That spiffy new n × 10G DWDM network won’t pass muster when these 100G service demands arrive. Is there an alternative approach that can meet current needs, accommodate 100G services in the future, and still be economical?

A service provider could instead deploy Layer 2 Ethernet switches on a single 100G or 200G ring: a carrier-grade Layer 2 100/200G switch in each city, with a single 100/200G ring connecting the communities. This Ethernet approach provides capacity similar to the n × 10G DWDM network. Additionally, the Ethernet network is more economical than the DWDM version once the number of 10G wavelengths grows beyond five channels. When 100G service demands arrive, it is easy to accommodate them by adding an n × 100G coherent DWDM system with little impact to the 100 GbE ring. An Ethernet network can also offer E-Line and E-LAN services, rather than only the optical services of the n × 10G DWDM network.
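
The crossover logic behind that economics claim can be illustrated with a toy cost model. Every price below is an invented placeholder chosen only to reproduce the shape of the argument (a fixed-cost Ethernet ring versus per-wavelength DWDM growth), not actual equipment pricing.

```python
# Toy cost-crossover model: fixed-cost 100G Ethernet ring vs. per-wavelength
# n x 10G DWDM build. All prices are invented placeholders for illustration.
ETH_RING_PER_SITE = 60_000          # hypothetical: 100/200G L2 switch + optics
DWDM_BASE_PER_SITE = 25_000         # hypothetical: chassis, mux/demux, common
DWDM_PER_WAVELENGTH = 7_000         # hypothetical: 10G transponder pair

def dwdm_cost(sites: int, wavelengths: int) -> int:
    return sites * (DWDM_BASE_PER_SITE + wavelengths * DWDM_PER_WAVELENGTH)

def ethernet_cost(sites: int) -> int:
    return sites * ETH_RING_PER_SITE

sites = 6
for n in range(1, 9):
    d, e = dwdm_cost(sites, n), ethernet_cost(sites)
    marker = "<-- Ethernet cheaper" if e < d else ""
    print(f"n={n}: DWDM ${d:,} vs Ethernet ${e:,} {marker}")
```

With these placeholder numbers the Ethernet ring wins once the wavelength count passes five, matching the shape of the claim above; real crossover points depend entirely on actual pricing.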

A 100 GbE Layer 2 network is more forward-looking, and it future-proofs a service provider’s network. If providers want to “keep the lights on” for the Friday night football game, a 100G Ethernet ring is a more versatile and long-lasting choice than an n × 10G DWDM network.