Putting It All Together: The Power of Time-Sensitive Networking (TSN)

Service providers are rapidly transforming their networks to deliver competitive and affordable 5G services, and cell site densification is one of the key factors in the eventual success of 5G. But deploying fiber and then maximizing bandwidth capacity to so many cell sites can be an expensive proposition. Compared to active WDM-based offerings, Ethernet-based mobile fronthaul using Time-Sensitive Networking (TSN) can significantly reduce the total cost of ownership, with up to 50% lower capital costs, 90% turn-up time savings, 75% footprint reduction, and simplified spares and inventory management.

Time-Sensitive Networking

Standards have been developed that are crucial to the success of mobile fronthaul. The IEEE 802.1 Time-Sensitive Networking set of standards extends Ethernet to support time-sensitive traffic, with stringent bounds on loss, end-to-end delay (latency), and delay variation (jitter). These standards are intended to combine the deterministic performance and reliability of circuit-switched technologies with the speed and scale of Ethernet. In this blog, we dive into TSN as well as IEEE 802.1CM, the TSN profile for mobile fronthaul.

Four key components of TSN serve to support real-time communications:

1. Timing and synchronization

2. Bounded low latency

3. High availability and reliability

4. Resource management

Timing and Synchronization

The 5G RAN requires greater timing accuracy and precision than 4G. Another difference from 4G is that the remote radio heads (RRH) will more likely derive timing from the network, instead of from a GPS clock located at the cell site (a more expensive option). Hence the timing network needs to be planned properly, the number of hops between the clock and the radios minimized, and the time error introduced at each hop minimized. Among other considerations, the timing accuracy and latency tolerance required of the fronthaul and associated midhaul/backhaul networks will depend on the 5G RAN functional splits, which include CPRI and eCPRI architectures.

The IEEE 1588v2 Precision Time Protocol (PTP) is used to provide timing and synchronization to the 5G radio unit (RU). Ethernet switches in a TSN network act as telecom boundary clocks (T-BC), processing and passing on timing information, correcting errors, and synchronizing traffic accordingly.  TSN networks use the IEEE 802.1AS timing protocol, which is a subset of PTP with additions.
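As a concrete illustration of how a PTP slave recovers time from this exchange, the clock offset and mean path delay follow directly from the four standard timestamps. A minimal sketch, with hypothetical nanosecond values:

```python
# Sketch of the IEEE 1588 offset/delay calculation from the four PTP
# timestamps. t1: Sync sent by master; t2: Sync received by slave;
# t3: Delay_Req sent by slave; t4: Delay_Req received by master.
def ptp_offset_and_delay(t1, t2, t3, t4):
    offset = ((t2 - t1) - (t4 - t3)) / 2  # slave clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2   # mean one-way path delay
    return offset, delay

# Hypothetical timestamps, in nanoseconds:
offset, delay = ptp_offset_and_delay(1000, 1360, 2000, 2340)
# offset = 10 ns, delay = 350 ns
```

Note that the calculation assumes the path delay is symmetric in both directions; any asymmetry shows up directly as offset error, which is why minimizing hops and per-hop time error matters so much.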

Bounded Low Latency

An Ethernet-based mobile fronthaul network will likely transport multiple traffic types:

  • CPRI, which is encapsulated with Ethernet using IEEE 1914.3 Radio over Ethernet
  • eCPRI, which is already packet-based, from 5G RU to the DU/CU
  • Alarms, environmental monitoring, and operations data from the cell site to the NOC
  • Converged service offerings, such as business Ethernet services

Across these traffic types, TSN implements a variety of quality of service (QoS) mechanisms at the switch level to deliver the required zero congestion loss, deterministic latency, and minimal jitter:

  • Credit-Based Shaper (IEEE 802.1Qav): Smooths out packet transmissions, reducing bursting and bunching. A similar algorithm is used in Carrier Ethernet networks.
  • Frame Pre-emption (IEEE 802.1Qbu / IEEE 802.3br): Critical express frames can interrupt transmission of lower-priority frames. Pre-empted frames are not lost.
  • Time-Aware Shaper (TAS, IEEE 802.1Qbv): Implements fixed time slices with 8 traffic priorities, the highest reserved for time-critical control data (with a worst-case latency of 100 µs over 5 hops).
  • Cyclic Queuing and Forwarding (CQF, IEEE 802.1Qch): Uses double buffers to synchronize transmissions in a cyclic manner, resulting in bounded latency that is independent of the network topology.
  • Asynchronous Traffic Shaping (ATS, IEEE 802.1Qcr): Improves link utilization for mixed traffic types. The techniques above handle deterministic traffic very well but are less efficient for traffic with arbitrary profiles; ATS remedies this.
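To make the time-aware shaper concrete, its behavior can be sketched as a repeating gate control list: a cycle of slots, each opening the gates of a subset of the eight traffic-class queues. The slot durations and queue assignments below are illustrative only, not taken from any standard profile:

```python
# Sketch of an IEEE 802.1Qbv-style gate control list (illustrative values).
# Each entry: (slot duration in ns, set of traffic-class queues whose
# gates are open during that slot). Queue 7 is the time-critical class.
GATE_CONTROL_LIST = [
    (100_000, {7}),                      # 100 µs: time-critical traffic only
    (400_000, {0, 1, 2, 3, 4, 5, 6}),    # 400 µs: everything else
]
CYCLE_NS = sum(duration for duration, _ in GATE_CONTROL_LIST)

def open_queues(time_ns):
    """Return the queues allowed to transmit at a given time."""
    t = time_ns % CYCLE_NS  # position within the repeating cycle
    for duration, queues in GATE_CONTROL_LIST:
        if t < duration:
            return queues
        t -= duration
    return set()
```

Because every synchronized switch repeats the same cycle, a frame in the time-critical queue never contends with best-effort traffic, which is how the bounded worst-case latency is achieved.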

In addition to these standards, IEEE 802.1CM defines standard TSN profiles for fronthaul that enable the transport of fronthaul streams, specifically with regard to:

  • CPRI, eCPRI use cases, requirements, and synchronization, as well as the RBS (Radio Base Station) splits
  • Packet networking and synchronization characterizations for Bridging and TSN features
  • Leveraging the telecom profile of IEEE 1588v2

The result is two 802.1CM profiles that apply to both CPRI and eCPRI and meet the requirements of TSN for fronthaul:

  • Profile A: Simple and based on the strict priority of CPRI and eCPRI traffic
  • Profile B: Leverages frame pre-emption (IEEE 802.3br & 802.1Qbu) to maintain strict priority traffic with pre-emptible Ethernet traffic

High Availability and Reliability

A robust network must cope with power outages, switch failures, and fiber cuts. But Ethernet networks are bridged packet networks, not fault-tolerant SONET/SDH rings. Thus several network-level mechanisms have been built into TSN to ensure end-to-end availability and reliability:

  • Frame Replication and Elimination for Reliability (FRER, IEEE 802.1CB): Duplicate copies of each frame are transmitted over separate paths across the network; 1+1 or 1+N redundancy is possible. At the far end, the first copy of each frame is forwarded and the duplicates are discarded. Note that this does not rely on link failure detection and switchover, as in the case of SONET, but rather on duplicating packets.
  • Path Control and Reservation (PCR, IEEE 802.1Qca): Configures multiple paths through the network for frame replication. Multiple paths in Ethernet networks are usually avoided in order to prevent bridging loops.
  • Per-Stream Filtering and Policing (PSFP, IEEE 802.1Qci): Prevents traffic overloads that may stem from bandwidth violations, malfunctions, or malicious attacks such as Denial of Service (DoS).
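The elimination half of FRER can be sketched with a per-stream sequence-number check. This is a toy version: it keeps an unbounded set of seen sequence numbers, where the standard uses a bounded sequence-recovery window:

```python
# Sketch of FRER-style duplicate elimination (simplified).
def eliminate_duplicates(frames):
    """frames: (sequence_number, payload) tuples arriving interleaved
    from the replicated paths; yield each sequence number only once."""
    seen = set()
    for seq, payload in frames:
        if seq not in seen:  # first copy to arrive wins
            seen.add(seq)
            yield seq, payload

# Copies of frames 1-3 arrive over two paths; duplicates are dropped.
merged = list(eliminate_duplicates(
    [(1, "a"), (1, "a"), (2, "b"), (3, "c"), (2, "b")]))
# merged == [(1, "a"), (2, "b"), (3, "c")]
```

The key property is visible here: if one path loses a frame, the copy from the other path still arrives, with no failure detection or switchover delay.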

Operating at a network level, these protocols lend themselves to a software-defined networking (SDN) approach, with a centralized network controller controlling the TSN switches.

Resource Management

The concept of paths across the network is analogous to traditional connection-oriented circuits, and TSN enables centralized network management of paths and devices:

  • Stream Reservation Protocol (SRP, IEEE 802.1Qat): Provides end-to-end management of traffic streams, allocating the bandwidth resources required at each switch, calculating worst-case latency, and monitoring stream metrics.
  • SRP Enhancements (IEEE 802.1Qcc): Improves the Stream Reservation Protocol for administration of large TSN networks, with improvements to centralized reservation and scheduling, remote management, and reservation requests.
  • YANG Data Model (IEEE 802.1Qcp): Supports network management, device configuration, and status reporting for switches.
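The bandwidth bookkeeping behind a stream reservation can be sketched as a per-link admission check along the stream's path. The link names and capacities below are hypothetical:

```python
# Sketch of SRP-style admission control (simplified): a stream is admitted
# only if every link on its path has spare capacity; admitted streams
# then consume bandwidth on each hop.
def reserve(capacity_mbps, reserved_mbps, path, stream_mbps):
    if any(reserved_mbps.get(link, 0) + stream_mbps > capacity_mbps[link]
           for link in path):
        return False  # reject: some hop would be oversubscribed
    for link in path:
        reserved_mbps[link] = reserved_mbps.get(link, 0) + stream_mbps
    return True

capacity = {"sw1-sw2": 1000, "sw2-sw3": 1000}
reserved = {}
admitted = reserve(capacity, reserved, ["sw1-sw2", "sw2-sw3"], 600)  # True
rejected = reserve(capacity, reserved, ["sw1-sw2"], 500)             # False
```

Rejecting a stream at reservation time, rather than dropping its frames at congestion time, is what lets the network guarantee the bounds it has promised to already-admitted streams.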

Putting It All Together: Introducing the HFR flexiHaul™ M6424

The M6424 is an optimized TSN switch for fronthaul that can be deployed at cell sites, hub sites, and central offices to aggregate 4G CPRI, 5G eCPRI, and Ethernet traffic onto the transport network. This single aggregation switch offers relief from fiber exhaustion, with no WDM optics required, simplifying deployment and reducing costs.

The M6424 supports the IEEE 802.1CM profiles for fronthaul with frame pre-emption, and is packed inside a 1RU hardened chassis.

For more information about the Fujitsu Smart xHaul Solution, visit this web page or call your Fujitsu Network Communications Sales Manager today.

Virtualized Routers for 5G Transport: Webinar Replay Now Available

If you want to learn more about how virtual routers, or vRouters, will be used to meet the demands of 5G transport and other next-gen services, check out the on-demand recording of the webinar, “Virtual Routers for Flexible, Future-Proof 5G Transport.” Click here to listen to the free recorded session, and you’ll also be able to download the presentations, a special market report by IHS Markit Executive Director Heidi Adams, and a number of other resources on this topic.

The 60-minute webinar, co-sponsored by Fujitsu, first aired on December 10, 2019, and was hosted by IHS Markit, the London-based data and information services firm. Allen Tatara, Senior Manager at IHS Markit, served as moderator. A Q&A session followed the presentations.

Presenters included: Joseph Mocerino, Principal Solutions Architect, Optical Networking, Fujitsu Network Communications; Heidi Adams, Executive Director, Network Infrastructure Research, IHS Markit; and Hugh Kelly, Vice President of Marketing, Volta Networks.

The global audience included network operators, service providers, equipment manufacturers, and enterprise end users. The presentations and Q&A covered a number of topics, including:

  • The market trends driving IP network evolution;
  • An introduction to virtualized routing architectures and virtual routers;
  • The strategies for supporting the delivery of network slices for 5G services;
  • The challenges facing this network evolution; and
  • Several use cases and deployment examples.

5G will bring the promise of ultra-broadband speeds, ultra-reliable low-latency services, and the ability to massively scale communications for a wide range of devices and next-gen applications like Internet of Things (IoT), Smart Cities, telemedicine, and connected cars. But these services will also place new demands on the underlying IP transport infrastructure, impacting how we will design our networks in the future.

In particular, the way routing is delivered into the network must evolve. In response, new solutions, such as cloud-native virtualized routers, are emerging to enable higher-capacity, more flexible, and less costly IP networks. In fact, a recent survey by IHS Markit revealed that 95% of service providers had plans to virtualize at least one of their network functions or applications. All these topics and more are covered in the webinar, so don’t miss this opportunity to learn how cloud-native virtualized routers will play a leading role in meeting 5G transport requirements. Click here to listen to the free recorded session and download additional market insights on these emerging topics.

Demystifying 400ZR and ZR+ Coherent Optical Technology: Webinar Replay Now Available

If you missed the Fujitsu co-sponsored webinar, “400ZR and ZR+: Enabling Next-Generation Data Center Connectivity” or want to listen again, an on-demand recording is now available. Click here to listen to the recorded session. You’ll also be able to download the presentations, a special report from IHS Markit, and the application note, “400G ZR – Enabling Data Center Evolution” by Rehan Zaki, Principal Architect, DCI Strategy and Planning from Fujitsu Network Communications.

The original 60-minute webinar aired on November 12, 2019, and examined the coherent optical solutions enabled by the new and developing standards: 400ZR and ZR+. The session was hosted by IHS Markit, the London-based data and information services firm. Allen Tatara, Senior Manager at IHS Markit, served as moderator. A Q&A session followed the presentations.

Attendees came from around the globe and across industry sectors including data center operators, service providers, cable network operators, mobile network operators, transceiver and transponder vendors, coherent optics companies, and financial analysts.

Presenters included: Rehan Zaki, Principal Architect, DCI Strategy and Planning, Fujitsu Network Communications; Joerg Pfeifle, Solution Manager for Coherent Test, Keysight Technologies; Scott Swail, Vice President, Business Development, Lumentum; and Timothy Munks, Principal Research Analyst, Optical Networking Technology at IHS Markit.

The presentations and Q&A covered a number of topics, including:

  • Insights from these industry leaders on standardized and interoperable 400G pluggable coherent optics and strategies for testing these new transceivers;
  • The value proposition and use cases for 400G ZR and 400G Open ROADM transponders;
  • Applications for pluggable and interoperable transponders from data center interconnect (DCI) to long haul; and
  • The unique challenges of optimizing equipment and processes for this new set of coherent modems during their manufacturing and deployment at data centers.

Data center bandwidth is growing 20 to 40 percent year over year, according to IHS Markit estimates, driven by applications like video streaming, industrial IoT, 5G backhaul, and cloud services. And as the demand for data center capacity grows, so will the number of data center interconnect (DCI) networks.

To accommodate future network requirements, 400G+ technologies are forecast to support about 50 percent of deployed bandwidth by 2022, enabled by the 400ZR and ZR+ standards. These new standards, developed by the Optical Internetworking Forum (OIF), define the implementation of coherent technology. They promise to substantially increase bandwidth capacity between data centers, while reducing footprint, power consumption, and the cost per bit for coherent transport. If you want to learn more about these new technologies and how they will impact your operations, don’t miss this opportunity to get up to speed with the latest market insights into 400G ZR compliant optics. Click here to listen to the on-demand session.

Expanding the Scope of Data Center Network Automation

In the data center world, downward pressure on operations costs, coupled with the demands of managing large numbers of devices, has produced an approach to configuration management that prioritizes efficiency, simplicity, and automation. Essentially, data center operators must make it easy not just to configure large numbers of new devices, but also to monitor and manage clean device configuration data over time, particularly with reference to change control.

Data center operations staff are increasingly using open-source network automation tools to manage configuration data stored in a network-wide master database that models the entire network and is separate from the data specific to individual devices. Configuration data is periodically refreshed by re-applying the data in the master database to the devices, ensuring their configurations match the records in the master database.
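A toy sketch of that refresh loop (the device names and configuration records are hypothetical): compare each device's live configuration with the master database and flag any drift for re-application:

```python
# Sketch: detect configuration drift against a master database.
def find_drift(master_db, live_configs):
    """Return the devices whose live config differs from the master record."""
    return sorted(name for name, master_cfg in master_db.items()
                  if live_configs.get(name) != master_cfg)

master = {"leaf1": {"mtu": 9000}, "leaf2": {"mtu": 9000}}
live = {"leaf1": {"mtu": 9000}, "leaf2": {"mtu": 1500}}
drifted = find_drift(master, live)
# drifted == ["leaf2"]  -> re-apply the master config to leaf2
```

In practice the comparison and re-application steps run inside an automation framework such as Ansible, but the master-database-as-source-of-truth pattern is the same.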

Many data center operators are working towards automating all operations, not just configuration management. While some develop their own platforms, most utilize Ansible as their automation framework. Developed in Python, the Ansible automation platform allows a data center operator to automate the configuration, upgrade and orchestration of their servers, databases, and networking devices.

Network vendors providing solutions to data centers need to provide comprehensive support for the open source platforms and technologies used by data center operators to automate routine procedures such as configuration monitoring or data refresh. As discussed in our technology brief about automating data center operations, Ansible, RANCID and Oxidize are enabling development of increasingly sophisticated automation tools.

Tools for applications such as automated analytics and telemetry are expected to become commonplace in the near future as data centers expand the range and sophistication of their automation capabilities. The Fujitsu FSS2 system software platform, which is built into the 1FINITY family of optical network products, incorporates modules and plug-ins that support Ansible. These plug-ins are a set of Python scripts that data center customers can install into the Ansible instance running on their servers. We expect to continue building on this set of plug-ins, as well as to collaborate with customers through our GitHub website to develop more advanced tools that go beyond configuration management. A set of Ansible tools is in the late stages of development and will be available on this GitHub server in the coming weeks.

Evolving your FLASHWAVE 9500 Network with the Fujitsu 1FINITY™ Platform

Today’s telecom service providers are confronting powerful market drivers that are challenging their network operations and business models. First, a plethora of new technology options are now available to upgrade legacy networks, but this creates interoperability issues between existing and new hardware and software. Second, constant growth in customer demand for bandwidth is challenging service providers to create networks that can economically keep pace. Lastly, continuous innovation in the services and applications that customers require adds another layer of complexity. Designing an agile network that can quickly accommodate new end-user services is critical.

The good news is that service providers with an existing Fujitsu FLASHWAVE® 9500 network are in a much better position to evolve their network than those relying on other legacy infrastructure.

To facilitate this evolution, Fujitsu has developed the 1FINITY™ platform to take service providers and their networks into the future. 1FINITY is an open, modular blade-based portfolio of products that provides flexibility, scalability and programmability to customers in a pay-as-you-grow business model.

A key product in the 1FINITY family is the S100, a 1.2 Tbps Ethernet switch capable of supporting 1GbE, 10GbE or 100GbE interfaces. There are three different scenarios that illustrate how a FW9500 network can leverage the S100 to effectively grow and meet future demands.

SONET Configuration

Consider a network where each FW9500 is configured as a SONET multiservice provisioning platform (MSPP). Most service providers have found that as their network grows, it is more economical to convert the protocol from SONET to Ethernet as soon as possible and add support for Ethernet switching functions. With the FW9500, you can add an Ethernet over Anything (EoX) gateway that performs this protocol conversion. Then, you can augment the FW9500 with a 1FINITY S100 to provide Ethernet switching to a 10GbE or 100GbE interface. The FW9500 can also provide DWDM capabilities if that function is needed.

Packet-Optical Configuration

Consider a FW9500 network deployed as a 10GE packet-optical transport platform (POTP) for wireless backhaul or business services. In this scenario, the FW9500 is a converged platform equipped with a switch fabric that provides Ethernet switching and DWDM functionality. As demand grows, the need for 100G outstrips the FW9500 10G capabilities. However, by augmenting the FW9500 with the S100, you can provide 10GbE to 100GbE aggregation and a direct connection to the FW9500 ROADM by using the 100GbE DWDM narrow-band module. This enables the FW9500 100G lambda capability to carry these aggregated services on a DWDM network while increasing the capacity of the existing ROADM network tenfold. Additionally, you free up card slots on the FW9500 because it becomes a ROADM-only system.

In an earlier application note, “Mobile Backhaul 100GbE Migration,” Fujitsu demonstrated how service providers can achieve savings of up to 44% in capital expenditure over the present mode of operation using this augmented network approach.

DWDM Configuration

Consider a FW9500 network that is deployed as a 10G ROADM network. In this scenario, the FW9500 simply provides DWDM functionality. As demand grows, the need for 100G and Ethernet switching services outstrips the FW9500 10G ROADM capabilities. By adding the S100 with a 100GbE DWDM narrow-band module, the network is transformed into a POTP network that provides carrier Ethernet services, such as E-Line and E-LAN, as well as 100G lambda services over the FW9500 ROADM.

Managing Your Network with the Fujitsu Virtuora Network Control (NC) Solution

In all of these scenarios, both the FLASHWAVE 9500 and 1FINITY S100 platforms can be managed by the Fujitsu Virtuora Network Control (NC) solution, a range of software products that enable you to build and grow a virtualized, programmable network. The Virtuora NC solution encompasses control, planning and design, operations and management, and service fulfillment and assurance functions.

Summary

These three scenarios clearly show that starting your network evolution with Fujitsu’s FW9500 network has significant advantages. A FW9500 network enables you to meet the demand for interoperability, scalability and service innovation when upgrading your network. Augmenting an existing FW9500 network with the 1FINITY S100 allows you to leverage your infrastructure investment, grow your network economically and pave the way to 100G traffic based on packet switching technology.

These Four Tenets are the Secrets of Hyperscale Optical Transport

The ever-expanding demands of data center interconnect were never going to be easy to address. Data center operators facing constant pressure for better cost metrics in terms of bandwidth and rack space density know that when the chips are down, it’s all about economics of scale—or more accurately, scalability.

With the new 1FINITY T600 optical transport blade, the quest to deliver the maximum amount of traffic and the highest performance at the minimum possible cost is suddenly much more reasonable and achievable. In addition to being the first compact modular blade to offer ultra-high speed transmission up to 600G, the T600 delivers the highest spectral efficiency in the industry: up to 76.8 Tbps per single fiber, enabling maximum performance and capacity for both data center interconnect (DCI) and 5G applications.
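As a quick back-of-envelope check of that headline figure (our arithmetic, not a published channel plan), 76.8 Tbps per fiber at 600G per wavelength implies 128 wavelengths, which is consistent with transmitting across both the C- and L-bands:

```python
# 76.8 Tbps per fiber divided by 600 Gbps per wavelength:
wavelengths = 76.8e12 / 600e9
# wavelengths == 128.0, i.e. roughly 64 channels in each of the C- and L-bands
```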

The T600’s value for data center operators can be broken down into four tenets that were uppermost in our minds as we designed the platform. These four tenets represent the cornerstones of hyperscale optical transport for next-generation DCI as well as 5G:

  • Flexibility – Designed to support all DCI applications, the T600 offers a wide range of configuration options and is engineered to scale progressively while controlling cost per bit per km.
  • Capacity – To enable extreme optical transport use cases, the T600 supports 600G transmission with both C- and L-band spectrum on the line side, as well as providing client ports that are upgradeable to 400 GbE, further boosting capacity; the blade will soon offer 6 × 400 GbE client ports as an option in place of the existing 24 × 100 GbE ports.
  • Automation – Starting with the feature-rich system software on the blade, Fujitsu has embraced the open-source model and laid the foundations for automation that simplifies operations and enhances adoption of network-level automation.
  • Security – From management to control to data plane, the T600 incorporates security measures to protect critical data from intrusion, including Layer 1 encryption and compliance with Federal Information Processing Standard (FIPS) 140-2 as well as built-in physical design defenses.

Hyperscale optical transport will require extreme but flexible fiber capacity and reach capabilities that can be scaled for various DCI applications. Fujitsu addresses these needs with the 1FINITY T600 Transport blade, enabling data centers and cloud providers to equip their networks for the demands of the hyperconnected digital economy.

Find out more about the four tenets of hyperscale optical transport on the 1FINITY T600 blade—watch our video intro and check out the hyperscale transport technology brief.

5G Transport: The Impact of Millimeter Wave and Sub-6 Radios

Part two in a blog series about how Fujitsu is bringing the 5G vision to life

As communications service providers (CSPs) prepare to deploy 5G, a number of factors will need to be considered as they plan their radio access network (RAN) architecture. An important aspect of this planning is an understanding of the 5G radio interface (NR) specifications and spectrum options.

Both millimeter wave (mmWave) and sub-6 GHz radio architectures have a fronthaul, midhaul and backhaul in terms of transport. However, the differences in the coverage aspects of these two radio types will define the network topology.

The high frequencies of mmWave radios reduce the coverage of a given area, requiring a denser deployment beyond traditional cell towers. The mmWave radios will be deployed in a small-cell configuration, since a large number are required to cover a given area. In urban areas, dense mmWave deployments will most likely be on street lamps and on the sides or tops of buildings. Sub-6 radios, however, enable coverage configurations similar to 4G LTE radios. Therefore, sub-6 radio topology could be similar to a C-RAN LTE fronthaul, in which dark fiber is used where available and some form of multiplexing, such as WDM or packet multiplexing, is used where fiber is lacking.

Initially, the mmWave radios will be best-suited for high throughput applications such as fixed wireless access (FWA), while sub-6 radios will be best used for mobility.  In the long term, both radio types will be used for both use cases.

Since sub-6 radio coverage dynamics are similar to LTE, many CSPs will consider deploying sub-6 much like 4G LTE in a C-RAN to realize DU pooling efficiencies and offer higher performance using cell site aggregation.

An alternative to a centralized pool of DUs, for either mmWave or sub-6 radio, is an integrated DU and RU, which eliminates the fronthaul transport and the discrete fiber connections between the two. This alternative expedites service delivery while reducing capital and operational expense, but it also eliminates pooling and cell site aggregation capabilities. Cell sites with integrated DUs will have midhaul, or what the IEEE refers to as fronthaul-II, in this section of the RAN transport.

Based on the various deployment options for mmWave and sub-6 radios, either WDM-based transport or newer packet-based transport using Time-Sensitive Networking (TSN) will be used to carry 5G eCPRI/xRAN channels, as well as legacy 4G CPRI channels, from the cell site to a central aggregation point when an abundance of dedicated dark fiber is not available.

This blog is the second in a series about our vision for 5G transport. See part one here.

Open and Automated: The New Optical Network

Communication service providers (CSPs) are increasingly transforming their networks with an eye towards more openness and automation. There has been a continued push to disaggregate optical networking platforms in order to drive down total cost of ownership and provide network operators with the flexibility to upgrade their networks while keeping up with the accelerated pace of innovation across different layers of the network framework stack. The promise of vendor interoperability and automated control through open standards, APIs and reference platforms is the key driver enabling CSPs to make the shift to open.

There are varying degrees of openness that one can choose to adopt in this transition – from the proprietary systems of today to a fully disaggregated open optical network. The sweet spot toward which the industry seems to be converging is partial disaggregation, as in the open line system (OLS) model. OLS provides a good trade-off between interoperability and performance; however, we still have a long way to go to make these systems future-proof and deployable. Multiple industry organizations such as the Open ROADM MSA, OpenConfig, Telecom Infra Project (TIP) and Open Disaggregated Transport Network (ODTN) are working towards bringing this vision of open networking to reality. Though there are multiple initiatives addressing disaggregation in optical transport, we believe there is a strong need for harmonization among them so that the industry can truly benefit from standardization of common models and APIs.

As optical equipment vendors aggressively evolve their offerings to help enable this open optical transformation, care must be taken to address the key business and technical requirements which are unique to each network operator, depending on the state of their current network infrastructure. There is no one single solution that can be applied across the board, bringing both challenges and opportunities to vendors who have embraced open and disaggregated architectures. The migration to open networking requires the operator to reevaluate the manner in which networks are architected, deployed and operated. Enabling this shift presents multiple challenges (such as network planning and design and multi-vendor control) when it comes to the implementation and operationalization of the various building blocks. Effectively addressing them will be key to this transformation.

Fujitsu believes a collaborative process with CSPs that involves a thorough assessment of the network architecture and OSS/IT workflows, along with establishing a phased deployment plan for implementation of hardware and software solutions, will be instrumental in navigating this transition seamlessly. The enclosed white paper provides an overview of the open optical ecosystem today, identifies and describes some of the key challenges to be addressed in implementing open automated networks, and outlines some migration strategies available to network operators embracing open networking.

Time, Technology and Terabit Transport

If you measure time against technological progress, six years is a long time in optical networking. In 2012, we were congratulating ourselves for getting to 100G transport. Now we’ve officially reached 600G, as Fujitsu recently demonstrated on our new 1FINITY T600 blade, the latest in the 1FINITY transport series. Optical transport products are now available that can modulate photons to create signals with 600,000,000,000 bits of information packed into every second, and send those signals at close to the speed of light, traversing the globe almost instantaneously. To put this colossal capability into perspective, a 2 TB digital library could be transmitted virtually anywhere on the globe in seven seconds with a single T600.

It’s easy to disregard or minimize yet another technology advancement. But the implications of 600G and beyond are more significant and positive than simply an increased amount of Internet junk. For example, healthcare could become extremely collaborative across continents by combining real-time data collection, data analytics, and massive near-real-time data transfers. Universally available high-speed connections to a smartphone support the kind of data gathering and analysis needed to understand our world better and develop remedies for the many serious problems we face.

Access to information is the chief means of empowerment in both personal and business life. Consequently, it is important to deploy this 600G technology rather than, for example, hold to the false economy of continued deployments at slower rates. Being able to transmit entire libraries in seconds is an awesome power that opens up rich possibilities. The network occupies a critical role as the foundation of the connected digital economy that, one way or another, is making stakeholders out of every one of us. So, one might say our industry has an economic and moral imperative to drive the highest possible speeds and capacities as deep into communities as possible. High-speed connectivity fosters opportunity, learning and commerce. In the final analysis, more really IS better when it comes to the network.

5G Transport: From Vision to Reality

Part one in a blog series about how Fujitsu is bringing the 5G vision to life

On the road to 5G, there are a number of different paths that communications service providers (CSPs) can choose. This blog is the first in a series about our vision for the 5G RAN, and how Fujitsu is working with leading CSPs to co-create these networks and bring 5G to life.

Transport is vital for building a robust and reliable network. The xHaul ecosystem consists of the backhaul, midhaul and fronthaul transport segments. Dedicated dark fiber, WDM and packet technologies are used within these transport segments. As CSPs evolve their networks from 4G/LTE to 5G, there are several options for how those transport networks can be designed.

In a “Split Architecture,” the distribution unit (DU) connects to many macro site radio units (RUs) over multiple fronthaul fiber paths. This is similar to the 4G centralized RAN (C-RAN) architecture, where there is a central point (the DU in this case) fanning out to multiple macro sites for interconnect with the 5G radios, also known as RUs or Transmission Reception Points (TRPs). This efficient technique is referred to as RAN pooling, and along with cell site aggregation, it offers mobile network operators the ability to engineer RAN capacity based on clusters of sites coming into the central DUs, instead of individual cell site demands.

The “Distributed DU” architecture involves DUs collocated with RUs at the cell site.  The distributed DU use case offers a latency sensitive architecture by eliminating the fronthaul transport path.  The fronthaul becomes a local connection between the top and bottom of the tower via fiber cable.  This is a low latency configuration, which also reduces costs by eliminating the fronthaul transport section.  The tradeoff is a loss of multi-site pooling and cell site aggregation with macro cell sites. Moreover, the midhaul capacity is reduced to 10GE rates.

Finally, there is the “Integrated DU” architecture, which integrates the DU into the RU at the cell site. This architecture offers benefits similar to those of the Distributed DU use case, but with the additional advantage of lower CapEx and OpEx from combining these devices. The combined DU and RU reduces the number of devices to install, manage and maintain, resulting in expedited service turn-up and faster time to revenue.

To learn more, register for an archived webinar “New Transport Network Architectures for 5G RAN” with Fujitsu and Heavy Reading analyst Gabriel Brown: www.lightreading.com/webinar.asp?webinar_id=1227