C-RAN Mobile Architecture Migration: Fujitsu’s Smart xHaul is an Efficient Solution

To adapt mobile network architectures and address increasing bandwidth demands, service providers are deploying C-RAN architectures to improve performance and reduce costs.

The shift to C-RAN architectures has driven increased deployment of optical mobile fronthaul solutions that deliver low-latency, high-bandwidth connectivity between remote radio heads (RRHs) and baseband unit electronics. Service providers recognize the need to reduce their mobile networking costs by better aligning the total electronics capacity of their networks with total network utilization at any given time. By consolidating the electronics into a centralized pool that multiple radios or remote radio heads can share, they can drive down capital costs and eliminate underutilized capacity. Centralized baseband units also enable easier handoffs and dynamic RF decisions based on input from a combined set of radios.

As service providers deploy C-RAN architectures, they face significant challenges and decisions, chief among them the selection of a mobile fronthaul solution. The CPRI protocol is extremely latency sensitive, which results in a latency link budget that limits the distance between RRHs and baseband units to less than 20 km. The mobile fronthaul transmission equipment must minimize its latency contribution, or this distance becomes even shorter. CPRI signaling is also highly inefficient, consuming as much as 16 times the transmission bandwidth of the actual data rate seen by mobile applications.
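
As a rough illustration of that inefficiency, consider the commonly cited pairing of a 20 MHz LTE carrier with 2x2 MIMO on a CPRI option 3 link. The figures below are approximate, illustrative values rather than numbers from this analysis:

```python
# Illustrative CPRI bandwidth-expansion arithmetic (approximate example
# values, not figures from this analysis).
cpri_option3_rate_mbps = 2457.6  # CPRI option 3 line rate
peak_user_rate_mbps = 150.0      # rough peak throughput of a 20 MHz 2x2 LTE carrier

expansion = cpri_option3_rate_mbps / peak_user_rate_mbps
print(f"CPRI carries ~{expansion:.1f}x the application data rate")  # ~16.4x
```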

To determine which solutions best target these requirements, ACG Research analyzed the total cost of ownership and compared the economics of P2P dedicated dark fiber with those of active DWDM solutions like Fujitsu’s Smart xHaul. We analyzed the operational expense of the Smart xHaul solution over five years and compared it to competing mobile fronthaul alternatives. The analyses focused on the deployment of 150 macro cell sites, each supporting three frequency bands and three sectors. We also considered deployment of five small cells per macro cell site for a total of 750 small cell deployments.

The results demonstrate that although the capital expense of deploying a DWDM solution such as Smart xHaul is multiple times greater than the capex of P2P dark fiber, the reduction in fibers due to signal multiplexing, combined with advanced service assurance capabilities, delivers 66% lower opex and 30% TCO savings. When looking at competing DWDM solutions, we also find that the advanced functions of the Smart xHaul solution deliver 60% lower opex associated with detecting field issues, identifying their root cause, and resolving them.
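
To see how a higher up-front spend can still win on total cost of ownership, here is a deliberately simplified five-year model with made-up, normalized inputs (not the ACG study's actual cost data) that applies the 66% opex reduction noted above:

```python
# Simplified five-year TCO model with made-up, normalized inputs
# (not the ACG study's actual cost data).
years = 5
dark_fiber_capex, dark_fiber_opex_per_year = 1.0, 1.0       # normalized baseline
dwdm_capex = 3.0                                            # "multiple times greater" capex
dwdm_opex_per_year = dark_fiber_opex_per_year * (1 - 0.66)  # 66% lower opex

tco_dark_fiber = dark_fiber_capex + years * dark_fiber_opex_per_year
tco_dwdm = dwdm_capex + years * dwdm_opex_per_year
savings = 1 - tco_dwdm / tco_dark_fiber
print(f"TCO savings: {savings:.0%}")  # ~22% with these made-up inputs
```

With these toy numbers the savings land near 22%; the study's 30% figure reflects its actual cost inputs, but the mechanism, opex dominating over a five-year horizon, is the same.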

In addition, industry-leading features in the Smart xHaul solution provide the ability to distinguish between optical transport and radio service impairments, which are identified by inspecting the actual CPRI packet frames. When combined with the other performance monitoring and service assurance capabilities, CPRI frame inspection results in rapid issue identification, assignment and resolution.

Click to download the paper and read how, in contrast with a dedicated dark fiber solution, the Smart xHaul solution is flexible and supports multiple network architectures.

Click for the HotSeat video of Tim Doiron, ACG Research analyst, and Joe Mocerino, Fujitsu principal solutions architect, discussing the Smart xHaul solution and C-RAN mobile architecture migration.

The Surprising Benefits of Uncoupling Aggregation from Transponding

Data Center Interconnect (DCI) traffic comprises varying combinations of 10G and 100G services. In a typical application, DWDM is used to maximize the quantity of traffic that can be carried on a single fiber.

Virtually all available products for this function combine aggregation and transponding into a single platform; they aggregate multiple 10G services into a single 100G and then transpond that 100G onto a lambda for multiplexing alongside other lambdas onto a single fiber. Decoupling aggregation and transponding into two different platforms is a new approach. At Fujitsu, this approach consists of a 10GbE to 100G Layer 1 aggregation device—the 1FINITY T400—and a separate 100GbE to 200G transponder—the 1FINITY T100—that serve the two halves of the formerly combined aggregation-transponding function. This decoupled configuration is unique to these 1FINITY platforms, and it offers unique advantages.

Paradoxically, at first glance, this type of “two-box” solution may seem less desirable. But there are several advantages to decoupling aggregation from transponding—particularly in DCI applications. Here’s a quick rundown of the benefits. As you’ll see, they’re similar to the overall benefits of the new disaggregated, blade-centric approach to data center interconnect architecture.

Efficient use of rack space: Physical separation of aggregation and transponding splits a single larger unit into two smaller ones: a dedicated transponder and a dedicated aggregator. As a result, the overall capacity of existing racks increases. As an added benefit, it is easier to find space for individual units and fill scattered empty 1RU slots, which helps make the fullest possible use of costly physical facilities.

Reducing “stranded” bandwidth: Many suppliers are using QSFP+ transponders, which offer programmable 40G or 100G. Bandwidth can be wasted when aggregating 10G services because 40 is not a factor of 100, which necessitates deployment in multiples of 200G to make the numbers work out; this frequently results in “over-buying” significant unneeded capacity. The 1FINITY T400 aggregator deploys in chunks of 100G, which keeps stranded bandwidth to a minimum by reducing the over-buy factor.
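
The "multiples of 200G" figure falls out of simple least-common-multiple arithmetic; a minimal sketch:

```python
from math import lcm

# The smallest deployable increment that fills both the aggregation chunks
# and the 100G line-side wavelengths exactly is their least common multiple.
line_unit_g = 100

print(lcm(40, line_unit_g))   # 200: 40G chunks force deployment in 200G multiples
print(lcm(100, line_unit_g))  # 100: 100G chunks (as on the T400) avoid the over-buy
```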

Simplified operations: Operational simplification occurs for two reasons. First, when upgrading the transponder, you simply swap it out without affecting the aggregator. Second, with aggregation decoupled from transponding, changes such as upgrading the transponder or adjusting the mix of 10G/100G clients involve disconnecting and reconnecting fewer fibers and require fewer re-provisioning commands. Line-side rate changes to the mix of 10G and 100G services involve roughly 60% of the operational activities of competing platforms; client-side rate changes involve 25% fewer operational activities. Fewer activities mean fewer mistakes, less time per operation, and therefore less cost. Savings in this area mainly affect the expensive line side, which creates a larger cost reduction.

Overall, by separating the aggregator and transponder, Fujitsu can offer data centers significant savings through better use of resources as well as simplification of operations and provisioning. Find out more by visiting the Fujitsu 1FINITY platform Web page.

Four Key Ingredients Solve Network Business Challenges

Network operators face seemingly conflicting challenges. They must maximize network assets, reduce costs, and introduce new revenue-generating services—all while maintaining existing legacy services. This may seem like an impossible combination to achieve, but just four key capabilities provide the right ingredients to reconcile apparently conflicting needs and profitably address these big business challenges:

  • Transport legacy services in groups. Individual legacy service instances are often transported separately, which makes inefficient use of network and fiber resources. It is more efficient to combine multiple instances into batches that can be transported together at higher bit rates.
  • Combine multiple services onto a single fiber. Fiber resources are expensive and constrained. Freeing up fiber capacity or reducing the number of leased fibers needed to sustain growing networks by transporting additional services over a single fiber pair saves on fiber resource costs.
  • Efficiently pack 100G wavelengths. Many 100G wavelengths are inefficiently utilized, cumulatively wasting a large amount of capacity. If more services can be transported over existing 100G wavelengths, the network is more efficient and additional costs can be avoided.
  • Provide transparent wholesale services. Services that support a range of SLA choices by allowing demarcation and providing visibility into traffic, management, and alarms are attractive to customers and a valuable source of revenue.

You may be surprised to find out that an often-overlooked technology, Optical Transport Network (OTN), provides all four of these capabilities. OTN is a standard (ITU-T G.709) digital wrapper technology that allows multiple services of various types to be packaged and transported together at higher rates. This universal package is ideal for transporting legacy services, which makes better use of network resources while simultaneously benefiting from modern technologies and rates. OTN also inherently allows an end customer access to network management and performance data. Finally, as networks move to 100G transport, OTN provides an easy means of filling partially utilized 100G wavelengths by transparently delivering a combination of services. Overall, OTN is a highly viable option that deserves serious consideration for network modernization. On grounds of both efficiency and ongoing revenue opportunities, OTN carries excellent potential for long-term ROI.
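
For example, the G.709 tributary-slot arithmetic behind filling a 100G wavelength can be sketched as follows (slot counts per the standard; the sketch ignores overhead details):

```python
# Tributary-slot arithmetic from ITU-T G.709 (simplified; overhead ignored).
# An ODU4 (~100G) offers 80 tributary slots of 1.25G each; an ODU2, which
# carries a 10G client, occupies 8 slots; an ODU0 (~1.25G) occupies 1.
ODU4_TRIB_SLOTS = 80
SLOTS_PER_ODU2 = 8

print(ODU4_TRIB_SLOTS // SLOTS_PER_ODU2)  # 10 -> ten 10G clients fill one 100G wavelength
```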

A Better Radio Access Network Delivers Performance and Savings That Can’t Be Ignored

The tried and true distributed radio access network (RAN) is the standard in mobile architectures. Significant improvements in performance—and reductions in capex and opex—would be required for service providers to consider making substantial changes.

But these are no ordinary times. The exploding popularity of digital video and social networking is driving wireless traffic relentlessly higher. In fact, a recent Cisco VNI study shows that worldwide mobile data traffic is growing at a 57% compound annual rate over the six-year period beginning in 2014.

What began as 2.5 exabytes per month two years ago will reach 24.3 exabytes per month before you know it.
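
Those two figures are consistent with simple compound growth; a quick check, assuming the 57% rate is applied over five annual steps from the 2014 baseline:

```python
# Compound-growth check of the cited VNI figures (2.5 EB/month in 2014,
# 57% CAGR); small rounding differences from the cited 24.3 are expected.
baseline_eb = 2.5
cagr = 0.57
years = 5  # five annual growth steps across the six-year period

projected = baseline_eb * (1 + cagr) ** years
print(f"{projected:.1f} EB/month")  # ~23.8 EB/month
```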

Given this explosion in wireless traffic, C-RAN, the centralized radio access network, provides just the bonuses that make network upgrades a wise investment.

Evolving to a C-RAN architecture makes dollars and sense:

  • RAN performance can increase up to 30% through gains in spectral efficiency, cell site aggregation, and scalability.
  • Capex can be reduced up to 30% through savings in site acquisition, construction costs, and equipment efficiency.
  • Opex can be reduced up to 50% through savings in rent, power consumption, capacity management, and operation and maintenance.

“Mobile operators are increasingly seeking to deploy Cloud RAN architectures for efficiency and performance reasons,” said Gabriel Brown, senior analyst, Heavy Reading. “To disaggregate the radio access network into centralized baseband and distributed RF components requires a fronthaul solution that can meet stringent reliability, scalability, and opex targets.”

A new C-RAN solution from Fujitsu includes a smart WDM system with integrated diagnostics, remote visibility, self-healing functionality, and ultralow latency. The result is fast installation, high service availability, and a dense, scalable architecture that adapts easily to growing demand.

Learn more here.

A Unified Network Combining Ethernet and DWDM


Carrier Ethernet is a very successful solution for providing services in a metropolitan area. This technology provides a variety of capabilities including multiple classes of service; fast restoration; standardized services such as E-Line and E-LAN; and bandwidth guarantees. As demand grows in a metro Ethernet network, it becomes necessary to accommodate capacity beyond 10G access rings. DWDM is an economical technology for scaling networks beyond 10G. But an effective solution, ideally a unified network incorporating these two technologies, requires that all the components play well together.

The most common approach is deploying a DWDM overlay on top of the Carrier Ethernet network. This architecture is a solid choice, but it carries the disadvantage of requiring two separate network management systems that don’t talk to each other, imposing high operational and administrative overhead and increasing cost and complexity.

The Fujitsu NETSMART® 1200 Management System offers an attractive alternative. In combination with FLASHWAVE 5300 and FLASHWAVE 7120 platforms, NETSMART 1200 can integrate DWDM capabilities into the existing Carrier Ethernet network—eliminating the problem of dual management systems, while providing service management, end-to-end provisioning, and open interfaces. Each core network element has both core Ethernet switching and DWDM modules—an elegant, comprehensive, and unified solution.

SFP+ Delivers Precision Bandwidth Upgrades


Perhaps the most onerous issue facing Ethernet network operators is that of upgrading to higher-bandwidth services.

Typically, a network interface device (NID) is deployed at a new customer site on a ring that is shared among several customers. At this point, there is a decision to be made: should the NID be put in a 1 GbE ring or a 10 GbE ring?

Usually, traffic at the time of deployment warrants only a 1 GbE ring, but based on historical market trends, the aggregate bandwidth requirements of this ring will almost certainly increase to warrant a 10 GbE ring in the future. Thus, in this type of deployment, you have to decide up-front whether to invest in a 10 GbE ring initially without knowing when additional bandwidth will be needed. Alternatively, might it be more appropriate to go with a 1 GbE ring now and change to a 10 GbE ring later? Changing to a 10 GbE ring typically requires changing the NID, an expensive and troublesome activity, but this choice at least has the advantage of deferring the cost until the bandwidth is needed.

Now there’s a new approach to solving this dilemma. Small Form-Factor Pluggable (SFP) transceivers are widely adopted, small footprint, hot-pluggable modules available in a variety of capacity and reach options, including 1 GbE. Now, enhanced Small Form-Factor Pluggable (SFP+) modules advance the original SFP technology, offering an elegant solution to the bandwidth growth issue: 10 GbE performance is available in SFP+ devices that are physically compatible with SFP cages. In essence you get all the convenience of SFPs, but with ten times the bandwidth.

This new capability, available in the Fujitsu FLASHWAVE® 5300 family of Carrier Ethernet devices, provides an exciting and economical solution to common bandwidth growth problems. A NID can be deployed with 1 GbE client ports and 1 GbE network ports using SFPs. Then, when traffic approaches full capacity, 10 GbE SFP+ transceivers can be substituted for the original set. The onerous issue of aggregate bandwidth growth suddenly becomes…not so onerous. Simple changes of optical modules let you cost-effectively target growth exactly where it is needed—without the burden and waste of whole-chassis replacements.

This same mechanism can also accommodate client port growth from 1 to 10 GbE. This solution allows the initial installation to be sized with a more appropriate, lower cost product—1 GbE client and network SFPs—and then grow to 10 GbE when needed. The additional cost is incurred as and when needed.

Importance of Fiber Characterization

Fiber networks are the foundation on which telecom networks are built. In the early planning stages of network transformation or expansion, it is imperative that operators perform a complete and thorough assessment of the underlying fiber infrastructure to determine its performance capabilities as well as its limits. Industry experts predict that as many as one-third of fiber networks will require modifications to existing systems.

Front-end fiber analysis ensures key metrics are met and the fiber is at optimum performance levels to handle the greater bandwidth required to transport data-intensive applications over longer distances.  This will save the service provider time and money and prevent delays in the final test and turn-up phase of the expansion or upgrade project.

Figure: fiber architecture diagram showing fiber’s journey from the central office to real-world locations such as homes, businesses, and universities.

Figure: full network diagram showing node locations, fiber types (including ELEAF and SMF-28), and distances between nodes.

Figure: images comparing clean fiber with fiber contaminated by dust, oil, and liquid.

Potential Problems & Testing Options

Fiber networks are composed of fiber of multiple types, ages, and quality levels, all of which significantly affect the infrastructure and its transmission capabilities. Additionally, the fiber may come from several different providers. The net result is that there are several potential problem areas in fiber transmission, including:

  • Aging fiber optics – Some fiber optic networks have been in operation for 25+ years. These legacy fiber systems weren’t designed to handle the sheer volume of data that is being transmitted on next-generation networks.
  • Dirty and damaged connectors – Dirty end faces are one of the most common problems that occur at the connectors. Environmental conditions such as oil, dirt, dust or static-charged particles can cause contamination.
  • Splice loss – Fibers are generally spliced using fusion splicing. Variations in both fiber types (manufacturers) and the types of splices that are being used (fusion or mechanical) can all result in loss.
  • Bending – Excessive bending of fiber-optic cables may deform or damage the fiber. The light loss increases as the bend becomes more acute.  Industry standards define acceptable bending radii.

Fiber characterization testing evaluates the fiber infrastructure to make sure all the fiber, connectors, splices, laser sources, detectors and receivers are working at their optimum performance levels.  It consists of a series of industry-standard tests to measure optical transmission attributes and provides the operator with a true picture of how the fiber network will handle the current modernization as well as future expansions.  For network expansions that require new dark fiber, it is very important to evaluate how the existing fiber network interacts with the newly added fiber to make sure the fiber meets or exceeds the service provider’s expectations as well as industry standards such as TIA/ANSI and Telcordia.

There are five basic fiber characterization tests:

  • Bidirectional Optical Time-Domain Reflectometer (OTDR) – sends a light pulse down the fiber and measures the strength of the return signal as well as the time it took. This test shows the overall health of the fiber strand including connectors, splices and fiber loss.  Cleaning, re-terminating or re-splicing can generally correct problems.
  • Optical Insertion Loss (OIL) – measures optical power loss that occurs when two cables are connected or spliced together. The insertion loss is the amount of light lost. Over longer distances, light loss can cause the signal strength to weaken.
  • Optical Return Loss (ORL) – sends a light pulse down the fiber and measures the amount of light that returns. Some light is lost at all connectors and splices.  Dirty or poorly mated connectors cause scattering or reflections and result in weak light returns.
  • Chromatic Dispersion (CD) – measures the amount of dispersion on the fiber. In single-mode fiber, light at different wavelengths travels down the fiber at slightly different speeds, causing the light pulse to spread. When light pulses are launched close together and spread too much, information is lost. Chromatic dispersion can be compensated for with dispersion-shifted fiber (DSF) or dispersion compensation modules (DCMs).
  • Polarization Mode Dispersion (PMD) – occurs in single-mode fiber and is caused by imperfections that are inherent in the fiber producing polarization-dependent delays of the light pulses. The end result is the light travels at different speeds and causes random spreading of optical pulses.
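
Several of these tests report results in decibels. As a reminder of the underlying math, here is a minimal sketch of the standard power-ratio formula; the half-power example is generic, not a figure from this document:

```python
import math

def loss_db(power_in_mw: float, power_out_mw: float) -> float:
    """Optical loss in dB from input and output power (same units)."""
    return 10 * math.log10(power_in_mw / power_out_mw)

# Generic example: a link that delivers half its input power loses ~3 dB.
print(round(loss_db(1.0, 0.5), 2))  # 3.01
```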

Once the fiber characterization is complete, the service provider receives a detailed analysis of the condition of the fiber plant, including the location of splice points and pass-throughs as well as assignments of panels, racks, and ports. They will also know whether any old fiber will be unable to support higher data rates now or in future upgrades. More importantly, by performing fiber characterization before transforming or expanding their telecom networks, service providers can eliminate potential risks in the fiber infrastructure that could cause substantial delays during the final test and turn-up phases.

Real-World SDN: Anything Less than Multivendor Won’t Cut It

Vendors seeking to set “ground rules” for critical aspects of Software-Defined Networking (SDN), such as what “multilayer” approaches mean, are leaving gaping holes in their strategy if they don’t account for multivendor interoperability. A diverse vendor ecosystem is one of the things that makes an open standards paradigm so powerful.

The importance of multilayer networking is beyond dispute, regardless of what the “routers can do everything” lobby might have to say. In fact, the idea that entire networks can be based on router architectures amounts to a case in point for the multilayer view. Trying to build an end-to-end router-based network would impose tremendous cost and complexity burdens. The cheaper, more efficient, long-range optical transport layer is essential to economical networks, alongside routers. It’s a question of using the most efficient means to carry traffic, not taking a one-size-fits-all approach.

In a similar vein, multilayer approaches seem to be coalescing along single-vendor lines, or at least to be offering limited multivendor interoperability. Even those claiming to be closest to rolling out a multilayer SDN offering seem to imagine that service providers will buy all their equipment from them. When the chips are down, multilayer approaches based on alliances between vendors to make up for each other’s deficiencies amount to turf-protection efforts that don’t take account of the diversity at all layers of the network. In reality, a truly open and interoperable multivendor approach ranks alongside multilayer networking in importance. Yet we hear very little about the multivendor aspect save for the claims of one or two vendors of glorified network management systems thinly disguised as SDN offerings.

Any single-vendor or vendor-limited solution is useless in the real world where SDN will be deployed. Moreover, protecting old OSS turf prevents interoperable solutions from flourishing and produces some “strange bedfellows.” A realistic and pragmatic approach to SDN must recognize the true conditions into which SDN will be deployed. In a fragmented optical market and an economic climate where service providers seek to maximize the long-term viability of their capital investments, SDN must be as interoperable as possible if it is to deliver on its value promises.

As the provider of one of the first multivendor SDN solutions deployed in the world, Fujitsu has valuable expertise and perspective on this topic and is uniquely positioned to address connecting data centers to transport networks using multilayer and multivendor systems. We already build servers and resource orchestrators, and are currently among the world’s largest providers of hosted cloud services. Look for more announcements from Fujitsu on its fast-developing SDN technology portfolio over the coming months.