The New Network Normal: Service-Oriented, Not Infrastructure-Oriented

Mobile broadband connections will account for almost 70% of the global base by 2020. The new types of services those customers consume will drive a tenfold increase in data traffic by 2019. At this rate, most of the world will be mobile, with “mobile” expectations. The “cloud” has become synonymous with mobility, and it is increasingly matching customers with new products and services. More customers are coming, more services are coming, and more types of services are coming. More, more, more.

Carrier networks must embrace a new normal to support and drive this digital revolution. Unlike the static operating models of the past, a new dynamic system is emerging, and it’s not about the network at all. It’s about the applications that deliver services to paying customers—wherever they are, however they want them. This kind of dynamic network requires intelligence, extreme flexibility, modularity, and scalability. The new normal means creating innovative, differentiated services and combining them with the kind of intensely integrated, highly personalized relationships that enable services to be delivered and billed on demand.

To be competitive in the new application economy, service providers need to dedicate more budget and resources to service innovation. However, multi-layer/multi-vendor network design necessitates that the lion’s share of any service provider’s budget goes to the network itself. At Fujitsu, we are changing that: we are working with our customers to architect an entirely new system: disaggregated, flattened, and virtual. And it doesn’t require a “scorched earth re-write” or “rip and replace” investment.

The new network normal means a new way of doing business for service providers, and it requires a different way of operating. In the old business model, service providers functioned like vending machine companies. A vending machine offered a pre-set lineup of products and snacks, and a single way to pay: your pocket change. Only field technicians could fill vending machines, only field technicians could fix broken machines, and only field technicians could deliver new vending machines to new locations. An entirely different staff collected the money and handled banking. Vending machine companies were forced to wait weeks, or even months, to receive payment for sold goods.

Vending machines in remote areas might not get serviced as often as those in population-dense areas. Technicians didn’t know which products were the most popular, but they knew which were the least! Plenty of people had dollar bills in their wallets but no loose change. If the machine was out of stock, customers had to find another.

Companies lost sales because of the limitations of this infrastructure— not because there were no willing customers.

Vending machine companies developed new ways to accept payment, re-negotiated partnerships and delivery routes to refill popular product lines more often, and reorganized the labor force into groups who could fill and service machines simultaneously. In spite of these optimization tactics, much like service providers, vending machine companies were still ultimately reliant on physical devices and physical infrastructure to deliver a static line of products. Otherwise happy customers were required to seek other vendors when their needs were unfulfilled.

But unlike vending machine companies, service providers are not always selling a physical product. Service providers can re-package their products virtually— and it starts with virtualization of the network itself. Applying standard IT virtualization technologies to the service provider network allows administrators to shed the expense and constraints of single-purpose, hardware-based appliances.

Rolling out new services over traditional hardware-based network infrastructure used to take service providers months or even years. Many time-consuming steps were required: service design, integration, testing, and provisioning. Virtualization collapses these steps and more, dramatically shortening the time it takes to bring new services to market.

Software-defined networking, combined with network function virtualization, creates a single resource to manage and traverse an abstracted and unified fabric. As a result, application developers and network operators don’t have to worry about network connections; the intelligent network does that for them. Imagine seamlessly connecting applications and delivering new services, automatically, at the will of the end user. Virtualization provides this new normal: best-of-breed components that are intelligent, optimized end-to-end, fully utilized, and much less expensive. Budget previously dedicated to network infrastructure can now be released to support new applications and services for whole new categories of customers.

Thanks to readily available data analytics on trending customer behavior, network operators will know exactly which products their customers are willing to buy and what they’re looking for—and they’ll be able to deliver them individually or as part of value-package offerings far beyond the current range of choices. Remote areas can get the same services and level of customer support that those in population-dense areas enjoy. Payment will be possible on demand or by subscription. Premium convenience services will offer new flexibility for customers—and new revenue streams for providers.

Service providers will be able to differentiate their offerings on more than bandwidth, SLAs, and price points. Their enterprise customers will get better tools, on-demand provisioning, and tight integration between the carrier network, enterprise network, and cloud builders. Service providers’ business customers will get on-demand services and always-on mobile connectivity. Other customers will get bundled services or high-bandwidth mobile connectivity only.

Not like a vending machine at all. Even the new ones that accept credit cards. Welcome to the new normal.

Four Key Ingredients Solve Network Business Challenges

Network operators face seemingly conflicting challenges. They must maximize network assets, reduce costs, and introduce new revenue-generating services—all while maintaining existing legacy services. This may seem like an impossible combination to achieve, but just four key capabilities provide the right ingredients to reconcile apparently conflicting needs and profitably address these big business challenges:

  • Transport legacy services in groups. Individual legacy service instances are often transported separately, which makes inefficient use of network and fiber resources. It is more efficient to combine multiple instances into batches that can be transported together at higher bit rates.
  • Combine multiple services onto a single fiber. Fiber resources are expensive and constrained. Transporting additional services over a single fiber pair frees up fiber capacity or reduces the number of leased fibers needed to sustain growing networks, saving on fiber resource costs.
  • Efficiently pack 100G wavelengths. Many 100G wavelengths are inefficiently utilized, cumulatively wasting a large amount of capacity. If more services can be transported over existing 100G wavelengths, the network is more efficient and additional costs can be avoided.
  • Provide transparent wholesale services. Services that support a range of SLA choices by allowing demarcation and providing visibility into traffic, management, and alarms are attractive to customers and a valuable source of revenue.

You may be surprised to find out that an often-overlooked technology, Optical Transport Network (OTN), provides all four of these capabilities. OTN is a standard (ITU-T G.709) digital wrapper technology that allows multiple services of various types to be packaged and transported together at higher rates. This universal package is ideal for transporting legacy services, which makes better use of network resources while simultaneously benefiting from modern technologies and rates. OTN also inherently allows an end customer access to network management and performance data. Finally, as networks move to 100G transport, OTN provides an easy means of filling partially utilized 100G wavelengths by transparently delivering a combination of services. Overall, OTN is a highly viable option that deserves serious consideration for network modernization. On grounds of both efficiency and ongoing revenue opportunities, OTN carries excellent potential for long-term ROI.
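
To make the 100G packing point concrete, here’s a back-of-the-envelope sketch. The tributary-slot counts follow the standard ITU-T G.709 multiplexing hierarchy, but the service mix is a hypothetical example rather than a recommendation:

    # Rough sketch of filling a 100G OTN wavelength (an ODU4) with lower-rate
    # clients. Tributary-slot counts follow ITU-T G.709: an ODU4 provides
    # 80 x 1.25G tributary slots. The service mix is a hypothetical example.
    ODU4_SLOTS = 80
    SLOTS_PER_CLIENT = {"ODU0 (1GbE)": 1, "ODU1 (2.5G)": 2, "ODU2 (10G)": 8, "ODU3 (40G)": 31}

    service_mix = {"ODU2 (10G)": 6, "ODU1 (2.5G)": 8, "ODU0 (1GbE)": 10}

    used = sum(SLOTS_PER_CLIENT[svc] * count for svc, count in service_mix.items())
    print(f"Tributary slots used: {used}/{ODU4_SLOTS}")
    print(f"Wavelength utilization: {used / ODU4_SLOTS:.0%}")
    # 6x10G + 8x2.5G + 10x1G consumes 74 of 80 slots, about 93% of the ODU4.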

Service Provisioning is Easier with YANG and the 1FINITY Platform

Years ago, alarm monitoring and fault management were difficult across multivendor platforms. These tasks became significantly easier—and ubiquitous—after the introduction of the Simple Network Management Protocol (SNMP) and Management Information Base (MIB-II).

Similarly, multivendor network provisioning and equipment management have proved elusive. The reason is the complexity and variability of provisioning and management commands in equipment from multiple vendors.

Could a new protocol and data modeling language again provide the solution?

Absolutely. Over time, NETCONF and YANG will do for service provisioning what SNMP and MIB-II did for alarm management.

Recently, software-defined networking (SDN) has introduced IT and data center architectural concepts to network operators by separating the control plane from the forwarding plane in network devices and allowing control from a central location. Innovative disaggregated hardware leverages this new SDN paradigm with centralized control and the use of YANG, a data modeling language and application program interface (API).

YANG offers an open model that, when coupled with the benefit of standard interfaces such as NETCONF and REST, finally supports multivendor management. This approach provides an efficient mechanism to overcome the complexity and idiosyncrasies inherent in each vendor’s implementation.
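
As a simple illustration, the following sketch uses the open-source ncclient library to pull the running configuration from a NETCONF-capable device. The address and credentials are placeholders; the point is that the reply comes back structured according to the device’s YANG models, so the same client logic applies across vendors:

    # Minimal sketch: retrieve the running configuration from a NETCONF-capable
    # device whose data is described by YANG models. The host address and
    # credentials are placeholders; ncclient is an open-source NETCONF client.
    from ncclient import manager

    with manager.connect(
        host="192.0.2.10",        # placeholder management address
        port=830,                 # standard NETCONF-over-SSH port
        username="admin",
        password="admin",
        hostkey_verify=False,
    ) as session:
        # The reply is XML structured according to the device's YANG models,
        # so the same client logic can talk to equipment from different vendors.
        reply = session.get_config(source="running")
        print(reply.data_xml[:500])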

Fujitsu’s response to this evolution is the 1FINITY™ platform, a revolutionary disaggregated networking solution. Rather than combining every function in a single converged platform, 1FINITY places each function in a 1RU blade: transponder/muxponder blades, lambda DWDM blades, and switching blades. Each blade delivers a single function that previously resided in a converged architecture—the result is scalability and pay-as-you-grow flexibility.

Each 1FINITY blade has an open API and supports NETCONF and YANG, paving the way for a network fully rooted in the new SDN and YANG paradigm. New 1FINITY blades are easy to program via an open, interoperable controller, such as Fujitsu Virtuora® NC. Since each blade has a YANG model, it’s easy to include provisioning and management in a networkwide element management function.

Any open-source SDN controller that enables multivendor and multilayer awareness of the virtual network will revolutionize network management and control systems. Awareness of different layers and different vendor platforms will result in faster time to revenue, greater customer satisfaction, increased network optimization, and new services that are easier to implement.

Multilayer Data Connectivity Orchestration: Exploring the CENGN Proof of Concept

While computer technology in a wider sense has advanced rapidly and dramatically over the past three decades, networking has remained virtually unchanged since the 1990s. One of the main problems facing providers around the world is that the numerous multivendor legacy systems still in service don’t support fast, accurate remote identification, troubleshooting, and fault resolution. The lack of remote fault resolution capabilities is compounded by the complex, closed, and proprietary nature of legacy systems, as well as the proliferation of southbound protocols. As a result, networks are difficult to optimize, troubleshoot, automate, and customize. SDN (Software-Defined Networking) is set to solve these issues by decoupling the control plane from the data plane, bringing benefits that include cost and overhead reduction, virtual network management, virtual packet forwarding, extensibility, better network management, faster service activation, reduced downtime, ease of use, and open standards.

Why Multilayer SDN is Needed

One of the issues facing network operators is that there is no SDN controller with a streamlined topology view spanning both the optical transport and packet layers. That’s why coordination between transport and IP/MPLS layer management is one of the most promising approaches to an optimized, simplified multilayer network architecture. However, this coordination brings significant technical challenges, since it involves interoperation between very different technologies on each of the network layers, each with its own protocols, approach, and legacy of network control and management.

Traditionally, transport networks have relied on centralized network management through a Network Management System or Element Management System (NMS/EMS), whereas the IP/MPLS layer uses a distributed control plane to build highly robust and dynamic network topologies. These fundamentally different approaches to network control have been a significant challenge over the years when the industry has tried to realize a closer integration between both network layers.

Although there has been a lot of R&D in this area (one example is OpenFlow adding optical transport extensions from version 1.3 onwards), there are few, if any, successful implementations of multilayer orchestration through SDN.

It’s important to mention a common misconception about SDN: the assumption that SDN goes hand-in-hand with the OpenFlow protocol. OpenFlow (an evolution of the Ethane protocol) is just a means to an end, namely separation of the control and data planes. OpenFlow is a communication protocol that gives access to the forwarding plane of a network element (switch, router, optical equipment, etc.). SDN isn’t dependent on OpenFlow specifically; it can also be implemented using other southbound protocols, such as NETCONF/YANG, BGP, and XMPP.

A Multilayer, Multivendor SDN Proof of Concept

To address the issues outlined above, CENGN (Canada’s Centre of Excellence in Next Generation Networks), in collaboration with Juniper Networks, Fujitsu, Telus, and CENX, initiated a PoC to demonstrate true end-to-end multilayer SDN orchestration of an MPLS-based WAN over optical infrastructure.

In the PoC, the CENX Cortx Service Orchestrator serves as a higher-layer orchestrator that optimally synchronizes the MPLS and optical layers. The MPLS layer uses Juniper’s NorthStar SDN controller for Layer 2–3 devices, and the optical transport layer uses the Fujitsu Virtuora® Network Controller. All northbound integration is through a REST API; upon notification of failures or policy violations, the orchestrator dynamically adjusts the optical or packet layers via the SDN controllers, ensuring optimal routing and policy conformance.


The proof of concept consists of the following scenarios:

  • Optical link failure occurs (via cable pull or manual port failure).
  • The Cortx Service Orchestrator receives link failure alarms from Virtuora, stores them, and updates path information.
  • The Cortx Service Orchestrator receives link failure alarms from the Juniper MPLS layer and stores them.
  • The Cortx Service Orchestrator receives updated topology information from the SDN controllers.
  • Juniper MPLS automatically re-routes the blue label-switched path and notifies the Cortx Service Orchestrator of link state changes.
  • The Cortx Service Orchestrator processes the new topology and raises a network policy violation alert, which remains in effect until the situation is corrected.
  • The Cortx Service Orchestrator notifies the operations user of the policy violation.
  • Virtuora turns up optical links and alerts the Cortx Service Orchestrator of the topology change.
  • The policy violation is cleared when the condition is corrected.
  • The LSP is rerouted through the newly provisioned optical paths.
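
The orchestration pattern behind these scenarios can be sketched in a few lines of code. The endpoints, payload shapes, and policy rule below are hypothetical stand-ins rather than the actual Virtuora, NorthStar, or Cortx APIs; what matters is the loop of gathering alarms and topology from each layer’s controller, evaluating policy across layers, and alerting until the condition clears:

    # Illustrative only: these endpoints, payload shapes, and the policy rule are
    # hypothetical stand-ins, not the actual Virtuora, NorthStar, or Cortx APIs.
    # The pattern is the point: gather topology from each layer's controller,
    # evaluate policy across layers, and alert until the condition clears.
    import requests

    OPTICAL_CTRL = "https://optical-controller.example.net/api"   # placeholder URL
    PACKET_CTRL = "https://packet-controller.example.net/api"     # placeholder URL

    def fetch(url):
        return requests.get(url, timeout=10).json()

    def violates_policy(lsp, optical_links):
        """Hypothetical policy: an LSP must not traverse a failed optical link."""
        down = {link["id"] for link in optical_links if link["status"] != "up"}
        return any(hop in down for hop in lsp.get("optical_hops", []))

    def reconcile():
        optical_links = fetch(f"{OPTICAL_CTRL}/links")   # optical-layer topology
        lsps = fetch(f"{PACKET_CTRL}/lsps")              # MPLS label-switched paths
        for lsp in lsps:
            if violates_policy(lsp, optical_links):
                # In the PoC the orchestrator raises an alert that remains active
                # until the underlying condition is corrected.
                print(f"Policy violation: LSP {lsp['name']} rides a failed optical link")

    if __name__ == "__main__":
        reconcile()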


This is an excellent model of how a collaborative, multivendor, multilayer approach based on open standards can drive the industry towards the network of the future. By providing a functional example of real-time operations across multivendor platforms, this project has shown that multilayer data connectivity orchestration—and the benefits it offers—is feasible in a realistic situation. Other proofs of concept at the CENGN laboratories will continue to advance SDN and NFV technologies, helping to refine functionality and move towards production systems.

Times, they are a-Changin’ (and the Pace is a-Heatin’ Up)

Three decades is a long time to be in the same industry, even one as historically slow-moving as telecommunications. It’s certainly long enough to become familiar with the typical rate of change. Looking back over my thirty-year telecom tenure, it’s clear that bigger changes are happening at an accelerating pace.

A quick look at how long it takes people to pick up new technologies is enough to prove this observation. By considering technologies that have come to dominate our lives over the past 100 years and examining how long it took each to reach 50 million users, we discover a few interesting things.

Let’s start with the technology that started the communication-at-a-distance revolution: the ubiquitous telephone. It took 75 years for Bell to attract 50 million subscribers after rolling out the telephone in 1876. Then, from the first TV broadcast in 1929, it took a relatively short 33 years to garner 50 million viewers. The World-Wide Web took only four years, starting in 1991, to reach this milestone. More recently, Angry Birds, as mentioned elsewhere on this site by Rhonda Holloway, hit the market in 2009, and it took just 35 days to reach 50 million users.

With adoption time frames collapsing from almost a century to a little over a month, clearly the pace of adoption is accelerating. But astute readers will point out that I’m not exactly making fair comparisons regarding technology deployments. The first two (the telephone and television) depend on infrastructure deployments that require huge investments of expertise, construction, equipment and time. The second two (the WWW and Angry Birds) are “just software,” which, to be fair, is much easier and faster to deploy.

And that is indeed the case: software is in general easier and faster to deploy. And the future of networking is not hardware; it’s software. To manage the hyper-connected, always-on, high-bandwidth demands of the Internet of Everything, networks will be forced to evolve in ways that are unimaginable if we keep thinking about operating them in the same hardware-oriented way we always have. The network must become a programmable entity and evolve beyond mere physical infrastructure.

Are your network and your operations capabilities prepared for Angry Birds deployment speed? My next few posts will explain how you can achieve a programmable network, leverage new hardware and software technology advancements and ultimately, implement the disaggregated network of the future.

A Better Radio Access Network Delivers Performance and Savings That Can’t Be Ignored

The tried and true distributed radio access network (RAN) is the standard in mobile architectures. Significant improvements in performance—and reductions in capex and opex—would be required for service providers to consider making substantial changes.

But these are no ordinary times. The exploding popularity of digital video and social networking is driving wireless traffic relentlessly higher. In fact, a recent Cisco VNI study shows that worldwide mobile data traffic is growing at a 57% compound annual rate in the six-year period beginning in 2014.

What began as 2.5 exabytes per month two years ago will reach 24.3 exabytes per month before you know it.
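
Those two figures are consistent with the quoted growth rate; a quick compound-growth calculation using the study’s 57% CAGR over the five years from 2014 to 2019 shows the arithmetic:

    # Quick check of the figures quoted above: 2.5 EB/month in 2014 growing at
    # roughly 57% per year for five years lands close to 24.3 EB/month in 2019.
    base_eb_per_month = 2.5   # 2014 starting point from the Cisco VNI study
    cagr = 0.57               # 57% compound annual growth rate
    years = 5                 # 2014 -> 2019

    projected = base_eb_per_month * (1 + cagr) ** years
    print(f"Projected monthly traffic after {years} years: {projected:.1f} EB")
    # ~23.9 EB/month, in line with the roughly 24.3 EB the study projects.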

Given this explosion in wireless traffic, C-RAN, the centralized radio access network, provides just the benefits that make network upgrades a wise investment.

Evolving to a C-RAN architecture makes dollars and sense:

  • RAN performance can increase up to 30% through gains in spectral efficiency, cell site aggregation, and scalability.
  • Capex can be reduced up to 30% through savings in site acquisition, construction costs, and equipment efficiency.
  • Opex can be reduced up to 50% through savings in rent, power consumption, capacity management, and operation and maintenance.

“Mobile operators are increasingly seeking to deploy Cloud RAN architectures for efficiency and performance reasons,” said Gabriel Brown, senior analyst, Heavy Reading. “To disaggregate the radio access network into centralized baseband and distributed RF components requires a fronthaul solution that can meet stringent reliability, scalability, and opex targets.”

A new C-RAN solution from Fujitsu includes a smart WDM system with integrated diagnostics, remote visibility, self-healing functionality, and ultralow latency. The result is fast installation, high service availability, and a dense, scalable architecture that adapts easily to growing demand.

Learn more here.

YANG: the Very Model of a Modern Network Manager

Network management is undergoing a change towards openness, driven by the competing desires to reduce vendor lock-in without increasing operational effort. Software Defined Networking (SDN) is maturing from programmatic interfaces into suites of applications, powered by network controllers developed through multivendor open-source efforts. One such controller is OpenDaylight, which brings a compelling feature to SDN: YANG models of devices and services.

YANG is a standard data definition language which was humorously named “Yet Another Next Generation,” in part because it grew out of efforts to create a next-generation SNMP syntax, which was later applied to the next-generation NETCONF network device configuration protocol. YANG provides the structure and information to describe the behavior, capabilities, and restrictions of devices in a manner that can be incorporated into an abstracted framework. OpenDaylight uses YANG models to present a unified user interface that hides the differences between devices from different vendors and allows them to work together without requiring an administrator to know the details of those devices.

The concept of using automation to reduce both the required level of device knowledge and the possibility of mistakes due to mistyping is not new. For many years customers have used EMS (Element Management Systems) and NMS (Network Management Systems) to create configuration templates and push bulk changes to devices. Most of these tools are vendor-specific, but they succeed at reducing the level of effort. Other organizations have created home-grown tools using scripting languages like Perl to interface with the device CLI via Expect. This technique takes both programming skill and device knowledge to develop the tools, and while it can solve specific problems, the resulting script is typically specific to a single vendor, device model, and OS version.

With the addition of YANG, however, there is the potential to create cross-vendor tools that solve the same problems with less effort. When used in OpenDaylight, YANG models also provide an additional feature: OpenDaylight creates a REST interface, called restconf, based on the YANG models it has imported. Since the restconf calls are based on the abstracted YANG model, it’s possible to separate the core logic from the details of device configuration, so a script can potentially work with multiple different devices. At the OpenDaylight Summit in July 2016, Brian Freeman of AT&T said that the use of YANG models “allows us to prototype apps in hours.” YANG clearly has the potential to deliver better tools, faster.
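
As a concrete illustration of that restconf interface, the sketch below queries OpenDaylight’s topology data with an ordinary HTTP client. The URL follows the pre-RFC 8040 restconf style used by OpenDaylight releases of that era; exact paths and credentials vary by release and installation, so treat them as placeholders:

    # Sketch of a script using the restconf interface that OpenDaylight generates
    # from imported YANG models. The URL follows the pre-RFC 8040 restconf style of
    # OpenDaylight releases from that era; paths and credentials vary by release
    # and installation, so treat them as placeholders.
    import requests

    ODL = "http://localhost:8181"
    AUTH = ("admin", "admin")   # default development credentials

    url = f"{ODL}/restconf/operational/network-topology:network-topology"
    reply = requests.get(url, auth=AUTH, headers={"Accept": "application/json"}, timeout=10)
    topology = reply.json()

    # The response is structured by the network-topology YANG model rather than by
    # any one vendor's CLI, so the same parsing logic works across devices.
    for topo in topology["network-topology"]["topology"]:
        print(topo.get("topology-id"), "nodes:", len(topo.get("node", [])))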

That’s not to say that YANG is easy. YANG enables application user interfaces to be easy, but there’s a lot of detail that goes into a YANG model for a device, and for the services that that device can provide.  A useful YANG model should describe the hierarchical configuration options, the valid ranges, and the constraints.  Therefore, the source of device-specific YANG models will mostly be vendors.

The code example below shows a constraint in a YANG model, adapted from a presentation at IETF 71; the snippet is reconstructed here for illustration rather than quoted verbatim.
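
    // Illustrative reconstruction of the constraint described below; the
    // element names follow the example cited in the text.
    container transfer {
      leaf access-timeout {
        type uint32;
        units "seconds";
        description "Maximum time to wait for the server.";
      }
      leaf retry-timer {
        type uint32;
        units "seconds";
        description "Interval between retries.";
        must ". <= ../access-timeout" {
          error-message
            "The retry timer must be less than or equal to the access timeout.";
        }
      }
    }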

Notice that the leaf element “retry-timer” has a constraint comparing it to the leaf “access-timeout,” with an explanatory error message. Since the goal of YANG in OpenDaylight is to describe the device’s behavior to a computer program to automate the configuration of that device, a model can’t rely on a trained administrator to know that certain options can’t be used with each other: the model must prevent the OpenDaylight application from telling a device to do something it can’t do.

While YANG as a language is well specified, further work is needed. True inter-vendor interoperability must extend beyond listing all of the configuration options for each device, and must involve creating high-level abstractions for service definitions. While much has been done to create standard YANG service models for the OpenFlow protocol in L2 Ethernet switching, SDN is still relatively new to the optical network. It will be exciting to see standards converge to deliver on the promise of true multivendor openness.

A Unified Network Combining Ethernet and DWDM


Carrier Ethernet is a very successful solution for providing services in a metropolitan area. This technology provides a variety of capabilities including multiple classes of service; fast restoration; standardized services such as E-Line and E-LAN; and bandwidth guarantees. As demand grows in a metro Ethernet network it becomes necessary to accommodate capacity beyond 10G access rings. DWDM is an economical technology for scaling networks beyond 10G. But an effective solution, ideally a unified network incorporating these two technologies, requires that all the components play well together.

The most common approach is deploying a DWDM overlay on top of the Carrier Ethernet network. This architecture is a solid choice, but it carries the disadvantage of requiring two separate network management systems that don’t talk to each other. This imposes significant operational and administrative overhead, increasing cost and complexity.

The Fujitsu NETSMART® 1200 Management System offers an attractive alternative. In combination with FLASHWAVE 5300 and FLASHWAVE 7120 platforms, NETSMART 1200 can integrate DWDM capabilities into the existing Carrier Ethernet network—eliminating the problem of dual management systems, while providing service management, end-to-end provisioning, and open interfaces. Each core network element has both core Ethernet switching and DWDM modules—an elegant, comprehensive, and unified solution.

SFP+ Delivers Precision Bandwidth Upgrades


Perhaps the most onerous issue facing Ethernet network operators is that of upgrading to higher-bandwidth services.

Typically, a network interface device (NID) is deployed at a new customer site on a ring that is shared among several customers. At this point, there is a decision to be made: should the NID be put in a 1 GbE ring or a 10 GbE ring?

Usually, traffic at the time of deployment warrants only a 1 GbE ring, but based on historical market trends, the aggregate bandwidth requirements of this ring will almost certainly increase to warrant a 10 GbE ring in the future. Thus, in this type of deployment, you have to decide up-front whether to invest in a 10 GbE ring initially without knowing when additional bandwidth will be needed. Alternatively, might it be more appropriate to go with a 1 GbE ring now and change to a 10 GbE ring later? Changing to a 10 GbE ring typically requires changing the NID, an expensive and troublesome activity, but this choice at least has the advantage of deferring the cost until the bandwidth is needed.

Now there’s a new approach to solving this dilemma. Small Form-Factor Pluggable (SFP) transceivers are widely adopted, small footprint, hot-pluggable modules available in a variety of capacity and reach options, including 1 GbE. Now, enhanced Small Form-Factor Pluggable (SFP+) modules advance the original SFP technology, offering an elegant solution to the bandwidth growth issue: 10 GbE performance is available in SFP+ devices that are physically compatible with SFP cages. In essence you get all the convenience of SFPs, but with ten times the bandwidth.

This new capability, available in the Fujitsu FLASHWAVE® 5300 family of Carrier Ethernet devices, provides an exciting and economical solution to common bandwidth growth problems. A NID can be deployed with 1 GbE client ports and 1 GbE network ports using SFPs. Then, when traffic approaches full capacity, 10 GbE SFP+ transceivers can be substituted for the original set. The onerous issue of aggregate bandwidth growth suddenly becomes…not so onerous. Simple changes of optical modules let you cost-effectively target growth exactly where it is needed—without the burden and waste of whole-chassis replacements.

This same mechanism can also accommodate client port growth from 1 to 10 GbE. This solution allows the initial installation to be sized with a more appropriate, lower cost product—1 GbE client and network SFPs—and then grow to 10 GbE when needed. The additional cost is incurred as and when needed.

Importance of Fiber Characterization

Fiber networks are the foundation on which telecom networks are built. In the early planning stages of network transformation or expansion, it is imperative that operators perform a complete and thorough assessment of the underlying fiber infrastructure to determine its performance capabilities as well as its limits. Industry experts predict that as many as one-third of fiber networks will require modifications to existing systems.

Front-end fiber analysis ensures key metrics are met and the fiber is at optimum performance levels to handle the greater bandwidth required to transport data-intensive applications over longer distances.  This will save the service provider time and money and prevent delays in the final test and turn-up phase of the expansion or upgrade project.

Figure: Fiber architecture showing fiber’s journey from the central office to various real-world locations (homes, businesses, universities, etc.).

Figure: Full network diagram showing node locations, fiber types (including ELEAF and SMF-28), and the distances between nodes.

Figure: Clean and dirty fiber end faces, comparing clean fiber with fiber contaminated by dust, oil, and liquid.

Potential Problems & Testing Options

Fiber networks comprise fiber of multiple types, ages, and qualities, all of which significantly affect the fiber infrastructure and its transmission capabilities. Additionally, the fiber may come from several different fiber providers. The net result is several potential problem areas in fiber transmission, including:

  • Aging fiber optics – Some fiber optic networks have been in operation for 25+ years. These legacy fiber systems weren’t designed to handle the sheer volume of data being transmitted on next-generation networks.
  • Dirty and damaged connectors – Dirty end faces are one of the most common problems that occur at the connectors. Environmental conditions such as oil, dirt, dust or static-charged particles can cause contamination.
  • Splice loss – Fibers are generally spliced using fusion splicing. Variations in both fiber types (manufacturers) and the types of splices that are being used (fusion or mechanical) can all result in loss.
  • Bending – Excessive bending of fiber-optic cables may deform or damage the fiber. The light loss increases as the bend becomes more acute.  Industry standards define acceptable bending radii.

Fiber characterization testing evaluates the fiber infrastructure to make sure all the fiber, connectors, splices, laser sources, detectors and receivers are working at their optimum performance levels.  It consists of a series of industry-standard tests to measure optical transmission attributes and provides the operator with a true picture of how the fiber network will handle the current modernization as well as future expansions.  For network expansions that require new dark fiber, it is very important to evaluate how the existing fiber network interacts with the newly added fiber to make sure the fiber meets or exceeds the service provider’s expectations as well as industry standards such as TIA/ANSI and Telcordia.
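
As a simple example of the kind of estimate that fiber characterization then verifies in the field, the sketch below computes a basic optical loss budget for a span. The per-kilometer, connector, and splice losses are typical rule-of-thumb values, and the span parameters are hypothetical:

    # Back-of-the-envelope optical loss budget; characterization testing then
    # measures the real values. All parameters below are illustrative.
    FIBER_LOSS_DB_PER_KM = 0.25   # typical single-mode attenuation near 1550 nm
    CONNECTOR_LOSS_DB = 0.5       # per mated connector pair
    SPLICE_LOSS_DB = 0.1          # per fusion splice

    span_km = 80
    connectors = 2
    splices = 20

    total_loss_db = (span_km * FIBER_LOSS_DB_PER_KM
                     + connectors * CONNECTOR_LOSS_DB
                     + splices * SPLICE_LOSS_DB)
    print(f"Estimated end-to-end loss: {total_loss_db:.1f} dB")
    # 80 x 0.25 + 2 x 0.5 + 20 x 0.1 = 23.0 dB; compare with the transceiver's
    # optical power budget to see how much margin remains.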

There are five basic fiber characterization tests:

  • Bidirectional Optical Time-Domain Reflectometer (OTDR) – sends a light pulse down the fiber and measures the strength of the return signal as well as the time it took. This test shows the overall health of the fiber strand including connectors, splices and fiber loss.  Cleaning, re-terminating or re-splicing can generally correct problems.
  • Optical Insertion Loss (OIL) – measures the optical power loss that occurs when two cables are connected or spliced together. The insertion loss is the amount of light lost.  Over longer distances, the light loss can cause the signal strength to weaken.
  • Optical Return Loss (ORL) – sends a light pulse down the fiber and measures the amount of light that returns. Some light is lost at all connectors and splices.  Dirty or poorly mated connectors cause scattering or reflections and result in weak light returns.
  • Chromatic Dispersion (CD) – measures the amount of dispersion on the fiber. In single-mode fiber, the light from different wavelengths travels down the fiber at slightly different speeds, causing the light pulse to spread.  Additionally, when light pulses are launched close together and spread too much, information is lost. Chromatic dispersion can be compensated for with dispersion-shifted fiber (DSF) or dispersion compensation modules (DCMs); a worked example follows this list.
  • Polarization Mode Dispersion (PMD) – occurs in single-mode fiber and is caused by imperfections that are inherent in the fiber producing polarization-dependent delays of the light pulses. The end result is the light travels at different speeds and causes random spreading of optical pulses.
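
Here is the worked chromatic dispersion example referenced in the list above. Accumulated dispersion is approximately the fiber’s dispersion coefficient multiplied by the distance and by the source’s spectral width; the coefficient, span length, and linewidth below are illustrative values:

    # Worked chromatic dispersion example with illustrative values.
    D_PS_PER_NM_KM = 17.0      # typical standard single-mode fiber near 1550 nm
    length_km = 80
    spectral_width_nm = 0.1    # example source linewidth

    accumulated = D_PS_PER_NM_KM * length_km            # ps/nm over the span
    pulse_spread_ps = accumulated * spectral_width_nm   # approximate spreading
    print(f"Accumulated dispersion: {accumulated:.0f} ps/nm")
    print(f"Approximate pulse spreading: {pulse_spread_ps:.0f} ps")
    # 17 x 80 = 1360 ps/nm; with a 0.1 nm source that is about 136 ps of spreading,
    # which dispersion-shifted fiber or a dispersion compensation module can offset.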

Once the fiber characterization is complete, the service provider will receive a detailed analysis of the condition of the fiber plant, including the location of splice points and pass-throughs as well as the assignments of panels, racks, and ports. They will also know if there is any old fiber that will not be able to support higher data rates now or in future upgrades. More importantly, by doing the fiber characterization prior to transforming or expanding their telecom network, service providers can eliminate potential risks in the fiber infrastructure that could result in substantial delays during the final test and turn-up phases.