Times, they are a-Changin’ (and the Pace is a-Heatin’ Up)

Three decades is a long time to be in the same industry, even one as historically slow-moving as telecommunications. It’s certainly long enough to become familiar with the typical rate of change. Looking back over my thirty-year telecom tenure, I can see clearly that bigger changes are arriving at an accelerating pace.

A quick look at how long it takes people to adopt new technologies bears this out. By considering technologies that have come to dominate our lives over the past 100 years and examining how long each took to reach 50 million users, we discover a few interesting things.

Let’s start with the technology that started the communication-at-a-distance revolution: the ubiquitous telephone. It took 75 years for Bell to attract 50 million subscribers after rolling out the telephone in 1876. Then, from the first TV broadcast in 1929, it took a relatively short 33 years to garner 50 million viewers. The World Wide Web took only four years, starting in 1991, to reach this milestone. More recently, Angry Birds (discussed elsewhere on this site by Rhonda Holloway) hit the market in 2009 and took just 35 days to reach 50 million users.

With adoption time frames collapsing from almost a century to a little over a month, clearly the pace of adoption is accelerating. But astute readers will point out that I’m not exactly making fair comparisons regarding technology deployments. The first two (the telephone and television) depend on infrastructure deployments that require huge investments of expertise, construction, equipment and time. The second two (the WWW and Angry Birds) are “just software,” which, to put it plainly, is much easier and faster to deploy.

And that is indeed the case: software is in general easier to deploy, and the future of networking is not hardware; it’s software. To manage the hyper-connected, always-on, high-bandwidth demands of the Internet of Everything, networks will be forced to evolve in ways that will remain unimaginable as long as we keep operating them in the same hardware-oriented way we always have. The network must become a programmable entity and evolve beyond mere physical infrastructure.

Are your network and your operations capabilities prepared for Angry Birds deployment speed? My next few posts will explain how you can achieve a programmable network, leverage new hardware and software technology advancements and ultimately, implement the disaggregated network of the future.

A Better Radio Access Network Delivers Performance and Savings That Can’t Be Ignored

The tried and true distributed radio access network (RAN) is the standard in mobile architectures. Significant improvements in performance—and reductions in capex and opex—would be required for service providers to consider making substantial changes.

But these are no ordinary times. The exploding popularity of digital video and social networking is driving wireless traffic relentlessly higher. In fact, a recent Cisco VNI study shows worldwide mobile data traffic growing at a 57% compound annual rate over the six-year period from 2014 to 2019.

What began as 2.5 exabytes per month in 2014 will reach 24.3 exabytes per month by 2019, before you know it.
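
That forecast is easy to sanity-check. Compounding the 2014 figure at 57% per year over the five year-over-year steps to 2019 (a back-of-the-envelope sketch, not part of the Cisco study) lands right in line:

    # Back-of-the-envelope check of the mobile-traffic forecast quoted above.
    start_eb_per_month = 2.5   # 2014 baseline, exabytes per month
    cagr = 0.57                # 57% compound annual growth rate
    steps = 5                  # year-over-year steps from 2014 to 2019
    print(round(start_eb_per_month * (1 + cagr) ** steps, 1))
    # -> 23.9, closely matching the ~24.3 EB/month forecast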

Given this explosion in wireless traffic, C-RAN, the centralized radio access network, delivers exactly the kind of benefits that make network upgrades a wise investment.

Evolving to a C-RAN architecture makes dollars and sense:

  • RAN performance can increase up to 30% through gains in spectral efficiency, cell site aggregation, and scalability.
  • Capex can be reduced up to 30% through savings in site acquisition, construction costs, and equipment efficiency.
  • Opex can be reduced up to 50% through savings in rent, power consumption, capacity management, and operation and maintenance.

“Mobile operators are increasingly seeking to deploy Cloud RAN architectures for efficiency and performance reasons,” said Gabriel Brown, senior analyst, Heavy Reading. “To disaggregate the radio access network into centralized baseband and distributed RF components requires a fronthaul solution that can meet stringent reliability, scalability, and opex targets.”

A new C-RAN solution from Fujitsu includes a smart WDM system with integrated diagnostics, remote visibility, self-healing functionality, and ultralow latency. The result is fast installation, high service availability, and a dense, scalable architecture that adapts easily to growing demand.

YANG: the Very Model of a Modern Network Manager

Network management is undergoing a shift toward openness, driven by the twin desires to reduce vendor lock-in and to avoid increasing operational effort. Software Defined Networking (SDN) is maturing from programmatic interfaces into suites of applications, powered by network controllers developed through multivendor open-source efforts. One such controller is OpenDaylight, which brings a compelling feature to SDN: YANG models of devices and services.

YANG is a standard data modeling language whose name, humorously, stands for “Yet Another Next Generation”: it grew in part out of efforts to create a next-generation SNMP data definition syntax, work that was later applied to the next-generation NETCONF network device configuration protocol. YANG provides the structure and information to describe the behavior, capabilities, and restrictions of devices in a manner that can be incorporated into an abstracted framework. OpenDaylight uses YANG models to present a unified user interface that hides the differences between devices from different vendors and allows them to work together without requiring an administrator to know the details of each device.

The concept of using automation to reduce both the required level of device knowledge and the possibility of mistakes due to mistyping is not new. For many years, customers have used Element Management Systems (EMS) and Network Management Systems (NMS) to create configuration templates and push bulk changes to devices. Most of these tools are vendor-specific, but they succeed in reducing the level of effort. Other organizations have created home-grown tools, using scripting languages like Perl to interface with device CLIs via Expect. This approach takes both programming skill and device knowledge, and while it can solve specific problems, the resulting script is tied to a single vendor, device model, and OS version.

With the addition of YANG, however, there is the potential to create cross-vendor tools that solve the same problems with less effort. When used in OpenDaylight, YANG models provide an additional feature: OpenDaylight creates a REST interface, called RESTCONF, based on the YANG models it has imported. Since the RESTCONF calls are based on the abstracted YANG model, it’s possible to separate the core logic from the details of device configuration, so a script can potentially work with many different devices. At the OpenDaylight Summit in July 2016, Brian Freeman of AT&T said that the use of YANG models “allows us to prototype apps in hours.” YANG clearly has the potential to deliver better tools, faster.
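
As a quick illustration, here is a minimal sketch of such a script, assuming a local OpenDaylight instance with the default port and credentials and reading the standard network-topology model; the topology IDs and node counts returned depend on what the controller has mounted:

    # Minimal sketch: read topology data from OpenDaylight's RESTCONF API.
    # Assumes a local controller with the default port (8181) and credentials.
    import requests

    URL = ("http://localhost:8181/restconf/operational/"
           "network-topology:network-topology")

    resp = requests.get(URL, auth=("admin", "admin"),
                        headers={"Accept": "application/json"})
    resp.raise_for_status()

    # The JSON mirrors the YANG network-topology model, so the same parsing
    # logic works for any vendor's device that populates that model.
    for topo in resp.json()["network-topology"]["topology"]:
        print(topo["topology-id"], "-", len(topo.get("node", [])), "nodes")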

That’s not to say that YANG is easy. YANG makes application user interfaces simple, but a great deal of detail goes into a YANG model for a device and for the services that device can provide. A useful YANG model should describe the hierarchical configuration options, the valid ranges, and the constraints. For this reason, device-specific YANG models will mostly come from vendors.

The code example below shows a constraint in a YANG model, patterned after an example from a presentation at IETF 71.
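
This is a minimal reconstruction; the enclosing container name and description strings are illustrative, but the “retry-timer”/“access-timeout” relationship matches the example as described:

    container session {
      leaf access-timeout {
        type uint32;
        units "seconds";
        description "Maximum time to wait for a server response.";
      }
      leaf retry-timer {
        type uint32;
        units "seconds";
        description "Time to wait before retrying a failed request.";
        // Constraint: the retry timer may never exceed the access timeout.
        must ". <= ../access-timeout" {
          error-message
            "The retry timer must not exceed the access timeout.";
        }
      }
    }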

Notice that the leaf element “retry-timer” has a constraint comparing it to the leaf “access-timeout,” with an explanatory error message. Since the goal of YANG in OpenDaylight is to describe the device’s behavior to a computer program so that configuration of the device can be automated, a model can’t rely on a trained administrator to know that certain options can’t be used with each other: the model must prevent the OpenDaylight application from telling a device to do something it can’t do.

While YANG as a language is well specified, further work is needed. True inter-vendor interoperability must extend beyond listing all of the configuration options for each device, and must involve creating high-level abstractions for service definitions. While much has been done to create standard YANG service models for the OpenFlow protocol in L2 Ethernet switching, SDN is still relatively new to the optical network. It will be exciting to see standards converge to deliver on the promise of true multivendor openness.

A Unified Network Combining Ethernet and DWDM

Carrier Ethernet is a very successful solution for providing services in a metropolitan area. This technology provides a variety of capabilities, including multiple classes of service; fast restoration; standardized services such as E-Line and E-LAN; and bandwidth guarantees. As demand grows in a metro Ethernet network, it becomes necessary to accommodate capacity beyond 10G access rings. DWDM is an economical technology for scaling networks beyond 10G. But an effective solution, ideally a unified network incorporating these two technologies, requires that all the components play well together.

The most common approach is deploying a DWDM overlay on top of the Carrier Ethernet network. This architecture is a solid choice, but it requires two separate network management systems that don’t talk to each other, which imposes high operational and administrative overhead and increases cost and complexity.

The Fujitsu NETSMART® 1200 Management System offers an attractive alternative. In combination with FLASHWAVE 5300 and FLASHWAVE 7120 platforms, NETSMART 1200 can integrate DWDM capabilities into the existing Carrier Ethernet network—eliminating the problem of dual management systems, while providing service management, end-to-end provisioning, and open interfaces. Each core network element has both core Ethernet switching and DWDM modules—an elegant, comprehensive, and unified solution.

SFP+ Delivers Precision Bandwidth Upgrades

Perhaps the most onerous issue facing Ethernet network operators is that of upgrading to higher-bandwidth services.

Typically, a network interface device (NID) is deployed at a new customer site on a ring that is shared among several customers. At this point, there is a decision to be made: should the NID be put on a 1 GbE ring or a 10 GbE ring?

Usually, traffic at the time of deployment warrants only a 1 GbE ring, but based on historical market trends, the aggregate bandwidth requirements of this ring will almost certainly increase to warrant a 10 GbE ring in the future. Thus, in this type of deployment, you have to decide up-front whether to invest in a 10 GbE ring initially without knowing when additional bandwidth will be needed. Alternatively, might it be more appropriate to go with a 1 GbE ring now and change to a 10 GbE ring later? Changing to a 10 GbE ring typically requires changing the NID, an expensive and troublesome activity, but this choice at least has the advantage of deferring the cost until the bandwidth is needed.

Now there’s a new approach to solving this dilemma. Small Form-Factor Pluggable (SFP) transceivers are widely adopted, small-footprint, hot-pluggable modules available in a variety of capacity and reach options, including 1 GbE. Enhanced Small Form-Factor Pluggable (SFP+) modules advance the original SFP technology and offer an elegant solution to the bandwidth growth issue: 10 GbE performance in devices that are physically compatible with SFP cages. In essence, you get all the convenience of SFPs, but with ten times the bandwidth.

This new capability, available in the Fujitsu FLASHWAVE® 5300 family of Carrier Ethernet devices, provides an exciting and economical solution to common bandwidth growth problems. A NID can be deployed with 1 GbE client ports and 1 GbE network ports using SFPs. Then, when traffic approaches full capacity, 10 GbE SFP+ transceivers can be substituted for the original set. The onerous issue of aggregate bandwidth growth suddenly becomes…not so onerous. Simple changes of optical modules let you cost-effectively target growth exactly where it is needed, without the burden and waste of whole-chassis replacements.

This same mechanism can also accommodate client port growth from 1 to 10 GbE. The initial installation can be sized with a more appropriate, lower-cost configuration of 1 GbE client and network SFPs, and then grow to 10 GbE when needed. The additional cost is incurred only as and when it is needed.

Importance of Fiber Characterization

Fiber networks are the foundation on which telecom networks are built. In the early planning stages of network transformation or expansion, it is imperative that operators perform a complete and thorough assessment of the underlying fiber infrastructure to determine its performance capabilities as well as its limits. Industry experts predict that as many as one-third of fiber networks will require modifications to their existing systems.

Front-end fiber analysis ensures that key metrics are met and that the fiber performs at the optimum level needed to handle the greater bandwidth required to transport data-intensive applications over longer distances. This saves the service provider time and money and prevents delays in the final test and turn-up phase of the expansion or upgrade project.

[Figure: Fiber architecture diagram showing fiber’s journey from the central office to real-world locations such as homes, businesses, and universities.]

[Figure: Full network diagram showing node locations, fiber types (including ELEAF and SMF-28), and the distances between nodes.]

[Figure: Images of clean and dirty fiber end faces, comparing clean fiber with fiber contaminated by dust, oil and liquid.]


Potential Problems & Testing Options

Fiber networks are composed of fiber of multiple types, ages and quality levels, all of which significantly affect the infrastructure’s transmission capabilities. Additionally, the fiber may come from several different fiber providers. The net result is that fiber transmission has several potential problem areas, including:

  • Aging fiber optics – Some fiber optic networks have been in operation for 25+ years. These legacy fiber systems weren’t designed to handle the sheer volume of data being transmitted on next-generation networks.
  • Dirty and damaged connectors – Dirty end faces are one of the most common problems that occur at the connectors. Environmental conditions such as oil, dirt, dust or static-charged particles can cause contamination.
  • Splice loss – Fibers are generally joined by fusion splicing. Variations in fiber types (and manufacturers), as well as in the splicing method used (fusion or mechanical), can all result in loss.
  • Bending – Excessive bending of fiber-optic cables may deform or damage the fiber. The light loss increases as the bend becomes more acute.  Industry standards define acceptable bending radii.

Fiber characterization testing evaluates the fiber infrastructure to make sure all the fiber, connectors, splices, laser sources, detectors and receivers are working at their optimum performance levels.  It consists of a series of industry-standard tests to measure optical transmission attributes and provides the operator with a true picture of how the fiber network will handle the current modernization as well as future expansions.  For network expansions that require new dark fiber, it is very important to evaluate how the existing fiber network interacts with the newly added fiber to make sure the fiber meets or exceeds the service provider’s expectations as well as industry standards such as TIA/ANSI and Telcordia.

There are five basic fiber characterization tests; a short sketch of the arithmetic behind several of them follows the list:

  • Bidirectional Optical Time-Domain Reflectometer (OTDR) – sends a light pulse down the fiber and measures the strength of the return signal as well as the time it took. This test shows the overall health of the fiber strand, including connectors, splices and fiber loss. Cleaning, re-terminating or re-splicing can generally correct problems.
  • Optical Insertion Loss (OIL) – measures the optical power loss that occurs when two cables are connected or spliced together. The insertion loss is the amount of light lost. Over longer distances, this loss can cause the signal strength to weaken.
  • Optical Return Loss (ORL) – sends a light pulse down the fiber and measures the amount of light that returns. Some light is lost at all connectors and splices. Dirty or poorly mated connectors cause scattering or reflections and result in weak light returns.
  • Chromatic Dispersion (CD) – measures the amount of dispersion on the fiber. In single-mode fiber, light at different wavelengths travels down the fiber at slightly different speeds, causing the light pulse to spread. When light pulses are launched close together and spread too much, information is lost. Chromatic dispersion can be compensated for with dispersion-shifted fiber (DSF) or dispersion compensation modules (DCMs).
  • Polarization Mode Dispersion (PMD) – occurs in single-mode fiber and is caused by imperfections inherent in the fiber that produce polarization-dependent delays of the light pulses. The end result is that light travels at different speeds, causing random spreading of optical pulses.
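
The arithmetic behind several of these measurements is simple enough to sketch. The following Python (illustrative values only; in practice, test equipment measures these directly) shows the decibel math for insertion and return loss, the OTDR distance calculation, and the chromatic dispersion pulse-spread formula:

    # Illustrative math behind the fiber characterization tests listed above.
    import math

    def insertion_loss_db(p_in_mw, p_out_mw):
        """Optical insertion loss: power lost between input and output."""
        return 10 * math.log10(p_in_mw / p_out_mw)

    def return_loss_db(p_incident_mw, p_reflected_mw):
        """Optical return loss: launched vs. reflected power (bigger is better)."""
        return 10 * math.log10(p_incident_mw / p_reflected_mw)

    def otdr_distance_km(round_trip_s, group_index=1.468):
        """Distance to an OTDR event from the pulse's round-trip time."""
        c_km_per_s = 299_792.458  # speed of light in vacuum, km/s
        return c_km_per_s * round_trip_s / (2 * group_index)

    def cd_pulse_spread_ps(d_ps_nm_km, length_km, linewidth_nm):
        """Chromatic dispersion spreading: D x L x delta-lambda."""
        return d_ps_nm_km * length_km * linewidth_nm

    # 1.0 mW launched, 0.5 mW received: about 3 dB of insertion loss.
    print(round(insertion_loss_db(1.0, 0.5), 1), "dB")
    # Typical SMF at 1550 nm (~17 ps/nm/km) over 80 km with a 0.1 nm source:
    print(cd_pulse_spread_ps(17, 80, 0.1), "ps of pulse spread")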

Once the fiber characterization is complete, the service provider receives a detailed analysis of the condition of the fiber plant, including the location of splice points and pass-throughs as well as the assignments of panels, racks and ports. They will also know whether any old fiber will be unable to support higher data rates now or after future upgrades. More importantly, by performing fiber characterization before transforming or expanding the telecom network, service providers can eliminate potential fiber infrastructure risks that could cause substantial delays during the final test and turn-up phases.

The Heart of the Matter: Virtuora Path Computation

The Virtuora Path Computation application is part of the Fujitsu Virtuora Product Suite for software-defined networking. The suite is based on a modular architecture designed for simplicity, control, and extreme flexibility, and it includes a network controller (Virtuora NC) along with supporting applications that help deliver services to market faster and more competitively.

The Virtuora Path Computation application is an automated software engine that calculates the optimal path for information traveling from one managed network element to another. It differs from traditional path computation, which resides at the node or switch level, in that it has the computational power to thrive in a multilayer, multivendor, multidomain network.

The Virtuora Path Computation application accommodates three primary use cases:

  • When a network operator is activating or deactivating a service
  • When a working path has failed
  • When a network fault alarm has been activated

The Virtuora NC product performs service activation, restoration, and fault management across multiple layers of the physical network. The Virtuora Path Computation application can accommodate constraints such as diversity (node, SRLG, and link) as well as cost (hops and latency). The application also provides more sophisticated path computation that can take into account the price of services and the risk of failure.

For example, Virtuora engages with the IP layer and assesses capacity on the Ethernet layer as well as physical bandwidth at the OTN and WDM layers. From there, it can activate an optimized service with or without constraints, provide a new path when a protected or unprotected path goes down, or route around the fault that triggered a network alarm.
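
To make constraint-aware path computation concrete, here is a minimal sketch (illustrative only; Virtuora’s actual engine is proprietary and far more capable). A Dijkstra-style search minimizes latency while skipping links whose shared-risk link groups (SRLGs) overlap those of the working path, a simple form of the diversity constraint described above:

    # Illustrative constraint-aware path computation (not Virtuora's engine).
    # Finds the lowest-latency path that avoids links sharing an SRLG with
    # an existing working path.
    import heapq

    # (node_a, node_b): (latency_ms, {srlg_ids}) -- hypothetical topology data
    LINKS = {
        ("A", "B"): (5, {101}),
        ("B", "Z"): (7, {102}),
        ("A", "C"): (6, {103}),
        ("C", "Z"): (9, {104}),
    }

    def neighbors(node):
        for (a, b), (latency, srlgs) in LINKS.items():
            if a == node:
                yield b, latency, srlgs
            elif b == node:
                yield a, latency, srlgs

    def diverse_shortest_path(src, dst, excluded_srlgs=frozenset()):
        """Dijkstra on latency, skipping links that share an SRLG with the
        excluded set (e.g., the SRLGs used by the working path)."""
        queue = [(0, src, [src])]
        visited = set()
        while queue:
            cost, node, path = heapq.heappop(queue)
            if node == dst:
                return cost, path
            if node in visited:
                continue
            visited.add(node)
            for nxt, latency, srlgs in neighbors(node):
                if srlgs & excluded_srlgs:
                    continue  # link violates the diversity constraint
                if nxt not in visited:
                    heapq.heappush(queue, (cost + latency, nxt, path + [nxt]))
        return None  # no path satisfies the constraints

    # Working path A-B-Z uses SRLGs {101, 102}; compute a diverse protect path.
    print(diverse_shortest_path("A", "Z", frozenset({101, 102})))
    # -> (15, ['A', 'C', 'Z'])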

The real power of the Path Computation application emerges when it is paired with Service Restoration. Virtuora can restore network services based on what is happening in the network at that moment. If a critical service goes down, Service Restoration invokes the Path Computation application and automatically switches to a protected path, then gracefully reverts to the original path once conditions clear.

Virtuora goes beyond the logical path, taking into account diverse physical routes that aren’t so obvious. It knows, and shows, when fiber pairs and fiber bundles that logically appear to be on different wavelengths are in fact physically on the same fiber. Virtuora will not allow operators to create a circuit whose paths share the same link.

Using the application is simple. A network operator opens the Virtuora network controller and clicks a button to create a path for a circuit. The operator describes and enters the circuit’s constraints in an intuitive, questionnaire-style dialog box on the console, and the application quickly returns a result. The outcome is A-to-Z provisioning that is wholly intentional about network configuration, as well as about the current and future state of the network.

Every network using Virtuora can take advantage of the Path Computation application. Because the output is a REST call containing the calculated best path, separate but integrated systems, such as an NFV orchestrator or proprietary OSS/BSS systems, can consume it.

The Virtuora Path Computation application demonstrates the value of disaggregated SDN architecture, modular applications, and an open-source controller. Because Virtuora separates control functions from the hardware and centralizes and manages them logically, its customers gain the ability to work across multiple devices, layers, and vendors quickly, efficiently, reliably, and, best of all, automatically. Get started building your network for the connected world using the Virtuora Path Computation application.

*** Special thanks to Anuj Dutia for providing subject matter expertise for this blog post.

From Static Hardware to Dynamic Software: Building Better Networks for the Connected World

One of the most popular statistics cited for technology diffusion, and the hyper-accelerated increase in technology adoption that goes with it, is the so-called “Angry Birds” Internet meme. The premise is this: it took the telephone 75 years to reach 50 million users, but it took the Angry Birds gaming application 35 days. A quick Google fact-check shows these stats to be a bit squishy, but the conclusion is the same. The technology is here, and the time is now. Angry Birds Space reached 50 million users in 35 days because the network was in place to support widespread adoption of the application.

This rate of uptake, and the incremental revenue that comes with it, can only be achieved with a network and supporting software that are ready to exploit it, and with users who are hungry for the service. Your customers have the appetite to consume services at Angry Birds speed. Right now. Don’t you want a bigger piece of that pie?

These global networking trends are driving the network evolution:

  • Mobility, including digital business, the enabled consumer, and the distributed workforce
  • Social media (YouTube, LinkedIn, Facebook, Twitter, Instagram, Snapchat, WhatsApp)
  • Cloud commerce (online shopping and auctions, the application economy, the sharing economy, and streaming entertainment)
  • Big data and its monetization: social media monetizes information and network-usage patterns through analytics, enabling better and more frequent interaction
  • The Internet of Things (IoT)

The typical network and its supporting operations require weeks or months to roll out new revenue-generating services, whereas cloud providers can turn up new services almost instantly. These cloud builders are dynamic, capable of delivering single-purpose services that fulfill a timely need or trend. Service providers need innovation, now more than ever, to get into that game. We must transition from our traditional static operating model to on-demand, automated, and programmable networks. And this transition demands a new way of thinking about carrier networks.

To get there, you need an open, adaptable, programmable network.

By “open,” we mean open-open. Publishing a Software Developer’s Kit (SDK) to a vendor-proprietary system and calling that “open” isn’t what we mean. When we say “open,” we are referring to interoperable systems that are built on open-source technologies. We mean vendors that will bring you more innovative technologies, with inherently complementary services and appliances. Open-source simplifies things—and avoids vendor lock-in.

An “adaptable network” reduces operations and management complexity, improves service creation and activation times, and is nimble. Spend your precious resources developing a network that is aware of its own state and capable of self-optimizing: when the network flags an opportunity to improve, make that change. Quickly. Without rolling a truck.

When we talk about network programmability, we are referring to more than simple provisioning. We are describing the kind of network intelligence that facilitates zero-touch provisioning, resilience, and fault management. This kind of network intelligence will allow us to create a service-driven network that we can do different, exciting things with—things unimaginable only a few short years ago.

The network you have today can do this, but it must evolve.

  • Open up your network. Partner with multiple suppliers and build new NEs, defining the services you want to deliver along the way. No excess, no functionality that you don’t need. No wasted resources or capital expense.
  • Start thinking about disaggregating the hardware: big iron routers, packet shelves, form factors, software, lambdas, transport, switching, and access. Pull that hardware out of the static chassis. Going forward, buy only the network elements you will actually use and pay for, and use software to manage and program them.
  • Get comfortable with the idea of using centralized software that has a global view of the network; automatically allocate resources where they are needed and when they are needed, dynamically optimizing network performance in real time.
  • Be creative about what kind of services and products a virtualized datacenter can deliver.
  • Above all—put your customers first. Don’t start with the premise that you can only do what the network will allow you to do; think about the services consumers are clamoring for, then find a way to deliver that.

At Fujitsu, we think about Human-centric innovation all the time. We see that consumer expectations are changing and increasing, and we are responding with networks that are more aware, more adaptive, and more agile. We believe the network must transform to meet its customers’ needs. We are committed to quickly and effectively connecting people to the experiences they seek by facilitating access to content and information, not just the underlying technology.

It’s a very exciting time at Fujitsu. In the coming weeks, please visit us here to see how this vision is playing out. We’re also interested in your feedback: DM us on Twitter @FujitsuFNC, or message us on LinkedIn.

Author’s Note: Special thanks go to fellow author and principal solutions architect Bill Beesley, who provided most of the technical perspective for this blog post in his presentations at our Fujitsu Solution Days demo events in November 2014 and 2015.

How Disaggregation is Paving our Path Forward

The optical networking industry is on the edge of revolutionary change, and while it may sound trite to talk about “the network of the future,” this is what’s approaching in a very real and immediate sense.

Two long-term market trends—industry consolidation and the convergence of IT and networking—have propelled the industry to its first major inflection point in decades. Two technological trends—the software revolution and the disaggregation phenomenon—are taking us forward.

Of these trends, disaggregation is the key to open, agile, plug-and-play networking as we will know it in five years’ time. This is because before we can progress, we have to separate the individual functions and capabilities that comprise today’s tightly integrated hardware systems. Disaggregation is, therefore, not just the latest buzzword. It’s a prerequisite that must be met before we can form an ecosystem of new industry partnerships, and collaborate to rebuild and improve those componentized functions—and add the automation and intelligence that the market’s clamoring to buy.

Fujitsu is shifting the architectural design of its optical networking platforms away from complex, vertically integrated hardware-based structures into a disaggregated architecture. This change will give rise to re-aggregation in the form of economical, generic hardware elements, open-standards software frameworks, and interoperable functional modules. We call this “componentization.”

Once disaggregation has happened, we can drive fast implementation times, simpler development cycles, lower costs, and an overall climate of unprecedented innovation, collaboration, and opportunity not just for the industry itself, but for our customers and their customers as well.

Innovation, collaboration and opportunity, the broad benefits of the disaggregation journey, all grow best in an open, nonproprietary climate. They thrive when the overall environment is flexible, partnership-oriented and fast-paced. And while we’re aware of, and prepared for, the inevitable transition period with its demands for backwards compatibility, we’re committed to taking the first bold steps down the road to disaggregation in fall 2015. It’s an exciting time to be at Fujitsu, an exciting time to be in the optical networking business, and things are only going to get more interesting from here.

The Promise of VoLTE: It’s Only a Matter of Time

Voice over LTE (VoLTE) is considered by many to be revolutionary both for mobile operators and for subscribers. Operators, once they have established their VoLTE networks, will no longer have to maintain separate networks for voice (circuit-switched) and data (packet-switched). This will save on operational and capital expenses. Subscribers who use VoLTE will be able to use high-quality voice and data applications simultaneously, and the clarity of their voice calls will improve.

So why has VoLTE taken longer than anticipated to deploy? The answer lies in several challenges, which I’d like to discuss briefly.

  • The successful roll-out of HD voice and video calling services requires VoLTE technology to be simultaneously available in both the mobile network core and on mobile handsets. Most mobile operators’ core infrastructures are not fully equipped to simultaneously support circuit-switched and packet voice. In addition, VoLTE-enabled handsets are not yet widely available.
  • VoLTE promises to move wireless calls from the legacy circuit-switched network to the all-IP-based LTE network. The formidable task of supporting both circuit-switched and packet-based technologies, however, is not economical for mobile operators in markets where LTE is not yet deployed. What’s more, mobile operators are still resolving VoLTE call interoperability issues to support customers who are roaming.
  • Finally, successful VoLTE deployment depends heavily on nationwide LTE deployment and the adoption of LTE-based small cells for in-building voice enhancements. Adoption of LTE-based small cells for residential and enterprise applications is still very low.

Despite these challenges, numerous efforts are underway to realize the promise of VoLTE. To expedite nationwide rollout, several leading mobile operators have begun limited trials in major markets, testing voice quality and equipment interoperability. Suppliers are slowly rolling out VoLTE-enabled handsets, and a consortium of mobile operators is negotiating roaming agreements for smooth inter-network transitions. The potential long-term benefits of VoLTE to operators and consumers alike are too great to miss, and they easily outweigh any temporary setbacks.