Service Provisioning is Easier with YANG and the 1FINITY Platform

Years ago, alarm monitoring and fault management were difficult across multivendor platforms. These tasks became significantly easier—and ubiquitous—after the introduction of the Simple Network Management Protocol (SNMP) and Management Information Base (MIB-II).

Similarly, multivendor network provisioning and equipment management have proved elusive. The reason is the complexity and variability of provisioning and management commands in equipment from multiple vendors.

Could a new protocol and data modeling language again provide the solution?

Absolutely. Over time, NETCONF and YANG will do for service provisioning what SNMP and MIB-II did for alarm management.

Recently, software-defined networking (SDN) has introduced IT and data center architectural concepts to network operators by separating the control plane from the forwarding plane in network devices and allowing them to be controlled from a central location. Innovative disaggregated hardware leverages this new SDN paradigm with centralized control and the use of YANG, a data modeling language used to define device and service application program interfaces (APIs).

YANG offers an open model that, when coupled with the benefit of standard interfaces such as NETCONF and REST, finally supports multivendor management. This approach provides an efficient mechanism to overcome the complexity and idiosyncrasies inherent in each vendor’s implementation.

Fujitsu’s response to this evolution is the 1FINITY™ platform, a revolutionary disaggregated networking solution. Rather than creating a multifunction converged platform, each 1FINITY function resides in a 1RU blade: transponder/muxponder blades, lambda DWDM blades, and switching blades. Each blade delivers a single function that previously resided in a converged architecture—the result is scalability and pay-as-you-grow flexibility.

Each 1FINITY blade has an open API and supports NETCONF and YANG, paving the way for a network fully rooted in the new SDN and YANG paradigm. New 1FINITY blades are easy to program via an open, interoperable controller, such as Fujitsu Virtuora® NC. Since each blade has a YANG model, it’s easy to include provisioning and management in a networkwide element management function.
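
As a rough illustration of what YANG- and NETCONF-based provisioning looks like from a controller or script, the sketch below uses the open-source Python ncclient library to connect to a NETCONF-capable device, list the YANG modules it advertises, and retrieve its running configuration. The address and credentials are placeholders, not values for any particular 1FINITY blade.

  # Minimal sketch: retrieve YANG-modeled configuration over NETCONF.
  # The host, credentials, and output handling are illustrative placeholders.
  from ncclient import manager

  with manager.connect(
      host="192.0.2.10",        # example address, not a real device
      port=830,                 # standard NETCONF-over-SSH port
      username="admin",
      password="admin",
      hostkey_verify=False,
  ) as m:
      # The hello exchange lists the YANG modules the device supports.
      for capability in m.server_capabilities:
          if "module=" in capability:
              print(capability)

      # The reply is XML whose structure follows the device's YANG models.
      reply = m.get_config(source="running")
      print(reply.data_xml[:500])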

Any open-source SDN controller that enables multivendor and multilayer awareness of the virtual network will revolutionize network management and control systems. Awareness of different layers and different vendor platforms will result in faster time to revenue, greater customer satisfaction, increased network optimization, and new services that are easier to implement.

Multilayer Data Connectivity Orchestration: Exploring the CENGN Proof of Concept

While computer technology in the broad sense has advanced rapidly and dramatically over the past three decades, networking has remained virtually unchanged since the 1990s. One of the main problems facing providers around the world is that the numerous multivendor legacy systems still in service don’t support fast, accurate remote identification, troubleshooting, and fault resolution. The lack of remote fault resolution capabilities is compounded by the complex, closed, and proprietary nature of legacy systems, as well as the proliferation of protocols on the southbound interface. As a result, networks are difficult to optimize, troubleshoot, automate, and customize. SDN (Software-Defined Networking) is set to solve these issues by decoupling the control plane from the forwarding plane, while also bringing the benefits of lower cost and overhead, virtual network management, virtual packet forwarding, extensibility, better network management, faster service activation, reduced downtime, ease of use, and open standards.

Why Multilayer SDN is Needed

One of the issues facing network operators is that no SDN controller offers a streamlined topology view of both the optical transport and packet layers. That’s why coordination between transport and IP/MPLS layer management is one of the most promising approaches to an optimized, simplified multilayer network architecture. However, this coordination brings significant technical challenges, since it involves interoperation between very different technologies on each of the network layers, each with its own protocols, approach, and legacy for network control and management.

Traditionally, transport networks have relied on centralized network management through a Network Management System or Element Management System (NMS/EMS), whereas the IP/MPLS layer uses a distributed control plane to build highly robust and dynamic network topologies. These fundamentally different approaches to network control have been a significant challenge over the years when the industry has tried to realize a closer integration between both network layers.

Although there has been a lot of R&D in this area (one example is the addition of optical transport extensions to OpenFlow from version 1.3 onward), there are few, if any, successful implementations of multilayer orchestration through SDN.

It’s important to mention a common misconception about SDN: the assumption that SDN goes hand in hand with the OpenFlow protocol. OpenFlow (an evolution of the Ethane protocol) is just a means to an end, namely separation of the control and data planes. OpenFlow is a communication protocol that gives access to the forwarding plane of a network element (switch, router, optical equipment, etc.). SDN isn’t dependent on OpenFlow specifically; it can also be implemented using other protocols for southbound communication, such as NETCONF/YANG, BGP, and XMPP.

A Multilayer, Multivendor SDN Proof of Concept

To address the issues outlined above, CENGN (Canada’s Centre of Excellence in Next Generation Networks), in collaboration with Juniper Networks, Fujitsu, Telus, and CENX, initiated a PoC to demonstrate true end-to-end multilayer SDN orchestration of an MPLS-based WAN over optical infrastructure.

In the PoC, the CENX Cortx Service Orchestrator serves as a higher-layer orchestrator that optimally synchronizes the MPLS and optical layers. The MPLS layer uses Juniper’s NorthStar SDN controller for Layer 2–3 devices, and the optical transport layer uses the Fujitsu Virtuora® Network Controller. All northbound integration is through a REST API; upon notification of failures or policy violations, the orchestrator dynamically adjusts the optical or packet layers via the SDN controllers, ensuring optimal routing and policy conformance.
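
To make the northbound integration pattern concrete, here is a hedged sketch: an orchestrator polls a controller’s REST API for link state and asks it to reroute services around links that have gone down. The base URL, endpoints, payload fields, and credentials are hypothetical stand-ins; they do not describe the actual Cortx, NorthStar, or Virtuora APIs.

  # Hypothetical sketch of REST-based northbound integration between an
  # orchestrator and an SDN controller. Endpoints and payload fields are
  # invented for illustration; they are not the real Cortx/NorthStar/Virtuora APIs.
  import requests

  CONTROLLER = "https://controller.example.net"   # placeholder base URL
  AUTH = ("admin", "admin")                        # placeholder credentials

  def fetch_links():
      """Pull the current link inventory from the controller's REST API."""
      resp = requests.get(f"{CONTROLLER}/api/topology/links", auth=AUTH, timeout=10)
      resp.raise_for_status()
      return resp.json()["links"]

  def reroute_around_failures(links):
      """Ask the controller to reroute services that ride on failed links."""
      for link in links:
          if link["oper-status"] == "down":
              resp = requests.post(
                  f"{CONTROLLER}/api/services/reroute",
                  json={"exclude-link": link["id"]},
                  auth=AUTH,
                  timeout=10,
              )
              resp.raise_for_status()

  if __name__ == "__main__":
      reroute_around_failures(fetch_links())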

Scenarios

The proof of concept consists of the following scenarios:

STEP 1: FAILURE IN OPTICAL DOMAIN

  • Optical link failure (via cable pull or manual port failure).
  • Cortx Orchestrator gets link failure alarms from Virtuora, stores them and updates path info.

STEP 2: PACKET REROUTE

  • Cortx Service Orchestrator receives link failure alarms from Juniper MPLS and stores them.
  • Cortx Service Orchestrator receives updated topology information from the SDN controllers.
  • Juniper MPLS automatically re-routes the blue label-switched path and notifies Cortx Service Orchestrator of the link state changes.

STEP 3: CORTX SRLG NOTIFICATION

  • Cortx Service Orchestrator processes the new topology and raises an alert of a network policy violation, which remains in effect until the situation is corrected (a simplified sketch of this check appears after the scenario list).
  • Cortx Service Orchestrator notifies the operations user of policy violation.

STEP 4: PACKET DOMAIN ADJUSTMENTS

  • Virtuora turns up optical links and alerts Cortx Service Orchestrator of the topology change.
  • The policy violation is cleared when the condition is corrected.
  • The LSP is rerouted through the newly provisioned optical paths.
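
To make the SRLG policy check in steps 2 and 3 concrete, the sketch below shows the kind of logic an orchestrator could apply: given the links each label-switched path currently traverses and the shared-risk link groups those links belong to, flag any pair of supposedly diverse LSPs that now shares a risk group. The data structures and values are invented for illustration and are not taken from the Cortx implementation.

  # Simplified, illustrative SRLG policy check (not CENX Cortx code).
  # srlg_by_link maps a link ID to the shared-risk link groups it belongs to;
  # lsp_paths maps an LSP name to the links it currently traverses.

  def srlg_violations(lsp_paths, srlg_by_link, diverse_pairs):
      """Return the supposedly diverse LSP pairs that now share at least one SRLG."""
      violations = []
      for lsp_a, lsp_b in diverse_pairs:
          srlgs_a = {g for link in lsp_paths[lsp_a] for g in srlg_by_link.get(link, ())}
          srlgs_b = {g for link in lsp_paths[lsp_b] for g in srlg_by_link.get(link, ())}
          shared = srlgs_a & srlgs_b
          if shared:
              violations.append((lsp_a, lsp_b, shared))
      return violations

  # Example: after the reroute in step 2, "blue" and "red" cross different links
  # that ride the same fiber bundle (SRLG 17), so a policy alert is raised.
  lsp_paths = {"blue": ["L1", "L3"], "red": ["L2", "L4"]}
  srlg_by_link = {"L1": {11}, "L2": {12}, "L3": {17}, "L4": {17}}
  print(srlg_violations(lsp_paths, srlg_by_link, [("blue", "red")]))
  # -> [('blue', 'red', {17})]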

Conclusion

This is an excellent model of how a collaborative, multivendor, multilayer approach based on open standards can drive the industry towards the network of the future. By providing a functional example of real-time operations across multivendor platforms, this project has shown that multilayer data connectivity orchestration—and the benefits it offers—is feasible in a realistic situation. Other proofs of concept at the CENGN laboratories will continue to advance SDN and NFV technologies, helping to refine functionality and move towards production systems.

YANG: the Very Model of a Modern Network Manager

Network management is undergoing a change towards openness, driven by the competing desires to reduce vendor lock-in without increasing operational effort. Software Defined Networking (SDN) is maturing from programmatic interfaces into suites of applications, powered by network controllers developed through multivendor open-source efforts. One such controller is OpenDaylight, which brings a compelling feature to SDN: YANG models of devices and services.

YANG is a standard data modeling language humorously named “Yet Another Next Generation,” in part because it grew out of efforts to create a next-generation SNMP data definition syntax, work that was later applied to the next-generation NETCONF network device configuration protocol. YANG provides the structure and information to describe the behavior, capabilities, and restrictions of devices in a manner that can be incorporated into an abstracted framework. OpenDaylight uses YANG models to present a unified user interface that hides the differences between devices from different vendors and allows them to work together without requiring an administrator to know the details of those devices.

The concept of using automation to reduce both the required level of device knowledge and the possibility of mistakes due to mistyping is not new. For many years customers have used EMSs (Element Management Systems) and NMSs (Network Management Systems) to create configuration templates and push bulk changes to devices. Most of these tools are vendor-specific, but they succeed at reducing the level of effort. Other organizations have created home-grown tools using scripting languages like Perl to interface with device CLIs via Expect. This technique takes both programming skill and device knowledge, and the resulting scripts solve specific problems at the cost of being tied to a single vendor, device model, and OS version. With the addition of YANG, however, there is the potential to create cross-vendor tools that solve the same problems with less effort. When used in OpenDaylight, YANG models also provide an additional feature: OpenDaylight creates a REST interface, called RESTCONF, based on the YANG models it has imported. Since the RESTCONF calls are based on the abstracted YANG model, it’s possible to separate the core logic from the details of device configuration, so a script can potentially work with multiple different devices. At the OpenDaylight Summit in July 2016, Brian Freeman of AT&T said that the use of YANG models “allows us to prototype apps in hours.” YANG clearly has the potential to deliver better tools faster.
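
As an example of the kind of call this enables, the sketch below queries an OpenDaylight controller’s RESTCONF interface for its operational network topology. The port, path, and default credentials follow conventions used by older OpenDaylight releases, but treat them as assumptions that vary with the release and the features installed.

  # Illustrative RESTCONF query against an OpenDaylight controller.
  # The port, path, and credentials follow older ODL conventions and are
  # assumptions here; adjust for your release and installed features.
  import requests

  ODL = "http://localhost:8181"          # common ODL RESTCONF port (assumption)
  url = f"{ODL}/restconf/operational/network-topology:network-topology"

  resp = requests.get(
      url,
      auth=("admin", "admin"),           # default ODL credentials (assumption)
      headers={"Accept": "application/json"},
      timeout=10,
  )
  resp.raise_for_status()

  # The JSON structure mirrors the network-topology YANG model, so the same
  # parsing logic works for any device the controller can model.
  for topology in resp.json()["network-topology"]["topology"]:
      print(topology.get("topology-id"), len(topology.get("node", [])))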

That’s not to say that YANG is easy. YANG enables application user interfaces to be easy, but a lot of detail goes into a YANG model for a device and for the services that device can provide. A useful YANG model should describe the hierarchical configuration options, the valid ranges, and the constraints. Therefore, the source of device-specific YANG models will mostly be vendors.

The code example below shows a constraint in a YANG model, based on an example from a presentation at IETF 71. The original snippet isn’t reproduced here, so this is a minimal reconstruction of the constraint described in the next paragraph; the container name, types, and exact wording are illustrative.
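
  // Illustrative reconstruction only: the container, types, and descriptions
  // are assumptions; the "must" constraint mirrors the behavior described below.
  container session {
    leaf access-timeout {
      type uint16;
      units "seconds";
      description "How long to wait for the server before giving up.";
    }
    leaf retry-timer {
      type uint16;
      units "seconds";
      must 'current() < ../access-timeout' {
        error-message
          "The retry timer must be less than the access timeout.";
      }
      description "How long to wait before retrying the server.";
    }
  }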

Notice that the leaf element “retry-timer” has a constraint comparing it to the leaf “access-timeout,” with an explanatory error message. Since the goal of YANG in OpenDaylight is to describe a device’s behavior to a computer program so that configuration of the device can be automated, a model can’t rely on a trained administrator to know that certain options can’t be used with each other: the model must prevent the OpenDaylight application from telling a device to do something it can’t do.

While YANG as a language is well specified, further work is needed. True inter-vendor interoperability must extend beyond listing all of the configuration options for each device, and must involve creating high-level abstractions for service definitions. While much has been done to create standard YANG service models for the OpenFlow protocol in L2 Ethernet switching, SDN is still relatively new to the optical network. It will be exciting to see standards converge to deliver on the promise of true multivendor openness.

The Heart of the Matter: Virtuora Path Computation

The Virtuora Path Computation application is part of the Fujitsu Virtuora Product Suite for software-defined networking. The product suite is based on a modular architecture designed for simplicity, control, and extreme flexibility. The suite includes a network controller (Virtuora NC) and supporting applications that help bring services to market faster and more competitively.

The Virtuora Path Computation application is an automated software engine that calculates the optimal path for information being sent from one managed network element to another. It differs from traditional path computation in that, rather than residing at the node or switch level, it has the computational power to thrive in a multilayer, multivendor, multidomain network.

The Virtuora Path Computation application accommodates three primary use cases:

  • When a network operator is activating or deactivating a service
  • When a working path has failed
  • When a network fault alarm has been activated

The Virtuora NC product performs service activation, restoration, and fault management across multiple layers of the physical network. The Virtuora Path Computation application can accommodate constraints such as diversity (node, SRLG, and link) as well as cost (hops and latency). The application also provides for more sophisticated path computation that takes into account the price of services and the risk of failure.

For example, Virtuora engages with the IP layer, assesses the capacity on the Ethernet layer and the physical bandwidth at the OTN and WDM layer. From there, it can activate an optimized service with or without constraints, provide a new path when a protected or non-protected path goes down, or route around the fault that triggered the network alarm.
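
To give a feel for what constraint-aware path computation involves, without implying that this is how Virtuora implements it, the sketch below runs a lowest-cost path search over a tiny invented topology while excluding any link that shares a risk group with a set of SRLGs to avoid (for example, the SRLGs of the working path). The topology, costs, and SRLG assignments are made up for illustration.

  # Illustrative constraint-aware path search; not Virtuora's implementation.
  # Finds the lowest-cost path while avoiding any link whose SRLGs overlap
  # with a set of risk groups to exclude (e.g., those of the working path).
  import heapq

  def diverse_path(graph, srlg_by_link, src, dst, exclude_srlgs):
      """graph: {node: [(neighbor, link_id, cost), ...]}."""
      heap = [(0, src, [src])]
      visited = set()
      while heap:
          cost, node, path = heapq.heappop(heap)
          if node == dst:
              return cost, path
          if node in visited:
              continue
          visited.add(node)
          for neighbor, link, link_cost in graph.get(node, []):
              if srlg_by_link.get(link, set()) & exclude_srlgs:
                  continue                  # skip links that share excluded risk groups
              if neighbor not in visited:
                  heapq.heappush(heap, (cost + link_cost, neighbor, path + [neighbor]))
      return None                           # no path satisfies the diversity constraint

  # Tiny invented topology: A-B-D is cheapest but shares SRLG 7 with the
  # working path, so the computation falls back to A-C-D.
  graph = {
      "A": [("B", "L1", 1), ("C", "L2", 2)],
      "B": [("D", "L3", 1)],
      "C": [("D", "L4", 2)],
  }
  srlg_by_link = {"L1": {7}, "L2": {9}, "L3": {7}, "L4": {10}}
  print(diverse_path(graph, srlg_by_link, "A", "D", exclude_srlgs={7}))
  # -> (4, ['A', 'C', 'D'])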

The real power of the Path Computation application emerges when it’s paired with Service Restoration. Virtuora is capable of restoring network services based on current network conditions. If a critical service goes down, Service Restoration invokes the Path Computation application and automatically switches to a protected path, then gracefully reverts to the original path once conditions clear.

Virtuora goes beyond the logical path, taking into account physical routes that aren’t so obvious. It knows, and shows, when wavelengths that appear to follow logically separate paths in fact ride the same physical fiber pair or fiber bundle, and it will not allow operators to create a circuit that reuses the same underlying link.

Using the application is simple. A network operator opens the Virtuora network controller and clicks a button to create a path for a circuit. An intuitive, questionnaire-style dialog box on the console walks the operator through describing and entering the circuit’s constraints, and quickly returns a result. The outcome is A-to-Z provisioning that is wholly intentional about network configuration, as well as the current and future state of the network.

Every network using Virtuora can take advantage of the Path Computation application. Because the output is a REST call with the best path calculated, separate but integrated systems like the NFV Orchestrator—and proprietary OSS/BSS systems—can use it.

The Virtuora Path Computation application demonstrates the value of disaggregated SDN architecture, modular applications, and an open-source controller. Because the control functions are separated from the hardware and logically centralized and managed, Virtuora customers gain the ability to work across multiple devices, layers, and vendors quickly, efficiently, reliably, and, best of all, automatically. Get started building your network for the connected world using the Virtuora Path Computation application. #fujitsu #humancentricinnovation

*** Special thanks to Anuj Dutia for providing subject matter expertise for this blog post.

From Static Hardware to Dynamic Software: Building Better Networks for the Connected World

One of the most popular statistics cited for technology diffusion and the associated hyper-accelerated pace of technology adoption is the so-called “Angry Birds” Internet meme. The premise is this: it took the telephone 75 years to reach 50 million users, but it took the Angry Birds gaming application 35 days. A quick Google fact check shows these stats to be a bit squishy, but the conclusion is the same. The technology is here, and the time is now. Angry Birds Space was able to reach 50 million users in 35 days because the network was in place to support widespread adoption of the application.

This rate of uptake and its incremental revenue can only be achieved with a network and supporting software that is ready to exploit it, and users who are hungry for the service. Your customers have the appetite to consume services at Angry-Bird speed. Right now. Don’t you want a bigger piece of that pie?

These global networking trends are driving the network evolution:

  • Mobility, including digital business, the enabled consumer, and the distributed workforce
  • Social media (YouTube, LinkedIn, Facebook, Twitter, Instagram, Snapchat, WhatsApp)
  • Cloud commerce (online shopping and auctions, the application economy, the sharing economy, and streaming entertainment)
  • Big data and its monetization: social media monetizing information and network-usage patterns through analytics, enabling better and more frequent interaction
  • Internet of Things (IoT)

The typical network and its supporting operations require weeks or months to roll out new revenue-generating services, whereas cloud providers can turn up new services instantly. These cloud builders are dynamic and capable of delivering single-purpose services that fulfill a timely need or trend. Service providers need innovation, now more than ever, to get into that game. We must transition from the traditional static operating model to on-demand, automated, and programmable networks. This transition demands a new way of thinking about carrier networks.

To get there, you need an open, adaptable, programmable network.

By “open,” we mean open-open. Publishing a Software Developer’s Kit (SDK) to a vendor-proprietary system and calling that “open” isn’t what we mean. When we say “open,” we are referring to interoperable systems that are built on open-source technologies. We mean vendors that will bring you more innovative technologies, with inherently complementary services and appliances. Open-source simplifies things—and avoids vendor lock-in.

An “adaptable” network reduces operations and management complexity, improves service creation and activation times, and stays nimble. Spend your precious resources developing a conscious network, one that is aware of its own state and capable of self-optimizing. When an opportunity to optimize appears, make the change. Quickly. Without rolling a truck.

When we talk about network programmability, we are referring to more than simple provisioning. We are describing the kind of network intelligence that facilitates zero-touch provisioning, resilience, and fault management. This kind of network intelligence will allow us to create a service-driven network that we can do different, exciting things with—things unimaginable only a few short years ago.

The network you have today can do this, but it must evolve.

  • Open up your network. Partner with multiple suppliers and build new network elements (NEs), defining the services you want to deliver along the way. No excess, no functionality that you don’t need. No wasted resources or capital expense.
  • Start thinking about disaggregating the hardware: big-iron routers, packet shelves, form factors, software, lambda, transport, switching, and access. Pull that hardware out of the static chassis. Going forward, buy only the network elements you actually use and pay for, and use software to manage and program them.
  • Get comfortable with the idea of using centralized software that has a global view of the network; automatically allocate resources where they are needed and when they are needed, dynamically optimizing network performance in real time.
  • Be creative about what kind of services and products a virtualized datacenter can deliver.
  • Above all, put your customers first. Don’t start with the premise that you can only do what the network will allow; think about the services consumers are clamoring for, then find a way to deliver them.

At Fujitsu, we think about Human-centric innovation all the time. We see that consumer expectations are changing and increasing, and we are responding with networks that are more aware, more adaptive, and more agile. We believe the network must transform to meet its customers’ needs. We are committed to quickly and effectively connecting people to the experiences they seek by facilitating access to content and information, not just the underlying technology.

It’s a very exciting time at Fujitsu. In the coming weeks, please visit us here to see how this vision is playing out… we’re also interested in your feedback. DM us on Twitter @FujitsuFNC, or message us on LinkedIn.

Author’s Note: Special thanks goes to fellow author and principal solutions architect Bill Beesley, who provided most of the technical perspective for this blog post in his presentation at our Fujitsu Solution Days demo events in November 2014 and 2015.