About Andres Viera

Andres is the Product Line Manager for Virtuora Service Activator powered by UBiqube. In this role, he is responsible for defining SDN/NFV control solutions, use cases, and detailed product requirements. He is also an expert in multi-layer network orchestration and control framework solutions, as well as IP-layer service lifecycle management and network control applications. He holds bachelor’s and master’s degrees in mechanical engineering from Rensselaer Polytechnic Institute, and an MBA in Finance from Fordham University.

Discovery and Inventory Management in the Age of Network Automation

How does your operations team find out conclusively what devices are in the network? If you ask them for information about the network’s operational and configuration status, what answers might they give you? Does your team begin to provision services, only to find that the devices are already configured for a different service?

For most service providers, keeping tabs on what’s in the network is getting complicated, if not impossible. Inventory data is typically maintained manually, stored in spreadsheets or scattered among multiple inventory systems. As such, it’s often inaccurate, out of date and even useless.

Not knowing for sure what network devices you have, how they’re connected, where they’re located, what their status is, or what resources are available for new services can cause serious problems. The idea of running a business where the underlying assets used to drive revenue are unknown should be unthinkable. Unfortunately, many providers find themselves in exactly this situation, struggling to keep track of a misconfigured network with stranded assets.

Why? The answers are far from simple. Most inventory solutions require manual lookup in multiple databases, or a laborious process of discovery through intermediary network management systems. The challenge with these methods is that they are not real time and they rely on complicated connection methods for discovering the network. There is a great deal of room for error, so the information needed to make decisions is rarely accurate. What’s missing is a real-time, unified network view for inventory management.

With network automation, control and advanced analytics in the ascendant, the data integrity and real-time status of network inventory are paramount. Many providers start deploying new network automation, analytics, and operations systems, only to realize that, without accurate network information, these new tools are unable to deliver the promised value. A solid foundation of real-time network information is vital to increase visibility for operations, support network analytics, boost service productivity, and reduce costs.

The key principles that apply to network discovery and inventory management are as follows:

  1. Deploy a centralized network discovery engine that connects directly to devices and controllers and is vendor-agnostic. The discovery function must support a set of protocols that connect to the network and must be able to adapt quickly to new devices and to changes in existing devices.
  2. Implement real-time network discovery so that network data remains up to date in order to provide valid support for decisions about network services as well as inputs for network analytics. This information must be accurate at all times, to improve confidence in decision making.
  3. Choose a system that enables discovery of more than the physical resources (devices, cards and ports). You must remain aware of the logical resources as well, since these are the building blocks for the services the network delivers. Additionally, ensure that you can discover and audit resources and services in real time, and keep track of valuable components needed to provide revenue-generating services.
  4. Provide an open interface through which processes and applications can query, in real time, the network information on which they depend. A minimal example of such a query interface follows this list.
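
As a concrete illustration of the fourth principle, here is a minimal sketch of what querying such an open inventory interface could look like. The endpoint, resource names, and fields are hypothetical; an actual discovery and inventory product defines its own API and data model.

    # Minimal sketch of an open, real-time inventory query interface.
    # The endpoint, resource names, and fields below are hypothetical.
    import requests

    INVENTORY_API = "https://inventory.example.net/api/v1"

    def get_device(device_id: str) -> dict:
        """Fetch the live record for one device: hardware, status, and resources."""
        resp = requests.get(f"{INVENTORY_API}/devices/{device_id}", timeout=10)
        resp.raise_for_status()
        return resp.json()

    def find_free_ports(device_id: str, speed_gbps: int) -> list[dict]:
        """Return ports on a device that are up but not yet assigned to a service."""
        device = get_device(device_id)
        return [
            port for port in device.get("ports", [])
            if port["speed_gbps"] == speed_gbps
            and port["oper_status"] == "up"
            and not port.get("service_id")   # unassigned = available for new services
        ]

    if __name__ == "__main__":
        # Example: check for a free 100G port before provisioning a new service.
        free = find_free_ports("roadm-dallas-01", speed_gbps=100)
        print(f"{len(free)} free 100G ports on roadm-dallas-01")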

Achieving 100% real-time network data accuracy is vital for a service provider’s business and network operations. Proper network discovery and inventory data integrity reduces the time it takes to deploy infrastructure, reduces the cost of operations by eliminating errors and network churn, and provides real-time information about the network. There’s no more worry over inaccurate data or nasty surprises about what the network looks like. A service provider can now answer questions about network utilization and new services, and provision services quickly, without the risks presented by inaccurate data. In sum, by deploying a true real-time discovery and inventory management solution, you can leverage network assets to deliver more revenue with lower operating costs.

Four Key Enablers of Automated, Multi-Domain Optical Service Delivery

New advancements in software-defined control and network automation are enabling a transformation in optical service delivery. Stitching together network connectivity across vendor-specific domains is labor-intensive; now those manual processes can be automated with emerging solutions like multi-vendor optical domain control and end-to-end service orchestration. These new solutions provide centralized service control and management capable of reducing operational costs and errors, as well as speeding up service delivery times. While this sounds good, it can be all too easy to gloss over the complexities of decades-old optical connectivity services. In this blog post, I will explore the four enabling technologies for multi-domain optical service delivery as I see them.

The first enabler, optical service orchestration (OSO), is detailed here. In the not-so-distant past, most carriers deployed their wireline systems using a single vendor’s equipment in the metro, core, and regional network segments. In some cases, optical overlay domains were deployed to mitigate supply variables and ensure competitive costs. While this maximized network performance, it also created siloed networks with proprietary management systems. The OSO solution that I imagine effectively becomes a controller of controllers, abstracting the complexities of the optical domain and providing the ability to connect and monitor the inputs and outputs needed to deliver services. As such, an OSO solution controls any vendor’s optical domain as if it were a device, with the domain controller routing and managing the service lifecycle between vendor-specific endpoints.
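
To make the controller-of-controllers idea concrete, here is a minimal sketch of how an orchestrator might treat each vendor domain as a single abstract device. The class and method names (OpticalDomainController, create_connection, and so on) are invented for illustration and do not correspond to any particular product’s API.

    # Sketch of the "controller of controllers" idea: each vendor domain controller
    # is wrapped behind one abstract interface, and the orchestrator treats a whole
    # domain as if it were a single device. All names are illustrative.
    from abc import ABC, abstractmethod

    class OpticalDomainController(ABC):
        """Abstraction of a vendor-specific optical domain controller."""

        @abstractmethod
        def list_edge_ports(self) -> list[str]:
            """Return the inter-domain hand-off points this domain exposes."""

        @abstractmethod
        def create_connection(self, src_port: str, dst_port: str, gbps: int) -> str:
            """Set up an intra-domain connection and return its identifier."""

    class OpticalServiceOrchestrator:
        """Controller of controllers: stitches a service across vendor domains."""

        def __init__(self, domains: dict[str, OpticalDomainController]):
            self.domains = domains

        def provision(self, segments: list[tuple[str, str, str]], gbps: int) -> list[str]:
            """Each segment is (domain_name, ingress_port, egress_port).
            The orchestrator only sees domain edges; intra-domain routing
            stays with the vendor's own controller."""
            return [
                self.domains[name].create_connection(src, dst, gbps)
                for name, src, dst in segments
            ]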

The second enabler is an open line system (OLS) consisting of multi-vendor ROADMs and amplifiers deployed in a best-fit mesh configuration. A network configured this way must be tested for alien wavelength support, which means defining the domain characteristics and doing mixed third-party optics performance testing. This testing requires considerable effort, and service operators often expect complete testing before deployment. The question is, who takes on the burden of testing in a multi-vendor network? Testing is a massive undertaking, and operators do not have the budget or expertise; perhaps interoperability labs, similar to those MEF uses for Carrier Ethernet (CE) services, could help define it. Bottom line: there is no free lunch.

The third enabler is real-time network design for the deployed network. Service operators deploy optical systems with 95%+ coverage of the network and have historically been limited to vendor-specific designs. Today, the design process requires offline tools and calculations by PhDs. A real-time network design tool that employs artificial intelligence algorithms promises to make real-time design a reality. Longitudinal network knowledge, combined with network control and path computation, can examine the performance of optical line systems and work with the controller to optimize the system design, accounting for variations in optical components, the types and number of optical signals on the fiber, component compatibility, fiber media properties, and system aging.
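
To make the design-automation idea a little more concrete, the sketch below shows the kind of first-order feasibility check such a tool would automate: a standard textbook approximation of end-of-link OSNR over a chain of identical amplified spans. Real design engines model far more (nonlinear penalties, filtering, aging margins), and the numbers here are invented.

    import math

    def estimated_osnr_db(channel_power_dbm: float, span_loss_db: float,
                          amp_noise_figure_db: float, span_count: int) -> float:
        """First-order OSNR estimate (0.1 nm reference bandwidth) for a chain of
        identical amplified spans: OSNR ~ 58 + Pch - span_loss - NF - 10*log10(N)."""
        return (58.0 + channel_power_dbm - span_loss_db - amp_noise_figure_db
                - 10.0 * math.log10(span_count))

    # Example: 0 dBm per channel, 22 dB spans, 5.5 dB noise figure, 12 spans.
    osnr = estimated_osnr_db(0.0, 22.0, 5.5, 12)
    required = 18.0   # illustrative receiver requirement for a coherent 100G channel
    print(f"OSNR {osnr:.1f} dB, margin {osnr - required:.1f} dB")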

The final enablers are open controller APIs and network device models that support faster, more flexible allocation of network resources to meet service demands. Open device models (IETF, OpenConfig, etc.) deliver common control of rich device functionality and support network abstraction. This helps service operators deliver operational efficiencies, on-board new equipment faster, and provide an extensible framework for revenue-producing services in new areas such as 5G and IoT applications.
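
As an illustration of what a common device model buys you, the snippet below builds an interface configuration shaped after the public openconfig-interfaces model and pushes it to a controller. The controller address, interface name, and the choice of RESTCONF as the transport are placeholder assumptions; real deployments may use NETCONF or gNMI and will differ in detail.

    # Vendor-neutral device configuration using the public openconfig-interfaces
    # model. The payload shape follows OpenConfig; the controller URL and the
    # interface name are placeholders.
    import json
    import requests

    interface_config = {
        "openconfig-interfaces:interfaces": {
            "interface": [
                {
                    "name": "et-0/0/1",
                    "config": {
                        "name": "et-0/0/1",
                        "type": "iana-if-type:ethernetCsmacd",
                        "description": "uplink to metro ROADM",
                        "enabled": True,
                    },
                }
            ]
        }
    }

    # Hypothetical RESTCONF push; any OpenConfig-capable device or controller
    # could accept the same model-based payload, which is the point of a
    # common device model.
    requests.put(
        "https://controller.example.net/restconf/data/openconfig-interfaces:interfaces",
        headers={"Content-Type": "application/yang-data+json"},
        data=json.dumps(interface_config),
        timeout=10,
    )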

Controller APIs enable standardized service lifecycle management in a multi-domain environment. Transport Application Programming Interface (T-API), a specification developed by the Open Networking Foundation (ONF), is an example of an open API specific to optical connectivity services. T-API provides a standard northbound interface for SDN control of transport gear, and supports real-time network planning, design, and responsive automation. This improves the availability and agility of high-level, technology-independent services as well as technology- and policy-specific services. T-API can seamlessly connect a T-API client, such as a carrier’s orchestration platform or a customer’s application, to the transport network domain controller. Some of the unique benefits of T-API include the following (a rough sketch of a T-API request follows the list):

  • Unified domain control using a technology-agnostic framework based on abstracted information models. Unified control allows the carrier to deploy SDN broadly across equipment from different vendors, with different vintages, integrating both greenfield and brownfield environments.
  • Telecom management models that are familiar to telecom equipment vendors and network operations staff, which makes adoption easier and reduces disruption to network operations.
  • Faster feature validation and incorporation into vendor and carrier software and equipment using a combination of standard specification development and open source software development.
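
For a feel of what the northbound interface looks like in practice, here is a rough sketch of a connectivity-service request that a T-API client might send to a transport domain controller. The overall shape (a connectivity service referencing service-interface-point UUIDs) follows the ONF T-API model, but exact field names and required attributes vary by T-API version and controller implementation, and the endpoint URL and SIP identifiers here are invented.

    # Rough sketch of a T-API connectivity-service request from a client
    # (for example, an orchestrator) to a transport domain controller.
    # Treat field names as illustrative, not a copy-paste payload.
    import uuid
    import requests

    connectivity_request = {
        "tapi-connectivity:connectivity-service": [
            {
                "uuid": str(uuid.uuid4()),
                "end-point": [
                    {"service-interface-point": {"service-interface-point-uuid": "SIP-DALLAS-100G-01"}},
                    {"service-interface-point": {"service-interface-point-uuid": "SIP-HOUSTON-100G-04"}},
                ],
            }
        ]
    }

    # Hypothetical controller endpoint; deployments typically expose T-API over RESTCONF.
    requests.post(
        "https://tapi-controller.example.net/restconf/data/"
        "tapi-common:context/tapi-connectivity:connectivity-context",
        json=connectivity_request,
        timeout=10,
    )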

Service operators are looking for transformation solutions with a visible path to implementation, and many solutions fall far short and are not economically viable. Fujitsu is actively co-creating with service operators and other vendors to integrate these four enabling technologies into mainstream, production deployments. Delivering ubiquitous, fully automated optical service connectivity management in a multi-vendor domain environment is finally within reach.

Abstract and virtualize, compartmentalize and simplify: Automating network connectivity services with Optical Service Orchestration

Service providers delivering network connectivity services are evolving the transport infrastructure to deliver services faster and more cost efficiently. Part of the strategy includes using a disaggregated network architecture that is open, programmable and highly automated. The second part of the approach considers how service providers can leverage that infrastructure to deliver new value-added services. There’s no question that the network can, but to what extent? How agile does the infrastructure need to be to accommodate dynamic services? What is required to shift the transport infrastructure from the overhead column to the revenue column?

Today, service providers have deployed separate optical transport networks, each containing a single vendor’s proprietary network elements. Optical line systems using analog amplification are customized and tuned to enhance overall system performance, making it nearly impossible for different vendors’ devices to work together within the same domain. For years, service providers with simple point-to-point transmission have used alien-wavelength deployments, running multi-vendor transmission over single-vendor optical networks. However, as service providers look to add more flexibility to the network using configurable optical add/drop multiplexing, using different vendors’ components on legacy systems becomes impractical.

It is evident from historical deployments that optical vendors have competed for business based on system flexibility, capacity, and cost per kilometer. This has led to the deployment of optical domain islands. That doesn’t reflect a dastardly plan by any single vendor to corner the optical transport market; as outlined above, the drive to differentiate on performance and capacity contributes to monolithic, closed, proprietary systems. In many cases, network properties such as span distance or fiber type dictate which system a service provider deploys. The result is separate optical system islands (optical domains): a provider has separate optical domains in its metro, access, and long-haul networks. Each network is managed by a separate management system, which means that configuring services across the optical infrastructure requires manual coordination.

Industry collaboration efforts such as the Optical Internetworking Forum (OIF) have contributed tremendously to interoperability at the physical and link layers by developing implementation agreements, socializing standards, benchmarking performance, and testing interoperability. These efforts have accelerated the deployment of high-capacity technology and lowered the cost of implementing it. However, service providers still face the expense and time of managing separate optical domains and maintaining them over time.

Many service providers are leading the industry toward open optical systems, in which optical networks are deployed in a greenfield environment where the vendors are natively and voluntarily interoperable. The Open ROADM MSA and its participating vendors are one example: Open ROADM devices are part of a centrally controlled network that includes multiple vendors’ equipment, with functionality defined by an open specification. This type of open network delivers value through lower equipment costs and reduced supply disruptions.

There is no escaping the complication that this type of networking makes it inherently difficult for service providers to introduce new vendors into a network that is delivering private-line services. In this environment, operational costs are far more significant than equipment costs. Each system is configured independently, and bringing them together to deliver end-user services requires time and deep expertise across multiple functional areas. New services face the same hurdles of time, field expertise, and back-office expertise, further adding to the work needed to integrate existing elements.

To fully harness the power of automated provisioning and virtualization for network connectivity services, a different type of orchestration is required. We’ll call it Optical Service Orchestration (OSO). With the OSO concept, service providers can manage the lifecycle of connectivity services across separate optical domains and virtualize those domains, allowing end customers to manage their own private networks.

Using OSO, service providers don’t have to change out the entire network. They can deliver a network connectivity service from one domain to another, whether it’s physical or virtual, with simple configuration changes that are controlled and managed by software-defined networking.

An Optical Service Orchestrator combines the existing network with innovative vendor approaches as it makes sense for the network and the business. Some domains are open; some are not. Some vendors want to participate in open technologies and communities; some do not. Some are highly focused on the performance that comes from tightly coupled optical components. The truth is that the vendors occupying the optical domain have been doing this for a long time and are evolving their technology to deliver next-generation digital services. It would be foolish to turn away from expert innovation in an attempt to commoditize network equipment, especially when the underlying optical component ecosystem is already commoditized.

In a typical operator optical network with a mix of legacy and open optical domain deployments, an OSO platform controls multiple optical domains, regardless of how open each domain is, and automatically stitches services together across domains. Each domain becomes an abstracted "network element" with discrete inputs and outputs, with the OSO orchestrating the puts and gets into an automated workflow. This common controller exposes the optical topology to the IP and MPLS layers and then adds Layer 2 and Layer 3 services on top, programmatically and automatically, spanning the physical and virtual network seamlessly.
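
Continuing the hypothetical controller-of-controllers sketch from earlier, the abstracted view can be reduced to a small graph of domain edge ports and inter-domain links; stitching then becomes a path computation followed by one intra-domain request per domain. The topology, port names, and the use of the networkx library are illustrative assumptions.

    # Each domain is modeled only by its edge ports and the inter-domain links
    # between them, so the orchestrator can stitch an end-to-end Ethernet private
    # line without knowing anything about intra-domain routing.
    import networkx as nx

    # Nodes are "domain:port" hand-off points; edges are either intra-domain
    # (the vendor controller guarantees reachability between its own edge ports)
    # or inter-domain patches between neighboring domains.
    g = nx.Graph()
    g.add_edge("metro-A:p1", "metro-A:p7", domain="metro-A")           # inside vendor X's metro domain
    g.add_edge("metro-A:p7", "longhaul-B:p2", domain=None)             # inter-domain fiber patch
    g.add_edge("longhaul-B:p2", "longhaul-B:p9", domain="longhaul-B")  # inside vendor Y's long-haul domain

    path = nx.shortest_path(g, "metro-A:p1", "longhaul-B:p9")

    # Each intra-domain hop on the path becomes one create_connection() call to
    # that domain's controller; the OSO only orchestrates the hand-offs.
    segments = [
        (g.edges[u, v]["domain"], u, v)
        for u, v in zip(path, path[1:])
        if g.edges[u, v]["domain"] is not None
    ]
    print(segments)  # [('metro-A', 'metro-A:p1', 'metro-A:p7'), ('longhaul-B', 'longhaul-B:p2', 'longhaul-B:p9')]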

The result is that the operator can deliver Ethernet private-line service without having to understand and configure each vendor’s optical domain. The domain vendor’s controller handles the idiosyncrasies of the optical domain, without giving up network performance (cost per GB-km). Abstract and virtualize, compartmentalize and simplify.

Service providers can leverage OSO capabilities to virtualize transport networks by providing a simple customer web portal. The portal allows a service provider’s end customers to provision their own services on a virtual optical network using service templates with any number of network element configurations; a hypothetical template is sketched below.
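
As an illustration only, a portal-facing service template might look like the sketch below: the customer supplies a few parameters, and the orchestrator expands them into per-domain element configurations. The template fields and expansion logic are hypothetical.

    # Hypothetical portal-facing service template and its expansion into an order.
    ETHERNET_PRIVATE_LINE_TEMPLATE = {
        "service_type": "ethernet-private-line",
        "parameters": ["a_end_port", "z_end_port", "bandwidth_gbps"],
        "defaults": {"protection": "unprotected"},
    }

    def instantiate(template: dict, **params) -> dict:
        """Merge customer-supplied parameters with template defaults."""
        missing = [p for p in template["parameters"] if p not in params]
        if missing:
            raise ValueError(f"missing parameters: {missing}")
        return {"service_type": template["service_type"], **template["defaults"], **params}

    # A customer order entered through the web portal:
    order = instantiate(
        ETHERNET_PRIVATE_LINE_TEMPLATE,
        a_end_port="metro-A:p1",
        z_end_port="longhaul-B:p9",
        bandwidth_gbps=100,
    )
    print(order)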

Service providers gain the ability to extend the life of their legacy gear while allowing for the eventual introduction of new gear into the network, all with software provisioning of dynamic services. With OSO, service providers can automate transport and lower costs while growing and monetizing new network connectivity services.

Andres Viera will present “Enabling Automation in Optical Networks” at the NFV & Zero Touch Congress show, April 25 @ 4:05pm. Stop by Fujitsu booth #13 to learn more.