Don’t Wait for 5G to Make Network Slicing Pay Off

5G is just around the corner… or so the story goes. Yes, network service providers worldwide are busy preparing to deploy 5G, if they haven’t already started. But we have to face facts — 5G technology is still evolving, and 5G networks serving mass market mobile devices won’t be available for some time. So, while it’s still important to gain early competitive advantage in the race to 5G, achieving the full potential of next-generation networks will be a marathon, not a sprint.

However, you don’t have to wait for 5G to fully mature before you can take advantage of a key aspect of 5G networks. As you drive toward the goal of delivering 5G services, you can serve a variety of different subscriber needs now with network slicing.

Carving Up Capacity

In the traditional network, bandwidth was fairly monolithic. Allocating capacity for certain subscribers typically required implementation of a VPN or VLAN, reserving bandwidth in a static way that was less than efficient. With the promise of 5G, the ultimate goal will be to create scalable, end-to-end network slices that will be applied dynamically through automation.

But network slicing is not just for 5G. You can implement network slicing in today’s networks to deliver differentiated services to business and consumer customers now.

Network slicing enables the creation of multi-application networks that provide service differentiation with a certain bandwidth profile to meet specific customer needs. For example, you can define a set of requirements, such as low latency or high availability, to serve various categories of services — from automation and IoT, to augmented and virtual reality. This not only provides a more efficient way to manage applications and resources for service assurance, it also offers opportunities to drive more subscriber revenue.

Start Slicing Now

To determine how to get started, consider the different bandwidth profiles and applications that will benefit from network slicing, and develop a broader policy around how you can separate out the network. Virtualized services can be defined and separated by allocating resources in virtual network functions (VNFs) to assure the performance of each slice.
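The idea of allocating resources per slice against a shared bandwidth profile can be illustrated with a small sketch. This is a toy model, not any vendor's implementation; the `SliceProfile` fields and the admission logic are illustrative assumptions.

```python
from dataclasses import dataclass, field


@dataclass
class SliceProfile:
    """Illustrative requirements for one network slice."""
    name: str
    bandwidth_mbps: int    # guaranteed bandwidth carved out for the slice
    max_latency_ms: float  # latency bound the slice must honor


@dataclass
class SliceAllocator:
    """Toy admission control: carve slices out of a fixed link capacity."""
    link_capacity_mbps: int
    allocated: dict = field(default_factory=dict)

    def admit(self, profile: SliceProfile) -> bool:
        used = sum(p.bandwidth_mbps for p in self.allocated.values())
        if used + profile.bandwidth_mbps > self.link_capacity_mbps:
            return False  # not enough headroom; reject the slice
        self.allocated[profile.name] = profile
        return True


allocator = SliceAllocator(link_capacity_mbps=1000)
print(allocator.admit(SliceProfile("iot", 100, 50.0)))    # True
print(allocator.admit(SliceProfile("ar-vr", 800, 10.0)))  # True
print(allocator.admit(SliceProfile("video", 200, 30.0)))  # False: pool exhausted
```

A production slicer would of course enforce latency and availability dynamically across VNFs rather than just reserving static bandwidth, but the same admission question is at its core.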

Getting a head start on network slicing now means you don’t have to wait for 5G to offer new value-added services. And although 5G standards are still evolving, the ONAP Project recently released a new 5G blueprint, including support for network slicing. This means you can start working toward implementation of this technology in an open, disaggregated manner that will dovetail with future 5G networks.

Monetize the Slice

5G networks are being rolled out this year, but we’re going to be waiting a while for full mobile capability. Network slicing provides a clear opportunity to deliver profitable new services and improved quality of service (QoS) now. Potential business use cases include deployment of virtual customer premises equipment (CPE) technology at the edge of your network to better serve both consumer and enterprise customers. Network slicing can also be employed to make critical communications services more reliable for public safety agencies and municipal governments. These are just a few examples of how you can increase profits through network slicing. To learn more, register for the webinar “Approaches to Solving Network Slicing Before 5G” with IHS Markit and Fujitsu: Register Here

ONAP: Riding the open-source wave towards network automation

As digitization becomes increasingly important, communication service providers (CSPs) are constantly looking for innovative solutions that drive more automated control into their networks. In the quest to enable faster service delivery and reduce operational expenditures, CSPs face multiple challenges that must be addressed in order to achieve their business goals. Today’s operational support systems and networking infrastructure need to be refreshed to keep up with scale and rising bandwidth demands, further accelerating the need for automation driven by SDN and NFV technologies.

In addressing some of these challenges, the telecom industry has started to embrace open-source solutions, bringing about more collaboration and harmonization. The ONAP (Open Network Automation Platform) project hosted by the Linux Foundation is a classic example of this. Over the last year we have witnessed an increased momentum among CSPs and vendors alike embracing ONAP as a unified orchestration and automation framework, with several of them making active contributions towards enhancing the project. At its core, ONAP provides a comprehensive platform for real-time, policy-driven orchestration and automation of physical and virtual network functions that will enable software, network, IT and cloud providers and developers to rapidly automate new services and support complete lifecycle management.

ONAP provides a common modular reference framework that defines key functional blocks and standard interfaces, which form a basis for service definition, resource onboarding, activation and control, and data analytics across a broad range of use cases. Common information models, external API support and generic management engines decouple the specific services and technologies, providing users with the flexibility to develop additional capabilities enabling new blueprints. With CSPs’ networks continuously evolving, the increased complexity of managing and implementing service offerings across multi-domain, multi-layer, multi-vendor environments is furthering the need for a unified approach to service orchestration and network management across legacy and modern infrastructures.

Although there is a high level of industry consensus on the architectural principles and interface definitions guiding the development of ONAP, we have a long road ahead towards secure and stable deployment in live networks. There are multiple options network operators are considering in integrating ONAP into their existing OSS environments. As with many open-source projects, we believe there will be markets for various distribution models providing network operators with flexibility on how they choose to consume ONAP and associated offerings, including: 

  • Integrated solutions with carrier-grade versions of individual ONAP modules  
  • Service models, applications and micro-services built to run in ONAP environments
  • Compliant networking infrastructure (physical/virtual), including PNFs, VNFs, domain controllers, etc. that plug into ONAP

There are multiple complexities involved in introducing ONAP into an existing OSS environment and in successfully deploying and automating service delivery across the many network domains. Managing this incremental shift toward adopting ONAP modules and components, and having them co-exist with existing management systems, will be critical to enabling a smooth transition. The rise of 5G further heightens the need for a scalable architectural platform to onboard and activate new services, enabling a wider range of business opportunities for network operators, and to this extent ONAP looks like an attractive option. Having fully embraced open source as a key catalyst for network automation, Fujitsu is actively engaging in the ONAP ecosystem. We are contributing to the development and extension of the ONAP framework to address new use cases in partnership with network operators, with the goal of further driving community collaboration. As we continue to ride the open-source wave, we look forward to seeing the industry make this important digital transformation together.

MicroApplications – An Introduction to Solving Problems with Software in Small Packages

Why does a software solution have to be so difficult? The answer is, it doesn’t.

MicroApplications (MicroApps) are small software applications frequently used on mobility platforms, such as mobile phone apps. When we use the term MicroApplications at Fujitsu, we have a broader definition.

A MicroApplication is a small software application that addresses a specific customer problem or use case. MicroApps are increasingly popular because they can be developed and deployed quickly and cost-effectively. 

Let’s break that down a bit more.

  1. Small – It’s in the name – Micro. The term small is in comparison to larger monolithic software applications used in the telecommunications industry, such as Element Management Systems (EMS) or Network Assurance platforms. These types of software platforms are designed to address many use cases and therefore are bigger — both in the amount of code required, as well as the time to develop and test them prior to rolling into production.
  2. Specific – MicroApps are designed to address a specific customer need, typically an operational need, such as backing up a network element database or extracting performance data from a router.

MicroApps, however, are not necessarily simple to develop or implement. They can be small and focused, yet still address hard problems with multiple degrees of complexity. One example involves the IS-IS-based Open Systems Interconnection (OSI) routing protocol used in traditional SONET/SDH network equipment.

As with many routing protocols, a flat network is a problem from a routing-table perspective. Fujitsu developed a MicroApp that helped discover, analyze, and subdivide these OSI networks into smaller routing instances to prevent oversubscription, and therefore communication loss.

  3. Faster & Cost-Effective – The third element of a MicroApp is how quickly its benefits can be achieved. Today more than ever, network service providers are looking for a faster return on investment (ROI) when considering a software solution to a problem. One year or less is now considered a must, but that can be a challenge for larger software platforms.

Because they are smaller and focused on a single use case, MicroApps can better meet this ROI timeline by simply delivering the solution faster. As an example of this, Fujitsu developed a fully functional, multi-vendor database back-up MicroApp for one of our customers in less than 90 days. This project was completed through a one-time purchase, the benefit of which could be realized in the same fiscal year. An equivalent network management system would have cost millions of dollars and carried hefty annual support contracts for years to come.
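The subdivision problem described earlier, splitting a flat network into bounded routing areas, can be sketched with a toy greedy partitioner. This is an illustrative simplification of what such a MicroApp might do, not Fujitsu's actual algorithm.

```python
from collections import deque


def partition_network(adjacency, max_area_size):
    """Greedy BFS partition of a flat network into routing areas of bounded size.

    adjacency: dict mapping node -> iterable of neighbor nodes.
    Returns a list of areas (sets of nodes); each node lands in exactly one area.
    """
    unassigned = set(adjacency)
    areas = []
    while unassigned:
        seed = min(unassigned)  # deterministic seed choice
        area, queue = set(), deque([seed])
        while queue and len(area) < max_area_size:
            node = queue.popleft()
            if node not in unassigned:
                continue  # already claimed by this or another area
            area.add(node)
            unassigned.discard(node)
            queue.extend(n for n in adjacency[node] if n in unassigned)
        areas.append(area)
    return areas


# An 8-node ring, far smaller than a real OSI network, split into areas of <= 4 nodes.
ring = {i: [(i - 1) % 8, (i + 1) % 8] for i in range(8)}
areas = partition_network(ring, max_area_size=4)
```

A real tool would also weigh traffic patterns and the cost of inter-area links when choosing boundaries; the point here is only that the core logic fits in a few dozen lines.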

Fujitsu Network Communications has a long history in the telecommunications industry, known both for our optical acumen and as part of our larger heritage as one of the world’s leading ICT companies. We are committed to helping our customers on many levels of software development, and we recognize the importance of MicroApps in the evolving world of telecommunications and software automation. To learn more, view the introductory video here:

 

Four Key Enablers of Automated, Multi-Domain Optical Service Delivery

New advancements in software-defined control and network automation are enabling optical service delivery transformation. Stitching together network connectivity across vendor-specific domains is labor-intensive; now those manual processes can be automated with emerging solutions like multi-vendor optical domain control and end-to-end service orchestration. These new solutions provide centralized service control and management that are capable of reducing operational costs and errors, as well as speeding up service delivery times. While this sounds good, it can be all too easy to gloss over the complexities of decades-old optical connectivity services. In this blog post, I will explore the four enabling technologies for multi-domain optical service delivery as I see them.

The first enabler, optical service orchestration (OSO), is detailed here. In the not-so-distant past, most carriers deployed their wireline systems using a single vendor’s equipment in metro, core, and regional network segments. In some cases, optical overlay domains were deployed to mitigate supply variables and ensure competitive costs. While this maximized network performance, it also created siloed networks with proprietary management systems. The OSO solution I envision effectively becomes a controller of controllers, abstracting the complexities of the optical domain and providing the ability to connect and monitor the inputs and outputs to deliver services. As such, an OSO solution controls any vendor’s optical domain as a device, with the domain controller routing and managing the service lifecycle between vendor-specific endpoints.

The second enabler is an open line system (OLS) consisting of multi-vendor ROADMs and amplifiers deployed in a best-fit mesh configuration. A network configured this way must be tested for alien wavelength support, which means defining the domain characteristics and testing the performance of mixed third-party optics. This testing requires considerable effort, and service operators often expect complete testing before deployment. The question is, who takes on the burden of testing in a multi-vendor network? Testing is a massive undertaking, and operators do not have the budget or expertise; perhaps interoperability labs, such as those MEF runs for Carrier Ethernet services, could help define it. Bottom line: there is no free lunch.

The third enabler is real-time network design for the deployed network. Service operators deploy optical systems with 95%+ coverage of the network and have historically been limited to vendor-specific designs. Currently, the design process requires offline tools and calculations by PhD-level experts. A design tool that employs artificial intelligence algorithms promises to make real-time network design a reality. Longitudinal network knowledge, combined with network control and path computation, can examine the performance of optical line systems and work with the controller to optimize system design, accounting for variations in optical components, the type and quantity of fiber-optic signals, component compatibility, fiber media properties, and system aging.

The final enablers are open controller APIs and network device models that support faster, more flexible allocation of network resources to meet service demands. Open device models (IETF, OpenConfig, etc.) deliver common control for device-rich functionality that supports network abstraction. This helps service operators deliver operational efficiencies, on-board new equipment faster, and build an extensible framework for revenue-producing services in new areas such as 5G and IoT applications.

Controller APIs enable standardized service lifecycle management in a multi-domain environment. Transport Application Programming Interface (T-API), a specification developed by the Open Networking Foundation (ONF), is an example of an open API specific to optical connectivity services. T-API provides a standard northbound interface for SDN control of transport gear, and supports real-time network planning, design, and responsive automation. This improves the availability and agility of high-level, technology-independent services, as well as technology- and policy-specific services. T-API can seamlessly connect a T-API client, such as a carrier’s orchestration platform or a customer’s application, to the transport network domain controller. Some of the unique benefits of T-API include:

  • Unified domain control using a technology-agnostic framework based on abstracted information models. Unified control allows the carrier to deploy SDN broadly across equipment from different vendors, with different vintages, integrating both greenfield and brownfield environments.
  • Maintaining telecom management models that are familiar to telecom equipment vendors and network operations staff, making its adoption easier and reducing disruption of network operations.
  • Faster feature validation and incorporation into vendor and carrier software and equipment using a combination of standard specification development and open source software development.
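To make the T-API discussion a little more concrete, here is a sketch of how a client might assemble the JSON body for a connectivity-service request. The field names loosely follow T-API naming conventions, but this is an illustrative payload, not the normative ONF schema, and no controller call is made.

```python
import json


def build_connectivity_request(sep_a: str, sep_z: str, capacity_gbps: int) -> str:
    """Build a JSON body loosely modeled on a T-API connectivity-service
    create request. Field names are illustrative, not the normative schema."""
    body = {
        "tapi-connectivity:connectivity-service": [{
            "end-point": [
                {"service-interface-point": {"service-interface-point-uuid": sep_a}},
                {"service-interface-point": {"service-interface-point-uuid": sep_z}},
            ],
            "requested-capacity": {
                "total-size": {"value": capacity_gbps, "unit": "GBPS"},
            },
        }]
    }
    return json.dumps(body, indent=2)


# Hypothetical service-interface-point identifiers for a metro A/Z pair.
payload = build_connectivity_request("sip-metro-1", "sip-metro-2", 100)
```

In practice this body would be POSTed to the domain controller's RESTCONF endpoint, and the controller would return a UUID used for subsequent lifecycle operations.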

Service operators are looking for transformation solutions with a visible path to implementation, and many solutions fall far short and are not economically viable. Fujitsu is actively co-creating with service operators and other vendors to integrate these four enabling technologies into mainstream, production deployments. Delivering ubiquitous, fully automated optical service connectivity management in a multi-vendor domain environment is finally within reach.

Automation and Operations in the Modern Network: Bridging the Gap Between Legacy and Digital Infrastructure

In terms of network automation, we’re beyond removing manual tasks, speeding up production, and improving service quality. In the face of complex mesh network architectures, automated network slicing with guaranteed SLAs, as well as real-time spectrum and resource management, automation has become a foundational capability that is required for sheer survivability in basic network operations. The need for high-scale, intelligent control and management of end points, elastic capacity, and dynamic workloads and applications will only grow.

As the network virtualizes to accommodate connection-aware applications, the need for disaggregated, open and programmable hardware and software also gets stronger. To deliver on-demand services to bandwidth-hungry mobile consumers, the modern network must find ways to combine legacy gear that is vendor-proprietary and domain specific with virtual network elements and functions over merged wireline and wireless infrastructure. That requires software platforms and applications that connect and control physical and virtual network elements, automate network planning, provisioning and management, provide real-time network topologies, and increase the efficiency of traffic management and path computation up and down the network stack. It also paves the way for communication service providers to implement service-oriented architectures that put business needs before arcane methods of network management that are required but do not necessarily drive incremental revenue.

This type of agile network requires an agile deployment model that is predicated on open, disaggregated, and programmable physical and virtual infrastructure, as well as SDN-enabled applications that use open models. Disaggregated NEs can deliver new capabilities without disrupting the existing production network, and SDN-enabled applications tie it all together seamlessly.

This approach has the advantage of increasing revenue velocity and speeding up the adoption of digital networking while maintaining the investment in the existing physical infrastructure.

Operationalizing the open and programmable network

As closed and proprietary network segments give way to open network architectures that include Open Line Systems, Open ROADM, and the open APIs that connect them, operational gaps will emerge that require detailed integration and design considerations from a software perspective. This requires an understanding of disaggregation, service-oriented architectures, open APIs, and the ability to break all of that down into discrete datasets that can be mined by artificial intelligence so that CSPs know what levers to pull to improve the customer experience or deliver new types of services.

Microservices and container-based applications have the ability to fill those gaps without costly capital initiatives. Just as an SDN platform abstracts multi-vendor network elements from service provisioning applications to facilitate intent-based networking, container technologies abstract applications from the server environment where they actually run. Containerization provides a clean separation of “duties”; developers can focus on application logic and dependencies, while network operations can focus on management. Container-based microservices and microapplications can be deployed easily and consistently, regardless of the target environment.

This construct provides the ideal setup for operations teams that identify “holes” in the production environment. In the past, product managers and operations teams were forced to wait for lengthy development cycles to take advantage of new feature functionality. Now, with microservices and microapplications, new functionality can be developed more quickly and efficiently, generating additional revenue inside the customer’s window of opportunity.

Microapplications are inherently cloud-native, and can be used to integrate newer technologies into monolithic systems without waiting for maintenance windows that may or may not include the capability. Examples of microservices include:

  • Customer IP circuit connections
  • A virtual gateway network element
  • Multi-vendor network element backup
  • IP forwarding loop detection
  • Bandwidth optimization
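One of the listed examples, IP forwarding loop detection, is small enough to sketch directly. The walk below follows a per-destination next-hop table and reports any loop it encounters; it is a toy illustration of the idea, not a production microservice.

```python
def find_forwarding_loop(next_hop, source, dest):
    """Walk a per-destination next-hop table from `source` toward `dest`.

    next_hop: dict mapping node -> node it forwards to for this destination.
    Returns the list of nodes forming a loop, or None if the packet is
    delivered (or black-holed, which is a different failure).
    """
    visited = []
    node = source
    while node != dest:
        if node in visited:  # revisited a node: forwarding loop found
            return visited[visited.index(node):]
        visited.append(node)
        node = next_hop.get(node)
        if node is None:     # no route: black hole, not a loop
            return None
    return None


# B and C forward to each other, so traffic from A toward D loops forever.
table = {"A": "B", "B": "C", "C": "B"}
loop = find_forwarding_loop(table, "A", "D")
print(loop)  # ['B', 'C']
```

A real microservice would pull forwarding tables from live routers and run this check per destination prefix, but the detection logic itself stays this simple.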

These microservices can augment existing SDN-enabled applications and infrastructure to provide precision solutions that impact revenue generating OSS/BSS applications. They also have the ability to accelerate lab testing and certification cycles so that new applications can be deployed faster and more effectively.

In addition to speed and efficiency, microservices and microapplications can also make the network more resilient and flexible. Because they can be deployed independently, developers can enhance the performance of one service without impacting others.

All of this requires vendor adherence to, and cooperation with, open models that are streamlined for coordinated control and management across all network domains, including end-to-end connectivity services (MPLS, Carrier Ethernet, IP VPN, etc.). In the modern network, every touch point is engineered to do its job faster and more efficiently, whether it is legacy or digital network gear. Microservices and microapplications are part of that solution, providing new capabilities that are free from traditional operational constraints and bridging the gap between legacy and digital infrastructure with precision solutions that drive revenue now, rather than later.

For more information about Fujitsu’s Microapplications practice, please visit http://www.fujitsu.com/us/products/network/products/microapplications-practice/

Operationalizing Disruption: A Shout-Out to the Grumpy Guy

The future network is reliant on disruptive technology. Let me already correct myself: The future network is reliant on actually implementing disruptive technology. That means clearing away the smoke and mirrors and passing the baton to the operations team who have the daily responsibility of taking SDN, NFV, SD-WAN and other technologies out of the proof-of-concept lab and putting them to work in the real world. This is what I mean by the term operationalizing disruption.

It seems incongruous, but only on the surface: how can we make disruptive technology no longer disruptive? What it comes down to—when all the vendors have left the negotiating table—is a shift in emphasis to the practical aspects of running a reliable network. The network technology changes happening now are not linear “go faster, further, or fatter” incremental improvements. We already have methodologies in place to absorb those into today’s operational environments. Migration to disruptive technologies like SDN and NFV, though, is a fundamental, revolutionary shift—and it is uncharted territory.

As a trusted business partner, everything we do is about helping our customers successfully navigate positive change in their networks. Because when it all gets integrated and the new POC starts being implemented, it’s not about the shiny new stuff itself anymore—it’s about being able to control our customers’ end users’ experience.

When we look at customer needs, each functional area has its own unique perspective. While the planners may be excited about modeling the new technology and adopting it before the competition, the CIO may wince at the need to code up and flow through many more connections within an already constrained budget.

But the operations side of the house has a unique challenge, because they are entrusted to deliver reliability SLAs on the traditional network to generate the return for their corporation. When it comes to network migrations, balancing upgrades with consistent network performance can be a heavy workload. That’s why, during the early phases of disruptive change projects, the ops people at the table might be a little skeptical. Some mistake this for being innovation-unfriendly. Far from it. They have a right to be cautious. They deliver value for the entire organization because they keep the network performing continuously and predictably, day after day, to meet SLAs for banks, hospitals, and data centers. Essentially, they ensure everyone else gets paid. You can’t blame them for treating the latest disruptive brainchild with more than a few questions, especially if they are told how great it will all be… but nobody really knows how to control it, monitor it, or troubleshoot it.

It’s easy to focus on the cool factor of turning real network things into virtual network things. But the Operations view is undoubtedly that you have to keep these virtual things in the realm of reality, since they have to be reliable and useful in the real world.

So, here is a shout out to those grumpy guys – the unexpected heroes of network reliability and delivering daily on corporate financial performance.

At Fujitsu Network Communications, we recognize that operationalizing disruptive change probably means we have to invent some new science. We are working on defining the right skills, the new processes, and the best tools to help our customers accelerate their adoption of disruptive technology. By doing so, we help our customers bring their future into now.

Abstract and virtualize, compartmentalize and simplify: Automating network connectivity services with Optical Service Orchestration

Service providers delivering network connectivity services are evolving the transport infrastructure to deliver services faster and more cost efficiently. Part of the strategy includes using a disaggregated network architecture that is open, programmable and highly automated. The second part of the approach takes into consideration how service providers can leverage that infrastructure to deliver new value-added services. There’s no question that the network can, but to what extent? How agile does the infrastructure need to be to accommodate dynamic services? What is required to shift the transport infrastructure more to the revenue infrastructure column rather than the overhead infrastructure column?

Today, service providers have deployed separate optical transport networks, each containing a single vendor’s proprietary network elements. Optical line systems using analog amplification are customized and tuned to enhance overall system performance, making it nearly impossible for different vendors’ devices to work together within the same domain. For years, service providers with simple point-to-point transmission have used alien-wavelength deployments, leveraging multi-vendor transmission on single-vendor optical networks. However, as service providers look to add more flexibility to the network using configurable optical add/drop multiplexing, using different vendors’ components on legacy systems is impractical.

Historical deployments make it evident that optical vendors have competed for business based on system flexibility, capacity, and cost per km. This has led to the deployment of optical domain islands. That doesn’t reflect a dastardly plan by any single vendor to corner the optical transport market. As outlined above, the drive to differentiate on performance and capacity contributes to monolithic, closed, and proprietary systems. In many cases network properties, such as span distance or fiber type, dictate what system a service provider deploys. The result is a deployment of separate optical system islands (optical domains): a provider has separate optical domains in metro, access, and long-haul networks. Each network is managed by a separate management system, which means that for service providers to configure services across the optical infrastructure, manual coordination is required.

Industry collaboration efforts such as the Optical Internetworking Forum (OIF) have contributed tremendously to interoperability of the physical and link layers by developing implementation agreements, socializing standards, benchmarking performance, and testing interoperability. These efforts have accelerated deployment and lowered the cost of implementing high-capacity technology. However, service providers still face the expense and time of managing separate optical domains together and maintaining them over time.

Many service providers are leading the industry toward open optical systems, in which optical networks are deployed in a greenfield environment where the vendors are natively and voluntarily interoperable. The Open ROADM MSA and its participating vendors are one example. Open ROADM devices are part of a centrally controlled network that includes multiple vendors’ equipment, with functionality defined by an open specification. This type of open network delivers value through lower equipment costs and reduced supply disruptions.

There is no escaping the complication that this type of networking makes it inherently difficult for service providers to introduce new vendors into a network that is delivering private line services. In this environment, operational costs are far more significant than equipment costs. Each system is configured independently, and bringing them together to deliver end-user services takes time and deep expertise across multiple functional areas. New services face the same hurdles of time, field expertise, and back-office expertise, further increasing the work needed to integrate existing elements.

To fully harness the power of automated provisioning and virtualization for network connectivity services, a different type of orchestration is required. We’ll call it Optical Service Orchestration (OSO). With the OSO concept, service providers can manage the lifecycle of connectivity services across separate optical domains and virtualize those domains, allowing end customers to manage their own private networks.

Using OSO, service providers don’t have to change out the entire network. They can deliver a network connectivity service from one domain to another, whether it’s physical or virtual, with simple configuration changes that are controlled and managed by software-defined networking.

An Optical Service Orchestrator combines the existing network with innovative vendor approaches as it makes sense for the network and the business. Some domains are open; some are not. Some vendors want to participate in open technologies and communities; some do not. Some are highly focused on the performance that comes from tightly coupled optical components. The truth is that vendors occupying the optical domain have been doing this for a long time and are evolving their technology to deliver next-generation digital services. It would be foolish to turn away from expert innovation in an attempt to commoditize network equipment, especially when the underlying optical component ecosystem is already commoditized.

In a typical operator optical network with a mix of legacy and open optical domain deployments, an OSO platform controls multiple optical domains, regardless of how open each domain is, and automatically stitches services together across domains. Each domain becomes an abstracted “network element” with discrete inputs and outputs, with the OSO orchestrating puts and gets in an automated workflow. This common controller exposes the optical topology to the IP and MPLS layers and then adds Layer 2 and Layer 3 services on top, programmatically and automatically, spanning the physical and virtual network seamlessly.
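The controller-of-controllers idea is simple enough to sketch: the orchestrator treats each domain as a black box with endpoints and a provision action, then stitches segments across domain boundaries. The class and endpoint names below are invented for illustration.

```python
class OpticalDomain:
    """Illustrative stand-in for a vendor domain controller: the orchestrator
    sees each domain only as a set of endpoints plus a provision() action."""

    def __init__(self, name, endpoints):
        self.name = name
        self.endpoints = set(endpoints)
        self.services = []

    def provision(self, ingress, egress):
        assert {ingress, egress} <= self.endpoints, "endpoint not in this domain"
        self.services.append((ingress, egress))
        return f"{self.name}:{ingress}->{egress}"


def stitch_service(hops):
    """Stitch an end-to-end service from per-domain segments.

    hops: ordered list of (domain, ingress, egress) triples, A-end to Z-end;
    adjacent hops share a handoff endpoint at the domain boundary.
    """
    return [domain.provision(ingress, egress) for domain, ingress, egress in hops]


# Two hypothetical domains sharing the handoff point "hx-1".
metro = OpticalDomain("metro-east", {"cpe-1", "hx-1"})
core = OpticalDomain("long-haul", {"hx-1", "hx-2"})
segments = stitch_service([(metro, "cpe-1", "hx-1"), (core, "hx-1", "hx-2")])
```

The orchestrator never configures ROADMs or amplifiers directly; each domain controller keeps its vendor-specific idiosyncrasies behind the `provision()` abstraction.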

The result is that the operator can deliver Ethernet private line service without having to understand and configure each vendor's optical domain. The domain vendor's controller handles the idiosyncrasies of its optical domain, so the operator gains abstraction without giving up network performance (cost per GB-km). Abstract and virtualize, compartmentalize and simplify.

Service providers can leverage these OSO capabilities to virtualize transport networks by offering a simple customer web portal. The portal allows a service provider's end customers to provision their own services on a virtual optical network using service templates with any number of network element configurations.
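A minimal sketch of how such a portal might expand a service template into per-network-element configuration items (the template name, element names, and parameters here are invented for illustration, not real product artifacts):

```python
# A service template maps one customer-facing order onto the
# per-network-element configuration the orchestrator pushes into each domain.
TEMPLATE = {
    "eline-10g": [
        {"element": "transponder", "param": "rate", "value": "10GE"},
        {"element": "switch",      "param": "vlan", "value": None},  # filled per order
    ],
}

def expand_template(name, vlan):
    """Turn a portal order into concrete per-element configuration items."""
    configs = []
    for item in TEMPLATE[name]:
        cfg = dict(item)          # copy, so the template itself stays pristine
        if cfg["param"] == "vlan":
            cfg["value"] = vlan   # customer-supplied parameter from the portal
        configs.append(cfg)
    return configs

# A customer orders a 10G Ethernet private line on VLAN 101 from the portal.
order = expand_template("eline-10g", vlan=101)
```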

Service providers gain the ability to extend the life of their legacy gear while allowing for the eventual introduction of new gear into the network, all while using software to provision dynamic services. With OSO, service providers can automate transport, lower costs, and grow and monetize new network connectivity services.

Andres Viera will present “Enabling Automation in Optical Networks” at the NFV & Zero Touch Congress show, April 25 @ 4:05pm. Stop by Fujitsu booth #13 to learn more.

Virtuora and YANG Models

By Kevin Dunsmore, with Rhonda Holloway

The Virtuora® Product Suite is a collection of software products that makes network management a breeze. A distinct advantage of the Virtuora software platform is its use of YANG models. These models are unique in that when someone tweaks a part of the model, the associated REST/RESTCONF API is automatically regenerated upon recompiling. The new data becomes available via the API the moment recompiling is complete.

This ability is unique to Fujitsu. Other SDN platforms use YANG models, but not in the way Virtuora does. Some vendors have built their tools using Java and other programming languages. Whenever they want to change a driver, they must change their internal programming code and make the driver available via northbound APIs. This is extremely tedious and time-consuming, and there’s always the risk of “breaking” something if the code contains errors. On top of this, special code is typically required to “activate” and “delete” nodes, compounding the issue. As a result, many customers complain of long lags in getting new or enhanced support for SDN platforms.

Virtuora fixes this time-lag problem through its implementation of YANG models. Here you can simply add or change a data element, recompile the model, and the new information instantly becomes available via REST. There's no pulling apart code written in Java or another programming language to add or change anything. Combined with OpenDaylight, the CRUD operations (Create, Read, Update, and Delete) are handled in one swift transaction. What takes another platform six months to do, Virtuora can do in one.
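A toy illustration of the idea, and only that (this is not Virtuora's actual mechanism): if REST paths are derived mechanically from the compiled model, then adding a data element and recompiling exposes it immediately, with no hand-written driver code to edit.

```python
# Toy "model": a nested dict standing in for a compiled YANG module.
MODEL = {
    "interfaces": {
        "interface": {"name": None, "enabled": None, "mtu": None},
    },
}

def restconf_paths(model, prefix="/restconf/data"):
    """Walk the model and emit one RESTCONF-style path per data node."""
    paths = []
    for key, sub in model.items():
        path = f"{prefix}/{key}"
        paths.append(path)
        if isinstance(sub, dict):
            paths.extend(restconf_paths(sub, path))
    return paths

paths = restconf_paths(MODEL)
# Add a new leaf to MODEL and re-run: a new addressable path appears,
# with no equivalent of Java driver code to modify.
```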

Think of YANG as your car's gasoline. The controller is the engine, providing the power for the entire car to run. Applications are the steering wheel, giving users the control to drive Virtuora in the direction they please. YANG is the gasoline that ties the process together, giving the controller and applications the ability to run together and never fall out of sync. A small change to the steering wheel, or a modified engine part, won't affect the car's ability to drive, because the gasoline will continue to adjust to the changes and keep the car running.

For a good example of how Fujitsu implements YANG models in its products, look at 1FINITY. Each 1FINITY blade has a YANG model, making it easy to include provisioning and management in a network-wide element management function. With YANG already working so well in our 1FINITY solution, we're excited to include it in Virtuora.

The relationship between different models will need to be maintained. Luckily, Fujitsu has software support contracts that handle any changes made to the models. The underlying platform (OpenDaylight and, eventually, ONOS) handles "activate" and "delete" operations for us. Finally, Fujitsu is in discussions to develop a Software Development Kit (SDK) that would automatically ensure a change in one model is reflected in others.

At Fujitsu, we’re working hard to ensure that our customers have a smooth and productive experience using the Virtuora Product Suite. Our Services Support team is dedicated to working with each customer and handling all changes that need to be made. Our goal is to make the implementation process as quick and painless as possible. Thanks to our use of YANG models, we can make that happen.

The New Network Normal: Service-Oriented, Not Infrastructure-Oriented

Mobile broadband connections will account for almost 70% of the global subscriber base by 2020. The new types of services those customers consume will drive a tenfold increase in data traffic by 2019. At this rate, most of the world will be mobile, with "mobile" expectations. The cloud has become synonymous with mobility and is increasingly matching customers with new products and services. More customers are coming, more services are coming, and more types of services are coming. More, more, more.

Carrier networks must embrace a new normal to support and drive this digital revolution. Unlike the static operating models of the past, a new dynamic system is emerging, and it's not about the network at all. It's about the applications that deliver services to paying customers, wherever they are, however they want them. This kind of dynamic network requires intelligence, extreme flexibility, modularity, and scalability. The new normal means creating innovative, differentiated services and combining these with the kind of intensely integrated, highly personalized relationships that enable services to be delivered and billed on demand.

To be competitive in the new application economy, service providers need to dedicate more budget and resources to service innovation. However, multi-layer/multi-vendor network design necessitates that the lion’s share of any service provider’s budget goes to the network itself. At Fujitsu, we are changing that: we are working with our customers to architect an entirely new system: disaggregated, flattened, and virtual. And it doesn’t require a “scorched earth re-write” or “rip and replace” investment.

The new network normal means a new way of doing business for service providers, and it requires a different way of operating. In the old business model, service providers functioned like vending machine companies. A vending machine offered a pre-set lineup of snacks and a single way to pay: your pocket change. Only field technicians could fill vending machines, only field technicians could fix broken machines, and only field technicians could deliver new vending machines to new locations. An entirely different staff collected the money and handled banking. Vending machine companies were forced to wait weeks, or even months, to receive payment for sold goods.

Vending machines in remote areas might not get serviced as often as those in population-dense areas. Technicians didn't know which products were the most popular, but they knew which were the least! Plenty of people had dollar bills in their wallets, but no loose change. If the machine was out of stock, customers had to find another.

Companies lost sales because of the limitations of this infrastructure, not because there were no willing customers.

Vending machine companies developed new ways to accept payment, re-negotiated partnerships and delivery routes to refill popular product lines more often, and reorganized the labor force into groups who could fill and service machines simultaneously. In spite of these optimization tactics, much like service providers, vending machine companies were still ultimately reliant on physical devices and physical infrastructure to deliver a static line of products. Otherwise happy customers were required to seek other vendors when their needs were unfulfilled.

But unlike vending machine companies, service providers are not always selling a physical product. Service providers can re-package their products virtually, and it starts with virtualization of the network itself. Applying standard IT virtualization technologies to the service provider network allows administrators to shed the expense and constraints of single-purpose, hardware-based appliances.

Rolling out new services over traditional hardware-based network infrastructure used to take service providers months or even years. Many time-consuming steps were required: service design, integration, testing, and provisioning. Virtualization compresses these steps and shortens time to market.

Software-defined networking, combined with network function virtualization, creates a single resource to manage and traverse an abstracted and unified fabric. As a result, application developers and network operators don’t have to worry about network connections; the intelligent network does that for them. Imagine seamlessly connecting applications and delivering new services, automatically, at the will of the end user. Virtualization provides this new normal: best-of-breed components that are intelligent, optimized end-to-end, fully utilized, and much less expensive. Budget previously dedicated to network infrastructure can now be released to support new applications and services for whole new categories of customers.

Thanks to readily available data analytics on trending customer behavior, network operators will know exactly which products their customers are willing to buy and what they're looking for, and they'll be able to deliver them individually or as part of value-package offerings far beyond the current range of choices. Remote areas can get the same services and level of customer support that population-dense areas enjoy. Payment will be possible on demand or by subscription. Premium convenience services will offer new flexibility for customers, and new revenue streams for providers.

Service providers will be able to differentiate their offerings beyond physical products, including bandwidth, SLAs, and price points. Their enterprise customers will get better tools, on-demand provisioning, and tight integration between the carrier network, enterprise network, and cloud builders. Service providers' business customers will get on-demand services and always-on mobile connectivity. Other customers will get bundled services or high-bandwidth mobile connectivity only.

Not like a vending machine at all. Even the new ones that accept credit cards. Welcome to the new normal.

Service Provisioning is Easier with YANG and the 1FINITY Platform

Years ago, alarm monitoring and fault management were difficult across multivendor platforms. These tasks became significantly easier—and ubiquitous—after the introduction of the Simple Network Management Protocol (SNMP) and Management Information Base (MIB-II).

Similarly, multivendor network provisioning and equipment management has proved elusive. The reason is the complexity and variability of provisioning and management commands in equipment from multiple vendors.

Could a new protocol and data modeling language again provide the solution?

Absolutely. Over time, NETCONF and YANG will do for service provisioning what SNMP and MIB-II did for alarm management.

Recently, software-defined networking (SDN) has introduced IT and data center architectural concepts to network operators by separating the control plane from the forwarding plane in network devices, allowing them to be controlled from a central location. Innovative disaggregated hardware leverages this new SDN paradigm with centralized control and the use of YANG, a data modeling language from which device application program interfaces (APIs) are derived.

YANG offers an open model that, when coupled with the benefit of standard interfaces such as NETCONF and REST, finally supports multivendor management. This approach provides an efficient mechanism to overcome the complexity and idiosyncrasies inherent in each vendor’s implementation.

Fujitsu’s response to this evolution is the 1FINITY™ platform, a revolutionary disaggregated networking solution. Rather than creating a multifunction converged platform, each 1FINITY function resides in a 1RU blade: transponder/muxponder blades, lambda DWDM blades, and switching blades. Each blade delivers a single function that previously resided in a converged architecture—the result is scalability and pay-as-you-grow flexibility.

Each 1FINITY blade has an open API and supports NETCONF and YANG, paving the way for a network fully rooted in the new SDN and YANG paradigm. New 1FINITY blades are easy to program via an open, interoperable controller, such as Fujitsu Virtuora® NC. Since each blade has a YANG model, it's easy to include provisioning and management in a network-wide element management function.
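To make the NETCONF workflow concrete, here is a minimal sketch that builds a NETCONF edit-config payload for a hypothetical blade port. The namespace and element names are illustrative stand-ins for a vendor YANG module, not the actual 1FINITY model, and a real controller would send this over an SSH-based NETCONF session rather than just constructing the XML.

```python
import xml.etree.ElementTree as ET

def build_edit_config(port, admin_state):
    """Build a NETCONF edit-config payload for a hypothetical blade port."""
    rpc = ET.Element("rpc", {"xmlns": "urn:ietf:params:xml:ns:netconf:base:1.0",
                             "message-id": "101"})
    edit = ET.SubElement(rpc, "edit-config")
    # Target the running configuration datastore.
    ET.SubElement(edit, "target").append(ET.Element("running"))
    config = ET.SubElement(edit, "config")
    # "urn:example:blade-config" is an invented namespace for illustration.
    p = ET.SubElement(config, "port", {"xmlns": "urn:example:blade-config"})
    ET.SubElement(p, "name").text = port
    ET.SubElement(p, "admin-state").text = admin_state
    return ET.tostring(rpc, encoding="unicode")

# Enable port 1/1 on the (hypothetical) blade.
payload = build_edit_config("1/1", "up")
```

Because the payload shape follows directly from the blade's YANG model, the same small routine pattern works for any blade whose model the controller has loaded.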

Any open-source SDN controller that enables multivendor and multilayer awareness of the virtual network will revolutionize network management and control systems. Awareness of different layers and different vendor platforms will result in faster time to revenue, greater customer satisfaction, increased network optimization, and new services that are easier to implement.