WHAT IS A “SMART CITY”? PART 2

In Part 1 of this article, we talked about some of the characteristics of a smart city, including hyperconnectivity, people-centric technology, and increased efficiency of city-provided services. But although those things are critically important, they’re not the end of the smart cities story.

Economic development is an important driver for most cities considering an upgrade to “smart” status, as they look to attract new businesses to their communities. But how? In 1942, economist and social scientist Joseph Schumpeter coined the term “innovation economics.” He argued that innovation is a major factor in spurring economic growth and change: new products and technologies create “temporary monopolies” that encourage the development of competing products and processes, thereby creating beneficial economic conditions. He further believed that government’s most important role was to create fertile ground in which these innovations could occur. In this sense, the smart, connected, and efficient city is the technological soil in which the seeds of economic growth will be planted, yielding profits and benefits that will in turn enrich both individuals and society at large. The cities at the forefront of smart cities transformation will therefore reap the largest benefits from this explosive, and in many cases much-needed, growth.

For example, a unique and innovative display of economic development using smart technology is taking place right now in South Korea. A major grocery retailer wanted to expand its business without opening additional physical locations. The answer proved to be “virtual shelves” in subway stations. Wall-length billboards display goods for sale, complete with images and prices; customers scan QR codes to order and pay, and their goods are delivered within a day. This makes good use of commuters’ time in the stations, and it expands the retailer’s business without the expense of a building, rent, utilities, maintenance, staff, and all the other requirements of a physical location. The result is that this retailer has reached the number one position in the online market, and the number two position in brick-and-mortar retail.

Beyond these obvious advantages, there is an area in which smart cities can actually save lives, one that is top of mind around the world right now: helping communities deal with natural disasters before, during, and after the event. Sensors can continually monitor air and water quality, weather and seismic activity, and even radiation levels, providing critical early warnings of impending disasters and disseminating that information to residents via smartphone apps. Once an event occurs, smart data can provide much-needed safety information. During Hurricane Harvey, for example, data collected via connected systems gave residents real-time information about rising water levels from county flood gauges, and identified passable evacuation routes, available shelters, food banks, assistance programs, and more. Drones can be – and are being – used to survey damage and aid in recovery efforts, reducing the risk to human crews. And this is clearly just the tip of the iceberg when it comes to the ways “smart” technology can aid the human response to natural disasters.

Of course, these are only a few of the ways in which smart technology can benefit communities. Every city and county has its own needs, especially in the early planning stages of digital transformation. What’s important to remember, however, is that smart cities aren’t coming; they’re already here, and the earliest adopters of this technology will reap the greatest benefits from it. Those that delay, or that reject the smart cities model altogether, will quickly find themselves woefully behind the curve, unable to compete with communities that showed more foresight in these early days. Customers and residents demand ever more bandwidth to fuel their connected lives, and the communities that can provide it seamlessly and easily will win the lion’s share of business and revenue. It’s never too early to start thinking about smart city transformation, so what are you waiting for?

WHAT IS A “SMART CITY”? PART 1

Unless you’ve been living in a bunker deep underground for the last ten years, you’ve no doubt heard talk about “smart cities.” Everyone’s talking about them, and a few truly forward-thinking cities around the world are making the concept a reality. But what exactly is a “smart city,” and what does it mean to you?

The short answer is that the smart city concept is the logical and foreseeable outcome of a world in which connectivity has become an integral part of our daily lives. In a smart city, things like utilities, transportation, education, housing, and more are all connected via sensors that provide data in order to improve the quality of life of the city’s residents. Civic leaders use this data to make better, “smarter” decisions for the way the city operates and interacts with its citizens. It’s a way to make infrastructure more efficient, to make government more transparent, and to make day-to-day interactions with technology smoother.

The best smart city improvements are based on a people-centric model, in which technology is merely a tool that improves the lives of those it touches by solving problems that might otherwise be insurmountable. Imagine a “smart” parking lot that can alert you to an available parking space via an app on your phone, reducing or eliminating the time you spend driving around hopelessly looking for one. Or imagine a smart communications system for emergency personnel, able to assess a situation holistically, summon the appropriate responders, identify and notify the nearest hospital with the right treatment facilities, and even turn traffic lights green for the ambulance en route, decreasing response times significantly.

These aren’t simply concepts found in science-fiction novels, but initiatives actually in place today in smart cities around the world. By making use of data collected from a variety of sources in an intelligently connected infrastructure, and parsing that data in useful ways, smart applications can improve the quality, performance, and efficiency of everything from major water utilities to individual home appliances. Europe and Asia have been taking these steps forward for some time, but America is now catching up in cities like New York, Boston, San Francisco, and even Wichita.

From a municipal perspective, smart technology is being used to streamline city-provided services, and to oversee and regulate services provided by outside organizations in order to minimize frustration and dissatisfaction and to maximize economic growth and development. In Amsterdam, for example, the city has installed “smart” garbage bins, so that trash is collected only when the bin is full, thus making garbage collection more efficient and less costly.

There’s even more to know about smart cities, and we’ll cover that in “What is a ‘Smart City?’” Part 2.

C-RAN Mobile Architecture Migration: Fujitsu’s Smart xHaul is an Efficient Solution

To adapt mobile network architectures to increasing bandwidth demands, service providers are deploying C-RAN, which improves performance and reduces costs

Deployment of C-RAN architectures is driving increased use of optical mobile fronthaul solutions to deliver low-latency, high-bandwidth connectivity between remote radio heads (RRHs) and baseband units (BBUs). Service providers recognize the need to reduce mobile networking costs by better aligning the total electronics capacity of their networks with total network utilization at any given time. By separating the electronics into a centralized pool that multiple radios or remote radio heads can share, they can drive down capital costs and eliminate underutilized capacity. Centralized baseband units also enable easier handoffs and dynamic RF decisions based on input from a combined set of radios.

As service providers deploy C-RAN architectures, they face significant challenges and decisions, chief among them the selection of a mobile fronthaul solution. The CPRI protocol is extremely latency sensitive, which results in a latency link budget that limits the distance between RRHs and baseband units to less than 20 km. The mobile fronthaul transmission equipment must minimize its own latency contribution, or this distance becomes even shorter. CPRI signaling is also highly inefficient, consuming as much as 16x the transmission bandwidth of the actual data rate seen by mobile applications.
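To see where figures like these come from, here is a back-of-envelope sketch in Python. The 5 µs/km propagation delay is a standard rule of thumb for light in fiber; the 100 µs one-way budget and the 150 Mb/s LTE sector rate are assumed round numbers for illustration, not values taken from this article.

    # Back-of-envelope CPRI fronthaul reach and bandwidth expansion.
    # All inputs are assumed round numbers for illustration only.

    FIBER_DELAY_US_PER_KM = 5.0    # one-way propagation delay in fiber
    LATENCY_BUDGET_US = 100.0      # assumed one-way CPRI latency budget

    def max_reach_km(equipment_latency_us=0.0):
        """Reach left over after transport equipment eats into the budget."""
        return (LATENCY_BUDGET_US - equipment_latency_us) / FIBER_DELAY_US_PER_KM

    print(max_reach_km())      # 20.0 km with ideal (zero-latency) equipment
    print(max_reach_km(25.0))  # 15.0 km if transport gear adds 25 microseconds

    # Bandwidth expansion: CPRI option 3 runs at 2457.6 Mb/s, while an LTE
    # sector it might serve peaks near 150 Mb/s (assumed figure).
    print(2457.6 / 150)        # ~16x, matching the inefficiency cited above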

To determine which solutions best meet these requirements, ACG Research analyzed total cost of ownership, comparing the economics of P2P dedicated dark fiber with that of active DWDM solutions like Fujitsu’s Smart xHaul. We analyzed the operational expense of the Smart xHaul solution over five years and compared it to competing mobile fronthaul alternatives. The analysis focused on the deployment of 150 macro cell sites, each supporting three frequency bands and three sectors. We also considered deployment of five small cells per macro cell site, for a total of 750 small cell deployments.
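A quick sketch of the scale this models (the site, band, sector, and small-cell counts are from the study setup above; the assumption that one DWDM fiber pair per site can carry all of that site’s links as wavelengths is mine, purely for illustration):

    # Scale of the modeled deployment.
    macro_sites = 150
    bands, sectors = 3, 3
    links_per_site = bands * sectors            # 9 CPRI links per macro site
    macro_links = macro_sites * links_per_site  # 1350 fronthaul links
    small_cells = macro_sites * 5               # 750 small cells
    print(macro_links, small_cells)

    # P2P dark fiber needs a fiber pair per link; DWDM can multiplex all of
    # a site's links as wavelengths on one pair (illustrative assumption).
    print("dark fiber pairs:", macro_links)     # 1350
    print("DWDM fiber pairs:", macro_sites)     # 150, a 9:1 fiber reduction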

The results demonstrate that although the capital expense of deploying a DWDM solution such as Smart xHaul is several times greater than the capex of P2P dark fiber, the reduction in fibers due to signal multiplexing, together with advanced service assurance capabilities, delivers 66% lower opex and 30% TCO savings. When looking at competing DWDM solutions, we also find that the advanced functions of the Smart xHaul solution deliver 60% lower opex for detecting field issues, identifying their root cause, and resolving them.

In addition, industry-leading features in the Smart xHaul solution provide the ability to distinguish between optical transport and radio service impairments, identified by inspecting the actual CPRI frames. When combined with the other performance monitoring and service assurance capabilities, CPRI frame inspection results in rapid issue identification, assignment, and resolution.

Click to download the paper and read how, in contrast with a dedicated dark fiber solution, the Smart xHaul solution is flexible and supports multiple network architectures.

Click for the HotSeat video in which Tim Doiron, ACG Research analyst, and Joe Mocerino, Fujitsu principal solutions architect, discuss the Smart xHaul solution and C-RAN mobile architecture migration.

Co-Creation is the Secret Sauce for Broadband Project Planning

Let’s face it—meeting rooms are boring. Usually bland, typically disheveled, and littered with odd remnants of past battles, today’s conference room is often where positive energy goes to die.

So we decided to redesign one of ours and rename it the Co-Creation Room, complete with wall-to-wall, floor-to-ceiling whiteboards. Sure, it’s just a small room, but I have noticed something: it is one of the busiest conference rooms we have. It’s packed. All the time. People come together willingly – agreeing upfront to enter a crucible of co-creation – where ideas are democratized and the conversation advances past the reductive (“OK, so what do we do?”) to the expansive (“Hey, what are the possibilities?”).

This theme of co-creation takes center stage when we work with customers on their broadband network projects. These projects bring together an incredibly diverse mix of participants, aspirations, challenges, and constraints, which really brings home the necessity and power of co-creation.

Planning, funding, and designing wireline and wireless broadband networks means bringing together multiple stakeholders with varied perspectives and fields of expertise, negotiating complex rules of engagement, and planning and executing a challenging, multi-variable task. Success demands a blend of expertise, resources, and political will—meaning the motivation to carry initiatives forward with enough momentum to survive changes of leadership and priorities.

Many prospective customers seek to start by bolstering their in-house expertise with a project feasibility study. A good feasibility vendor should have knowledge of multi-vendor planning, engineering design, project and vendor management, supply chain logistics, attracting funds or investment, business modeling, and ongoing network maintenance and operations to ensure a thorough study. Look for someone with experience across many technologies and vendors, not just one.

As a Network Integrator, we bring all the pieces together. But we do more than just get the ingredients into the kitchen. Our job is to make a complete meal. By democratizing creation, we like to expand the conversation—and broker the kind of communication that gets diverse people working together productively.

The integration partner has to simultaneously understand both the customer’s big picture and the nitty-gritty details. Our priority is to minimize project risk and drive things forward effectively. Many times, we have to do the Rosetta Stone trick and broker mutual understanding among groups with different professional cultures, viewpoints, and languages. We take that new shared understanding and harness it to co-create the best possible project outcome.

On a recent municipal broadband project, for example, we learned that city staff and network engineers don’t speak the same language. A network engineer isn’t familiar with the ins and outs of water systems, and a city public works director doesn’t know about provisioning network equipment. But by building a trusted partner relationship, we helped create the shared understanding the project needed. In the process, we realized that we had redefined what co-creation means to us.

So, when you come to Fujitsu, you will see the Co-Creation Room along with this room-sized decal:

Co-Creation: Where everyone gets to hold the pen.

The Surprising Benefits of Uncoupling Aggregation from Transponding

Data Center Interconnect (DCI) traffic comprises various combinations of 10G and 100G services. In a typical application, DWDM is used to maximize the quantity of traffic that can be carried on a single fiber.

Virtually all available products for this function combine aggregation and transponding into a single platform; they aggregate multiple 10G services into a single 100G and then transpond that 100G onto a lambda for multiplexing alongside other lambdas onto a single fiber. Decoupling aggregation and transponding into two different platforms is a new approach. At Fujitsu, this approach consists of a 10GbE to 100G Layer 1 aggregation device—the 1FINITY T400—and a separate 100GbE to 200G transponder—the 1FINITY T100—that serve the two halves of the formerly combined aggregation-transponding function. This decoupled configuration is unique to these 1FINITY platforms, and it offers unique advantages.

At first glance, this type of “two-box” solution may seem less desirable. But there are several advantages to decoupling aggregation from transponding—particularly in DCI applications. Here’s a quick rundown of the benefits. As you’ll see, they’re similar to the overall benefits of the new disaggregated, blade-centric approach to data center interconnect architecture.

Efficient use of rack space: Physical separation of aggregation and transponding splits a single larger unit into two smaller ones: a dedicated transponder and a dedicated aggregator. As a result, the overall capacity of existing racks is increased. As an added benefit, it is easier to find space for individual units and to use up scattered empty 1RU slots, which helps make the fullest possible use of costly physical facilities.

Reducing “stranded” bandwidth: Many suppliers use QSFP+ transponders, which offer programmable 40G or 100G. Bandwidth can be wasted when aggregating 10G services because 40 is not a factor of 100, which necessitates deployment in multiples of 200G to make the numbers work out; this frequently results in “over-buying” significant unneeded capacity. The 1FINITY T400 aggregator deploys in chunks of 100G, which keeps stranded bandwidth to a minimum by reducing the over-buy factor (a quick worked example follows this rundown).

Simplified operations: Operational simplification occurs for two reasons. First, when upgrading the transponder, you simply change it out without affecting the aggregator. Second, with aggregation decoupled from transponding, changes such as upgrading the transponder or adjusting the mix of 10G/100G clients involve disconnecting and reconnecting fewer fibers and require fewer re-provisioning commands. Line-side rate changes to the mix of 10G and 100G services involve roughly 60% of the operational activities of competing platforms, and client-side rate changes involve 25% fewer activities. Fewer activities mean fewer mistakes, less time per operation, and therefore less cost. Savings in this area mainly affect the expensive line side, which creates a larger cost reduction.
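Here is the promised worked example of the over-buy arithmetic, as a small Python sketch. The demand figures are invented; the point is only the rounding behavior when capacity comes in fixed-size chunks.

    # Stranded bandwidth when capacity must be bought in fixed-size chunks.
    # Demands below are made-up aggregate 10G client totals, in Gb/s.
    import math

    def purchased(demand_gbps, chunk_gbps):
        """Capacity bought when rounding demand up to whole chunks."""
        return math.ceil(demand_gbps / chunk_gbps) * chunk_gbps

    for demand in (250, 460, 810):
        over_200 = purchased(demand, 200) - demand  # 40G/100G mix: LCM is 200G
        over_100 = purchased(demand, 100) - demand  # 100G-granular deployment
        print(f"{demand}G demand strands {over_200}G vs {over_100}G")
    # -> 150G vs 50G, 140G vs 40G, 190G vs 90G of over-bought capacity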

Overall, by separating the aggregator and transponder, Fujitsu can offer data centers significant savings through better use of resources as well as simplification of operations and provisioning. Find out more by visiting the Fujitsu 1FINITY platform Web page.

Virtuora and YANG Models

By Kevin Dunsmore, with Rhonda Holloway

The Virtuora® Product Suite is a collection of software products that makes network management a breeze. A distinct advantage of the Virtuora software platform is its use of YANG models. These models are powerful because when someone tweaks part of a model, the associated REST/RESTCONF interface is regenerated automatically upon recompiling. The new data becomes available via the API the moment recompiling is complete.

This ability is unique to Fujitsu. Other SDN platforms use YANG models, but not in the way Virtuora does. Some vendors have built their tools using Java and other programming languages. Whenever they want to change a driver, they must change their internal programming code and make the driver available via northbound APIs. This is extremely tedious and time-consuming, and there’s always the risk of “breaking” something if the code contains errors. On top of this, special code is typically required to “activate” and “delete” nodes, compounding the issue. As a result, many customers complain of long lags in getting new or enhanced support for SDN platforms.

Virtuora fixes this time lag through its implementation of YANG models. You simply add or change a data element, recompile the model, and the new information instantly becomes available via REST. There’s no pulling apart code written in Java or another programming language to add or change anything. Combined with OpenDaylight, CRUD operations (Create, Read, Update, and Delete) are handled in one swift transaction. What takes another platform six months to do, Virtuora can do in one month.
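As a rough sketch of what that model-driven workflow looks like from a client’s point of view, here is a generic RESTCONF exchange in Python. The controller URL, module name (example-devices), node names, and credentials are hypothetical placeholders, not the actual Virtuora API.

    # Hypothetical RESTCONF client; all names and paths are illustrative only.
    import requests

    BASE = "https://controller.example.com/restconf/data"
    AUTH = ("admin", "admin")
    HEADERS = {"Content-Type": "application/yang-data+json",
               "Accept": "application/yang-data+json"}

    # Read a data node defined in a (made-up) YANG module.
    resp = requests.get(f"{BASE}/example-devices:devices",
                        auth=AUTH, headers=HEADERS, verify=False)
    print(resp.json())

    # Update it. Once the model is recompiled with a new leaf, that leaf is
    # immediately addressable the same way; no controller code changes needed.
    payload = {"example-devices:device": [{"name": "node-1", "enabled": True}]}
    resp = requests.patch(f"{BASE}/example-devices:devices", json=payload,
                          auth=AUTH, headers=HEADERS, verify=False)
    resp.raise_for_status()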

Think of YANG as your car’s gasoline. The controller is the engine, providing the power for the entire car to run. Applications are the steering wheel, giving users the control to drive Virtuora in the direction they please. YANG is the gasoline that ties it all together, letting the controller and applications run together without falling out of sync. A small change to the steering wheel or a modified engine part won’t affect the car’s ability to drive, because the gasoline keeps the car running through those changes.

For a good example of how Fujitsu implements YANG models in its products, look at 1FINITY. Each 1FINITY blade has a YANG model, making it easy to include provisioning and management in a network-wide element management function. With YANG already working so well in our 1FINITY solution, we’re excited to include it in Virtuora.

The relationships between different models do need to be maintained. Luckily, Fujitsu has software support contracts that handle any changes made to the models, and the underlying platform—OpenDaylight and, eventually, ONOS—handles “activate” and “delete” operations for us. Finally, Fujitsu is in discussions to develop a Software Development Kit (SDK) that would automatically ensure a change in one model is reflected in the others.

At Fujitsu, we’re working hard to ensure that our customers have a smooth and productive experience using the Virtuora Product Suite. Our Services Support team is dedicated to working with each customer and handling all changes that need to be made. Our goal is to make the implementation process as quick and painless as possible. Thanks to our use of YANG models, we can make that happen.

The New Network Normal: Service-Oriented, Not Infrastructure-Oriented

Mobile broadband connections will account for almost 70% of the global base by 2020. The new types of services those customers consume will drive a tenfold increase in data traffic by 2019. At this rate, most of the world will be mobile, with “mobile” expectations. The “cloud” has become synonymous with mobility and is increasingly how customers are matched with new products and services. More customers are coming, more services are coming, and more types of services are coming. More, more, more.

Carrier networks must embrace a new normal to support and drive this digital revolution. Unlike the static operating models of the past, a new dynamic system is emerging, and it’s not about the network at all. It’s about the applications that deliver services to paying customers—wherever they are, however they want them. This kind of dynamic network requires intelligence, extreme flexibility, modularity, and scalability. The new normal means creating innovative, differentiated services and combining these with the kind of intensely integrated, highly personalized relationships that enable services to be delivered and billed on demand.

To be competitive in the new application economy, service providers need to dedicate more budget and resources to service innovation. However, multi-layer/multi-vendor network design necessitates that the lion’s share of any service provider’s budget goes to the network itself. At Fujitsu, we are changing that by working with our customers to architect an entirely new system: disaggregated, flattened, and virtual. And it doesn’t require a “scorched earth re-write” or “rip and replace” investment.

The new network normal means a new way of doing business for service providers, and it requires a different way of operating. In the old business model, service providers functioned like vending machine companies. A vending machine offered a pre-set lineup of products and a single way to pay: your pocket change. Only field technicians could fill vending machines, only field technicians could fix broken machines, and only field technicians could deliver new machines to new locations. An entirely different staff collected the money and handled banking. Vending machine companies were forced to wait weeks, or even months, to receive payment for sold goods.

Vending machines in remote areas might not get serviced as often as those in population-dense areas. Technicians didn’t know which products were the most popular, but they knew which were the least! Plenty of people had dollar bills in their wallets, but no loose change. If a machine was out of stock, customers had to find another.

Companies lost sales because of the limitations of this infrastructure— not because there were no willing customers.

Vending machine companies developed new ways to accept payment, re-negotiated partnerships and delivery routes to refill popular product lines more often, and reorganized the labor force into groups who could fill and service machines simultaneously. In spite of these optimization tactics, much like service providers, vending machine companies were still ultimately reliant on physical devices and physical infrastructure to deliver a static line of products. Otherwise happy customers were required to seek other vendors when their needs were unfulfilled.

But unlike vending machine companies, service providers are not always selling a physical product. Service providers can re-package their products virtually— and it starts with virtualization of the network itself. Applying standard IT virtualization technologies to the service provider network allows administrators to shed the expense and constraints of single-purpose, hardware-based appliances.

Rolling out new services over traditional hardware-based network infrastructure used to take service providers months or even years. Many time-consuming steps were required: service design, integration, testing, and provisioning. Virtualization dramatically shortens that cycle, and it addresses a wide range of use cases besides.

Software-defined networking, combined with network function virtualization, creates a single resource to manage and traverse an abstracted and unified fabric. As a result, application developers and network operators don’t have to worry about network connections; the intelligent network does that for them. Imagine seamlessly connecting applications and delivering new services, automatically, at the will of the end user. Virtualization provides this new normal: best-of-breed components that are intelligent, optimized end-to-end, fully utilized, and much less expensive. Budget previously dedicated to network infrastructure can now be released to support new applications and services for whole new categories of customers.

Thanks to readily available data analytics on trending customer behavior, network operators will know exactly which products their customers are willing to buy and what they’re looking for—and they’ll be able to deliver them individually or as part of value-package offerings far beyond the current range of choices. Remote areas can get the same services and level of customer support that population-dense areas enjoy. Payment will be possible on demand or by subscription. Premium convenience services will offer new flexibility for customers—and new revenue streams for providers.

Service providers will be able to differentiate their offerings beyond physical products, including bandwidth, SLAs, and price points. Their enterprise customers will get better tools, on-demand provisioning, and tight integration between the carrier network, enterprise network, and cloud builders. Business customers will get on-demand services and always-on mobile connectivity; other customers will get bundled services or high-bandwidth mobile connectivity only.

Not like a vending machine at all. Even the new ones that accept credit cards. Welcome to the new normal.

Four Key Ingredients Solve Network Business Challenges

Network operators face seemingly conflicting challenges. They must maximize network assets, reduce costs, and introduce new revenue-generating services—all while maintaining existing legacy services. This may seem like an impossible combination to achieve, but just four key capabilities provide the right ingredients to reconcile apparently conflicting needs and profitably address these big business challenges:

  • Transport legacy services in groups. Individual legacy service instances are often transported separately, which makes inefficient use of network and fiber resources. It is more efficient to combine multiple instances into batches that can be transported together at higher bit rates.
  • Combine multiple services onto a single fiber. Fiber resources are expensive and constrained. Freeing up fiber capacity or reducing the number of leased fibers needed to sustain growing networks by transporting additional services over a single fiber pair saves on fiber resource costs.
  • Efficiently pack 100G wavelengths. Many 100G wavelengths are inefficiently utilized, cumulatively wasting a large amount of capacity. If more services can be transported over existing 100G wavelengths, the network is more efficient and additional costs can be avoided.
  • Provide transparent wholesale services. Services that support a range of SLA choices by allowing demarcation and providing visibility into traffic, management, and alarms are attractive to customers and a valuable source of revenue.

You may be surprised to find out that an often-overlooked technology, Optical Transport Network (OTN), provides all four of these capabilities. OTN is a standard (ITU-T G.709) digital wrapper technology that allows multiple services of various types to be packaged and transported together at higher rates. This universal package is ideal for transporting legacy services, which makes better use of network resources while simultaneously benefiting from modern technologies and rates. OTN also inherently allows an end customer access to network management and performance data. Finally, as networks move to 100G transport, OTN provides an easy means of filling partially utilized 100G wavelengths by transparently delivering a combination of services. Overall, OTN is a highly viable option that deserves serious consideration for network modernization. On grounds of both efficiency and ongoing revenue opportunities, OTN carries excellent potential for long-term ROI.
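To make the “filling partially utilized 100G wavelengths” point concrete, here is a small Python sketch that packs mixed services onto OTU4 wavelengths using G.709’s 1.25G tributary slots (80 per 100G ODU4; an ODU0 takes 1 slot, an ODU1 takes 2, an ODU2 takes 8). The service mix below is invented for illustration.

    # First-fit packing of client services onto 100G wavelengths using
    # G.709 tributary slots. The demand mix below is made up.
    SLOTS_PER_WAVE = 80                     # 1.25G slots in an ODU4
    SLOTS = {"ODU0/GbE": 1, "ODU1/2.5G": 2, "ODU2/10G": 8}

    demands = ["ODU2/10G"] * 7 + ["ODU0/GbE"] * 20 + ["ODU1/2.5G"] * 2
    waves = []                              # free slots per lit wavelength

    for svc in demands:
        need = SLOTS[svc]
        for i, free in enumerate(waves):
            if free >= need:                # reuse a partly filled wavelength
                waves[i] -= need
                break
        else:
            waves.append(SLOTS_PER_WAVE - need)   # light a new wavelength

    print(len(waves), [SLOTS_PER_WAVE - f for f in waves])
    # -> 1 [80]: this whole mixed-service set fills one 100G wave exactly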

Service Provisioning is Easier with YANG and the 1FINITY Platform

Years ago, alarm monitoring and fault management were difficult across multivendor platforms. These tasks became significantly easier—and ubiquitous—after the introduction of the Simple Network Management Protocol (SNMP) and Management Information Base (MIB-II).

Similarly, multivendor network provisioning and equipment management have proved elusive. The reason is the complexity and variability of provisioning and management commands in equipment from different vendors.

Could a new protocol and data modeling language again provide the solution?

Absolutely. Over time, NETCONF and YANG will do for service provisioning what SNMP and MIB-II did for alarm management.

Recently, software-defined networking (SDN) has introduced IT and data center architectural concepts to network operators by separating the control plane from the forwarding plane in network devices, allowing control from a central location. Innovative disaggregated hardware leverages this new SDN paradigm with centralized control and the use of YANG, a data modeling language with standard application program interfaces (APIs).

YANG offers an open model that, when coupled with the benefit of standard interfaces such as NETCONF and REST, finally supports multivendor management. This approach provides an efficient mechanism to overcome the complexity and idiosyncrasies inherent in each vendor’s implementation.
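As a minimal sketch of what NETCONF-based provisioning looks like in practice, assuming the open-source Python ncclient library and a NETCONF-capable device: the host, credentials, and the YANG namespace and leaf names below are invented placeholders, not an actual 1FINITY model.

    # Minimal NETCONF provisioning sketch using ncclient (pip install ncclient).
    # Device address, credentials, and YANG names are hypothetical.
    from ncclient import manager

    CONFIG = """
    <config xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
      <ports xmlns="urn:example:yang:example-port">
        <port>
          <name>1/1</name>
          <admin-state>up</admin-state>
        </port>
      </ports>
    </config>
    """

    with manager.connect(host="192.0.2.10", port=830,
                         username="admin", password="admin",
                         hostkey_verify=False) as mgr:
        mgr.edit_config(target="running", config=CONFIG)  # push the change
        print(mgr.get_config(source="running").data_xml[:300])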

Fujitsu’s response to this evolution is the 1FINITY™ platform, a revolutionary disaggregated networking solution. Rather than creating a multifunction converged platform, each 1FINITY function resides in a 1RU blade: transponder/muxponder blades, lambda DWDM blades, and switching blades. Each blade delivers a single function that previously resided in a converged architecture—the result is scalability and pay-as-you-grow flexibility.

Each 1FINITY blade has an open API and supports NETCONF and YANG, paving the way for a network fully rooted in the new SDN and YANG paradigm. New 1FINITY blades are easy to program via an open, interoperable controller, such as Fujitsu Virtuora® NC. Since each blade has a YANG model, it’s easy to include provisioning and management in a network-wide element management function.

Any open-source SDN controller that enables multivendor and multilayer awareness of the virtual network will revolutionize network management and control systems. Awareness of different layers and different vendor platforms will result in faster time to revenue, greater customer satisfaction, increased network optimization, and new services that are easier to implement.

Multilayer Data Connectivity Orchestration: Exploring the CENGN Proof of Concept

While computer technology in the broad sense has advanced rapidly and dramatically over the past three decades, networking has remained virtually unchanged since the 1990s. One of the main problems facing providers around the world is that the numerous multivendor legacy systems still in service don’t support fast, accurate remote identification, troubleshooting, and fault resolution. The lack of remote fault resolution capabilities is compounded by the complex, closed, and proprietary nature of legacy systems, as well as the proliferation of southbound protocols. As a result, networks are difficult to optimize, troubleshoot, automate, and customize. SDN (Software-Defined Networking) is set to solve these issues by decoupling the control plane from the data plane, while bringing benefits that include cost and overhead reduction, virtual network management, virtual packet forwarding, extensibility, better network management, faster service activation, reduced downtime, ease of use, and open standards.

Why Multilayer SDN is Needed

One of the issues facing network operators is that no SDN controller offers a streamlined topology view of both the optical transport and packet layers. That’s why coordination between transport and IP/MPLS layer management is one of the most promising approaches to an optimized, simplified multilayer network architecture. However, this coordination brings significant technical challenges, since it involves interoperation between very different technologies on each network layer, each with its own protocols, approach, and legacy of network control and management.

Traditionally, transport networks have relied on centralized network management through a Network Management System or Element Management System (NMS/EMS), whereas the IP/MPLS layer uses a distributed control plane to build highly robust and dynamic network topologies. These fundamentally different approaches to network control have been a significant challenge over the years when the industry has tried to realize a closer integration between both network layers.

Although there has been a lot of R&D in this area (one example is OpenFlow adding optical transport extensions from version 1.3 onwards), there are few, if any, successful implementations of multilayer orchestration through SDN.

It’s important to mention a common misconception about SDN: the assumption that SDN goes hand-in-hand with the OpenFlow protocol. OpenFlow (an evolution of the Ethane protocol) is just a means to an end, namely separation of the control and data planes. It is a communication protocol that gives access to the forwarding plane of a network element (switch, router, optical equipment, etc.). SDN isn’t dependent on OpenFlow specifically; it can also be implemented using other southbound protocols, such as NETCONF/YANG, BGP, and XMPP.

A Multilayer, Multivendor SDN Proof of Concept

To address the issues outlined above, CENGN (Canada’s Centre of Excellence in Next Generation Networks), in collaboration with Juniper Networks, Fujitsu, Telus, and CENX, initiated a PoC to demonstrate true end-to-end multilayer SDN orchestration of an MPLS-based WAN over optical infrastructure.

In the PoC, the CENX Cortx Service Orchestrator serves as a higher-layer orchestrator that optimally synchronizes the MPLS and optical layers. The MPLS layer uses Juniper’s NorthStar SDN controller for Layer 2–3 devices, and the optical transport layer uses the Fujitsu Virtuora® Network Controller. All northbound integration is through REST APIs; upon notification of failures or policy violations, the orchestrator dynamically adjusts the optical or packet layers via the SDN controllers, ensuring optimal routing and policy conformance.

Scenarios

The proof of concept consists of the following scenarios:

STEP 1: FAILURE IN OPTICAL DOMAIN

  • Optical link failure (via cable pull or manual port failure).
  • Cortx Service Orchestrator receives link failure alarms from Virtuora, stores them, and updates path information.

STEP 2: PACKET REROUTE

  • Cortx Service Orchestrator receives link failure alarms from the Juniper MPLS layer and stores them.
  • Cortx Service Orchestrator receives updated topology information from the SDN controllers.
  • Juniper MPLS automatically re-routes the blue label-switched path and notifies Cortx Service Orchestrator of link state changes.

STEP 3: CORTX SRLG NOTIFICATION

  • Cortx Service Orchestrator processes the new topology and raises an alert of a network policy violation, which remains in effect until the situation is corrected.
  • Cortx Service Orchestrator notifies the operations user of the policy violation.

STEP 4: PACKET DOMAIN ADJUSTMENTS

  • Virtuora turns up optical links and alerts Cortx Service Orchestrator of the topology change.
  • The policy violation is cleared when the condition is corrected.
  • The LSP is rerouted through newly provisioned optical paths.
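Taken together, the four steps amount to a simple event-handling loop in the orchestrator. The sketch below restates that flow as runnable Python; all structures, names, and the SRLG bookkeeping are invented for illustration and do not reflect the actual Cortx, NorthStar, or Virtuora APIs.

    # Toy version of the PoC event flow. SRLG = shared risk link group.
    lsp_srlgs = {"blue": {"srlg-7"}, "red": {"srlg-9"}}  # diverse at start
    violations = set()

    def handle_event(event):
        kind = event["type"]
        if kind == "optical-link-down":                    # step 1
            print("optical alarm stored:", event["link"])
        elif kind == "lsp-rerouted":                       # step 2
            lsp_srlgs[event["lsp"]] = set(event["srlgs"])
            if lsp_srlgs["blue"] & lsp_srlgs["red"]:       # step 3
                violations.add("blue/red LSPs share a risk group")
        elif kind == "optical-link-up":                    # step 4
            if not (lsp_srlgs["blue"] & lsp_srlgs["red"]):
                violations.clear()                         # violation cleared

    handle_event({"type": "optical-link-down", "link": "L1"})
    handle_event({"type": "lsp-rerouted", "lsp": "blue", "srlgs": ["srlg-9"]})
    print(violations)                      # policy violation raised
    handle_event({"type": "lsp-rerouted", "lsp": "blue", "srlgs": ["srlg-7"]})
    handle_event({"type": "optical-link-up", "link": "L1"})
    print(violations)                      # empty: violation cleared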

Conclusion

This is an excellent model of how a collaborative, multivendor, multilayer approach based on open standards can drive the industry towards the network of the future. By providing a functional example of real-time operations across multivendor platforms, this project has shown that multilayer data connectivity orchestration—and the benefits it offers—is feasible in a realistic situation. Other proofs of concept at the CENGN laboratories will continue to advance SDN and NFV technologies, helping to refine functionality and move towards production systems.