Integrated Laboratory Testing – An Investment that Pays off for Rail Operators

The traditional approach that rail operators have taken to their communications networks is changing to support new IP video, voice and data applications, as well as improved mobile connectivity and stronger cybersecurity. The advent of the flexible converged network is bringing new challenges, one of which is to turn up the heat on pre-deployment testing. Factory Acceptance Testing (FAT) is no longer enough because, while it adequately covers issues relating to individual components, FAT falls short when it comes to identifying issues that arise when multiple system components come together in a fully integrated system.

The answer is to bring system components together and put them through their paces in a controlled laboratory environment before live deployment. For want of a better name, this approach is known as Integrated FAT (IFAT). But setting up a fully capable laboratory and hiring the necessary experts requires significant upfront investment. It’s easy to imagine the level of expenditure needed won’t pay off, but in fact it’s more than justifiable when the cost-saving benefits are taken into account over the longer term.

The simple reason is that integrated testing improves reliability and drastically reduces network downtime, and every minute of downtime is expensive. That’s all there is to it. Discovering and correcting issues before committing to live traffic is far less costly and disruptive than troubleshooting and correction under the pressures of daily operation. Many organizations have no grasp of the costly ripple effects that network downtime has on their business: lost revenue, lost information, damaged reputations and lost customers.

Leaving aside the rewards in terms of reduced downtime, a laboratory outfitted for IFAT brings with it other valuable benefits. Improved cybersecurity is just one of these. Networks are becoming more enmeshed with IT systems, making them more vulnerable to cyber-attack, to the point where cybersecurity has become a critical issue. For instance, according to the Ponemon Institute's study, "2017 Cost of Cyber Crime," the average annualized cost of cybercrime for the transportation industry was $7.36M.

Change control is another area in which lab-based IFAT delivers benefits, in terms of improved reliability and network service quality. Changes equal risk because every change has the potential for unforeseen side effects. For example, imagine you bring up a new circuit between two communication centers and find that application traffic is unexpectedly following an asymmetrical path. Traffic goes out from Comms Center A to Comms Center B on the old circuit, but it comes back on the new one. This is a fairly common scenario—but now there’s a decision to make: Do you try to fix the issue, or back out your change and wait until next month to bring the new circuit into production? What does the change control procedure say? Is there a change control procedure? Will this asymmetrical routing situation even pose a problem?

This is a lot of information to process quickly for an operations tech who most likely does not have a full view of the big picture, and who is running on pizza, Cokes, day-old coffee, and minimal sleep. It is not rare for an engineer to make a small change to fix a routing issue only to cause a major failure. Having a lab facility in which to duplicate the problem, isolate it, make corrections, and develop methods of procedure not only eliminates this risk, but gives your engineers confidence that when they return to the field, everything will go as planned.

Additional valuable benefits of IFAT derive from making full use of the facility as a permanent fixture for ongoing upgrade testing (hardware/software), proofs of concept, staff training, and trouble simulations or disaster recovery drills.

While the transportation industry stands to benefit immensely from advanced networks that can support improved passenger comfort, better real-time communication and higher safety standards, the industry needs to go beyond testing components in isolation from one another and embrace deeper, more comprehensive integrated testing in laboratory environments. IFAT offers the best chance of achieving a successful and predictable outcome that avoids costly redesign and troubleshooting under the pressure of live outages.

What the NFL Tells us About DCI

Data Center Interconnect has historically been driven by the pressure of simple demand: the kind of demand that’s satisfied by big, fast, dumb point-to-point pipes. But the value and potential of “big and fast” are held in check by “dumb.” It’s like football; bigger and faster will only take you so far in the National Football League (NFL). As game plans get more complicated, players are expected to think strategically about the other team’s offense or defense. Similarly, DCI is also getting more complicated as the pressure builds—and those big, fast pipes must ditch the dumb and get smart.

Data centers already have requirements in place for encryption, streaming telemetry and LLDP, all of which mean adding intelligence. Flex-grid; mixed modulation schemes; the growing mix of baud rates; and multiple FEC options (not to mention mesh connectivity in the planning arena) also demand more “brains” to match the brawn. The challenging task of selecting the optimal modulation, baud, grid and FEC is impossible unless the intelligence is there.
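To make that selection problem concrete, here is a minimal sketch in Python of the kind of decision the intelligence has to make. The candidate table and its reach and capacity figures are illustrative assumptions, not vendor specifications, and the simple rule (pick the highest-capacity setting that still closes the link) stands in for what is in practice a much richer optimization.

```python
# Illustrative sketch only: a toy selector for line configuration.
# The reach and capacity figures below are hypothetical, not vendor specs.

from dataclasses import dataclass

@dataclass
class LineConfig:
    modulation: str      # e.g. "QPSK", "8QAM", "16QAM"
    baud_gbd: int        # symbol rate in Gbaud
    fec: str             # FEC flavor
    capacity_gbps: int   # usable capacity per carrier
    max_reach_km: int    # approximate reach at this setting

CANDIDATES = [
    LineConfig("QPSK",  64, "SD-FEC", 200, 2000),
    LineConfig("8QAM",  64, "SD-FEC", 300, 1000),
    LineConfig("16QAM", 64, "SD-FEC", 400,  500),
]

def pick_config(required_gbps, path_km):
    """Return the highest-capacity candidate that still closes the link."""
    viable = [c for c in CANDIDATES
              if c.capacity_gbps >= required_gbps and c.max_reach_km >= path_km]
    return max(viable, key=lambda c: c.capacity_gbps, default=None)

print(pick_config(required_gbps=300, path_km=800))    # -> the 8QAM candidate
print(pick_config(required_gbps=400, path_km=1500))   # -> None: needs a different design
```

In a real network the candidate set, the reach model, and the objective all come from live data; the point is simply that something in the pipe has to be smart enough to make the call.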

Variable and unpredictable traffic loads add another layer of complexity; business and the internet are inherently chaotic. The historical practice of "designing for the worst case" (also known as "busy-hour design") is no longer economical. Data centers need capabilities to handle changing workloads gracefully and efficiently without overbuilding. These trends have significant positive implications for DCI; the agility and intelligence needed to meet dynamic workloads will improve the operational efficiency of the whole network. Put simply, bigger, faster, smarter pipes in DCI are just like NFL players who are also strategic thinkers. In both cases, add brains to brawn and the game is on.

Open Networks, Open for Business

For the ICT industry, this nascent era of business models based on cloud computing and OTT content is characterized by a heady brew of innovation, change and growth. Open networking offers service providers a route to much-needed rapid service deployment, agile innovation, and leaner spending. For these reasons, the industry is pushing for open-source standards and transport equipment vendors are capitalizing on this new thinking. Migration is underway from traditional proprietary converged platforms to more modular/single use-case form-factors and functionality.

What is an Open Optical Network?

You might ask, what are the key features of an open optical network? Essentially it boils down to networks operating on an industry-agreed common, multivendor foundation. This includes the ability to have open software and open line systems that comply with open standards for interoperability. In sum, this means a mix-and-match multivendor network environment where all the parts “speak” a common language of control and data exchange.

Open Hardware

Optical networking hardware, such as Reconfigurable Add Drop Multiplexers (ROADMs) and transponders, is evolving in terms of form factor, functionality, and functional disaggregation. Equipment is changing from the large, converged platforms of the past decade to smaller units engineered for single use-cases; simplified network design and operation; efficient space utilization; and lower power consumption. Other essential features of open hardware are plug-and-play or self-installing components; automated provisioning; and software features and interfaces that enable easy integration and meaningful data exchange with different management systems.

Open Software

A notable aspect of open networking is the decoupling of software from hardware development and the transition from proprietary, embedded software to open-source code. Open software should include a single provisioning model with both service activation and service assurance, in addition to a centralized service rollout model. Open software management systems must also be capable of managing third-party systems or tools, and compliant with new standards or initiatives. The network elements must also support open APIs, enabling open management.
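As a rough sketch of what an open API looks like in practice, the example below reads interface state over RESTCONF (RFC 8040) using the standard ietf-interfaces YANG model. The controller address and credentials are placeholders rather than any particular vendor's endpoint.

```python
# A rough sketch of reading interface state over an open API (RESTCONF, RFC 8040)
# using the standard ietf-interfaces YANG model. The controller address and
# credentials are placeholders, not a real endpoint.

import requests

CONTROLLER = "https://controller.example.net"
URL = f"{CONTROLLER}/restconf/data/ietf-interfaces:interfaces"
HEADERS = {"Accept": "application/yang-data+json"}

resp = requests.get(URL, headers=HEADERS, auth=("admin", "admin"), verify=False)
resp.raise_for_status()

# The JSON encoding follows the YANG model, so any compliant element or
# controller can be queried the same way, regardless of vendor.
for intf in resp.json()["ietf-interfaces:interfaces"]["interface"]:
    print(intf["name"], intf.get("enabled"))
```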

Benefits

Perhaps the most obvious benefit from open networking is that service providers are no longer locked in to a specific vendor’s hardware or controller software. When service providers can freely combine equipment from multiple vendors, they have freedom of choice that can directly reduce costs, and when an entire network is managed via common open interfaces and protocols, networks get tested, validated and deployed faster. Moreover, if every part of the network, figuratively speaking, shares a common language, it is easier to eliminate overbuilds or stranded bandwidth. Thus, open networking not only gives providers greater freedom of choice and speed of execution, it helps them to make the fullest use of existing resources. Ultimately, in business terms, this can result in faster service roll-outs.

Another benefit of open networking is that it will ultimately provide a shared technological framework to support innovation. The standards being implemented in the communications network industry are common across the entire IT industry, meaning that service providers have an open invitation to an innovation ecosystem.

Challenges

The primary challenge is successfully navigating the transition from traditional telecom standards to newer open-source standards—not least because the standards themselves are still evolving. “Openness” is not a binary state and the industry must tackle hardware and software components possessing various degrees of openness and interoperability.

On the hardware side, we see everything from closed-and-proprietary paradigms all the way to plug-and-play installation, functional disaggregation, and ultimately, interoperability. Likewise on the software side, we see a similar spectrum, from closed-and-proprietary to open standards, open software platforms, open APIs and ultimately, open applications. Several non-proprietary initiatives are driving open networking forward, including OpenDaylight, ONOS/CORD, ONAP, OpenStack, and the Open ROADM MSA, to name a few.

Conclusions

Open networking signals a desire for equipment with narrower use cases and simpler feature sets that enable low-cost, simpler operations. Flexibility, scalability and simplicity are the keys to realizing the potential of the open network.

Open networking supports ecosystem-based innovation and multi-sourcing, which sharpen competition, lower costs and improve supply reliability, while avoiding vendor lock-in and reducing burdensome complexity. Scalable, modular equipment reduces first cost and allows flexible, pay-as-you-go bandwidth growth, benefiting service providers by broadening their range of capital spending options and timelines. Open networking makes operations simpler and improves service creation and activation times, helping overall to "crack the tough nut" of reducing operational and ongoing costs.

WHAT IS A “SMART CITY?” PART 2

In Part 1 of this article, we talked about some of the characteristics of a smart city, including hyperconnectivity, people-centric technology, and increased efficiency of city-provided services. But although those things are critically important, they’re not the end of the smart cities story.

Economic development is an important driver for most cities considering an upgrade to "smart" status, with most looking to attract new businesses to their community. But how? In 1942, economist and social scientist Joseph Schumpeter laid the groundwork for what is now called "innovation economics," arguing that innovation is a major factor in spurring economic growth and change: new products and technologies create "temporary monopolies," which in turn encourage the development of competing products and processes, creating beneficial economic conditions. He further believed that government's most important role was to create fertile ground in which these innovations could occur. In this sense, the smart, connected, and efficient city is the technological soil in which the seeds of economic growth will be planted, yielding profits and benefits that will in turn enrich both individuals and society at large. The cities at the forefront of smart city transformation will therefore reap the largest benefits from this explosive, and in many cases much-needed, growth.

For example, a unique and innovative display of economic development using smart technology is taking place right now in South Korea. A major grocery retailer wanted to expand its business without opening additional physical locations. The answer proved to be "virtual shelves" in the city's subway stations. Wall-length billboards display goods for sale, complete with images and prices, allowing customers to scan QR codes to order, pay, and arrange delivery within a day. This makes good use of commuters' time in the stations, and expands the retailer's business without the expense of a building, rent, utilities, maintenance, staff, and all the other requirements of a physical location. The result is that this retailer has reached the number one position in the online market, and the number two position in terms of brick-and-mortar stores.

Beyond these advantages, an area in which smart cities can actually save lives, and one that is top of mind around the world right now, is helping communities deal with natural disasters before, during, and after the event. Sensors can continually monitor air and water quality, weather and seismic activity, and even elevated radiation levels, providing critical early warnings of impending disasters and dispersing that information to residents via smartphone apps. Once an event occurs, smart data can be used to provide much-needed safety information. During Hurricane Harvey, for example, data collected via connected systems gave residents real-time information about rising water levels from county flood gauges, and helped identify passable evacuation routes, available shelters, food banks, assistance, and more. Drones can be, and are being, used to survey damage and aid in recovery efforts, reducing the risk for human crews. And this is clearly just the tip of the iceberg when it comes to the ways "smart" technology will be able to aid the human response to natural disasters.

Of course, these are only a few of the ways in which smart technology can benefit communities. Every city and county has its own needs, especially in the early planning stages of digital transformation. What's important to remember, however, is that smart cities aren't coming; they're already here, and the earliest adopters of this technology will be the ones to reap the greatest benefits from it. Those that delay, or reject the smart cities model altogether, will quickly find themselves woefully behind the curve, unable to compete with the communities that showed more foresight in these early days. Customers and residents are demanding ever more bandwidth to fuel their connected lives, and the communities that can provide these services seamlessly and easily will win the lion's share of business and revenue. It's never too early to start thinking about smart city transformation, so what are you waiting for?

WHAT IS A “SMART CITY?” PART 1

Unless you've been living in a bunker deep underground for the last ten years, you've no doubt heard talk about "smart cities." Everyone's talking about them, and a few truly forward-thinking cities around the world are making them happen. But what exactly is a "smart city," and what does it mean to you?

The short answer is that the smart city concept is the logical and foreseeable outcome of a world in which connectivity has become an integral part of our daily lives. In a smart city, things like utilities, transportation, education, housing, and more are all connected via sensors that provide data in order to improve the quality of life of the city’s residents. Civic leaders use this data to make better, “smarter” decisions for the way the city operates and interacts with its citizens. It’s a way to make infrastructure more efficient, to make government more transparent, and to make day-to-day interactions with technology smoother.

The best smart city improvements are based on a people-centric model, in which technology is merely a tool that improves the lives of those it touches by solving problems that might otherwise be insurmountable. Imagine a "smart" parking lot that can alert you to an available parking space via an app on your phone, reducing or eliminating your time driving around hopelessly looking for one. Or how about a smart communications system for emergency personnel, able to assess a situation holistically, summon the appropriate responders, identify and notify the nearest hospital with the appropriate treatment facilities, and even turn traffic lights green as needed for the ambulance en route, thereby decreasing response time significantly.

These aren't simply concepts found in science-fiction novels, but initiatives actually in place today in smart cities around the world. By making use of data collected from a variety of sources in an intelligently connected infrastructure, and parsing that data in useful ways, these smart applications can improve the quality, performance and efficiency of everything from major water utilities to individual home appliances. Europe and Asia have been taking these steps forward for some time, and America is now catching up in cities like New York, Boston, San Francisco, and even Wichita.

From a municipal perspective, smart technology is being used to streamline city-provided services, and to oversee and regulate services provided by outside organizations in order to minimize frustration and dissatisfaction and to maximize economic growth and development. In Amsterdam, for example, the city has installed “smart” garbage bins, so that trash is collected only when the bin is full, thus making garbage collection more efficient and less costly.

There’s even more to know about smart cities, and we’ll cover that in “What is a ‘Smart City?’” Part 2.

C-RAN Mobile Architecture Migration: Fujitsu’s Smart xHaul is an Efficient Solution

To address increasing bandwidth demands, service providers are adapting their mobile network architectures, deploying C-RAN to improve performance and reduce costs.

The move to C-RAN architectures has driven increased deployment of optical mobile fronthaul solutions that deliver low-latency, high-bandwidth connectivity between remote radio heads and baseband unit electronics. Service providers recognize the need to reduce their mobile networking costs by better aligning the total electronics capacity of their networks with actual network utilization at any given time. By consolidating baseband electronics into a centralized pool that multiple radios or remote radio heads can share, they can drive down capital costs and eliminate underutilized capacity. Centralized baseband units also enable easier handoffs and dynamic RF decisions based on input from a combined set of radios.

As service providers deploy C-RAN architectures, they face significant challenges and decisions, particularly the selection of their mobile fronthaul solution. The CPRI protocol is extremely latency sensitive, which results in a latency link budget that limits the distance between remote radio heads and baseband units to less than 20 km. The mobile fronthaul transmission equipment must minimize its latency contribution, or this distance becomes even shorter. CPRI signaling is also highly inefficient, consuming as much as 16 times the transmission bandwidth of the actual data rate seen by mobile applications.
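The 16x figure can be reproduced with a back-of-the-envelope calculation. The sketch below uses commonly cited CPRI parameters for a single 20 MHz LTE carrier with 2x2 MIMO; treat the values as illustrative assumptions rather than a description of any specific deployment.

```python
# Back-of-envelope illustration of CPRI bandwidth expansion for one
# 20 MHz LTE carrier with 2x2 MIMO. Parameter values are the commonly
# cited ones for this configuration, used here only to show the arithmetic.

sample_rate_msps = 30.72     # I/Q sample rate for a 20 MHz LTE carrier
bits_per_sample  = 15        # bits per I and per Q sample
iq               = 2         # I + Q
antennas         = 2         # 2x2 MIMO -> two antenna-carrier streams
control_overhead = 16 / 15   # one CPRI control word per 15 data words
line_coding      = 10 / 8    # 8b/10b line coding

cpri_rate_mbps = (sample_rate_msps * bits_per_sample * iq
                  * antennas * control_overhead * line_coding)

peak_user_rate_mbps = 150    # approximate LTE peak throughput, 20 MHz 2x2

print(f"CPRI line rate : {cpri_rate_mbps:.1f} Mbps")                     # ~2457.6 Mbps
print(f"Expansion      : {cpri_rate_mbps / peak_user_rate_mbps:.1f}x")   # ~16x
```

Multiply that by three sectors and multiple frequency bands per site, and the fronthaul transport burden adds up quickly.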

To determine which solutions best meet these requirements, ACG Research analyzed the total cost of ownership and compared the economics of P2P dedicated dark fiber with that of active DWDM solutions like Fujitsu's Smart xHaul. We analyzed the operational expense of the Smart xHaul solution over five years and compared it to competing mobile fronthaul alternatives. The analyses focused on the deployment of 150 macro cell sites, each supporting three frequency bands and three sectors. We also considered deployment of five small cells per macro cell site, for a total of 750 small cell deployments.

The results demonstrate that although the capital expense of deploying a DWDM solution such as Smart xHaul is several times greater than the capex of P2P dark fiber, the reduction in fibers due to signal multiplexing, together with advanced service assurance capabilities, delivers 66% lower opex and 30% TCO savings. Against competing DWDM solutions, we also find that the advanced functions of the Smart xHaul solution deliver 60% lower opex associated with detecting, root-causing, and resolving field issues.

In addition, industry-leading features in the Smart xHaul solution provide the ability to distinguish between optical transport and radio service impairments, which are identified by inspecting the actual CPRI packet frames. When combined with the other performance monitoring and service assurance capabilities, CPRI frame inspection results in rapid issue identification, assignment and resolution.

Click to download the paper and read how, in contrast with a dedicated dark fiber solution, the Smart xHaul solution is flexible and supports multiple network architectures.

Click for the HotSeat video in which Tim Doiron, ACG Research analyst, and Joe Mocerino, Fujitsu principal solutions architect, discuss the Smart xHaul solution and C-RAN mobile architecture migration.

Co-Creation is the Secret Sauce for Broadband Project Planning

Let’s face it—meeting rooms are boring. Usually bland, typically disheveled, and littered with odd remnants of past battles, today’s conference room is often where positive energy goes to die.

So we decided to redesign one of ours and rename it the Co-Creation Room, complete with wall-to-wall, floor-to-ceiling whiteboards. Sure, it's just a small room, but I have noticed something: it is one of the busiest conference rooms we have. It's packed. All the time. People come together willingly, agreeing upfront to enter a crucible of co-creation, where ideas are democratized and the conversation advances past the reductive ("ok, so what do we do?") to the expansive ("hey, what are the possibilities?").

This theme of co-creation takes center stage when we work with customers on their broadband network projects. These projects involve an incredibly diverse mix of participants, aspirations, challenges, and constraints, which really brings home the necessity and power of co-creation.

Planning, funding, and designing wireline and wireless broadband networks is a matter of bringing together multiple stakeholders with varied perspectives and fields of expertise, and negotiating complex rules of engagement, all while planning and executing a challenging multi-variable task. Success demands a blend of expertise, resources and political will, meaning the motivation to carry initiatives forward with enough momentum to survive changes of leadership and priorities.

Prospective customers often seek to start by bolstering their in-house expertise with a project feasibility study. A good feasibility vendor should have knowledge of multi-vendor planning, engineering design, project and vendor management, supply chain logistics, attracting funds or investment, business modeling, and ongoing network maintenance and operations, to ensure a thorough study. Look for someone with experience across many technologies and vendors, not just one.

As a Network Integrator, we bring all the pieces together. But we do more than just get the ingredients into the kitchen. Our job is to make a complete meal. By democratizing creation, we like to expand the conversation—and broker the kind of communication that gets diverse people working together productively.

The integration partner has to simultaneously understand both the customer's big picture and the nitty-gritty details. Our priority is to minimize project risk and drive things forward effectively. Many times, we have to do the Rosetta Stone trick and broker mutual understanding among groups with different professional cultures, viewpoints, and language. We take that new shared understanding and harness it to co-create the best possible project outcome.

On a recent municipal broadband project, for example, we learned that city staff and network engineers don't speak the same language. A network engineer isn't familiar with the ins and outs of water systems, and a city public works director doesn't know about provisioning network equipment. But by building a trusted partner relationship, we helped create the shared understanding that was needed. With that new shared understanding, we realized we had redefined what Co-Creation really means to us.

So, when you come to Fujitsu, you will see the Co-Creation Room along with this room-sized decal:

Co-Creation: Where everyone gets to hold the pen.

The Surprising Benefits of Uncoupling Aggregation from Transponding

Data Center Interconnect (DCI) traffic comprises varying mixes of 10G and 100G services. In a typical application, DWDM is used to maximize the amount of traffic that can be carried on a single fiber.

Virtually all available products for this function combine aggregation and transponding into a single platform; they aggregate multiple 10G services into a single 100G and then transpond that 100G onto a lambda for multiplexing alongside other lambdas onto a single fiber. Decoupling aggregation and transponding into two different platforms is a new approach. At Fujitsu, this approach consists of a 10GbE to 100G Layer 1 aggregation device—the 1FINITY T400— and a separate 100GbE to 200G transponder—the 1FINITY T100— that serve the two halves of the formerly combined aggregation-transponding function. This decoupled configuration is unique to these 1FINITY platforms, and it offers unique advantages.

Paradoxically, at first glance, this type of “two-box” solution may seem less desirable. But there are several advantages to decoupling aggregation from transponding—particularly in DCI applications. Here’s a quick rundown of the benefits. As you’ll see, they’re similar to the overall benefits of the new disaggregated, blade-centric approach to data center interconnect architecture.

Efficient use of rack space: Physical separation of aggregation and transponding splits a single larger unit into two smaller ones: a dedicated transponder and a dedicated aggregator. As a result, the overall capacity of existing racks is increased, and as an added benefit it is easier to find space for individual units and use up scattered empty 1RU slots, which helps make the fullest possible use of costly physical facilities.

Reducing "stranded" bandwidth: Many suppliers use QSFP+ transponders, which offer programmable 40G or 100G. Bandwidth can be wasted when aggregating 10G services because 40 is not a factor of 100, which forces deployment in multiples of 200G to make the numbers work out; this frequently results in "over-buying" significant unneeded capacity (the sketch following this rundown of benefits illustrates the arithmetic). The 1FINITY T400 aggregator deploys in chunks of 100G, which keeps stranded bandwidth to a minimum by reducing the over-buy factor.

Simplified operations: Operational simplification comes from two sources. First, when upgrading the transponder, you simply change it out without affecting the aggregator. Second, with aggregation decoupled from the transponder, changes such as adjusting the mix of 10G/100G clients involve disconnecting and reconnecting fewer fibers and require fewer re-provisioning commands. Line-side rate changes to the mix of 10G and 100G services involve roughly 60% of the operational activities required on competing platforms; client-side rate changes involve 25% fewer operational activities. Fewer activities mean fewer mistakes, less time per operation, and therefore less cost. Savings in this area mainly affect the expensive line side, which magnifies the cost reduction.
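The stranded-bandwidth arithmetic mentioned above can be shown with a short sketch. The demand figures are made up purely for illustration; the point is how rounding up to 200G multiples compares with 100G granularity.

```python
# Illustrative arithmetic only: how deployment granularity drives "over-buy".
# The demand figures are made up; the point is the rounding, not the numbers.

import math

def deployed_capacity(demand_gbps, chunk_gbps):
    """Capacity you must deploy when it only comes in whole chunks."""
    return math.ceil(demand_gbps / chunk_gbps) * chunk_gbps

for demand in (110, 250, 430):                  # hypothetical aggregate 10GbE demand
    buy_200 = deployed_capacity(demand, 200)    # 40G/100G mix -> 200G multiples
    buy_100 = deployed_capacity(demand, 100)    # 100G granularity
    print(f"demand {demand}G: deploy {buy_200}G vs {buy_100}G "
          f"(stranded {buy_200 - demand}G vs {buy_100 - demand}G)")
```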

Overall, by separating the aggregator and transponder, Fujitsu can offer data centers significant savings through better use of resources as well as simplification of operations and provisioning. Find out more by visiting the Fujitsu 1FINITY platform Web page.

Virtuora and YANG Models

By Kevin Dunsmore, with Rhonda Holloway

The Virtuora® Product Suite is a collection of software products that makes network management a breeze. A distinct advantage of the Virtuora software platform is its use of YANG models. These models are unique in that when someone tweaks a part of the model, the associated REST/RESTCONF interface is automatically regenerated upon recompiling. The new data becomes available via the API the moment recompiling is complete.

This ability is unique to Fujitsu. Other SDN platforms use YANG models, but not in the way Virtuora does. Some vendors have built their tools using Java and other programming languages. Whenever they want to change a driver, they must change their internal programming code and make the driver available via northbound APIs. This is extremely tedious and time-consuming, and there’s always the risk of “breaking” something if the code contains errors. On top of this, special code is typically required to “activate” and “delete” nodes, compounding the issue. As a result, many customers complain of long lags in getting new or enhanced support for SDN platforms.

Virtuora fixes this time-lag problem through its implementation of YANG models. Here you can simply add or change a data element, recompile the model, and the new information instantly becomes available via REST. There's no pulling apart code written in Java or another programming language to add or change anything. Combined with OpenDaylight, CRUD (Create, Read, Update, and Delete) operations are handled in one swift transaction. What takes another platform six months to do, Virtuora can do in one.
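As a hedged illustration of that model-to-API mapping, the sketch below shows how a newly added data element becomes directly addressable over RESTCONF, with the CRUD operations mapping onto plain HTTP verbs. The module name, leaf, host and credentials are hypothetical placeholders, not actual Virtuora resources; the URI and payload conventions follow RESTCONF (RFC 8040).

```python
# Hedged illustration of the model-to-API mapping: the module ("example-node"),
# leaf ("description"), host and credentials are hypothetical placeholders;
# the URI and payload conventions follow RESTCONF (RFC 8040).

import requests

BASE = "https://sdn-controller.example.net/restconf/data"
AUTH = ("admin", "admin")
HEADERS = {"Content-Type": "application/yang-data+json",
           "Accept": "application/yang-data+json"}

# Suppose the YANG model gains a new leaf: container node { leaf description {...} }
# After recompiling, that element is addressable directly as a resource:
url = f"{BASE}/example-node:node/description"

# CRUD maps onto plain HTTP verbs -- no hand-written driver code required.
requests.put(url, json={"example-node:description": "core ring east"},
             headers=HEADERS, auth=AUTH, verify=False)                      # create/update
print(requests.get(url, headers=HEADERS, auth=AUTH, verify=False).json())   # read
requests.delete(url, auth=AUTH, verify=False)                               # delete
```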

Think of YANG as your car's gasoline. The controller is the engine, providing the power for the entire car to run. Applications are the steering wheel, giving users the control to drive Virtuora in the direction they please. YANG is the gasoline that ties the process together, enabling the controller and applications to run together without ever getting out of sync. A small change to the steering wheel, or a modified engine part, won't affect the car's ability to drive, because the gasoline will continue to adjust to the changes and keep the car running.

For a good example of how Fujitsu implements YANG models into our products, look at 1FINITY. Each 1FINITY blade has a YANG model, making it easy to include provisioning and management in a network-wide element management function. With YANG already working so well in our 1FINITY solution, we’re excited to include it in Virtuora.

The relationships between different models will need to be maintained. Luckily, Fujitsu has software support contracts that handle any changes made to the models. The underlying platform (OpenDaylight and, eventually, ONOS) handles "activate" and "delete" operations for us. Finally, Fujitsu is in discussions to develop a Software Development Kit (SDK) that would automatically ensure a change in one model is reflected in others.

At Fujitsu, we’re working hard to ensure that our customers have a smooth and productive experience using the Virtuora Product Suite. Our Services Support team is dedicated to working with each customer and handling all changes that need to be made. Our goal is to make the implementation process as quick and painless as possible. Thanks to our use of YANG models, we can make that happen.

The New Network Normal: Service-Oriented, Not Infrastructure-Oriented

Mobile broadband connections will account for almost 70% of the global base by 2020. The new types of services those customers consume will drive a tenfold increase in data traffic by 2019. At this rate, most of the world will be mobile, with "mobile" expectations. The "cloud" has become synonymous with mobility and is increasingly matching customers with new products and services. More customers are coming, more services are coming, and more types of services are coming. More, more, more.

Carrier networks must embrace a new normal to support and drive this digital revolution. Unlike the static operating models of the past, a new dynamic system is emerging, and it's not about the network at all. It's about the applications that deliver services to paying customers, wherever they are, however they want them. This kind of dynamic network requires intelligence, extreme flexibility, modularity, and scalability. The new normal means creating innovative, differentiated services and combining these with the kind of intensely integrated, highly personalized relationships that enable services to be delivered and billed on demand.

To be competitive in the new application economy, service providers need to dedicate more budget and resources to service innovation. However, multi-layer/multi-vendor network design necessitates that the lion’s share of any service provider’s budget goes to the network itself. At Fujitsu, we are changing that: we are working with our customers to architect an entirely new system: disaggregated, flattened, and virtual. And it doesn’t require a “scorched earth re-write” or “rip and replace” investment.

The new network normal means a new way of doing business for service providers, and it requires a different way of operating. In the old business model, service providers functioned like vending machine companies. A vending machine offered a pre-set lineup of snack products and a single way to pay: your pocket change. Only field technicians could fill vending machines, only field technicians could fix broken machines, and only field technicians could deliver new vending machines to new locations. An entirely different staff collected the money and handled banking. Vending machine companies were forced to wait weeks, or even months, to receive payment for sold goods.

Vending machines in remote areas might not get serviced as often as those in population-dense areas. Technicians didn't know which products were the most popular, but they knew which were the least! Plenty of people had dollar bills in their wallets but no loose change. If the machine was out of stock, customers had to find another.

Companies lost sales because of the limitations of this infrastructure— not because there were no willing customers.

Vending machine companies developed new ways to accept payment, re-negotiated partnerships and delivery routes to refill popular product lines more often, and reorganized the labor force into groups who could fill and service machines simultaneously. In spite of these optimization tactics, much like service providers, vending machine companies were still ultimately reliant on physical devices and physical infrastructure to deliver a static line of products. Otherwise happy customers were required to seek other vendors when their needs were unfulfilled.

But unlike vending machine companies, service providers are not always selling a physical product. Service providers can re-package their products virtually— and it starts with virtualization of the network itself. Applying standard IT virtualization technologies to the service provider network allows administrators to shed the expense and constraints of single-purpose, hardware-based appliances.

Rolling out new services over traditional hardware-based network infrastructure used to take service providers months or even years. Many time-consuming steps were required: service design, integration, testing, and provisioning. Virtualization streamlines these steps and addresses a wide range of other use cases besides.

Software-defined networking, combined with network function virtualization, creates a single resource to manage and traverse an abstracted and unified fabric. As a result, application developers and network operators don’t have to worry about network connections; the intelligent network does that for them. Imagine seamlessly connecting applications and delivering new services, automatically, at the will of the end user. Virtualization provides this new normal: best-of-breed components that are intelligent, optimized end-to-end, fully utilized, and much less expensive. Budget previously dedicated to network infrastructure can now be released to support new applications and services for whole new categories of customers.

Thanks to readily available data analytics on trending customer behavior, network operators will know exactly which products their customers are willing to buy and what they're looking for, and they'll be able to deliver them individually or as part of value-package offerings far beyond the current range of choices. Remote areas can get the same services and level of customer support that population-dense areas enjoy. Payment will be possible on demand or by subscription. Premium convenience services will offer new flexibility for customers, and new revenue streams for providers.

Service providers will be able to differentiate their offerings beyond physical product attributes such as bandwidth, SLAs and price points. Their enterprise customers will get better tools, on-demand provisioning, and tight integration between the carrier network, enterprise network, and cloud builders. Service providers' business customers will get on-demand services and always-on mobile connectivity. Other customers will get bundled services or high-bandwidth mobile connectivity only.

Not like a vending machine at all. Even the new ones that accept credit cards. Welcome to the new normal.