Network Slicing Made Simple

To deliver on the promise of 5G, this next-generation technology will carry multiple new service streams, virtualized over a common infrastructure. Given the wide range of 5G use cases, these services will have diverse performance requirements, which makes delivering them efficiently a challenge. To overcome these challenges, tomorrow’s networks will rely on network slicing.

The 5G radio network consists of three distinct elements as defined by the Third Generation Partnership Project (3GPP): the radio unit (RU), distributed unit (DU) and central unit (CU). In the 5G New Radio (5G NR), multiple RUs hand off data to the DU. Network slicing begins within the DU by identifying specific services and allocating virtualized, isolated resources. The transport network interoperates with the DU and CU for dynamic service delivery and resource allocation, using multiprotocol label switching (MPLS) segment routing to establish resources dynamically.
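
As a rough illustration of how a slice-aware element might map identified services onto isolated resources, consider the Python sketch below. The slice/service type (SST) values for eMBB, URLLC and mMTC are standard 3GPP identifiers carried in the S-NSSAI; the resource figures and the allocation helper are hypothetical.

```python
from dataclasses import dataclass

# 3GPP-standardized Slice/Service Types (SST) carried in the S-NSSAI
SST_EMBB, SST_URLLC, SST_MMTC = 1, 2, 3

@dataclass
class SliceProfile:
    """Hypothetical per-slice resource reservation."""
    bandwidth_mbps: int    # guaranteed transport bandwidth
    max_latency_ms: float  # one-way latency budget
    isolated: bool         # dedicated (vs. shared) resources

# Illustrative profiles only; real values would come from the SLA.
PROFILES = {
    SST_EMBB:  SliceProfile(bandwidth_mbps=1000, max_latency_ms=20.0, isolated=False),
    SST_URLLC: SliceProfile(bandwidth_mbps=100,  max_latency_ms=1.0,  isolated=True),
    SST_MMTC:  SliceProfile(bandwidth_mbps=10,   max_latency_ms=50.0, isolated=False),
}

def allocate_slice(sst: int) -> SliceProfile:
    """Map an identified service type to a virtualized, isolated resource set."""
    return PROFILES[sst]

print(allocate_slice(SST_URLLC))
```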

There is, however, a simpler and more cost-effective way of engineering and maintaining the MPLS segment routing elements: physically separating the control and user planes through disaggregation, and operating the control plane in the cloud. Contrasting the cloud control plane with a traditional router illustrates the benefits of this approach.

A traditional router platform consists of an integrated control and user plane, in the form of a chassis and plug-in cards. These chassis come in multiple sizes based on the performance and capacity supported. Regardless of size, each chassis integrates the control and user planes, so scaling is bounded by that fixed dimension: the platform can only scale up to the chassis limit. This means that, from Day One, the platform will typically run at only 20 to 30 percent capacity, yet must still reserve the footprint, power and thermal allocation of full loading. This is a very inefficient use of CAPEX. Furthermore, each deployment site runs the risk of under- or over-engineering its capacity. An under-dimensioned site loses revenue through unfulfilled demand, while an over-engineered site wastes capital.

Control Capacity in the Cloud

Alternatively, the disaggregated approach consists of a programmable, purpose-built blade forming the MPLS-segment routing common infrastructure, and a decoupled virtual control plane in the cloud. When a new service is required, a virtual routing instance is generated in the control plane and provisioned throughout the virtual network, including resilient alternate pathways, end-to-end, based on the service level agreement (SLA).

Once calculated for the virtual network, the programming is pushed down into the common infrastructure. These cloud micro-services offer true protocol isolation per virtual router instance: each protocol runs in its own container, and the containers are brought together as one virtual router application instance. Multiple virtual router instances with full isolation can share the same network element hardware, offering very CAPEX-efficient scaling. We refer to this as a scale-out approach via linear resource scaling, resulting in better infrastructure utilization than traditional routers.
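
A minimal sketch of this isolation model, using hypothetical `ProtocolContainer`, `VirtualRouter` and `NetworkElement` abstractions: each protocol runs in its own container, and multiple fully isolated virtual router instances share one network element, so capacity scales out linearly.

```python
from dataclasses import dataclass, field

@dataclass
class ProtocolContainer:
    """One routing protocol running in its own container (hypothetical model)."""
    name: str         # e.g. "bgp", "isis", "ldp"
    cpu_cores: float  # resources reserved for this container alone

@dataclass
class VirtualRouter:
    """One virtual router instance composed of per-protocol containers."""
    instance_id: str
    protocols: list[ProtocolContainer] = field(default_factory=list)

@dataclass
class NetworkElement:
    """Shared common infrastructure hosting many isolated instances (scale-out)."""
    routers: list[VirtualRouter] = field(default_factory=list)

    def add_service(self, instance_id: str) -> VirtualRouter:
        # Scaling out is linear: add another isolated instance, no new chassis.
        vr = VirtualRouter(instance_id, [ProtocolContainer("bgp", 0.5),
                                         ProtocolContainer("isis", 0.25)])
        self.routers.append(vr)
        return vr

element = NetworkElement()
element.add_service("customer-a")
element.add_service("customer-b")
print(f"{len(element.routers)} isolated virtual routers on one element")
```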

Applying the cloud control plane approach to network slicing based on upcoming 5G services offers simplified operations and capacity scaling using virtualization to dynamically allocate and provision services to customers. As services are provisioned, the virtual routing instances are provisioned end-to-end for each service and customer on a global basis, then pushed down to the programmable network elements running the user plane.

This simplified operation offers full resource guarantees with reduced operational complexity, speeding time to market and revenue while lowering the cost per bit through a capacity-efficient virtualized network. It allows the construction of one common infrastructure in which individual network elements are minimized and right-sized for capacity, supporting multiple virtual networks and enabling the many diverse service use cases needed to fully realize the potential of 5G.

A Domain Approach Could Simplify 5G Network Management

With the advent of 5G, a much more highly virtualized and dense mobile network infrastructure will place greater demands on management. 5G virtualization presents new challenges, both through individual components running as Virtual Network Functions (VNFs) and through network slicing, chiefly because these factors produce complex networks and, consequently, much more complex network management.

Work is underway among the industry groups charged with developing and ratifying standards for 5G implementation. However, the current visions for slice management run a high risk of making network management so complex that it will significantly impact 5G roll-out and flexibility. The burdens of complexity will likely drive service providers to avoid the problem by adopting single-vendor network solutions. This will impact openness, and reduced commitment to openness carries a high price.

But what if there were an approach that simplifies slice management and allows service providers to bring 5G quickly to market? Such an approach could base network management on a simple technology domain-based model, using standard interfaces per domain to address 5G management, then evolve this design after initial deployment.

Engineering principles tell us that the way to solve a complex problem is to break it down into simpler, smaller problems. For 5G, this means breaking the network into domains that can be managed individually but also linked to each other for capacity planning, service management, correlation and so on. A great deal of work is going into slice management for 5G, but it is also essential to think about the big picture and consider the entire approach for a fully manageable, easily implementable 5G network.

Figure 1: The 5G domains we expect to manage 

By breaking the problem down into domains, we can rely on each domain to understand the best way to provide resources for each 5G service class. These domains could also own the job of keeping service classes separate, providing each as a separate network slice. It would then be the job of multi-domain orchestration to manage the combined resources, provide the end-to-end network, and make it visible to the service layer.

The domains shown in Figure 1, and their interfaces, are as follows:

  • User Domain: The 5G user equipment, such as a smartphone, set-top box, PC, or IoT device. User equipment management standards will be part of the base specifications for 5G.
  • Virtualized Radio Access: Contains the remote radio, distributed unit and central unit, and where possible runs as VNFs on commercial off-the-shelf (COTS) compute, storage and inter-networking provided by the virtualized infrastructure domain. The O-RAN (Open Radio Access Network) Alliance is standardizing management interfaces for the 5G RAN as well as interworking interfaces in the network.
  • Transport Domain: The transport domain is potentially split into more segments than in 4G. It contains fronthaul, midhaul, and backhaul elements, and will typically be an Ethernet over optical infrastructure. This domain may contain a cloud control layer based on virtualized compute. Existing transport interfaces across the IP, Ethernet and optical layers are usable here, including the Transport API (TAPI), Metro Ethernet Forum Lifecycle Service Orchestration (MEF LSO), and TM Forum interfaces. Open-source tools like OpenDaylight will be relevant to building interoperable controllers in this domain.
  • 5G Core: The core 5G network functions, covering capabilities such as authentication, access and mobility management, and policy control. The 5G Core runs as VNFs on COTS infrastructure provided by the virtualized infrastructure domain. 5G Core domain functions will have management interfaces defined per function as part of the base 5G specifications.
  • 5G Services Domain: The 5G services domain understands the business logic and service class requirements for 5G services. Various standards and open source technologies may be applicable such as TM Forum and Open Network Automation Platform (ONAP), as well as work in the 5G Public Private Partnership (5G PPP) and other bodies.
  • Virtualized Infrastructure Domain: This domain includes the COTS infrastructure and the software stack for virtualization, including technologies and APIs from OpenStack, Kubernetes, and the Cloud Native Computing Foundation. Telecom-specific software such as ONAP and Open Source MANO (OSM) can be applied.

Figure 2: A domain management approach to the complex 5G infrastructure 

In the scenario represented by Figure 2, each domain understands how to deliver its own appropriate set of network slices. This set of slices is then brought together by the multidomain orchestration layer to deliver an end-to-end network. The service layer can then request an end-to-end network from the orchestration layer that specifies the service class required.

Clearly, there will be cases where one domain needs visibility into or control of an adjacent domain to provide the service level required. The multidomain orchestration layer could provide a dependency model that captures and enforces such dependencies between domains. Ultimately, some form of peer interworking between domains will be needed.
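
One way to picture the orchestration layer in this scenario is the hedged Python sketch below: each domain object knows how to build its own slice segment, and a hypothetical orchestrator stitches the per-domain segments into an end-to-end slice for a requested service class. The class names and segment format are illustrative, not any standard’s API.

```python
class Domain:
    """One managed technology domain (RAN, transport, core, ...)."""
    def __init__(self, name: str):
        self.name = name

    def build_segment(self, service_class: str) -> dict:
        # Each domain decides internally how to realize this service class.
        return {"domain": self.name, "service_class": service_class}

class MultiDomainOrchestrator:
    """Hypothetical orchestrator stitching per-domain slices end to end."""
    def __init__(self, domains: list):
        self.domains = domains

    def request_slice(self, service_class: str) -> list:
        # The service layer asks only for a service class; the orchestrator
        # collects one segment per domain to form the end-to-end network.
        return [d.build_segment(service_class) for d in self.domains]

orchestrator = MultiDomainOrchestrator(
    [Domain("virtualized-ran"), Domain("transport"), Domain("5g-core")])
for segment in orchestrator.request_slice("urllc"):
    print(segment)
```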

Looking at the long term, one desired goal may be to reduce the overall number of domains by combining management to get better capacity utilization and control over the infrastructure. However, separation allows for smoother initial roll-outs while retaining the openness desired by network operators. Another goal will be to begin to implement the full network slicing models envisioned by groups like 5G PPP and European Telecommunications Standards Institute (ETSI) as the 5G network matures.

The simplicity of a technology domain-based approach in early roll-outs of 5G will ensure that operators can mix and match technologies and avoid vendor lock-in, while still providing the services needed by customers and fulfilling the overall potential of the 5G network.

Networks and Vehicles Follow Similar Journey to Automation

Autonomous vehicles (let’s call them AVs) and Autonomous Networks (ANs) are road-mates; they’ve essentially traveled the same route in the quest for full automation. They share the overarching Holy Grail objective of zero-touch operation, undisturbed by human hand as they go about the full range of their respective operations.

The Society of Automotive Engineers (SAE) has defined a six-level taxonomy that classifies the level and type of automation capabilities in a given vehicle. This is summarized on Wikipedia’s Self-Driving Car page and illustrated in Figure 1.

Figure 1: SAE levels of vehicle automation

Both AVs and ANs have already arrived at their third level of automation, i.e., partial automation, where most of what they do is automated, but human supervision, monitoring, and even interaction are still needed. And just as AVs have relied upon an evolving set of building blocks over decades, ANs have also employed and built upon a number of tools along the way. Figure 2 illustrates this cumulative evolution.

Figure 2: Building blocks of network evolution

There are many examples of these building blocks in the network world. For instance, we have the availability and growing adoption of zero-touch provisioning (ZTP); YANG model-based open interfaces (NETCONF, REST APIs, gNMI/gNOI); gRPC-based deep streaming telemetry; and extensive, detailed logging and monitoring with streaming for rapid fault isolation and prediction.
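
As a concrete taste of the YANG/NETCONF building block, here is a minimal sketch using the widely used ncclient Python library to pull the running configuration from a network element. The host address and credentials are placeholders, not a real device.

```python
from ncclient import manager  # pip install ncclient

# Placeholder connection details; a real element exposes NETCONF on port 830.
with manager.connect(host="192.0.2.1", port=830,
                     username="admin", password="admin",
                     hostkey_verify=False) as session:
    # Retrieve the running configuration as a YANG-modeled XML document.
    config = session.get_config(source="running")
    print(config)
```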

Perhaps the most critical characteristic that AVs and ANs share is that in order for their potential to be fulfilled, diverse stakeholders need to come together and coordinate. In the AV world, massive efforts are underway at every level (governments, cities and towns, car companies, insurance companies, and technology vendors) to standardize and streamline end-to-end operations based on key principles of interoperation, openness and reliability.

For ANs, there is a similar and pressing need in the networking community for collaborative, coordinated development of an open, generic framework for a fully autonomous optical network, which could be used to set up reference use cases that can be extended to various network architectures. This framework should be driven by the primary requirement of zero human intervention in network operations after initial deployment, including configuration, monitoring, fault isolation, and fault resolution. The framework should leverage currently available tools and technologies for full-featured and automation-ready software, such as Fujitsu System Software version 2 (FSS2) for network element management, in conjunction with Fujitsu Virtuora®, an open network control solution for network element and network management.

Efforts to achieve autonomous networks and autonomous vehicles show strong similarities in both pace and trends. These similarities are driven by common objectives, primarily addressing scale and the needs of a growing number of applications while tackling the human-error element, and they are enabled by an intertwined, cross-dependent set of technology advancements and adaptations.

DCI Growth Planning and the Bandwidth Amplification Effect

As more and more traffic is driven into data centers, pressure builds in turn on the links between and among data centers. This phenomenon is known as the “bandwidth amplification effect”: when X amount of user traffic passes into a data center, it generates many times that amount of traffic within the data center and between that data center and others. This is why there is an urgent need for more data center interconnect (DCI) bandwidth and higher line rates to support these demands.
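
A back-of-the-envelope illustration of the amplification effect in Python; the 5x multiplier here is purely an assumed example, not a measured figure.

```python
# Assumed, illustrative multiplier: each unit of user traffic entering a
# data center spawns several units of east-west and inter-DC traffic.
AMPLIFICATION = 5.0  # hypothetical

user_ingress_gbps = 100
interconnect_gbps = user_ingress_gbps * AMPLIFICATION
print(f"{user_ingress_gbps} Gb/s of user traffic -> "
      f"~{interconnect_gbps:.0f} Gb/s of DCI demand")
```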

Operators have a couple of options for meeting DCI traffic demand: increase the fiber count, or increase the line data rate. Increasing the data rate is far more common and economical, and is accomplished with new bandwidth-variable transponders. Data rate increases may seem like the obvious remedy for boosting DCI bandwidth, but this option brings with it new issues and impairments along the optical path, which must be corrected via ROADMs and amplifiers. Although the modulation scheme is the most important aspect to consider when increasing DCI bandwidth, several other factors come into play, among them dispersion compensation, error correction, link distance (reach), amplification, channel width, and spectral tilt.
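
To make one of those factors concrete, the sketch below estimates accumulated chromatic dispersion, which grows linearly with distance at roughly 17 ps/(nm·km) on standard single-mode fiber near 1550 nm, and compares it against a transponder’s compensation tolerance. The tolerance figure is a hypothetical placeholder, not a product specification.

```python
# Standard single-mode fiber disperses ~17 ps/(nm.km) near 1550 nm;
# accumulated dispersion grows linearly with link distance.
D_PS_PER_NM_KM = 17.0

def accumulated_dispersion(link_km: float) -> float:
    return D_PS_PER_NM_KM * link_km

# Hypothetical electronic-compensation tolerance for a coherent transponder.
TRANSPONDER_TOLERANCE_PS_NM = 50_000

for link_km in (80, 2500, 4000):
    cd = accumulated_dispersion(link_km)
    status = "within" if cd <= TRANSPONDER_TOLERANCE_PS_NM else "beyond"
    print(f"{link_km:>5} km: {cd:,.0f} ps/nm ({status} tolerance)")
```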

My new article on Lightwave summarizes the challenges and technologies associated with growing DCI traffic through higher line rates, and discusses each of the most important factors to be considered when planning the best way forward. Moving to higher line rates for DCI is an effective and economical way to address continued DCI growth, but a variety of equipment upgrades and new techniques are needed to adequately address new optical impairments and achieve the benefits of higher line rates.

Diversity and Digital Transformation: How Fujitsu Uses Innovation to Improve Inclusion

Fujitsu Network Communications is one of the leading companies in America when it comes to networking technology and solutions. They haven’t just made a name for themselves in tech, however. They’ve also made a huge impact when it comes to diversity and inclusion in the workplace, focusing on initiatives that help balance the makeup of the workforce and ensure all types of people have access to opportunities for advancement and career growth.

Greater Diversity and Increased Inclusion Equals Better ROI

As a diversity and inclusion champion, Fujitsu values the opportunity to enable collaboration and participation from a wide range of people. By making a concerted effort to be inclusive and invite a diverse group into the process, they have become the leader in a digital transformation that is rapidly reshaping the technological landscape. Empowering more people to have input has resulted in solutions that serve a larger group of people and meet a wider range of needs across a larger spectrum of the population, thereby increasing ROI.

Fujitsu is Led By Champions of Diversity

Fujitsu doesn’t simply uphold a general philosophy of inclusion and diversity. They also place leaders at the helm of the company who make acceptance and diversity a priority in hiring, promotion, and all facets of corporate life.

One such leader is Director of ICP and North American Carrier Sales for Fujitsu, Heidi Westbrook. Recently interviewed at the Women in Comms conference in Austin, TX on May 14, 2018, Westbrook explained how Fujitsu’s open work culture doesn’t just impact the machines people work with and the processes they use, but also defines the makeup of the company and the opportunities to which people have access.

As a female leader in a traditionally male-dominated industry, Westbrook encourages women and those who feel in the minority to advocate for themselves at work. She feels that while self-advocacy may be a learned skill that does not come naturally, it can be encouraged by having a network of people around you who know your value and are familiar with the unique talents you have to contribute to the organization. Westbrook reminds women that while they absolutely face unique challenges in the workforce, those challenges can be used to make them stronger. For example, women who are mothers and have families at home, she says, can succeed at both, noting that when someone does well at home, they typically do well at work, and vice versa.

A final piece of advice from Westbrook: Have discussions about goals and remember what your talents are, without ever losing confidence in those talents. Women should remember that they are delivering value to the organization because of what they bring to the table, and keep focused on their strong points, no matter what obstacles they face.

Watch the entire interview with Westbrook here: https://www.lightreading.com/business-employment/women-in-comms/fujitsus-sales-director-be-your-own-champion/v/d-id/743119

At Fujitsu, company leaders understand that diversity spurs innovation and leads to more successful digital transformation. By focusing on diversifying the workforce and opening up opportunities to people of all races, genders, backgrounds, and more, the company does important work to create a more open and welcoming world.

Fujitsu Honors Local Teachers and Students for STEM Accomplishments

STEM education (science, technology, engineering and math) is an interdisciplinary approach in which students learn science- and math-centric fields via hands-on lessons. STEM has become a priority in American schools thanks in part to a critical need for people with the knowledge that STEM education teaches.

Studies show that 80 percent of jobs in the next decade will require a STEM skill. Unfortunately, the U.S. is lagging behind the rest of the world when it comes to teaching these critical skills. In fact, America ranks 29th in math and 22nd in science skills, and only 16 percent of American high school seniors who are proficient in math are actually interested in STEM careers.

Because of this lag in STEM skills and enthusiasm, Fujitsu Network Communications recognizes the importance of encouraging schools and educators to promote STEM, showing kids both how essential and how fulfilling STEM educational experiences can be.

As one of the world’s leading ICT companies, Fujitsu is well aware of how science and technology education can impact the future of the world. We understand that in order to inspire students, STEM skills need to be championed at every stage of learning: by parents, teachers, schools, mentors, non-profits and businesses alike. In 2010, we established the Fujitsu Teacher Trailblazer Award, an honor given to Richardson Independent School District (RISD) K-6 teachers who successfully integrate creative, innovative uses of technology as part of the instruction process. In 2018, we also began awarding Fujitsu STEM scholarships to graduating seniors from RISD and local alternative schools who plan to enter a two- or four-year college or university to major in a STEM field.

To qualify for the Fujitsu Teacher Trailblazer Award (which comes with a $5,000 prize), an RISD teacher must implement technology as part of the instruction process. They must also use innovative questioning and inquiry techniques to challenge students and harness instructional strategies to actively engage students in the learning process. The Fujitsu trailblazing teacher doesn’t just excel in the classroom, but also seeks out and engages in professional development activities.

This year’s Trailblazer winners are Audrey Leppke, a first-grade teacher at Math Science Technology Magnet School; and Sarah Beasley, a third-grade teacher at Lake Highlands Elementary School. Each received a $5,000 personal award in recognition of her great efforts.

In order to qualify for one of two $5,000 Fujitsu STEM scholarships, an RISD high school senior must have a minimum GPA of 3.0 and have taken four years of science, technology, engineering and/or math classes with Bs or better. They must also be planning to enter a two- or four-year college or university to major in a STEM field. This year’s scholarship recipients are Adam Gallo and Joshua Harris.

STEM education is a growing priority in America, and Fujitsu has dedicated itself to furthering the cause in the area around its headquarters. By encouraging teachers and students to delve in and learn more about how they can benefit from a deeper knowledge of science, math, and technology, we are helping create a larger group of career-ready people who will soon enter the workforce, and spreading the value of STEM subjects to younger generations.

Integrated Laboratory Testing – An Investment that Pays off for Rail Operators

The traditional approach that rail operators have taken to their communications networks is changing to support new IP video, voice and data applications, as well as improved mobile connectivity and stronger cybersecurity. The advent of the flexible converged network is bringing new challenges, one of which is to turn up the heat on pre-deployment testing. Factory Acceptance Testing (FAT) is no longer enough because, while it adequately covers issues relating to individual components, FAT falls short when it comes to identifying issues that arise when multiple system components come together in a fully integrated system.

The answer is to bring system components together and put them through their paces in a controlled laboratory environment before live deployment. For want of a better name, this approach is known as Integrated FAT (IFAT). But setting up a fully capable laboratory and hiring the necessary experts requires significant upfront investment. It’s easy to assume that this level of expenditure won’t pay off, but in fact it’s more than justified once the longer-term cost savings are taken into account.

The simple reason is that integrated testing improves reliability and drastically reduces network downtime, and every minute of downtime is expensive. That’s all there is to it. Discovering and correcting issues before committing to live traffic is far less costly and disruptive than troubleshooting and correction under the pressures of daily operation. Many organizations have no grasp of the costly ripple effects that network downtime has on their business: lost revenue, lost information, damaged reputations and lost customers.

Leaving aside the rewards in terms of reduced downtime, a laboratory outfitted for IFAT brings with it other valuable benefits. Improved cybersecurity is just one of these. Networks are becoming more enmeshed with IT systems, making them more vulnerable to cyber-attack, to the degree that cybersecurity has become a critical issue. For instance, according to the Ponemon Institute study “2017 Cost of Cyber Crime,” the average annualized cost of cybercrime for the transportation industry was $7.36M.

Change control is another area in which lab-based IFAT delivers benefits, in terms of improved reliability and network service quality. Changes equal risk because every change has the potential for unforeseen side effects. For example, imagine you bring up a new circuit between two communication centers and find that application traffic is unexpectedly following an asymmetrical path. Traffic goes out from Comms Center A to Comms Center B on the old circuit, but it comes back on the new one. This is a fairly common scenario—but now there’s a decision to make: Do you try to fix the issue, or back out your change and wait until next month to bring the new circuit into production? What does the change control procedure say? Is there a change control procedure? Will this asymmetrical routing situation even pose a problem?

This is a lot of information for an operations tech to process quickly, especially one who most likely does not have a full view of the big picture, and who is running on pizza, Cokes, day-old coffee, and minimal sleep. It is not rare for an engineer to make a small change to fix a routing issue, only to cause a major failure. Having a lab facility in which to duplicate problems, isolate them, make corrections, and develop methods of procedure not only eliminates this risk, but gives your engineers confidence that when they return to the field, everything will go as planned.
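
As a toy illustration of the kind of check a lab-developed method of procedure might codify, the Python sketch below flags the asymmetry described above, given hop lists collected from traceroutes in each direction. The hop names are invented for the example.

```python
def is_symmetric(forward_hops: list, return_hops: list) -> bool:
    """A path is symmetric when the return path is the forward path reversed."""
    return forward_hops == list(reversed(return_hops))

# Invented hop names for illustration: traffic goes out on the old circuit
# but comes back on the new one.
forward = ["comms-a", "old-circuit", "comms-b"]
back    = ["comms-b", "new-circuit", "comms-a"]

if not is_symmetric(forward, back):
    print("Asymmetrical path detected: review change control before proceeding")
```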

Additional valuable benefits of IFAT derive from making full use of the facility as a permanent fixture for ongoing upgrade testing (hardware/software), proofs of concept, staff training, and trouble simulations or disaster recovery drills.

While the transportation industry stands to benefit immensely from advanced networks that can support improved passenger comfort, better real-time communication and higher safety standards, the industry needs to go beyond testing components in isolation from one another and embrace deeper, more comprehensive integrated testing in laboratory environments. IFAT offers the best chance of achieving a successful and predictable outcome that avoids costly redesign and troubleshooting during outages.

What the NFL Tells Us About DCI

Data Center Interconnect has historically been driven by the pressure of simple demand: the kind of demand that’s satisfied by big, fast, dumb point-to-point pipes. But the value and potential of “big and fast” are held in check by “dumb.” It’s like football; bigger and faster will only take you so far in the National Football League (NFL). As game plans get more complicated, players are expected to think strategically about the other team’s offense or defense. Similarly, DCI is also getting more complicated as the pressure builds—and those big, fast pipes must ditch the dumb and get smart.

Data centers already have requirements in place for encryption, streaming telemetry and LLDP, all of which mean adding intelligence. Flex-grid, mixed modulation schemes, a growing mix of baud rates, and multiple FEC options (not to mention mesh connectivity in the planning arena) also demand more “brains” to match the brawn. The challenging task of selecting the optimal modulation, baud, grid and FEC is impossible unless the intelligence is there.
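
To make the “brains” concrete, here is a hedged sketch of the kind of selection logic an intelligent DCI layer might apply: pick the highest-capacity candidate whose estimated reach still covers the link. The candidate table is illustrative, not vendor data; real figures depend on fiber, amplification, FEC gain and design margins.

```python
# Illustrative (modulation, Gb/s per carrier, approximate reach in km) tuples,
# ordered from highest to lowest capacity.
CANDIDATES = [
    ("16QAM", 200, 800),
    ("8QAM",  150, 2000),
    ("QPSK",  100, 4500),
]

def pick_line_config(link_km: float):
    """Choose the highest-capacity option whose reach covers the link."""
    for modulation, capacity_gbps, reach_km in CANDIDATES:
        if link_km <= reach_km:
            return modulation, capacity_gbps
    raise ValueError("No candidate reaches this distance; add regeneration")

print(pick_line_config(600))   # -> ('16QAM', 200)
print(pick_line_config(3000))  # -> ('QPSK', 100)
```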

Variable and unpredictable traffic loads add another layer of complexity; business and the internet are inherently chaotic. The historical trend of “designing for the worst case” (AKA “busy-hour design”) is no longer economical. Data centers need capabilities to handle changing workloads gracefully and efficiently without overbuilding. These trends have significant positive implications for DCI; the agility and intelligence needed to meet dynamic workloads will improve the operational efficiency of the whole network. Put simply: bigger, faster, smarter pipes in DCI are just like NFL players who are also strategic thinkers. In both cases, add brains to brawn and the game is on.

Open Networks, Open for Business

For the ICT industry, this nascent era of business models based on cloud computing and OTT content is characterized by a heady brew of innovation, change and growth. Open networking offers service providers a route to much-needed rapid service deployment, agile innovation, and leaner spending. For these reasons, the industry is pushing for open-source standards and transport equipment vendors are capitalizing on this new thinking. Migration is underway from traditional proprietary converged platforms to more modular/single use-case form-factors and functionality.

What is an Open Optical Network?

You might ask, what are the key features of an open optical network? Essentially it boils down to networks operating on an industry-agreed common, multivendor foundation. This includes the ability to have open software and open line systems that comply with open standards for interoperability. In sum, this means a mix-and-match multivendor network environment where all the parts “speak” a common language of control and data exchange.

Open Hardware

Optical networking hardware, such as Reconfigurable Add Drop Multiplexers (ROADMs) and transponders, is evolving in terms of form factor, functionality, and functional disaggregation. Equipment is changing from the large, converged platforms of the past decade to smaller units engineered for single use-cases; simplified network design and operation; efficient space utilization; and lower power consumption. Other essential features of open hardware are plug-and-play or self-installing components; automated provisioning; and software features and interfaces that enable easy integration and meaningful data exchange with different management systems.

Open Software

A notable aspect of open networking is the decoupling of software from hardware development and the transition from proprietary, embedded software to open-source code. Open software should include a single provisioning model with both service activation and service assurance, in addition to a centralized service rollout model. Open software management systems must also be capable of managing third-party systems or tools, and compliant with new standards or initiatives. The network elements must also support open APIs, enabling open management.
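
As one small, hedged example of what “open APIs” can look like in practice, the snippet below reads YANG-modeled interface data using a RESTCONF-style (RFC 8040) path over plain HTTPS. The controller address and credentials are placeholders, not a specific product’s API; ietf-interfaces is a standard IETF YANG module.

```python
import requests  # pip install requests

# Placeholder controller and RESTCONF data path (RFC 8040 style).
BASE = "https://controller.example.net/restconf/data"
HEADERS = {"Accept": "application/yang-data+json"}

# Read YANG-modeled state from the network over a standard open interface.
response = requests.get(f"{BASE}/ietf-interfaces:interfaces",
                        headers=HEADERS,
                        auth=("admin", "admin"),
                        verify=False,  # lab-only: skip TLS verification
                        timeout=10)
response.raise_for_status()
print(response.json())
```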

Benefits

Perhaps the most obvious benefit from open networking is that service providers are no longer locked in to a specific vendor’s hardware or controller software. When service providers can freely combine equipment from multiple vendors, they have freedom of choice that can directly reduce costs, and when an entire network is managed via common open interfaces and protocols, networks get tested, validated and deployed faster. Moreover, if every part of the network, figuratively speaking, shares a common language, it is easier to eliminate overbuilds or stranded bandwidth. Thus, open networking not only gives providers greater freedom of choice and speed of execution, it helps them to make the fullest use of existing resources. Ultimately, in business terms, this can result in faster service roll-outs.

Another benefit of open networking is that it will ultimately provide a shared technological framework to support innovation. The standards being implemented in the communications network industry are common across the entire IT industry, meaning that service providers have an open invitation to an innovation ecosystem.

Challenges

The primary challenge is successfully navigating the transition from traditional telecom standards to newer open-source standards—not least because the standards themselves are still evolving. “Openness” is not a binary state and the industry must tackle hardware and software components possessing various degrees of openness and interoperability.

On the hardware side, we see everything from closed-and-proprietary paradigms all the way to plug-and-play installation, functional disaggregation, and ultimately, interoperability. Likewise on the software side, we see a similar spectrum, from closed-and-proprietary to open standards, open software platforms, open APIs and ultimately, open applications. Several non-proprietary initiatives are driving open networking forward, including OpenDaylight, ONOS/CORD, ONAP, OpenStack, and the Open ROADM MSA, to name a few.

Conclusions

Open networking signals a desire for equipment with narrower use cases and simpler feature sets that enable lower-cost, simpler operations. Flexibility, scalability and simplicity are the keys to realizing the potential of the open network.

Open networking supports ecosystem-based innovation and multi-sourcing, which boost competition and supply reliability while reducing cost, avoiding vendor lock-in and cutting burdensome complexity. Scalable, modular equipment reduces first cost and adds flexible pay-as-you-go bandwidth growth, benefiting service providers by broadening their range of capital spending options and timelines. Open networking makes operations simpler and improves service creation and activation times, overall helping to “crack the tough nut” of reducing operational and ongoing costs.

Times, they are a-Changin’ (and the Pace is a-Heatin’ Up)

Three decades is a long time to be in the same industry, even one as historically slow-moving as telecommunications. It’s certainly long enough to become familiar with the typical rate of change. Looking back over my thirty-year telecom tenure, it’s clear that bigger changes are happening at an accelerating pace.

A quick look at how long it takes people to pick up new technologies is enough to support this observation. By considering technologies that have come to dominate our lives over the past 100 years and examining how long it took each to reach 50 million users, we discover a few interesting things.

Let’s start with the technology that started the communication-at-a-distance revolution: the ubiquitous telephone. It took 75 years for Bell to attract 50 million subscribers after rolling out the telephone in 1876. Then, from the first TV broadcast in 1929, it took a relatively short 33 years to garner 50 million viewers. The World Wide Web took only four years, starting in 1991, to reach this milestone. More recently, Angry Birds, as mentioned elsewhere on this site by Rhonda Holloway, hit the market in 2009, and it took just 35 days for 50 million users to catch on.

With adoption time frames collapsing from almost a century to a little over a month, clearly the pace of adoption is accelerating. But astute readers will point out that I’m not exactly making fair comparisons regarding technology deployments. The first two (the telephone and television) depend on infrastructure deployments that require huge investments of expertise, construction, equipment and time. The second two (the WWW and Angry Birds) are “just software,” which, to put it plainly, is much easier and faster to deploy.

And that is indeed the case: software is in general easier to deploy, and the future of networking is not hardware; it’s software. To manage the hyper-connected, always-on, high-bandwidth demands of the Internet of Everything, networks will be forced to evolve in ways that are unimaginable if we keep thinking about operating them in the same hardware-oriented way we always have. The network must become a programmable entity and evolve beyond mere physical infrastructure.

Are your network and your operations capabilities prepared for Angry Birds deployment speed? My next few posts will explain how you can achieve a programmable network, leverage new hardware and software technology advancements and ultimately, implement the disaggregated network of the future.