Get Smart: Why the Future of Your City Depends on Smart Infrastructure

The single biggest factor in determining the fate of your city’s digital future is its technological infrastructure. Because we live in an internet-based, digital age, if your city wants to be at the forefront of progress, economic development, growth, and relevance – it must invest wisely.

The right kind of infrastructure – high-speed fiber and wireless broadband – is essential. Think of a building: its strength lies in its foundation, and a poorly constructed foundation can’t be counted on to support the load of the entire structure. Likewise, your city’s broadband infrastructure must be a rock-solid foundation so that it, too, can provide the critical platform to deliver enhanced services, innovate, and enable a smart city.

Those cities that have invested in a broadband infrastructure view it as an asset, just as valuable to their community as its other public infrastructure – water, streets, sewer lines, or gas/electric utilities.

Today, incumbent carriers aren’t upgrading networks or extending broadband services fast enough for unserved or underserved smaller and rural communities. As a result, many communities are left to lease aged, copper-based networks. Unfortunately, these communities’ economic fates become dependent, in part, upon the incumbent carriers’ network modernization timetables. Unable to take control of their destinies, many cities adopt a wait-and-see approach that puts them at a big disadvantage to other, more proactive cities. It leaves the city and its residents behind the technology curve and forces them to play catch-up.

Modern cities require modern infrastructure. In order for your city to solidify itself as economically viable, competitive, and a desirable place to live, you must undergo a digital transformation. Doing so is the catalyst for a fundamental reshaping of your city’s digital future. With a modern broadband foundation, you will have the primary building blocks for cloud infrastructure, sensors, smart services and applications. All of which give your municipality an edge as you evolve toward a smart city – a smart infrastructure with the connectivity, data, artificial intelligence (AI), and enhanced capability to solve pressing civic issues.

The Drawbacks of an Outdated City Communications Infrastructure

Being deprived of access to the most up-to-date technology can make your city feel old-fashioned and inconvenience your residents, anchor institutions, and businesses. A leased, copper-based network can be rife with challenges, including limited bandwidth capacity, a lack of intelligence to inform decision-making, and interoperability complexities when rolling out smart city technologies. Not to mention that copper carries far less data at far slower speeds than fiber – which means many of the cloud- or internet-based technologies associated with smart infrastructure cannot effectively be operated or hosted.

Bottom line, an outdated infrastructure is a limiting factor. For example, it can deter outside business investment, tourism, new residents, and job growth.

How can a City Leverage Smart Applications?

Connected applications offer your city numerous possibilities to take advantage of a smart infrastructure. Your city personnel can use the data you’ve collected from connected infrastructure to make informed decisions about what makes your city run best – and this is what ultimately makes a city smart.

There is a wide range of smart city applications available today, including:

  • Active security: Increase your level of smart protection with technologies like facial and license plate recognition, gunshot detection, perimeter patrolling, and crowd counting. They give your security officers greater situational awareness: recognizing potential hazards, understanding when a situation is escalating, and knowing how to respond appropriately.
  • Parking and Transportation: Smart parking technology can detect parking space availability, automate metering, dynamically price spaces, issue tickets, and collect payments. Also, by leveraging connected cameras coupled with AI, traffic engineers can better manage traffic flows and synchronize signals. This capability provides the smarts necessary to lessen congestion, reduce air pollution, and ease commuting stress.
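To illustrate the kind of logic behind dynamically priced parking, here is a minimal Python sketch. The function name, occupancy thresholds, and rates are hypothetical assumptions for illustration only, not drawn from any real smart-parking deployment:

```python
def dynamic_parking_rate(occupied: int, capacity: int, base_rate: float = 2.00) -> float:
    """Return an hourly parking rate that rises with occupancy.

    Hypothetical illustration: the thresholds and multipliers below are
    assumptions, not values from any real smart-parking system.
    """
    occupancy = occupied / capacity
    if occupancy < 0.50:                    # plenty of space: discount to attract drivers
        return round(base_rate * 0.75, 2)
    if occupancy < 0.85:                    # normal demand: charge the base rate
        return base_rate
    return round(base_rate * 1.50, 2)       # near-full: premium to encourage turnover

# Example: a 200-space garage with 180 cars parked (90% occupancy)
print(dynamic_parking_rate(occupied=180, capacity=200))  # prints 3.0
```

In a real deployment, the occupancy input would come from the connected sensors described above, and the resulting rate would feed the automated metering and payment systems.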

Co-creation Brings it all Together

When you’re ready to take the bold step forward on your digital transformation journey, you don’t have to go it alone. As your innovation partner, we’ll take a collaborative approach to plan, design, integrate, and implement your vision from concept to reality – whether that’s building a multivendor broadband network from scratch or upgrading it with smart infrastructure, including operating and maintaining it. Working together, we’ll co-create a unique solution that delivers real outcomes and real success to your community, including public safety, economic opportunity, operational efficiencies, and civic engagement.

Operationalizing Disruption: A Shout-Out to the Grumpy Guy

The future network relies on disruptive technology. Let me correct myself: the future network relies on actually implementing disruptive technology. That means clearing away the smoke and mirrors and passing the baton to the operations team, who have the daily responsibility of taking SDN, NFV, SD-WAN, and other technologies out of the proof-of-concept lab and putting them to work in the real world. This is what I mean by the term operationalizing disruption.

It seems incongruous, but only on the surface: how can we make disruptive technology no longer disruptive? What it comes down to – when all the vendors have left the negotiating table – is a shift in emphasis to the practical aspects of running a reliable network. The network technology changes happening now are not linear, go-faster-further-fatter incremental improvements; we already have methodologies in place to absorb those into today’s operational environments. Migration to disruptive technologies like SDN and NFV, though, is a fundamental, revolutionary shift – and it is uncharted territory.

As a trusted business partner, everything we do is about helping our customers successfully navigate positive change in their networks. Because when it all gets integrated and the new POC starts being implemented, it’s not about the shiny new stuff itself anymore – it’s about being able to control our customers’ end users’ experience.

When we look at customer needs, each functional area has its own unique perspective. While the planners may be excited about modeling the new technology and adopting it ahead of the competition, the CIO may grimace at the need to code up and flow through a lot more connections on an already constrained budget.

But the operations side of the house has a unique challenge: they are entrusted to deliver reliability SLAs on the traditional network that generates the return for their corporation. When it comes to network migrations, balancing upgrades with consistent network performance is a heavy workload. That’s why, during the early phases of disruptive change projects, the ops people at the table might be a little skeptical. Some mistake this for being innovation-unfriendly. Far from it. They have a right to be cautious. They deliver value for the entire organization because they keep the network performing continuously and predictably, day after day, to meet SLAs for banks, hospitals, and data centers. Essentially, they ensure everyone else gets paid. You can’t blame them for greeting the latest disruptive brainchild with more than a few questions, especially when they are told how great it will all be… but nobody really knows how to control it, monitor it, or troubleshoot it.

It’s easy to focus on the cool factor of turning real network things into virtual network things. But the operations view is, rightly, that you have to keep these virtual things grounded in reality, since they have to be reliable and useful in the real world.

So, here is a shout out to those grumpy guys – the unexpected heroes of network reliability and delivering daily on corporate financial performance.

At Fujitsu Network Communications, we recognize that operationalizing disruptive change probably means we have to invent some new science. We are working on defining the right skills, the new processes, and the best tools to help our customers accelerate their adoption of disruptive technology. By doing so, we help our customers bring their future into now.

Diversity and Digital Transformation: How Fujitsu Uses Innovation to Improve Inclusion

Fujitsu Network Communications is one of the leading companies in America when it comes to networking technology and solutions. They haven’t just made a name for themselves in tech, however. They’ve also made a huge impact when it comes to diversity and inclusion in the workplace, focusing on initiatives that help balance the makeup of the workforce and ensure all types of people have access to opportunities for advancement and career growth.

Greater Diversity and Increased Inclusion Equals Better ROI

As a diversity and inclusion champion, Fujitsu values the opportunity to enable collaboration and participation from a wide range of people. By making a concerted effort to be inclusive and invite a diverse group into the process, they have become the leader in a digital transformation that is rapidly reshaping the technological landscape. Empowering more people to have input has resulted in solutions that serve a larger group of people and meet a wider range of needs across a larger spectrum of the population, thereby increasing ROI.

Fujitsu is Led By Champions of Diversity

Fujitsu doesn’t simply uphold a general philosophy of inclusion and diversity. They also place leaders at the helm of the company who make acceptance and diversity a priority in hiring, promotion, and all facets of corporate life.

One such leader is Director of ICP and North American Carrier Sales for Fujitsu, Heidi Westbrook. Recently interviewed at the Women in Comms conference in Austin, TX on May 14, 2018, Westbrook explained how Fujitsu’s open work culture doesn’t just impact the machines people work with and the processes they use, but also defines the makeup of the company and the opportunities to which people have access.

As a female leader in a traditionally male-dominated industry, Westbrook encourages women and those who feel in the minority to advocate for themselves at work. She feels that, while self-advocacy is a learned skill, it can also be encouraged by having a network of people around who know your value and are familiar with the unique talents you have to contribute to the organization. Westbrook reminds women that while they absolutely face unique challenges in the workforce, those challenges can be used to make them stronger. For example, women who are mothers with families at home, she says, can succeed at both, noting that when someone does well at home, they typically do well at work – and vice versa.

A final piece of advice from Westbrook: Have discussions about goals and remember what your talents are, without ever losing confidence in those talents. Women should remember that they are delivering value to the organization because of what they bring to the table, and keep focused on their strong points, no matter what obstacles they face.

Watch the entire interview with Westbrook here: https://www.lightreading.com/business-employment/women-in-comms/fujitsus-sales-director-be-your-own-champion/v/d-id/743119

At Fujitsu, company leaders understand that diversity spurs innovation and leads to more successful digital transformation. By focusing on diversifying the workforce and opening up opportunities to people of all races, genders, backgrounds, and more, the company does important work to create a more open and welcoming world.

Fujitsu Honors Local Teachers and Students for STEM Accomplishments

STEM education (Science, Technology, Engineering and Math) is an interdisciplinary approach to education where students learn science and math-centric fields via hands-on lessons. STEM has become a priority in American schools thanks in part to a critical need for people with the knowledge that STEM education teaches.

Studies show that 80% of jobs in the next decade will require STEM skills. Unfortunately, the U.S. is lagging behind the rest of the world when it comes to teaching these critical skills. In fact, America ranks 29th in math and 22nd in science, and only 16% of American high school seniors who are proficient in math are actually interested in STEM careers.

Because of this lag in STEM skills and enthusiasm, Fujitsu Network Communications recognizes the importance of encouraging schools and educators to promote STEM, showing kids both how essential and how fulfilling STEM educational experiences can be.

As one of the world’s leading ICT companies, Fujitsu is well aware of how science and technology education can impact the future of the world. We understand that in order to inspire students, STEM skills need to be championed at every stage of learning: by parents, teachers, schools, mentors, non-profits and businesses alike. In 2010, we established the Fujitsu Teacher Trailblazer Award, an honor given to Richardson Independent School District (RISD) K-6 teachers who successfully integrate creative, innovative uses of technology as part of the instruction process. In 2018, we also began awarding Fujitsu STEM scholarships to graduating seniors from RISD and local alternative schools who plan to enter a two- or four-year college or university to major in a STEM field.

To qualify for the Fujitsu Teacher Trailblazer Award (which comes with a $5,000 prize), an RISD teacher must implement technology as part of the instruction process. They must also use innovative questioning and inquiry techniques to challenge students and harness instructional strategies to actively engage students in the learning process. The Fujitsu trailblazing teacher doesn’t just excel in the classroom, but also seeks out and engages in professional development activities.

This year’s Trailblazer winners are Audrey Leppke, a first-grade teacher at Math Science Technology Magnet School; and Sarah Beasley, a third-grade teacher at Lake Highlands Elementary School. Each received a $5,000 personal award in recognition of their great efforts.

In order to qualify for one of two $5,000 Fujitsu STEM scholarships, an RISD high school senior must have a minimum GPA of 3.0 and have taken four years of science, technology, engineering and/or math classes with Bs or better. They must also be planning to enter a two- or four-year college or university to major in a STEM field. This year’s scholarship recipients are Adam Gallo and Joshua Harris.

STEM education is a growing priority in America, and Fujitsu has dedicated itself to furthering the cause in the area around its headquarters. By encouraging teachers and students to delve in and learn how they can benefit from a deeper knowledge of science, math, and technology, we are helping create a larger group of career-ready people who will soon enter the workforce – and spreading the values of STEM subjects to younger generations.

Digital Transformation in the Hyperconnected World of 5G

Can you feel the anticipation? As we approach the era of 5G, excitement continues to build over the potential for new, disruptive digital services that are expected to flourish in tomorrow’s hyperconnected world. Digital technology is already transforming every facet of business and society, and the pace will only accelerate in the next phase of network evolution.

But despite the hype, this transformation doesn’t just happen overnight. If only we could flip a switch and (poof!) we suddenly have a complete ecosystem capable of supporting all the services that 5G and the Internet of Things (IoT) will deliver. To enable a responsive network that can live up to the hype, disparate new and legacy technologies will need to come together in a flexible and open infrastructure.

So how do we build a flexible platform that’s open, yet secure? At Fujitsu, we believe that digital co-creation is the answer. As the industry prepares for the next wave of network evolution, co-creation will enable information sharing and innovation beyond boundaries to deliver real digital transformation and business value.

Outside the Box

Arguably, the true promise of 5G will be the development of entirely new business models like we’ve never seen before. To deliver on that promise, network service providers will require a scalable ecosystem that spans technologies, industries and vendors. Secure, seamless, end-to-end connections across wireless and wireline technologies would be nearly impossible with yesterday’s proprietary architecture.

This vision of hyperconnectivity will be key to realizing innovative 5G business models, powering flexible bandwidth on demand to support the digital service ecosystem. Service-aware platforms that incorporate artificial intelligence, machine learning and big data analysis will enable a broad range of offerings, from high-speed home entertainment and IoT initiatives, to autonomous cars and smart cities. In order for tomorrow’s networks to provide a secure exchange of information across boundaries, however, service providers will require open, programmable interfaces for collaboration.

At Fujitsu, we’re uniquely positioned to help build this ecosystem, delivering a highly scalable optical network, as well as intelligent software, to enable end-to-end 5G services across both the wireline transport network and the wireless radio access network (RAN). That’s why we are working closely with our customers as they plan and deploy the network infrastructure that will enable the hyperconnected 5G vision. This co-creation – with customers and industry partners – is about helping to advance the ecosystem and develop digital business models that will benefit network service providers, their subscribers and society overall.

For example, digital co-creation led us to develop our Virtual Access Network (vAN) solution, a flexible and cost-effective approach to delivering access services. With the vAN solution, service providers can support small and medium businesses with services that were previously cost-prohibitive, particularly in rural areas. Through the process of co-creation, we developed a new service that allows customers to save time, money and resources.

To Tomorrow and Beyond

The evolution of the hyperconnected world is quickly accelerating toward a future full of opportunity. Digital co-creation will be fundamental to making sure that service providers, and the entire ecosystem, are well-equipped to fully realize the 5G vision. And service-aware, conscious networks built on flexible, programmable, open platforms will be the engine that powers that digital transformation. To learn more about our vision for 5G, visit: https://fast.wistia.com/embed/iframe/r2fsy5ad9c.

Abstract and virtualize, compartmentalize and simplify: Automating network connectivity services with Optical Service Orchestration

Service providers delivering network connectivity services are evolving the transport infrastructure to deliver services faster and more cost-efficiently. Part of the strategy includes using a disaggregated network architecture that is open, programmable and highly automated. The second part of the approach considers how service providers can leverage that infrastructure to deliver new value-added services. There’s no question that the network can, but to what extent? How agile does the infrastructure need to be to accommodate dynamic services? What is required to shift transport infrastructure from the overhead column to the revenue column?

Today, service providers have deployed separate optical transport networks, each containing a single vendor’s proprietary network elements. Optical line systems using analog amplification are customized and tuned to enhance overall system performance, making it nearly impossible for different vendors’ devices to work together within the same domain. For years, service providers with simple point-to-point transmission needs have used alien-wavelength deployments, leveraging multivendor transmission on single-vendor optical networks. However, as service providers look to add flexibility to the network using configurable optical add/drop multiplexing, using different vendors’ components on legacy systems is impractical.

Historical deployments make it evident that optical vendors have competed for business based on system flexibility, capacity, and cost per km. This has led to the deployment of optical domain islands. That doesn’t reflect a dastardly plan by any single vendor to corner the optical transport market; as outlined above, the drive to differentiate on performance and capacity contributes to monolithic, closed, proprietary systems. In many cases, network properties such as span distance or fiber type dictate what system a service provider deploys. The result is separate optical system islands (optical domains): a provider has separate optical domains in metro, access, and long-haul networks. Each is managed by a separate management system, which means that configuring services across the optical infrastructure requires manual coordination.

Industry collaboration efforts such as the Optical Internetworking Forum (OIF) have contributed tremendously to interoperability of the physical and link layers by developing implementation agreements, socializing standards, benchmarking performance, and testing interoperability. These efforts have accelerated deployment of technology that lowers the cost of implementing high-capacity transmission. However, service providers still face the expense and time of managing separate optical domains and maintaining them over time.

Many service providers are leading the industry toward open optical systems, in which optical networks are deployed in a greenfield environment with vendors that are natively and voluntarily interoperable. The Open ROADM MSA and its participating vendors are one example: Open ROADM devices are part of a centrally controlled network that includes multiple vendors’ equipment, with functionality defined by an open specification. This type of open network delivers value through lower equipment costs and reduced supply disruptions.

There is no escaping the complication that this type of networking makes it inherently difficult for service providers to introduce new vendors into a network that is delivering private line services. In this environment, operational costs far outweigh equipment costs. Each system is configured independently, and bringing them together to deliver end-user services takes time and deep expertise across multiple functional areas. New services face the same hurdles – time, field expertise, and back-office expertise – further increasing the work needed to integrate existing elements.

To fully harness the power of automated provisioning and virtualization for network connectivity services, a different type of orchestration is required. We’ll call it Optical Service Orchestration (OSO). With the OSO concept, service providers can manage the lifecycle of connectivity services across separate optical domains, and virtualize those domains so that end customers can manage their own private networks.

Using OSO, service providers don’t have to change out the entire network. They can deliver a network connectivity service from one domain to another, whether it’s physical or virtual, with simple configuration changes that are controlled and managed by software-defined networking.

An Optical Service Orchestrator combines the existing network with innovative vendor approaches as it makes sense for the network and the business. Some domains are open; some are not. Some vendors want to participate in open technologies and communities; some do not. Some are highly focused on the performance that comes from tightly coupled optical components. The truth is that the vendors occupying the optical domain have been doing this for a long time and are evolving their technology to deliver next-generation digital services. It would be foolish to turn away from expert innovation in an attempt to commoditize network equipment, especially when the underlying optical component ecosystem is already commoditized.

In a typical operator optical network with a mix of legacy and open optical domain deployments, an OSO platform controls multiple optical domains, regardless of how open each domain is, and automatically stitches services together across them. Each domain becomes an abstracted “network element” with discrete inputs and outputs, with the OSO orchestrating puts and gets into an automated workflow. This common controller abstracts the optical topology up to the IP and MPLS layers, then adds Layer 2 and Layer 3 services on top programmatically and automatically, spanning the physical and virtual network seamlessly.
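To make the abstraction concrete, here is a minimal Python sketch of the OSO idea. All class and method names are illustrative assumptions, not a real controller API: each vendor domain is wrapped behind a uniform interface, and the orchestrator stitches a service across domains without knowing their internals:

```python
from abc import ABC, abstractmethod

class OpticalDomain(ABC):
    """An abstracted 'network element': one vendor domain behind a uniform interface.

    Hypothetical illustration of the OSO concept; not a real controller API.
    """
    def __init__(self, name: str, endpoints: set[str]):
        self.name = name
        self.endpoints = endpoints  # discrete inputs/outputs exposed to the OSO

    @abstractmethod
    def provision(self, ingress: str, egress: str) -> str:
        """Ask the domain's own controller for a path; return a segment identifier."""

class OpenRoadmDomain(OpticalDomain):
    def provision(self, ingress, egress):
        # A real implementation would call the Open ROADM / vendor controller here.
        return f"{self.name}:{ingress}->{egress}"

class LegacyDomain(OpticalDomain):
    def provision(self, ingress, egress):
        # Legacy domains are driven through their existing management system.
        return f"{self.name}:{ingress}->{egress}"

def stitch_service(domains: list[OpticalDomain], route: list[str]) -> list[str]:
    """Walk the end-to-end route and provision one segment per domain."""
    segments = []
    for domain, (a, z) in zip(domains, zip(route, route[1:])):
        assert a in domain.endpoints and z in domain.endpoints
        segments.append(domain.provision(a, z))
    return segments

# Example: an Ethernet private line crossing a metro domain and a long-haul domain
metro = OpenRoadmDomain("metro", {"cust-A", "handoff-1"})
longhaul = LegacyDomain("long-haul", {"handoff-1", "cust-Z"})
print(stitch_service([metro, longhaul], ["cust-A", "handoff-1", "cust-Z"]))
```

The point of the design is that the orchestrator never touches a domain’s idiosyncrasies; it only deals in the endpoints each domain chooses to expose.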

The result is that the operator can deliver Ethernet private line service without having to understand and configure each vendor’s optical domain. The domain vendor’s controller handles the idiosyncrasies of the optical domain without giving up network performance (cost per Gb-km). Abstract and virtualize, compartmentalize and simplify.

Service providers are able to leverage the OSO capabilities to virtualize transport networks by providing a simple customer web portal. The portal allows a service provider’s end customers to provision their own services on a virtual optical network using service templates with any number of network element configurations.

Service providers gain the ability to extend the life of their legacy gear while allowing for the eventual introduction of new gear into the network – all while using software to provision dynamic services. With OSO, service providers can automate transport and lower costs, all while growing and monetizing new network connectivity services.

Andres Viera will present “Enabling Automation in Optical Networks” at the NFV & Zero Touch Congress show, April 25 @ 4:05pm. Stop by Fujitsu booth #13 to learn more.

Integrated Laboratory Testing – An Investment that Pays off for Rail Operators

The traditional approach that rail operators have taken to their communications networks is changing to support new IP video, voice and data applications, as well as improved mobile connectivity and stronger cybersecurity. The advent of the flexible converged network is bringing new challenges, one of which is to turn up the heat on pre-deployment testing. Factory Acceptance Testing (FAT) is no longer enough because, while it adequately covers issues relating to individual components, FAT falls short when it comes to identifying issues that arise when multiple system components come together in a fully integrated system.

The answer is to bring system components together and put them through their paces in a controlled laboratory environment before live deployment. For want of a better name, this approach is known as Integrated FAT (IFAT). But setting up a fully capable laboratory and hiring the necessary experts requires significant upfront investment. It’s easy to imagine the level of expenditure needed won’t pay off, but in fact it’s more than justifiable when the cost-saving benefits are taken into account over the longer term.

The simple reason is that integrated testing improves reliability and drastically reduces network downtime, and every minute of downtime is expensive. That’s all there is to it. Discovering and correcting issues before committing to live traffic is far less costly and disruptive than troubleshooting and correction under the pressures of daily operation. Many organizations have no grasp of the costly ripple effects that network downtime has on their business: lost revenue, lost information, damaged reputations and lost customers.

Leaving aside the rewards in terms of reduced downtime, a laboratory outfitted for IFAT brings other valuable benefits. Improved cybersecurity is just one of these. Networks are becoming more enmeshed with IT systems, making them more vulnerable to cyber-attack – to the degree that cybersecurity has become a critical issue. For instance, according to the Ponemon Institute’s study, “2017 Cost of Cyber Crime,” the average annualized cost of cybercrime for the transportation industry was $7.36M.

Change control is another area in which lab-based IFAT delivers benefits, in terms of improved reliability and network service quality. Changes equal risk because every change has the potential for unforeseen side effects. For example, imagine you bring up a new circuit between two communication centers and find that application traffic is unexpectedly following an asymmetrical path. Traffic goes out from Comms Center A to Comms Center B on the old circuit, but it comes back on the new one. This is a fairly common scenario—but now there’s a decision to make: Do you try to fix the issue, or back out your change and wait until next month to bring the new circuit into production? What does the change control procedure say? Is there a change control procedure? Will this asymmetrical routing situation even pose a problem?

This is a lot of information to process quickly for an operations tech who most likely does not have a full view of the big picture, and who is running on pizza, Cokes, day-old coffee, and minimal sleep. It is not rare for an engineer to make a small change to fix a routing issue, only to cause a major failure. Having a lab facility in which to duplicate the issue, isolate it, make corrections, and develop methods of procedure greatly reduces this risk and gives your engineers confidence that when they return to the field, everything will go as planned.

Additional valuable benefits of IFAT derive from making full use of the facility as a permanent fixture for ongoing upgrade testing (hardware/software), proofs of concept, staff training, and trouble simulations or disaster recovery drills.

While the transportation industry stands to benefit immensely from advanced networks that can support improved passenger comfort, better real-time communication and higher safety standards, the industry needs to go beyond testing components in isolation from one another and embrace deeper and more comprehensive integrated testing in laboratory environments. IFAT offers the best chance of achieving a successful and predictable outcome that avoids costly redesign and troubleshooting during outages.

What the NFL Tells us About DCI

Data Center Interconnect has historically been driven by the pressure of simple demand: the kind of demand that’s satisfied by big, fast, dumb point-to-point pipes. But the value and potential of “big and fast” are held in check by “dumb.” It’s like football; bigger and faster will only take you so far in the National Football League (NFL). As game plans get more complicated, players are expected to think strategically about the other team’s offense or defense. Similarly, DCI is also getting more complicated as the pressure builds—and those big, fast pipes must ditch the dumb and get smart.

Data centers already have requirements in place for encryption, streaming telemetry and LLDP, all of which mean adding intelligence. Flex-grid; mixed modulation schemes; the growing mix of baud rates; and multiple FEC options (not to mention mesh connectivity in the planning arena) also demand more “brains” to match the brawn. The challenging task of selecting the optimal modulation, baud, grid and FEC is impossible unless the intelligence is there.
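As a toy illustration of that selection problem, the sketch below picks the highest-capacity modulation format whose reach still covers a given span. The reach and capacity figures, and the function itself, are invented for illustration; real selection also weighs baud rate, grid width, FEC overhead, and measured link margin:

```python
# (modulation format, assumed max reach in km, capacity in Gb/s per wavelength)
# Reach and capacity numbers are illustrative assumptions, not vendor specifications.
MODULATION_TABLE = [
    ("64QAM", 150, 600),   # highest capacity, shortest reach
    ("16QAM", 800, 400),
    ("QPSK", 3000, 200),   # lowest capacity, longest reach
]

def pick_modulation(span_km: float) -> tuple[str, int]:
    """Choose the highest-capacity format whose assumed reach covers the span."""
    for name, reach_km, capacity_gbps in MODULATION_TABLE:
        if span_km <= reach_km:
            return name, capacity_gbps
    raise ValueError(f"No format reaches {span_km} km; regeneration needed")

print(pick_modulation(120))   # short metro span -> highest-capacity format
print(pick_modulation(1200))  # long-haul span -> longest-reach format
```

Even this trivial table shows why the choice can’t be left to guesswork: the right answer changes span by span, which is exactly the kind of decision the network’s added intelligence has to automate.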

Variable and unpredictable traffic loads add another layer of complexity; business and the internet are inherently chaotic. The historical practice of “designing for the worst case” (a.k.a. “busy-hour design”) is no longer economical. Data centers need capabilities to handle changing workloads gracefully and efficiently without overbuilding. These trends have significant positive implications for DCI; the agility and intelligence needed to meet dynamic workloads will improve the operational efficiency of the whole network. Put simply: bigger, faster, smarter pipes in DCI are just like NFL players who are also strategic thinkers. In both cases, add brains to brawn and the game is on.

Open Networks, Open for Business

For the ICT industry, this nascent era of business models based on cloud computing and OTT content is characterized by a heady brew of innovation, change, and growth. Open networking offers service providers a route to much-needed rapid service deployment, agile innovation, and leaner spending. For these reasons, the industry is pushing for open-source standards, and transport equipment vendors are capitalizing on this new thinking. Migration is underway from traditional proprietary converged platforms to more modular, single-use-case form factors and functionality.

What is an Open Optical Network?

You might ask, what are the key features of an open optical network? Essentially it boils down to networks operating on an industry-agreed common, multivendor foundation. This includes the ability to have open software and open line systems that comply with open standards for interoperability. In sum, this means a mix-and-match multivendor network environment where all the parts “speak” a common language of control and data exchange.

Open Hardware

Optical networking hardware, such as reconfigurable optical add-drop multiplexers (ROADMs) and transponders, is evolving in terms of form factor, functionality, and functional disaggregation. Equipment is changing from the large, converged platforms of the past decade to smaller units engineered for single use-cases; simplified network design and operation; efficient space utilization; and lower power consumption. Other essential features of open hardware are plug-and-play or self-installing components; automated provisioning; and software features and interfaces that enable easy integration and meaningful data exchange with different management systems.

Open Software

A notable aspect of open networking is the decoupling of software from hardware development and the transition from proprietary, embedded software to open-source code. Open software should include a single provisioning model covering both service activation and service assurance, along with a centralized service rollout model. Open management systems must also be capable of managing third-party systems and tools, and of keeping pace with emerging standards and initiatives. The network elements themselves must support open APIs, enabling open management.
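What a “single provisioning model” over open APIs buys an operator can be sketched in a few lines: the controller speaks one contract, and each vendor’s element implements it behind a driver. The `VendorDriver` classes, the service payload fields, and the status strings below are all hypothetical; they stand in for whatever open model (e.g. a standard YANG-defined interface) the network actually adopts.

```python
# Minimal sketch of multivendor provisioning through one common contract.
# Vendor classes, payload fields, and return strings are hypothetical.

from abc import ABC, abstractmethod

class VendorDriver(ABC):
    """Common contract every vendor's network element must satisfy."""
    @abstractmethod
    def provision(self, service: dict) -> str: ...

class VendorA(VendorDriver):
    def provision(self, service: dict) -> str:
        # Vendor-specific translation happens inside the driver.
        return f"vendor-a/{service['id']} up"

class VendorB(VendorDriver):
    def provision(self, service: dict) -> str:
        return f"vendor-b/{service['id']} up"

def activate(drivers: list, service: dict) -> list:
    # The controller needs no vendor-specific logic at this layer:
    # one service model drives every element the same way.
    return [d.provision(service) for d in drivers]

service = {"id": "wave-42", "rate_gbps": 400, "a_end": "DC1", "z_end": "DC2"}
print(activate([VendorA(), VendorB()], service))
```

The design choice this illustrates is the one the section describes: vendor differences are pushed to the edges (the drivers), so swapping or mixing suppliers never touches the provisioning workflow itself.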

Benefits

Perhaps the most obvious benefit of open networking is that service providers are no longer locked into a specific vendor’s hardware or controller software. When service providers can freely combine equipment from multiple vendors, they have freedom of choice that can directly reduce costs, and when an entire network is managed via common open interfaces and protocols, networks get tested, validated, and deployed faster. Moreover, if every part of the network, figuratively speaking, shares a common language, it is easier to eliminate overbuilds or stranded bandwidth. Thus, open networking not only gives providers greater freedom of choice and speed of execution, it helps them make the fullest use of existing resources. Ultimately, in business terms, this can result in faster service roll-outs.

Another benefit of open networking is that it will ultimately provide a shared technological framework to support innovation. The standards being implemented in the communications network industry are common across the entire IT industry, meaning that service providers have an open invitation to an innovation ecosystem.

Challenges

The primary challenge is successfully navigating the transition from traditional telecom standards to newer open-source standards—not least because the standards themselves are still evolving. “Openness” is not a binary state and the industry must tackle hardware and software components possessing various degrees of openness and interoperability.

On the hardware side, we see everything from closed-and-proprietary paradigms all the way to plug-and-play installation, functional disaggregation, and ultimately, interoperability. Likewise on the software side, we see a similar spectrum, from closed-and-proprietary to open standards, open software platforms, open APIs and ultimately, open applications. Several non-proprietary initiatives are driving open networking forward, including OpenDaylight, ONOS/CORD, ONAP, OpenStack, and the Open ROADM MSA, to name a few.

Conclusions

Open networking signals the industry’s desire for equipment with narrower use cases and simpler feature sets that enable low-cost, simpler operations. Flexibility, scalability, and simplicity are the keys to realizing the potential of the open network.

Open networking supports ecosystem-based innovation and multi-sourcing, which sharpen competition, lower costs, and improve supply reliability, while avoiding vendor lock-in and reducing burdensome complexity. Scalable, modular equipment reduces first cost and adds flexible pay-as-you-go bandwidth growth, benefiting service providers by broadening their range of capital spending options and timelines. Open networking makes operations simpler and improves service creation and activation times, overall helping to “crack the tough nut” of reducing operational and ongoing costs.

What Is a “Smart City?” Part 2

In Part 1 of this article, we talked about some of the characteristics of a smart city, including hyperconnectivity, people-centric technology, and increased efficiency of city-provided services. But although those things are critically important, they’re not the end of the smart cities story.

Economic development is an important driver for most cities considering an upgrade to “smart” status, with most looking to attract new businesses to their community. But how? In 1942, economist and social scientist Joseph Schumpeter coined the term “innovation economics.” He argued that innovation was a major factor in spurring economic growth and change: newly invented products and technologies create “temporary monopolies” for their inventors, which in turn encourage the development of competing products and processes, thereby creating beneficial economic conditions. He further believed that government’s most important role was to create fertile ground in which these innovations could occur. In this sense, the smart, connected, and efficient city is the technological soil in which the seeds of economic growth will be planted, yielding profits and benefits that will in turn enrich both individuals and society at large. Therefore, the cities at the forefront of smart city transformation will reap the largest benefits from this explosive, and in many cases much-needed, growth.

For example, a unique and innovative display of economic development using smart technology is taking place right now in South Korea. A major grocery retailer wanted to expand its business without opening additional physical locations. The answer proved to be “virtual shelves” in the city’s subway stations. Wall-length billboards display goods for sale, complete with images and prices, allowing customers to order by scanning QR codes, pay on the spot, and arrange for delivery within a day. This makes productive use of commuters’ time in the stations and expands business for the retailer without the expense of a building, rent, utilities, maintenance, staff, and all the other requirements of a physical location. The result is that this retailer has reached the number one position in the online market, and the number two position in terms of brick-and-mortar stores.

Beyond these obvious advantages, one area in which smart cities can actually save lives, and one that is top of mind around the world right now, is disaster response: helping communities deal with natural disasters before, during, and after the event. Sensors can continually monitor air and water quality, weather and seismic activity, and even elevated radiation levels, providing critical early warnings of impending disasters and dispersing that information to residents via smartphone apps. Once an event occurs, smart data can be used to provide much-needed safety information. During Hurricane Harvey, for example, data collected via connected systems provided residents with real-time information about rising water levels drawn from county flood gauges, and helped identify passable evacuation routes, available shelters, food banks, and other assistance. Drones can be, and are being, used to survey damage and aid in recovery efforts, reducing the risk to human crews. And this is clearly just the tip of the iceberg where “smart” technology’s ability to aid human response to natural disasters is concerned.
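The flood-gauge example above follows a simple pattern worth making explicit: compare each sensor reading against a known threshold and raise an alert when it is crossed. The sketch below illustrates that pattern only; the gauge names, readings, and flood-stage values are invented, and a real deployment would pull live data from the county’s gauge network rather than a hard-coded table.

```python
# Hedged sketch: turning raw flood-gauge readings into early-warning alerts.
# Gauge names, readings, and flood-stage thresholds are made up.

GAUGES = {
    "buffalo-bayou": {"reading_ft": 38.2, "flood_stage_ft": 35.0},
    "white-oak":     {"reading_ft": 21.4, "flood_stage_ft": 29.0},
}

def alerts(gauges: dict) -> list:
    """Return a warning line for every gauge at or above flood stage."""
    out = []
    for name, g in gauges.items():
        if g["reading_ft"] >= g["flood_stage_ft"]:
            excess = g["reading_ft"] - g["flood_stage_ft"]
            out.append(f"ALERT {name}: {excess:.1f} ft above flood stage")
    return out

print(alerts(GAUGES))  # only buffalo-bayou is above its flood stage
```

In a smart-city deployment, the same comparison loop would run continuously against streaming sensor data, with alerts fanned out to residents through the smartphone apps the section describes.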

Of course, these are only a few of the ways in which smart technology can benefit communities. Every city and county has its own needs, especially in the early planning stages of digital transformation. What’s important to remember, however, is that smart cities aren’t coming; they’re already here, and the earliest adopters of this incredible technology will be the ones to reap the greatest benefits from it. Those that delay, or that reject the smart cities model altogether, will quickly find themselves woefully behind the curve, unable to compete with the communities that showed more foresight in these early days. Customers and residents are constantly increasing their demands for bandwidth as the fuel for their desired connectivity, and the communities that can provide these services seamlessly and easily will win the lion’s share of business and revenue. It’s never too early to start thinking about smart city transformation, so what are you waiting for?