Digital Transformation in the Hyperconnected World of 5G

Can you feel the anticipation? As we approach the era of 5G, excitement continues to build over the potential for new, disruptive digital services that are expected to flourish in tomorrow’s hyperconnected world. Digital technology is already transforming every facet of business and society, and the pace will only accelerate in the next phase of network evolution.

But despite the hype, this transformation doesn't just happen overnight. If only we could flip a switch and (poof!) suddenly have a complete ecosystem capable of supporting all the services that 5G and the Internet of Things (IoT) will deliver. To enable a responsive network that can live up to the hype, disparate new and legacy technologies will need to come together in a flexible and open infrastructure.

So how do we build a flexible platform that’s open, yet secure? At Fujitsu, we believe that digital co-creation is the answer. As the industry prepares for the next wave of network evolution, co-creation will enable information sharing and innovation beyond boundaries to deliver real digital transformation and business value.

Outside the Box

Arguably, the true promise of 5G will be the development of entirely new business models. To deliver on that promise, network service providers will require a scalable ecosystem that spans technologies, industries and vendors. Secure, seamless, end-to-end connections across wireless and wireline technologies would be nearly impossible with yesterday's proprietary architectures.

This vision of hyperconnectivity will be key to realizing innovative 5G business models, powering flexible bandwidth on demand to support the digital service ecosystem. Service-aware platforms that incorporate artificial intelligence, machine learning and big data analysis will enable a broad range of offerings, from high-speed home entertainment and IoT initiatives, to autonomous cars and smart cities. In order for tomorrow’s networks to provide a secure exchange of information across boundaries, however, service providers will require open, programmable interfaces for collaboration.

At Fujitsu, we're uniquely positioned to help build this ecosystem, delivering a highly scalable optical network, as well as intelligent software, to enable end-to-end 5G services across both the wireline transport network and the wireless radio access network (RAN). That's why we are working closely with our customers as they plan and deploy the network infrastructure that will enable the hyperconnected 5G vision. This co-creation, with customers and industry partners, is about helping to advance the ecosystem and develop digital business models that will benefit network service providers, their subscribers and society overall.

For example, digital co-creation led us to develop our Virtual Access Network (vAN) solution, a flexible and cost-effective approach to delivering access services. With the vAN solution, service providers can support small and medium businesses with services that were previously cost-prohibitive, particularly in rural areas. Through the process of co-creation, we developed a new service that allows customers to save time, money and resources.

To Tomorrow and Beyond

The evolution of the hyperconnected world is quickly accelerating toward a future full of opportunity. Digital co-creation will be fundamental to making sure that service providers, and the entire ecosystem, are well-equipped to fully realize the 5G vision. And service-aware, conscious networks built on flexible, programmable, open platforms will be the engine that powers that digital transformation. To learn more about our vision for 5G, visit: https://fast.wistia.com/embed/iframe/r2fsy5ad9c.

Abstract and virtualize, compartmentalize and simplify: Automating network connectivity services with Optical Service Orchestration

Service providers delivering network connectivity services are evolving the transport infrastructure to deliver services faster and more cost efficiently. Part of the strategy includes using a disaggregated network architecture that is open, programmable and highly automated. The second part of the approach considers how service providers can leverage that infrastructure to deliver new value-added services. There's no question that the network can deliver them, but to what extent? How agile does the infrastructure need to be to accommodate dynamic services? What is required to shift the transport infrastructure from the overhead column to the revenue column?

Today, service providers have deployed separate optical transport networks, each containing a single vendor's proprietary network elements. Optical line systems using analog amplification are customized and tuned to enhance overall system performance, making it nearly impossible for different vendors' devices to work together within the same domain. For years, service providers with simple point-to-point transmission have used alien wavelength deployments, leveraging multivendor transmission on single-vendor optical networks. However, as service providers look to add more flexibility to the network using configurable optical add/drop multiplexing, mixing different vendors' components on legacy systems becomes impractical.

It is evident from historical deployments that optical vendors have competed for business based on system flexibility, capacity, and cost per km. This has led to the deployment of optical domain islands. That doesn't reflect a dastardly plan by any single vendor to corner the optical transport market; as outlined above, the drive to differentiate on performance and capacity contributes to monolithic, closed, and proprietary systems. In many cases, network properties such as span distance or fiber type dictate which system a service provider deploys. The result is a collection of separate optical system islands (optical domains): a provider has separate optical domains in metro, access, and long-haul networks. Each is managed by a separate management system, which means that configuring services across the optical infrastructure requires manual coordination.

Industry collaboration efforts such as the Optical Internetworking Forum (OIF) have contributed tremendously to interoperability at the physical and link layers by developing implementation agreements, socializing standards, benchmarking performance, and testing interoperability. These efforts have accelerated the deployment of high-capacity technology at lower cost. However, service providers still face the time and expense of managing separate optical domains and maintaining them over time.

Many service providers are leading the industry toward open optical systems, in which optical networks are deployed in a greenfield environment where the vendors are natively and voluntarily interoperable. The Open ROADM MSA and its participating vendors are one example. Open ROADM devices are part of a centrally controlled network that includes multiple vendors' equipment, and functionality is defined by an open specification. This type of open network delivers value through lower equipment costs and reduced supply disruptions.

There is no escaping the complication that this type of networking makes it inherently difficult for service providers to introduce new vendors into a network that is delivering private line services. In this environment, operational costs are far more significant than equipment costs. Each system is configured independently, and bringing them together to deliver end-user services requires time and deep expertise across multiple functional areas. New services face the same hurdles of time, field expertise, and back-office expertise, further adding to the work needed to integrate existing elements.

To fully harness the power of automated provisioning and virtualization for network connectivity services, a different type of orchestration is required. We'll call it Optical Service Orchestration (OSO). With the OSO concept, service providers can manage the lifecycle of connectivity services across separate optical domains and virtualize those domains, allowing end customers to manage their own private networks.

Using OSO, service providers don’t have to change out the entire network. They can deliver a network connectivity service from one domain to another, whether it’s physical or virtual, with simple configuration changes that are controlled and managed by software-defined networking.

An Optical Service Orchestrator combines the existing network with innovative vendor approaches as it makes sense for the network and the business. Some domains are open; some are not. Some vendors want to participate in open technologies and communities; some do not. Some are highly focused on the performance that comes from tightly coupled optical components. The truth is that vendors occupying the optical domain have been doing this for a long time and are evolving their technology to deliver next-generation digital services. It would be foolish to turn away from expert innovation in an attempt to commoditize network equipment, especially when the underlying optical component ecosystem is already commoditized.

In a typical operator optical network with a mix of legacy and open optical domain deployments, an OSO platform controls multiple optical domains, regardless of how open each domain is, and automatically stitches services together across domains. Each domain becomes an abstracted "network element" with discrete inputs and outputs, with the OSO orchestrating puts and gets into an automated workflow. This common controller abstracts the optical topology up to the IP and MPLS layers and then adds Layer 2 and Layer 3 services on top, programmatically and automatically, spanning the physical and virtual network seamlessly.
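
To make that abstraction concrete, here is a minimal sketch in Python of how an orchestrator might treat each domain as a black box with inputs and outputs and stitch a service across them. The class and method names below (Segment, DomainController, OpticalServiceOrchestrator, provision) are invented for illustration and are not an actual Fujitsu or standards-defined API.

from dataclasses import dataclass


@dataclass
class Segment:
    domain: str    # which optical domain carries this hop
    ingress: str   # add/drop port where the service enters the domain
    egress: str    # add/drop port where the service leaves the domain


class DomainController:
    """Wraps one vendor's domain controller behind a common interface."""

    def __init__(self, name: str):
        self.name = name

    def provision(self, segment: Segment, rate_gbps: int) -> str:
        # A real controller would translate this into vendor-specific
        # provisioning (wavelength assignment, NETCONF/REST calls, etc.).
        print(f"[{self.name}] provisioning {rate_gbps}G "
              f"{segment.ingress} -> {segment.egress}")
        return f"{self.name}:{segment.ingress}->{segment.egress}"


class OpticalServiceOrchestrator:
    """Stitches an end-to-end service across abstracted optical domains."""

    def __init__(self, controllers: dict):
        self.controllers = controllers

    def create_service(self, path: list, rate_gbps: int) -> list:
        # Each domain is treated as one abstracted "network element";
        # the orchestrator deals only with its inputs and outputs.
        return [self.controllers[seg.domain].provision(seg, rate_gbps)
                for seg in path]


# Example: a 100G private line that crosses a metro and a long-haul domain.
oso = OpticalServiceOrchestrator({
    "metro": DomainController("metro"),
    "long-haul": DomainController("long-haul"),
})
oso.create_service(
    [Segment("metro", "CO-A/1", "POP-1/3"),
     Segment("long-haul", "POP-1/3", "POP-7/2")],
    rate_gbps=100,
)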

The result is that the operator can deliver Ethernet private line service without having to understand and configure each vendor’s optical domain. The domain vendor controller handles the idiosyncrasies of the optical domain without having to give up on network performance (Cost / GB-KM). Abstract and virtualize, compartmentalize and simplify.

Service providers are able to leverage the OSO capabilities to virtualize transport networks by providing a simple customer web portal. The portal allows a service provider’s end customers to provision their own services on a virtual optical network using service templates with any number of network element configurations.
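
As a rough illustration of the service-template idea, the sketch below shows a template as a set of defaults that a portal might merge with a customer's choices before handing the result to the orchestrator. The field names and the instantiate helper are hypothetical, not any product's actual data model.

# Illustrative only: a service template as simple defaults that a portal
# merges with customer-supplied choices. Field names are invented.
ETHERNET_PRIVATE_LINE_TEMPLATE = {
    "service_type": "ethernet-private-line",
    "rate_gbps": 10,
    "protection": "unprotected",
    "endpoints": [
        {"site": "customer-site-a", "port": "client-1"},
        {"site": "customer-site-b", "port": "client-1"},
    ],
}


def instantiate(template: dict, **overrides) -> dict:
    """Build a concrete service order from a template plus customer choices."""
    return {**template, **overrides}


# A customer upgrades the templated 10G service to 100G from the portal.
order = instantiate(ETHERNET_PRIVATE_LINE_TEMPLATE, rate_gbps=100)
print(order["service_type"], order["rate_gbps"], "G")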

Service providers gain the ability to extend the life of their legacy gear while allowing for the eventual introduction of new gear into the network, all using software to provision dynamic services. With OSO, service providers can automate transport and lower costs while growing and monetizing new network connectivity services.

Andres Viera will present “Enabling Automation in Optical Networks” at the NFV & Zero Touch Congress show, April 25 @ 4:05pm. Stop by Fujitsu booth #13 to learn more.

Integrated Laboratory Testing – An Investment that Pays off for Rail Operators

The traditional approach that rail operators have taken to their communications networks is changing to support new IP video, voice and data applications, as well as improved mobile connectivity and stronger cybersecurity. The advent of the flexible converged network is bringing new challenges, one of which is to turn up the heat on pre-deployment testing. Factory Acceptance Testing (FAT) is no longer enough because, while it adequately covers issues relating to individual components, FAT falls short when it comes to identifying issues that arise when multiple system components come together in a fully integrated system.

The answer is to bring system components together and put them through their paces in a controlled laboratory environment before live deployment. For want of a better name, this approach is known as Integrated FAT (IFAT). But setting up a fully capable laboratory and hiring the necessary experts requires significant upfront investment. It's easy to imagine that the level of expenditure needed won't pay off, but in fact it's more than justifiable when the cost-saving benefits are taken into account over the longer term.

The simple reason is that integrated testing improves reliability and drastically reduces network downtime, and every minute of downtime is expensive. That’s all there is to it. Discovering and correcting issues before committing to live traffic is far less costly and disruptive than troubleshooting and correction under the pressures of daily operation. Many organizations have no grasp of the costly ripple effects that network downtime has on their business: lost revenue, lost information, damaged reputations and lost customers.

Leaving aside the rewards in terms of reduced downtime, a laboratory outfitted for IFAT brings with it other valuable benefits. Improved cybersecurity is just one of these. Networks are becoming more enmeshed with IT systems, making them more vulnerable to cyber-attack, to the degree that cybersecurity has become a critical issue. For instance, according to the Ponemon Institute's study, "2017 Cost of Cyber Crime," the average annualized cost of cybercrime for the transportation industry was $7.36M.

Change control is another area in which lab-based IFAT delivers benefits, in terms of improved reliability and network service quality. Changes equal risk because every change has the potential for unforeseen side effects. For example, imagine you bring up a new circuit between two communication centers and find that application traffic is unexpectedly following an asymmetrical path. Traffic goes out from Comms Center A to Comms Center B on the old circuit, but it comes back on the new one. This is a fairly common scenario—but now there’s a decision to make: Do you try to fix the issue, or back out your change and wait until next month to bring the new circuit into production? What does the change control procedure say? Is there a change control procedure? Will this asymmetrical routing situation even pose a problem?

This is a lot of information to quickly process for an operations tech who most likely does not have a full view of the big picture, and who is running on pizza, Cokes, day-old coffee, and minimal sleep. It is not rare to have an engineer make a small change to fix a routing issue only to cause a major failure. Having a lab facility to duplicate, isolate, make corrections, and develop methods of procedure not only eliminates this risk, but gives your engineers confidence that when they return to the field, everything will go as planned.

Additional valuable benefits of IFAT derive from making full use of the facility as a permanent fixture for ongoing upgrade testing (hardware/software), proofs of concept, staff training, and trouble simulations or disaster recovery drills.

While the transportation industry stands to benefit immensely from advanced networks that can support improved passenger comfort, better real-time communication and higher safety standards, the industry needs to go beyond testing components in isolation from one another and embrace deeper, more comprehensive integrated testing in laboratory environments. IFAT offers the best chance of achieving a successful and predictable outcome that avoids costly redesign and troubleshooting during outages.

What the NFL Tells Us About DCI

Data Center Interconnect has historically been driven by the pressure of simple demand: the kind of demand that’s satisfied by big, fast, dumb point-to-point pipes. But the value and potential of “big and fast” are held in check by “dumb.” It’s like football; bigger and faster will only take you so far in the National Football League (NFL). As game plans get more complicated, players are expected to think strategically about the other team’s offense or defense. Similarly, DCI is also getting more complicated as the pressure builds—and those big, fast pipes must ditch the dumb and get smart.

Data centers already have requirements in place for encryption, streaming telemetry and LLDP, all of which mean adding intelligence. Flex-grid; mixed modulation schemes; the growing mix of baud rates; and multiple FEC options (not to mention mesh connectivity in the planning arena) also demand more “brains” to match the brawn. The challenging task of selecting the optimal modulation, baud, grid and FEC is impossible unless the intelligence is there.
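
As a toy example of the kind of decision that intelligence has to automate, the Python sketch below picks the densest modulation format that still closes a link of a given length and meets a capacity demand. The reach and capacity figures are rough, generic numbers chosen for illustration, not the characteristics of any particular product.

# Toy illustration of "smart" DCI decision logic: choose a modulation
# format based on required reach and capacity. Numbers are generic examples.
MODULATION_OPTIONS = [
    # (name, capacity per wavelength in Gb/s, approximate max reach in km)
    ("64QAM", 600, 100),
    ("16QAM", 400, 500),
    ("8QAM", 300, 1500),
    ("QPSK", 200, 3000),
]


def pick_modulation(reach_km: float, demand_gbps: float) -> str:
    """Choose the highest-capacity format whose reach covers the span."""
    feasible = [(name, cap) for name, cap, max_km in MODULATION_OPTIONS
                if max_km >= reach_km and cap >= demand_gbps]
    if not feasible:
        raise ValueError("no single-wavelength option meets this demand")
    # Prefer the densest format that still closes the link.
    return max(feasible, key=lambda x: x[1])[0]


print(pick_modulation(reach_km=80, demand_gbps=400))    # e.g. 64QAM
print(pick_modulation(reach_km=1200, demand_gbps=200))  # e.g. 8QAM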

Variable and unpredictable traffic loads add another layer of complexity; business and the internet are inherently chaotic. The historical trend of “designing for the worst case,” (AKA “busy hour design”) is no longer economical. Data centers need capabilities to handle changing workloads gracefully and efficiently without overbuilding. These trends have significant positive implications for DCI; the agility and intelligence needed to meet dynamic workloads will improve the operational efficiency of the whole network. Put simply—bigger, faster, smarter pipes in DCI are just like NFL players who are also strategic thinkers. In both cases, add brains to brawn and the game is on.

Open Networks, Open for Business

For the ICT industry, this nascent era of business models based on cloud computing and OTT content is characterized by a heady brew of innovation, change and growth. Open networking offers service providers a route to much-needed rapid service deployment, agile innovation, and leaner spending. For these reasons, the industry is pushing for open-source standards and transport equipment vendors are capitalizing on this new thinking. Migration is underway from traditional proprietary converged platforms to more modular/single use-case form-factors and functionality.

What is an Open Optical Network?

You might ask, what are the key features of an open optical network? Essentially it boils down to networks operating on an industry-agreed common, multivendor foundation. This includes the ability to have open software and open line systems that comply with open standards for interoperability. In sum, this means a mix-and-match multivendor network environment where all the parts “speak” a common language of control and data exchange.

Open Hardware

Optical networking hardware, such as Reconfigurable Optical Add-Drop Multiplexers (ROADMs) and transponders, is evolving in terms of form factor, functionality, and functional disaggregation. Equipment is changing from the large, converged platforms of the past decade to smaller units engineered for single use-cases; simplified network design and operation; efficient space utilization; and lower power consumption. Other essential features of open hardware are plug-and-play or self-installing components; automated provisioning; and software features and interfaces that enable easy integration and meaningful data exchange with different management systems.

Open Software

A notable aspect of open networking is the decoupling of software from hardware development and the transition from proprietary, embedded software to open-source code. Open software should include a single provisioning model with both service activation and service assurance, in addition to a centralized service rollout model. Open software management systems must also be capable of managing third-party systems or tools, and compliant with new standards or initiatives. The network elements must also support open APIs, enabling open management.
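
To give a flavor of what an open API looks like in practice, here is a minimal sketch of a RESTCONF-style provisioning call made over HTTPS. The device address, credentials, YANG module name, and payload fields are placeholders invented for this example rather than any vendor's or standards body's actual model.

# Illustrative only: a RESTCONF-style call to provision a client port
# through an open API. Address, credentials, module, and fields are
# placeholders, not a specific vendor's interface.
import requests

DEVICE = "https://198.51.100.10"  # documentation-range address
PATH = "/restconf/data/example-interfaces:interfaces/interface=client-1"

payload = {
    "interface": [
        {"name": "client-1", "enabled": True, "rate": "100G"}
    ]
}

resp = requests.put(
    f"{DEVICE}{PATH}",
    json=payload,
    headers={"Content-Type": "application/yang-data+json"},
    auth=("admin", "admin"),
    verify=False,   # lab use only; verify certificates in production
    timeout=10,
)
resp.raise_for_status()
print("interface provisioned:", resp.status_code)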

Benefits

Perhaps the most obvious benefit from open networking is that service providers are no longer locked in to a specific vendor’s hardware or controller software. When service providers can freely combine equipment from multiple vendors, they have freedom of choice that can directly reduce costs, and when an entire network is managed via common open interfaces and protocols, networks get tested, validated and deployed faster. Moreover, if every part of the network, figuratively speaking, shares a common language, it is easier to eliminate overbuilds or stranded bandwidth. Thus, open networking not only gives providers greater freedom of choice and speed of execution, it helps them to make the fullest use of existing resources. Ultimately, in business terms, this can result in faster service roll-outs.

Another benefit of open networking is that it will ultimately provide a shared technological framework to support innovation. The standards being implemented in the communications network industry are common across the entire IT industry, meaning that service providers have an open invitation to an innovation ecosystem.

Challenges

The primary challenge is successfully navigating the transition from traditional telecom standards to newer open-source standards—not least because the standards themselves are still evolving. “Openness” is not a binary state and the industry must tackle hardware and software components possessing various degrees of openness and interoperability.

On the hardware side, we see everything from closed-and-proprietary paradigms all the way to plug-and-play installation, functional disaggregation, and ultimately, interoperability. Likewise on the software side, we see a similar spectrum, from closed-and-proprietary to open standards, open software platforms, open APIs and ultimately, open applications. Several non-proprietary initiatives are driving open networking forward, including OpenDaylight, ONOS/CORD, ONAP, OpenStack, and the Open ROADM MSA, to name a few.

Conclusions

Open networking signals the desire for equipment with narrower use cases and simpler feature sets that enable low-cost, simpler operations. Flexibility, scalability and simplicity are the keys to realizing the potential of the open network.

Open networking supports ecosystem-based innovation and multi-sourcing, which reduce costs and boost competition and supply reliability, while avoiding vendor lock-in and reducing burdensome complexity. Scalable, modular equipment reduces first cost and enables flexible, pay-as-you-go bandwidth growth, benefiting service providers by broadening their range of capital spending options and timelines. Open networking makes operations simpler and improves service creation and activation times, overall helping to "crack the tough nut" of reducing operational and ongoing costs.

WHAT IS A “SMART CITY?” PART 2

In Part 1 of this article, we talked about some of the characteristics of a smart city, including hyperconnectivity, people-centric technology, and increased efficiency of city-provided services. But although those things are critically important, they’re not the end of the smart cities story.

Economic development is an important driver for most cities considering an upgrade to "smart" status, with most looking to attract new businesses to their communities. But how? In 1942, economist and social scientist Joseph Schumpeter coined the term "innovation economics," arguing that innovation is a major factor in spurring economic growth and change: new products and technologies create "temporary monopolies," which in turn encourage the development of competing products and processes, creating beneficial economic conditions. He further believed that government's most important role was to create fertile ground in which these innovations could occur. In this sense, the smart, connected, and efficient city is the technological soil in which the seeds of economic growth will be planted, yielding profits and benefits that will in turn enrich both individuals and society at large. Therefore, the cities at the forefront of smart city transformation will reap the largest benefits from this explosive, and in many cases much-needed, growth.

For example, a unique and innovative display of economic development using smart technology is taking place right now in South Korea. A major grocery retailer wanted to expand its business, but without opening additional physical locations. The answer proved to be "virtual shelves" in the city's subway stations. Wall-length billboards display goods for sale, complete with images and prices, allowing customers to order by scanning QR codes, paying, and arranging delivery within a day. This optimizes commuters' time in the stations and expands the retailer's business without the expense of a building, rent, utilities, maintenance, staff, and all the other requirements of a physical location. The result is that this retailer has reached the number one position in the online market, and the number two position in terms of brick-and-mortar stores.

Besides these obvious advantages, an area in which smart cities can actually save lives, and one that is top of mind around the world right now, is helping communities deal with natural disasters before, during, and after the event. Sensors can continually monitor air and water quality, weather and seismic activity, and even increased radiation levels, providing critical early warnings of impending disasters and dispersing that information to residents via smartphone apps. Once an event occurs, smart data can be used to provide much-needed safety information. During Hurricane Harvey, for example, data collected via connected systems provided residents with real-time information about rising water levels from county flood gauges, as well as identifying passable evacuation routes and assistance, available shelters, food banks, and more. Drones can be, and are being, used to survey damage and to aid in recovery efforts, reducing the risk for human crews. And this is clearly just the tip of the iceberg as regards the ways in which "smart" technology will be able to aid in the human response to natural disasters.

Of course, these are only a few of the ways in which smart technology can benefit communities. Every city and county has its own needs, especially in the early planning stages of digital transformation. What's important to remember, however, is that smart cities aren't coming, they're already here, and the earliest adopters of this incredible technology will be the ones to reap the greatest benefits from it. Those that delay, or that reject the smart cities model altogether, will quickly find themselves woefully behind the curve, unable to compete with communities that showed more foresight in these early days. Customers and residents are constantly increasing their demands for bandwidth, the fuel that drives their desire for connectivity, and the communities that can provide these services seamlessly and easily will win the lion's share of business and revenue. It's never too early to start thinking about smart city transformation, so what are you waiting for?

WHAT IS A “SMART CITY?” PART 1

Unless you've been living in a bunker deep underground for the last ten years, you've no doubt heard talk about "smart cities." Everyone's talking about it, and a few truly forward-thinking cities around the world are making it happen. But what exactly is a "smart city," and what does it mean to you?

The short answer is that the smart city concept is the logical and foreseeable outcome of a world in which connectivity has become an integral part of our daily lives. In a smart city, things like utilities, transportation, education, housing, and more are all connected via sensors that provide data in order to improve the quality of life of the city’s residents. Civic leaders use this data to make better, “smarter” decisions for the way the city operates and interacts with its citizens. It’s a way to make infrastructure more efficient, to make government more transparent, and to make day-to-day interactions with technology smoother.

The best smart city improvements are based on a people-centric model, in which technology is merely a tool that improves the lives of those it touches by solving problems that might otherwise be insurmountable. Imagine a "smart" parking lot that can alert you to an available parking space via an app on your phone, reducing or eliminating your time driving around hopelessly looking for one. Or how about a smart communications system for emergency personnel, able to assess a situation holistically, summon the appropriate personnel, identify and notify the nearest hospital with the appropriate treatment facilities, and even turn traffic lights green as needed for the ambulance en route, thereby decreasing response time significantly.

These aren't simply concepts found in science-fiction novels, but initiatives actually put in place today in smart cities around the world. By making use of data collected from a variety of sources in an intelligently connected infrastructure, and parsing that data in useful ways, these smart applications can be used to improve the quality, performance and efficiency of everything from major water utilities to individual home appliances. Europe and Asia have been taking these steps forward for some time, but America is now catching up in cities like New York, Boston, San Francisco, and even Wichita.

From a municipal perspective, smart technology is being used to streamline city-provided services, and to oversee and regulate services provided by outside organizations in order to minimize frustration and dissatisfaction and to maximize economic growth and development. In Amsterdam, for example, the city has installed “smart” garbage bins, so that trash is collected only when the bin is full, thus making garbage collection more efficient and less costly.

There’s even more to know about smart cities, and we’ll cover that in “What is a ‘Smart City?’” Part 2.

C-RAN Mobile Architecture Migration: Fujitsu’s Smart xHaul is an Efficient Solution

To address increasing bandwidth demands, service providers are adapting their mobile network architectures, deploying C-RAN to improve performance and reduce costs.

Deployment of C-RAN architectures has driven increased use of optical mobile fronthaul solutions to deliver low-latency, high-bandwidth connectivity between remote radio heads (RRHs) and baseband unit electronics. Service providers recognize the need to reduce their mobile networking costs by better aligning the total electronics capacity of their networks with total network utilization at any given time. By pooling the electronics centrally, where multiple radios or remote radio heads can share access to them, providers can drive down capital costs and eliminate underutilized capacity. Centralized baseband units also enable easier handoffs and dynamic RF decisions based upon input from a combined set of radios.

As service providers deploy C-RAN architectures, they face significant challenges and decisions, particularly the selection of their mobile fronthaul solution. The CPRI protocol is extremely latency sensitive, which results in a latency link budget that limits the distance between RRHs and baseband units to less than 20 km. The mobile fronthaul transmission equipment must minimize its latency contribution, or this distance becomes even shorter. CPRI signaling is also highly inefficient, consuming as much as 16x the transmission bandwidth of the actual data rate seen by mobile applications.
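
For readers who want to see where a figure like 16x comes from, here is a rough back-of-the-envelope calculation, assuming a 20 MHz LTE carrier with 2x2 MIMO and 15-bit I/Q samples over CPRI; treat the result as an approximation rather than a measurement.

# Back-of-the-envelope check on the ~16x figure, assuming a 20 MHz LTE
# carrier with 2x2 MIMO (one widely used CPRI configuration).
sample_rate_msps = 30.72      # I/Q sample rate for a 20 MHz LTE carrier
bits_per_sample = 2 * 15      # 15-bit I + 15-bit Q
antennas = 2                  # 2x2 MIMO -> two antenna-carriers

payload_mbps = sample_rate_msps * bits_per_sample * antennas       # 1843.2
cpri_mbps = payload_mbps * (16 / 15) * (10 / 8)  # control words + 8b/10b coding
user_peak_mbps = 150          # approximate LTE peak user throughput

print(f"CPRI line rate : {cpri_mbps:.1f} Mb/s")               # ~2457.6 Mb/s
print(f"User data rate : {user_peak_mbps} Mb/s")
print(f"Expansion      : {cpri_mbps / user_peak_mbps:.1f}x")  # ~16x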

To determine which solutions best meet these requirements, ACG Research analyzed the total cost of ownership and compared the economics of P2P dedicated dark fiber to those of active DWDM solutions like Fujitsu's Smart xHaul. We analyzed the operational expense of the Smart xHaul solution over five years and compared it to competing mobile fronthaul alternatives. The analyses focused on the deployment of 150 macro cell sites, each supporting three frequency bands and three sectors. We also considered deployment of five small cells per macro cell site, for a total of 750 small cells.

The results demonstrate that although the capital expense of deploying a DWDM solution such as Smart xHaul is multiple times greater than the capex of P2P dark fiber, the reduction in fibers due to signal multiplexing, together with advanced service assurance capabilities, delivers 66% lower opex and 30% TCO savings. When looking at competing DWDM solutions, we also find that the advanced functions of the Smart xHaul solution deliver 60% lower opex associated with detecting field issues, identifying their root cause, and resolving them.

In addition, industry-leading features in the Smart xHaul solution provide the ability to distinguish between optical transport and radio service impairments, which are identified by inspecting the actual CPRI packet frames. When combined with the other performance monitoring and service assurance capabilities, CPRI frame inspection results in rapid issue identification, assignment and resolution.

Click to download the paper and read how, in contrast with a dedicated dark fiber solution, the Smart xHaul solution is flexible and supports multiple network architectures.

Click for the HotSeat video in which Tim Doiron, ACG Research analyst, and Joe Mocerino, Fujitsu principal solutions architect, discuss the Smart xHaul solution and C-RAN mobile architecture migration.

Co-Creation is the Secret Sauce for Broadband Project Planning

Let’s face it—meeting rooms are boring. Usually bland, typically disheveled, and littered with odd remnants of past battles, today’s conference room is often where positive energy goes to die.

So we decided to redesign one of ours and rename it the Co-Creation Room, complete with wall-to-wall, floor-to-ceiling whiteboards. Sure, it’s just a small room but I have noticed something: it is one of the busiest conference rooms we have. It’s packed. All the time. People come together willingly – agreeing upfront to enter a crucible of co-creation – where ideas are democratized and the conversation advances past the reductive (“ok, so what do we do?”) to the expansive (“hey, what are the possibilities?”).

This theme of co-creation takes center stage when we work with customers on their broadband network projects. These projects bring together an incredibly diverse mix of participants, aspirations, challenges, and constraints, which really brings home the necessity and power of co-creation.

Planning, funding, and designing wireline and wireless broadband networks is a matter of bringing together multiple stakeholders with varied perspectives and fields of expertise, and of negotiating complex rules of engagement, all while planning and executing a challenging multi-variable task. Success demands a blend of expertise, resources and political will, meaning the motivation to carry initiatives forward with enough momentum to survive changes of leadership and priorities.

Many times, prospective customers start by bolstering their in-house expertise with a project feasibility study. A good feasibility vendor should have knowledge of multi-vendor planning, engineering design, project and vendor management, supply chain logistics, attracting funds or investment, business modeling, and ongoing network maintenance and operations to ensure a thorough study. Look for someone with experience across many technologies and vendors, not just one.

As a Network Integrator, we bring all the pieces together. But we do more than just get the ingredients into the kitchen. Our job is to make a complete meal. By democratizing creation, we like to expand the conversation—and broker the kind of communication that gets diverse people working together productively.

The integration partner has to simultaneously understand both the customer's big picture and the nitty-gritty details. Our priority is to minimize project risk and drive things forward effectively. Many times, we have to do the Rosetta Stone trick and broker mutual understanding among groups with different professional cultures, viewpoints, and language. We take that new shared understanding and harness it to co-create the best possible project outcome.

On a recent municipal broadband project, for example, we learned that city staff and network engineers don't speak the same language. A network engineer isn't familiar with the ins and outs of water systems, and a city public works director doesn't know about provisioning network equipment. But by building a trusted partner relationship, we helped create the shared understanding needed. With this new shared understanding, we realized that we had redefined what Co-Creation means to us.

So, when you come to Fujitsu, you will see the Co-Creation Room along with this room-sized decal:

Co-Creation: Where everyone gets to hold the pen.

The Surprising Benefits of Uncoupling Aggregation from Transponding

Data Center Interconnect (DCI) traffic comprises various combinations of 10G and 100G services. In a typical application, DWDM is used to maximize the quantity of traffic that can be carried on a single fiber.

Virtually all available products for this function combine aggregation and transponding into a single platform; they aggregate multiple 10G services into a single 100G and then transpond that 100G onto a lambda for multiplexing alongside other lambdas onto a single fiber. Decoupling aggregation and transponding into two different platforms is a new approach. At Fujitsu, this approach consists of a 10GbE to 100G Layer 1 aggregation device (the 1FINITY T400) and a separate 100GbE to 200G transponder (the 1FINITY T100) that serve the two halves of the formerly combined aggregation-transponding function. This decoupled configuration is unique to these 1FINITY platforms, and it offers unique advantages.

Paradoxically, at first glance, this type of “two-box” solution may seem less desirable. But there are several advantages to decoupling aggregation from transponding—particularly in DCI applications. Here’s a quick rundown of the benefits. As you’ll see, they’re similar to the overall benefits of the new disaggregated, blade-centric approach to data center interconnect architecture.

Efficient use of rack space: Physical separation of aggregation and transponding splits a single larger unit into two smaller ones: a dedicated transponder and a dedicated aggregator. As a result, the overall capacity of existing racks is increased, and as an added benefit, it is easier to find space for individual units and use up scattered empty 1RU slots, which helps make the fullest possible use of costly physical facilities.

Reducing "stranded" bandwidth: Many suppliers are using QSFP+ transponders, which offer programmable 40G or 100G. Bandwidth can be wasted when aggregating 10G services because 40 is not a factor of 100, which necessitates deployment in multiples of 200G to make the numbers work out; this frequently results in "over-buying" significant unneeded capacity. The 1FINITY T400 aggregator deploys in chunks of 100G, which keeps stranded bandwidth to a minimum by reducing the over-buy factor (see the sketch after this rundown).

Simplified operations: Operational simplification occurs for two reasons. First, when upgrading the transponder, you simply change it out without affecting the aggregator. Second, with aggregation decoupled from the transponder, changes such as upgrading the transponder or adjusting the mix of 10G/100G clients involve disconnecting and reconnecting fewer fibers and require fewer re-provisioning commands. Line-side rate changes to the mix of 10G and 100G services involve roughly 60% of the operational activities of competing platforms. Client-side rate changes involve 25% fewer operational activities. Fewer activities mean fewer mistakes, less time per operation, and therefore less cost. Savings in this area mainly affect the expensive line side, which creates a larger cost reduction.
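
To illustrate the over-buy arithmetic behind the stranded-bandwidth point, here is a small sketch comparing how much capacity is deployed, and how much sits stranded, when line capacity must be added in 200G steps versus 100G steps for a given number of 10G clients. The demand counts are arbitrary examples.

# Toy calculation of "over-buy" when line capacity is added in fixed
# increments. The client counts below are illustrative only.
import math


def overbuy(demand_gbps: int, increment_gbps: int):
    """Return (deployed capacity, stranded capacity) for one increment size."""
    deployed = math.ceil(demand_gbps / increment_gbps) * increment_gbps
    return deployed, deployed - demand_gbps


for n_tens in (7, 13, 25):             # number of 10G client services
    demand = 10 * n_tens
    for step in (200, 100):            # 200G steps vs 100G steps
        deployed, stranded = overbuy(demand, step)
        print(f"{n_tens:>2} x 10G, {step}G steps: "
              f"deploy {deployed}G, strand {stranded}G")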

Overall, by separating the aggregator and transponder, Fujitsu can offer data centers significant savings through better use of resources as well as simplification of operations and provisioning. Find out more by visiting the Fujitsu 1FINITY platform Web page.