The Reality of Delivering the 5G Vision

With the start of 2019, the era of 5G is officially here… or is it? Are you ready? While a few early market leaders are already hyping 5G services, most service providers are still making plans. And as the build-out begins, the reality of deploying complex new architectures is introducing a variety of challenges.

Due to the increased speed and capability that 5G promises, service providers can expect mobile subscribers to consume more and more data, particularly rich multimedia content. Add to that the flood of device-to-device communications expected with the Internet of Things (IoT), as well as new use cases for the smart home enabled by fixed wireless access, and it’s easy to see that substantially greater capacity, scalability, reliability and performance will be needed — from the first mile all the way to the edge.

Intelligent RAN Plan

Next-generation 5G networks will require robust transport infrastructure, including a dense radio access network (RAN) architecture with distributed intelligence. This increasing densification means more advanced topologies in the access part of the transport network, as well as evolved fronthaul, midhaul and backhaul (i.e., X-Haul) interfaces.

As the 5G RAN becomes increasingly virtualized, service providers will be able to dynamically support a range of use cases with varying demands using SDN control and orchestration. Plus, a key benefit of this virtualization is the opportunity to disaggregate the optical transport network, simplifying the evolution to an integrated and modular 4G/5G network that is highly programmable.

However, X-Haul deployment plans will be highly dependent on the varying capacity needs and latency sensitivities of the specific use cases to be supported, requiring careful consideration of many different factors.

Vision to Reality

The potential for significant revenue from diverse 5G services is very real. And with a robust transport network capable of adaptively handling multiple open radio interfaces, network latencies and virtual infrastructures, your network will be able to support countless devices and applications, delivering the full 5G experience.

Yet, the complexities of next-generation architecture mean that service providers are essentially in uncharted waters as they transform this vision into reality, requiring them to fundamentally rethink network design and deployment. For this reason, Fujitsu is working closely with leading network service providers to help them plan, design and deploy 5G networks that will allow them to deliver new services they can monetize immediately, while preparing for more evolved use cases in the future.

To help other service providers learn from our real-world experience, we’ve published a paper entitled “Transporting 5G from Vision to Reality” that examines 5G transport challenges, the evolution of the RAN architecture, best practices for design and deployment, early business model opportunities and a vision for the future.  Click here to download this informative paper.

Assessing and Addressing Risk to Internet-Connected Critical Infrastructure

Advancing communications technology has brought real benefit to utilities of all kinds.  Connectivity allows utilities to gather data from remote industrial control systems, communications devices, and even passive equipment and other ‘things’ as part of the Internet of Things (IoT). This data creates valuable information for greater automation and efficiency, as well as improved customer service.

While this growing connectivity provides significant advantages, it also brings new challenges as networks become more interrelated and automated. From rural cooperatives to public and private power companies, utilities must be aware of the threats posed by cyberattacks in today’s hyper-connected era.

Is My Utility at Risk?

Hackers constantly attempt to gather sensitive information, such as which SCADA systems are exposed to the Internet, using tools such as Shodan. In fact, your SCADA systems and other critical infrastructure may already be at risk through inadvertent connections to the Internet. Even though attacks on SCADA systems are far less common than attacks on IT systems, hackers are always looking for easy targets. Consider the unprecedented attack on a Ukrainian power company by the hacker group BlackEnergy APT in 2015, the first confirmed cyberattack to take down a power grid.

The software we use to communicate with SCADA systems, IoT sensors and other connected devices makes our workday simpler and more efficient. However, unsecured services, such as management interfaces built into your computer operating system, may expose connected devices to attack through insecure legacy clear-text protocols such as Telnet, File Transfer Protocol (FTP) and remote copy protocol (RCP). Once hackers spoof these protocols inside your corporate network, they are one step closer to your SCADA network.

On the SCADA side, protocols such as the Common Industrial Protocol (CIP), used to unify data transfer, are vulnerable to man-in-the-middle, denial-of-service and authentication attacks. Although vendors release upgrades and patches from time to time to address these vulnerabilities, the very nature of critical infrastructure means that many utilities are reluctant to take systems offline to apply them.

While these legacy protocols have served us well for many years, they were not designed to withstand increasingly sophisticated cyberattacks. For example, legacy systems can be exposed to threats due to default passwords that don’t require updates, or unencrypted transmission of user names and passwords over the Internet. These systems may be unable to run the latest security tools if they are based on outdated standards.

Compounding the problem, many utilities are unaware of the risks to critical infrastructure, exposing employees and the community at large to the risk of intentional or accidental harm.

How Do I Mitigate My Risk?

You can, however, protect critical infrastructure from these vulnerabilities. First and foremost, isolate your network from less secure networks so that SCADA devices and other critical infrastructure are not exposed to the Internet.

Many guidelines and recommendations are available to mitigate security vulnerabilities. Some of the more important ones are:

  1. Establish a network protection strategy based on the defense-in-depth principle.
  2. Identify all SCADA networks and establish different security levels (zones) in the network architecture. Use security controls such as firewalls to separate them.
  3. Evaluate and strengthen existing controls and establish strong controls over backdoor access into the SCADA network.
  4. Replace default log-in credentials. If a SCADA device doesn’t allow you to change the default password, notify the vendor or look elsewhere for a device with better security. If you must install a device whose default credentials cannot be changed, ensure that defense-in-depth security controls are in place to protect it.
  5. Avoid exposing SCADA devices to the Internet, since every connection can be a possible attack path. Run security scans to discover Internet-exposed SCADA devices and investigate if/why those connections are needed. If a field engineer or the device manufacturer needs remote login access, implement a secure connection with a strong two-factor authentication mechanism.
  6. Conduct regular security assessments and penetration testing, and address common findings such as missing security patches, insecure legacy protocols, insecure connections, SCADA traffic in corporate networks, default accounts, failed login attempts, and the lack of an ongoing risk management process.
  7. Work with device vendors to routinely resolve device security issues, such as applying firmware updates and security patches. Ensure you are on their email lists so you are notified when security patches become available.
  8. Establish system backups and disaster recovery plans.
  9. Perform real-time security monitoring of IoT and SCADA devices on a 24/7 basis, along with the implementation of an intrusion detection system to identify unexpected security events, changed behaviors and network anomalies.
  10. Finally, if you don’t have security policies for both your corporate and SCADA network currently, take the lead, be a champion and work with your management to develop an effective cybersecurity program.
  11. Stay informed about security in the utility industry. Events such as DistribuTECH, where Fujitsu will be exhibiting, offer plenty of opportunities to learn more about this critical topic.
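As a starting point for the security scans in item 5, a short script can check whether hosts answer on ports used by insecure legacy protocols. This is a minimal sketch using only the Python standard library; the port list, timeout and example usage are illustrative, and a real assessment would use a dedicated scanner run with the network owner's authorization.

```python
import socket

# Ports used by the insecure legacy protocols called out above; the list
# and timeout are illustrative choices, not a complete audit.
LEGACY_PORTS = {21: "ftp", 23: "telnet", 514: "rcp/rsh"}

def scan_host(host, ports=LEGACY_PORTS, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    exposed = {}
    for port, name in ports.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                exposed[port] = name
    return exposed
```

Run against your SCADA address ranges from outside the protected zone; any hit warrants the kind of investigation described in item 5.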

If you operate a generation and transmission cooperative, be advised that you are obligated to comply with North American Electric Reliability Corporation (NERC) rules, and failure to do so can result in huge penalties. Identifying your compliance obligations is a critical task, especially since NERC rules are created to secure your network.

For some utilities, particularly small rural electric cooperatives, the idea of a serious security threat to their essential infrastructure may sound far-fetched, like the plot of an action movie. However, it’s important to note that the biggest security risk is not necessarily a targeted attempt to physically destroy your equipment. A random malware infection is far more likely than a cyberterrorist attack, but it can devastate your critical infrastructure systems all the same, potentially causing significant damage and harming the public.

5G Transport: The Impact of Millimeter Wave and Sub-6 Radios

Part two in a blog series about how Fujitsu is bringing the 5G vision to life

As communications service providers (CSPs) prepare to deploy 5G, a number of factors will need to be considered as they plan their radio access network (RAN) architecture. An important aspect of this planning is an understanding of the 5G radio interface (NR) specifications and spectrum options.

In terms of transport, both millimeter wave (mmWave) and sub-6 GHz radio architectures include fronthaul, midhaul and backhaul segments. However, the differences in coverage between these two radio types will define the network topology.

The high frequencies of mmWave radios result in reduced coverage of a given area, requiring denser deployment beyond traditional cell towers. mmWave radios will be deployed in a small-cell style of configuration, since a large number are required to cover a given area. In urban areas, the dense deployment of mmWave radios will most likely be on street lamps and on the sides or tops of buildings. Sub-6 radios, by contrast, enable coverage configurations similar to 4G LTE radios. Therefore, sub-6 radio topology could be similar to a C-RAN LTE fronthaul, in which dark fiber is used where available, and some form of multiplexing, such as WDM or packet multiplexing, is used where fiber is lacking.

Initially, the mmWave radios will be best-suited for high throughput applications such as fixed wireless access (FWA), while sub-6 radios will be best used for mobility.  In the long term, both radio types will be used for both use cases.

Since sub-6 radio coverage dynamics are similar to LTE, many CSPs will consider deploying sub-6 much like 4G LTE in a C-RAN to realize DU pooling efficiencies and offer higher performance using cell site aggregation.

An alternative to a centralized pool of DUs, for either mmWave or sub-6 radios, is an integrated DU and RU, which eliminates the fronthaul transport and the discrete fiber connections between the two. This alternative expedites service delivery while reducing capital and operational expense, but it also eliminates pooling and cell site aggregation capabilities. For cell sites with integrated DUs, this section of the RAN transport is midhaul, or what the IEEE refers to as fronthaul-II.

Based on the various deployment options for mmWave and sub-6 radios, either WDM-based transport or newer packet-based transport using Time-Sensitive Networking (TSN) will be used to carry 5G eCPRI/xRAN channels, as well as legacy 4G CPRI channels, from the cell site to a central aggregation point when abundant dedicated dark fiber is not available.

This blog is the second in a series about our vision for 5G transport. See part one here.

Networks and Vehicles Follow Similar Journey to Automation

Autonomous vehicles (let’s call them AVs) and Autonomous Networks (ANs) are road-mates; they’ve essentially traveled the same route in the quest for full automation. They share the overarching Holy Grail objective of zero-touch operation, undisturbed by human hand as they go about the full range of their respective operations.

The Society of Automotive Engineers (SAE) has defined a six-degree taxonomy that classifies the level and type of automation capabilities in a given vehicle. This is summarized on Wikipedia’s Self-Driving Car page and illustrated in Figure 1.

Figure 1: SAE levels of vehicle automation

Both AVs and ANs have already arrived at their third level of automation, partial automation, where most of what they do is automated but human supervision, monitoring and even intervention are still needed. And just as AVs have relied upon an evolving set of building blocks over decades, ANs have also employed and built upon a number of tools along the way. Figure 2 illustrates this cumulative evolution.

Figure 2: Building blocks of network evolution

There are many examples of these building blocks in the network world. For instance, we have the availability and growing adoption of zero-touch provisioning (ZTP); YANG model-based open interfaces (NETCONF, REST APIs, gNMI/gNOI); gRPC-based deep-streaming telemetry; extensive, detailed logging and monitoring; and streaming for rapid fault isolation and prediction.
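As a toy illustration of how streamed telemetry feeds fault isolation and prediction, the sketch below applies a simple threshold-plus-trend check to a stream of gauge samples (say, amplifier output power or interface utilization). The class name, thresholds and window size are invented for illustration; a production system would subscribe to gNMI/gRPC telemetry and apply far richer analytics.

```python
from collections import deque

class TelemetryMonitor:
    """Flag anomalies in a stream of periodic gauge samples."""

    def __init__(self, threshold, window=5):
        self.threshold = threshold          # hard limit -> immediate alarm
        self.window = deque(maxlen=window)  # recent samples for trend check

    def observe(self, value):
        self.window.append(value)
        if value > self.threshold:
            return "alarm"
        # A strictly rising trend across the full window hints at a
        # developing fault before the hard threshold is crossed.
        samples = list(self.window)
        if len(samples) == self.window.maxlen and \
                all(a < b for a, b in zip(samples, samples[1:])):
            return "warning"
        return "ok"
```

Feeding each telemetry sample through `observe` turns raw streaming data into the kind of early-warning events an autonomous network needs for fault prediction.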

Perhaps the most critical characteristic that AVs and ANs share is that in order for their potential to be fulfilled, diverse stakeholders need to come together and coordinate. In the AV world, massive efforts are underway at every level (governments, cities and towns, car companies, insurance companies, and technology vendors) to standardize and streamline end-to-end operations based on key principles of interoperation, openness and reliability.

For ANs, there is a similar and pressing need within the networking community for collaborative, coordinated development of an open, generic framework for a fully autonomous optical network, which could be used to establish reference use cases extensible to various network architectures. This framework should be driven by the primary requirement of zero human intervention in network operations after initial deployment, including configuration, monitoring, fault isolation and fault resolution. The framework should leverage currently available tools and technologies for full-featured, automation-ready software, such as Fujitsu System Software version 2 (FSS2) for network element management, in conjunction with Fujitsu Virtuora®, an open network control solution for network element and network management.

Efforts to achieve autonomous networks and autonomous vehicles show strong similarities in both pace and trends. These similarities are driven by common objectives, primarily addressing scale and a growing number of applications while reducing human error, and are enabled by an intertwined, cross-dependent set of technology advancements and adaptations.

Four Key Enablers of Automated, Multi-Domain Optical Service Delivery

New advancements in software-defined control and network automation are enabling optical service delivery transformation. Stitching together network connectivity across vendor-specific domains is labor-intensive; now those manual processes can be automated with emerging solutions like multi-vendor optical domain control and end-to-end service orchestration. These new solutions provide centralized service control and management capable of reducing operational costs and errors, as well as speeding up service delivery. While this sounds good, it can be all too easy to gloss over the complexities of decades-old optical connectivity services. In this blog post, I will explore the four enabling technologies for multi-domain optical service delivery as I see them.

The first enabler, optical service orchestration (OSO), is detailed here. In the not-so-distant past, most carriers deployed their wireline systems using a single vendor’s equipment in metro, core and regional network segments. In some cases, optical overlay domains were deployed to mitigate supply variables and ensure competitive costs. While this maximized network performance, it also created siloed networks with proprietary management systems. The OSO solution that I imagine effectively becomes a controller of controllers, abstracting the complexities of the optical domain and providing the ability to connect and monitor the inputs/outputs to deliver services. As such, an OSO solution controls any vendor’s optical domain as a device, with the domain controller routing and managing the service lifecycle between vendor-specific endpoints.

The second enabler is an open line system (OLS) consisting of multi-vendor ROADMs and amplifiers deployed in a best-fit mesh configuration. A network configured this way must be tested for alien wavelength support, which means defining the domain characteristics and performing mixed third-party optics performance testing. This testing requires considerable effort, and service operators often expect complete testing before deployment. The question is, who takes on the burden of testing in a multi-vendor network? Testing is a massive undertaking, and operators rarely have the budget or expertise; perhaps interoperability labs, such as those MEF uses for Carrier Ethernet services, could help define it. Bottom line: there is no free lunch.

The third enabler is real-time network design for the deployed network. Service operators deploy optical systems with 95%+ network coverage and have historically been limited to vendor-specific designs; currently, the design process requires offline tools and calculations by PhDs. A design tool that employs artificial intelligence algorithms promises to make real-time network design a reality. Longitudinal network knowledge, combined with network control and path computation, can examine the performance of optical line systems and work with the controller to optimize system design, accounting for variations in optical components, the types and quantity of fiber optical signals, component compatibility, fiber media properties and system aging.

The final enablers are open controller APIs and network device models that support faster, more flexible allocation of network resources to meet service demands. Open device models (IETF, OpenConfig, etc.) deliver common control of rich device functionality that supports network abstraction. This helps service operators deliver operational efficiencies, onboard new equipment faster, and build an extensible framework for revenue-producing services in new areas, such as 5G and IoT applications.

Controller APIs enable standardized service lifecycle management in a multi-domain environment. Transport Application Programming Interface (T-API), a specification developed by the Open Networking Foundation (ONF), is an example of an open API specific to optical connectivity services. T-API provides a standard northbound interface for SDN control of transport gear, and supports real-time network planning, design, and responsive automation. This improves the availability and agility of high-level technology independent services, in addition to specific technology and policy-specific services. T-API can seamlessly connect the T-API client, like a carrier’s orchestration platform or a customer’s application, to the transport network domain controller. Some of the unique benefits of T-API include:

  • Unified domain control using a technology-agnostic framework based on abstracted information models. Unified control allows the carrier to deploy SDN broadly across equipment from different vendors, with different vintages, integrating both greenfield and brownfield environments.
  • Telecom management models that remain familiar to telecom equipment vendors and network operations staff, making adoption easier and reducing disruption of network operations.
  • Faster feature validation and incorporation into vendor and carrier software and equipment using a combination of standard specification development and open source software development.
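To make the northbound interface a little more concrete, the sketch below assembles a connectivity-service request in the general shape of the T-API model. The field names loosely follow the ONF specification, but exact schemas and RESTCONF paths vary by controller release, so treat this as an illustrative sketch rather than a working client.

```python
import uuid

def build_connectivity_request(src_sip, dst_sip, capacity_gbps):
    """Assemble a T-API-style connectivity-service request body.

    The two endpoints reference service interface points (SIPs) that the
    domain controller has previously exposed in its topology context."""
    return {
        "tapi-connectivity:connectivity-service": [{
            "uuid": str(uuid.uuid4()),
            "end-point": [
                {"service-interface-point": {"service-interface-point-uuid": src_sip}},
                {"service-interface-point": {"service-interface-point-uuid": dst_sip}},
            ],
            "requested-capacity": {
                "total-size": {"value": capacity_gbps, "unit": "GBPS"}
            },
        }]
    }
```

A T-API client such as a carrier’s orchestration platform would POST a body like this to the transport domain controller, which then routes and manages the service between the vendor-specific endpoints.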

Service operators are looking for transformation solutions with a visible path to implementation, and many solutions fall far short and are not economically viable. Fujitsu is actively co-creating with service operators and other vendors to integrate these four enabling technologies into mainstream, production deployments. Delivering ubiquitous, fully automated optical service connectivity management in a multi-vendor domain environment is finally within reach.

Open and Automated: The New Optical Network

Communication service providers (CSPs) are increasingly transforming their networks with an eye toward more openness and automation. There has been a continued push to disaggregate optical networking platforms in order to drive down total cost of ownership and give network operators the flexibility to upgrade their networks while keeping up with the accelerated pace of innovation across the layers of the network stack. The promise of vendor interoperability and automated control through open standards, APIs and reference platforms is the key driver enabling CSPs to make the shift to open.

There are varying degrees of openness that one can choose to adopt in this transition – from the proprietary systems of today to a fully disaggregated open optical network. The sweet spot in which the industry seems to be converging is to be partially disaggregated, as in the open line system (OLS) model. OLS provides a good trade-off between interoperability and performance; however, we still have a long way to go to make these systems future-proof and deployable. Multiple industry organizations such as the Open ROADM MSA, OpenConfig, Telecom Infra Project (TIP) and Open Disaggregated Transport Network (ODTN) are working towards bringing this vision of open networking to reality. Though there are multiple initiatives addressing disaggregation in optical transport, we believe there is a strong need for harmonization among them so that the industry can truly benefit from standardization of common models and APIs.

As optical equipment vendors aggressively evolve their offerings to enable this open optical transformation, care must be taken to address the key business and technical requirements unique to each network operator, which depend on the state of its current network infrastructure. No single solution can be applied across the board, which brings both challenges and opportunities to vendors who have embraced open and disaggregated architectures. The migration to open networking requires the operator to reevaluate the way networks are architected, deployed and operated. Enabling this shift presents multiple challenges, such as network planning and design and multi-vendor control, when it comes to implementing and operationalizing the various building blocks. Effectively addressing them will be key to this transformation.

Fujitsu believes a collaborative process with CSPs that involves a thorough assessment of the network architecture and OSS/IT workflows, along with establishing a phased deployment plan for implementation of hardware and software solutions, will be instrumental in navigating this transition seamlessly. The enclosed white paper provides an overview of the open optical ecosystem today, identifies and describes some of the key challenges to be addressed in implementing open automated networks, and outlines some migration strategies available to network operators embracing open networking.

Time, Technology and Terabit Transport

If you measure time against technological progress, six years is a long time in optical networking. In 2012, we were congratulating ourselves for getting to 100G transport. Now we’ve officially reached 600G, as Fujitsu recently demonstrated on our new 1FINITY T600 blade, the latest in the 1FINITY transport series. Optical transport products are now available that can modulate photons to create signals with 600,000,000,000 bits of information packed into every second, and send those signals at close to the speed of light, traversing the globe almost instantaneously. To put this colossal capability into perspective, a single T600 could transmit a 2 TB digital library virtually anywhere on the globe in about 27 seconds.
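The arithmetic behind a claim like this is straightforward: transfer time is simply payload size over line rate, ignoring protocol overhead and propagation delay.

```python
# Transfer time for a 2 TB library over a single 600 Gb/s wavelength,
# ignoring protocol overhead and propagation delay.
library_bytes = 2e12      # 2 TB (decimal terabytes)
line_rate_bps = 600e9     # 600 Gb/s

transfer_seconds = library_bytes * 8 / line_rate_bps
print(round(transfer_seconds, 1))  # ~26.7 seconds
```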

It’s easy to disregard or minimize yet another technology advancement. But the implications of 600G and beyond are more significant and positive than simply an increased volume of Internet junk. For example, healthcare could become extremely collaborative across continents by combining real-time data collection and analytics with near-real-time transfer of massive data sets. Universally available high-speed connections to a smartphone support the kind of data gathering and analysis needed to understand our world better and develop remedies for the many serious problems we face.

Access to information is the chief means of empowerment in both personal and business life. Consequently, it is important to deploy this 600G technology rather than, for example, hold to the false economy of continued deployments at slower rates. Being able to transmit entire libraries in seconds is an awesome power that opens up rich possibilities. The network occupies a critical role as the foundation of the connected digital economy that, one way or another, is making stakeholders out of every one of us. So, one might say our industry has an economic and moral imperative to drive the highest possible speeds and capacities as deep into communities as possible. High-speed connectivity fosters opportunity, learning and commerce. In the final analysis, more really IS better when it comes to the network.

Automation and Operations in the Modern Network: Bridging the Gap Between Legacy and Digital Infrastructure

In terms of network automation, we’re beyond removing manual tasks, speeding up production, and improving service quality. In the face of complex mesh network architectures, automated network slicing with guaranteed SLAs, as well as real-time spectrum and resource management, automation has become a foundational capability that is required for sheer survivability in basic network operations. The need for high-scale, intelligent control and management of end points, elastic capacity, and dynamic workloads and applications will only grow.

As the network virtualizes to accommodate connection-aware applications, the need for disaggregated, open and programmable hardware and software also grows. To deliver on-demand services to bandwidth-hungry mobile consumers, the modern network must find ways to combine legacy gear that is vendor-proprietary and domain-specific with virtual network elements and functions over merged wireline and wireless infrastructure. That requires software platforms and applications that connect and control physical and virtual network elements, automate network planning, provisioning and management, provide real-time network topologies, and increase the efficiency of traffic management and path computation up and down the network stack. It also paves the way for communication service providers to implement service-oriented architectures that put business needs ahead of arcane methods of network management that are required but do not necessarily drive incremental revenue.

This type of agile network requires an agile deployment model predicated on open, disaggregated and programmable physical and virtual infrastructure, as well as SDN-enabled applications that use open models. Disaggregated NEs can deliver new capabilities without disrupting the existing production network, and SDN-enabled applications tie it all together seamlessly.

This approach has the advantage of increasing revenue velocity and speeding up the adoption of digital networking while maintaining the investment in the existing physical infrastructure.

Operationalizing the Open and Programmable Network

As closed and proprietary network segments give way to open network architectures that include Open Line Systems, Open ROADM, and the open APIs that connect them, operational gaps will emerge that require detailed integration and design considerations from a software perspective. This requires an understanding of disaggregation, service-oriented architectures, open APIs, and the ability to break all of that down into discrete datasets that can be mined by artificial intelligence so that CSPs know what levers to pull to improve the customer experience or deliver new types of services.

Microservices and container-based applications have the ability to fill those gaps without costly capital initiatives. Just as an SDN platform abstracts multi-vendor network elements from service provisioning applications to facilitate intent-based networking, container technologies abstract applications from the server environment where they actually run. Containerization provides a clean separation of “duties”; developers can focus on application logic and dependencies, while network operations can focus on management. Container-based microservices and microapplications can be deployed easily and consistently, regardless of the target environment.

This construct provides the ideal set up for operations teams that identify “holes” in the production environment. In the past, product managers and operations teams would have been forced to wait for lengthy development cycles to take advantage of new feature functionality. Now, with microservices and microapplications, new functionality can be developed quickly, more efficiently, and generate additional revenue inside the customer window of opportunity.

Microapplications are inherently cloud-native, and can be used to integrate newer technologies into monolithic systems without waiting for maintenance windows that may or may not include the capability. Examples of microservices include:

  • Customer IP circuit connections
  • A virtual gateway network element
  • Multi-vendor network element backup
  • IP forwarding loop detection
  • Bandwidth optimization
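To make one of these concrete, here is a minimal sketch of the core check an IP forwarding loop detection microservice might run. The next-hop table format is invented for illustration; a real service would build it from routes streamed out of the live network.

```python
def find_forwarding_loop(next_hop, start):
    """Follow next-hop entries from `start` for a given prefix.

    `next_hop` maps a router to the router it forwards to (a simplified,
    illustrative view of a forwarding table). Returns the looping segment
    as a list of routers, or None if the path terminates normally."""
    visited = []
    node = start
    while node in next_hop:
        if node in visited:
            # Revisited a router: everything since its first visit loops.
            return visited[visited.index(node):]
        visited.append(node)
        node = next_hop[node]
    return None  # path exits at a router with no entry for this prefix
```

Packaged as a container, a check like this can run against forwarding state on demand without touching the OSS release cycle.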

These microservices can augment existing SDN-enabled applications and infrastructure to provide precision solutions that impact revenue generating OSS/BSS applications. They also have the ability to accelerate lab testing and certification cycles so that new applications can be deployed faster and more effectively.

In addition to speed and efficiency, microservices and applications can also make the network more resilient and flexible. Since they can be deployed without impacting other services, developers can enhance the performance of one service without impacting other services.

All of this requires vendor adherence to, and cooperation with, open models that are streamlined for coordinated control and management across all network domains, including end-to-end connectivity services (MPLS, Carrier Ethernet, IP VPN, etc.). In the modern network, every touch point is engineered to do its job faster and more efficiently, whether it is legacy or digital network gear. Microservices and microapplications are part of that solution, providing new capabilities that are free from traditional operational constraints and bridging the gap between legacy and digital infrastructure with precision solutions that drive revenue now, rather than later.

For more information about Fujitsu’s Microapplications practice, please visit http://www.fujitsu.com/us/products/network/products/microapplications-practice/

Digitizing the Customer Experience

Digitization of the network is reshaping the telecom landscape as customer data consumption habits change in response to new, disruptive technologies. We’ve gone from a LAN connection on a desktop at home to a cellular device in your pocket, and customers now expect to access content whenever and wherever they are. Service providers that can’t adjust are in trouble: they must keep the network healthy while adopting new technologies suited to today’s demands.

Today’s Network Operations Center (NOC) monitors the entire network, actively working to keep everything healthy. However, it is fundamentally reactive, with thousands of alarms piling up each day for operators to sift through. Operations are still handled manually, which makes onboarding new technologies difficult. Digitizing the NOC to meet customers’ demands requires automation that turns its reactive nature into a proactive one.

To ensure the health of a network, service providers need a service assurance solution capable of providing fault and performance management, as well as closed-loop automation. Fault and performance management uses monitoring, root-cause analysis, and visualization to proactively identify potential problems in the network and notify operators before a customer experiences them. To provide closed-loop automation, a service assurance platform continuously collects, analyzes, and acts on data gathered from the network. When combined with machine learning, it becomes an essential part of the NOC. Altogether, a service assurance platform can cut the number of alarms by 50%, a significant reduction considering that a provider may collect close to a million alarms each month.
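One way root-cause analysis achieves that kind of alarm reduction is by suppressing symptom alarms that are explained by an upstream fault. The sketch below is a simplified illustration of that idea, not any vendor's actual algorithm; the element names and the parent-pointer topology model are assumptions made for the example.

```python
def suppress_symptoms(alarms, topology):
    """Collapse an alarm burst to its probable root causes.

    alarms: time-ordered list of (timestamp, element) tuples.
    topology: dict mapping element -> upstream parent (None = top).
    An alarm is treated as a symptom (and suppressed) if its
    upstream parent raised an alarm at the same time or earlier.
    """
    alarmed_at = {}
    roots = []
    for ts, element in alarms:
        alarmed_at.setdefault(element, ts)
        parent = topology.get(element)
        if parent in alarmed_at and alarmed_at[parent] <= ts:
            continue  # symptom of an upstream fault; drop it
        roots.append((ts, element))
    return roots

# A fault on "olt-1" triggers alarms on every element behind it,
# but the operator only needs to see one.
topology = {"olt-1": None, "ont-7": "olt-1", "ont-9": "olt-1"}
burst = [(100, "olt-1"), (101, "ont-7"), (101, "ont-9")]
print(suppress_symptoms(burst, topology))  # [(100, 'olt-1')]
```

Here three raw alarms collapse to one actionable root cause; applied across a month's worth of bursts, correlation of this kind is what turns a million raw alarms into a workable queue.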

A targeted network management solution provides an accessible path for network migration. While legacy equipment continues to work, it may not be the best fit for digitization. Integrating a targeted network management solution into your NOC helps bridge the gap between new technologies and vendors and legacy equipment. It supports a multivendor environment, allowing the NOC to manage both new and legacy equipment from different vendors in the same ecosystem. Targeted network management also enables service providers to bring new services to market twenty times faster, thanks to significant improvements in onboarding new technologies and vendors into the network.

An automated NOC that combines service assurance and targeted network management is well suited to the changing digital landscape. Service assurance keeps the network up and running by identifying critical issues, so that no matter where or how users access the network, they get a seamless experience. Targeted network management quickly onboards the new technologies and vendors that drive digitalization. With both combined in a 24x7x365 NOC, service providers are prepared for whenever, wherever, and however a customer chooses to interact with the network.

For customers and businesses alike, the advantages of an automated NOC are substantial. Customers don’t have to worry about accessing data from any device, anywhere, at any time of day. For businesses, the proactive nature of service assurance and the straightforward network migration of targeted network management help reduce operating expenses and mean time to repair. Digitization isn’t slowing down for anyone, and an automated NOC gives service providers a way to hop on the train.

5G Transport: From Vision to Reality

Part one in a blog series about how Fujitsu is bringing the 5G vision to life

On the road to 5G, there are a number of different paths that communications service providers (CSPs) can choose. This blog is the first in a series about our vision for the 5G RAN, and how Fujitsu is working with leading CSPs to co-create these networks and bring 5G to life.

Transport is vital for building a robust and reliable network. The xHaul ecosystem consists of the backhaul, midhaul and fronthaul transport segments, which use dedicated dark fiber, WDM and packet technologies. As CSPs evolve their networks from 4G/LTE to 5G, there are several options for how those transport networks can be designed.

In a “Split Architecture,” the distribution unit (DU) connects to many macro site radio units (RUs) over multiple fronthaul fiber paths. This architecture is similar to the 4G centralized RAN (C-RAN), where a central point (the DU, in this case) fans out to multiple macro sites for interconnect with the 5G radios, also known as RUs or Transmission Reception Points (TRPs). This efficient technique is referred to as RAN pooling and, along with cell site aggregation, lets mobile network operators engineer RAN capacity based on clusters of sites coming into the central DUs, rather than on individual cell site demands.
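The capacity benefit of pooling comes from statistical multiplexing: when the busy hours of different sites don't coincide, the cluster can be engineered for the peak of the summed demand rather than the sum of the individual peaks. The numbers and site names below are hypothetical, chosen only to illustrate the arithmetic:

```python
def pooled_vs_per_site(site_demands):
    """Compare per-site vs pooled capacity engineering.

    site_demands: {site: [demand per time interval]}, equal-length lists.
    Per-site engineering provisions each site for its own peak;
    pooling provisions one DU cluster for the peak of the summed
    demand, which is lower whenever site peaks don't coincide.
    """
    per_site = sum(max(d) for d in site_demands.values())
    intervals = zip(*site_demands.values())
    pooled = max(sum(slot) for slot in intervals)
    return per_site, pooled

# Hypothetical Gbps demand over four intervals at three macro sites
# whose busy hours do not overlap.
demands = {
    "site-a": [8, 2, 2, 3],
    "site-b": [2, 8, 2, 3],
    "site-c": [2, 2, 8, 3],
}
print(pooled_vs_per_site(demands))  # (24, 12)
```

In this toy cluster, per-site engineering would require 24 Gbps of capacity while the pooled DU needs only 12 Gbps, which is the kind of saving RAN pooling and cell site aggregation aim for.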

The “Distributed DU” architecture collocates DUs with RUs at the cell site. This use case serves latency-sensitive applications by eliminating the fronthaul transport path: the fronthaul becomes a local fiber connection between the top and bottom of the tower. This low-latency configuration also reduces costs by removing the fronthaul transport section. The tradeoff is the loss of multi-site pooling and cell site aggregation with macro cell sites; moreover, the midhaul capacity is reduced to 10GE rates.

Finally, there is the “Integrated DU” architecture, which integrates the DU into the RU at the cell site. This architecture offers benefits similar to the Distributed DU use case, with the additional advantage of lower CapEx and OpEx from combining the two devices. The combined DU and RU reduces the number of devices to install, manage and maintain, resulting in expedited service turn-up and faster time to revenue.

To learn more, register for an archived webinar “New Transport Network Architectures for 5G RAN” with Fujitsu and Heavy Reading analyst Gabriel Brown: www.lightreading.com/webinar.asp?webinar_id=1227