Assessing and Addressing Risk to Internet-Connected Critical Infrastructure

Advancing communications technology has brought real benefits to utilities of all kinds. Connectivity allows utilities to gather data from remote industrial control systems, communications devices, and even passive equipment and other ‘things’ as part of the Internet of Things (IoT). This data yields valuable information that enables greater automation and efficiency, as well as improved customer service.

While this growing connectivity provides significant advantages, it also brings new challenges as networks become more interrelated and automated. From rural cooperatives to public and private power companies, utilities must be aware of the threats posed by cyberattacks in today’s hyper-connected era.

Is My Utility at Risk?

Hackers are constantly gathering sensitive information, such as which SCADA systems are exposed to the Internet, using search tools such as Shodan. In fact, your SCADA systems and other critical infrastructure may already be at risk through inadvertent connections to the Internet. Even though attacks on SCADA systems are far less common than attacks on IT systems, hackers are always looking for easy targets. For example, note the unprecedented attack on a Ukrainian power company by the hacker group BlackEnergy APT in 2015. This was the first confirmed attack to take down a power grid.
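
To see what an attacker sees, you can query Shodan yourself. The sketch below is a minimal example using the official shodan Python library; the API key, the Modbus query, and the organization filter are placeholder assumptions you would replace with your own, and you should only assess assets you are authorized to examine.

```python
# Minimal sketch: query Shodan for Internet-exposed industrial protocol ports.
# Assumes the official "shodan" package (pip install shodan); the API key,
# query, and organization below are illustrative placeholders.
import shodan

api = shodan.Shodan("YOUR_SHODAN_API_KEY")  # placeholder key

# Port 502 is the standard Modbus/TCP port; the org filter narrows results
# to a hypothetical utility's own address space.
query = 'port:502 org:"Example Utility"'

try:
    results = api.search(query)
    print(f"Potentially exposed hosts: {results['total']}")
    for match in results["matches"]:
        print(match["ip_str"], match.get("org", "unknown org"))
except shodan.APIError as exc:
    print(f"Shodan query failed: {exc}")
```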

The software we use to communicate with SCADA systems, IoT sensors and other connected devices makes our work day simpler and more efficient. However, unsecured services, such as management interfaces built into your computer operating system, may be exposing connected devices through insecure legacy clear-text protocols such as telnet, file transfer protocol (FTP) and remote copy protocol (RCP). Once hackers intercept or spoof these protocols on your corporate network, they are one step closer to your SCADA network.
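
A quick way to audit for these clear-text services is to probe your own hosts for the well-known legacy ports. The following standard-library sketch is a minimal illustration, with hypothetical addresses from the TEST-NET range; run anything like it only against systems you are authorized to test.

```python
# Minimal sketch: find hosts answering on clear-text legacy service ports.
# Standard library only; the host addresses below are placeholders.
import socket

LEGACY_PORTS = {
    21: "FTP",
    23: "telnet",
    512: "rexec",
    513: "rlogin",
    514: "rsh/rcp",
}

def probe(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host in ["192.0.2.10", "192.0.2.11"]:  # example (TEST-NET) addresses
    for port, name in LEGACY_PORTS.items():
        if probe(host, port):
            print(f"{host}:{port} ({name}) is open -- consider disabling it")
```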

On the SCADA side, protocols such as the Common Industrial Protocol (CIP), used to unify data transfer, are vulnerable to threats such as man-in-the-middle attacks, denial-of-service attacks and authentication attacks. Although vendors release upgrades and patches from time to time to address these vulnerabilities, the very nature of critical infrastructure means that many utilities are reluctant to take systems offline to apply them.

While these legacy protocols have served us well for many years, they were not designed to withstand increasingly sophisticated cyberattacks. For example, legacy systems can be exposed through default passwords that are never changed, or through unencrypted transmission of user names and passwords over the Internet. Systems based on outdated standards may also be unable to run the latest security tools.

Compounding the problem, many utilities are unaware of these risks to critical infrastructure, exposing employees and the community at large to intentional or accidental harm.

How Do I Mitigate My Risk?

You can, however, protect critical infrastructure from these threats. First and foremost, ensure that your network is isolated from less secure networks so that SCADA devices and other critical infrastructure are not exposed to the Internet.

Many guidelines and recommendations are available to mitigate security vulnerabilities. Some of the more important ones are:

  1. Establish a network protection strategy based on the defense-in-depth principle.
  2. Identify all SCADA networks and establish different security levels (zones) in the network architecture. Use security controls such as firewalls to separate them.
  3. Evaluate and strengthen existing controls and establish strong controls over backdoor access into the SCADA network.
  4. Replace default log-in credentials. If a SCADA device doesn’t allow you to change the default password, notify the vendor or look elsewhere for a device with better security. If you must install a device whose default credentials cannot be changed, ensure that defense-in-depth security controls are in place around it.
  5. Avoid exposing SCADA devices to the Internet, since every connection is a possible attack path. Run security scans to discover Internet-exposed SCADA devices and investigate whether and why those connections are needed. If a field engineer or the device manufacturer needs remote login access, implement a secure connection with a strong two-factor authentication mechanism.
  6. Conduct regular security assessments and penetration testing, and address common findings such as missing security patches, insecure legacy protocols, insecure connections, SCADA traffic in corporate networks, default accounts, failed login attempts, and the lack of an ongoing risk-management process.
  7. Work with device vendors to routinely resolve device security issues, such as applying firmware updates and security patches. Ensure you are on their mailing lists so you are notified when patches become available.
  8. Establish system backups and disaster recovery plans.
  9. Perform real-time, 24/7 security monitoring of IoT and SCADA devices, and implement an intrusion detection system to identify unexpected security events, changed behaviors and network anomalies (a minimal sketch of this idea follows this list).
  10. Finally, if you don’t currently have security policies for both your corporate and SCADA networks, take the lead, be a champion and work with your management to develop an effective cybersecurity program.
  11. Stay informed about security in the utility industry. Events such as DistribuTECH, where Fujitsu will be exhibiting, offer plenty of opportunities to learn more about this critical topic.
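
To make item 9 concrete, here is a minimal, hypothetical sketch of one building block of anomaly detection: learning a per-device baseline event rate and flagging readings that deviate sharply from it. A real intrusion detection system does far more; the device names and readings below are invented for illustration.

```python
# Minimal sketch: flag devices whose event rate deviates from their baseline.
# The readings below are illustrative; real input would come from logs or
# a monitoring pipeline.
from statistics import mean, stdev

# Hypothetical per-minute event counts observed during normal operation.
baseline = {"rtu-01": [4, 5, 6, 5, 4, 5], "plc-07": [1, 0, 1, 2, 1, 1]}

def is_anomalous(device: str, observed: int, threshold: float = 3.0) -> bool:
    """Flag a reading more than `threshold` standard deviations off baseline."""
    history = baseline[device]
    mu, sigma = mean(history), stdev(history)
    return abs(observed - mu) > threshold * max(sigma, 0.5)  # floor the noise

# New readings, e.g. from the last minute of telemetry.
for device, count in {"rtu-01": 5, "plc-07": 40}.items():
    if is_anomalous(device, count):
        print(f"ALERT: {device} event rate {count} deviates from baseline")
```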

If you operate a generation and transmission cooperative, be advised that you are obligated to comply with North American Electric Reliability Corporation (NERC) rules, and failure to do so can result in substantial penalties. Identifying your compliance obligations is a critical task, especially since NERC rules are designed to secure your network.

For some utilities, particularly small rural electric cooperatives, the idea of a serious security threat to their essential infrastructure may sound far-fetched, like the plot of an action movie. However, the biggest security risk is not necessarily a targeted attempt to physically destroy your equipment. An indiscriminate malware infection is far more likely than a cyberterrorist, but it can devastate your critical infrastructure systems all the same, potentially causing significant damage and harming the public.

Digitizing the Customer Experience

Digitization of the network is reshaping the telecom landscape as customer data consumption habits change thanks to new, disruptive technologies. We’ve gone from a LAN-connected desktop in the home to a cellular device in your pocket, and customers expect to access content whenever and wherever they are. Service providers that can’t adjust are in trouble: they must find a solution that keeps the network healthy while adopting new technologies suited to today’s demands.

Today’s Network Operations Center (NOC) monitors the entirety of a network, actively working to keep everything healthy. However, it’s fundamentally reactive, with thousands of alarms piling up each day for operators to sift through. Operations are still handled manually, which makes onboarding new technologies difficult. Digitizing the NOC to meet customers’ demands requires automation that turns its reactive nature into a proactive one.

To ensure the health of a network, service providers need a service assurance solution capable of providing fault and performance management, as well as closed-loop automation. Fault and performance management uses monitoring, root-cause analysis, and visualization to proactively identify and notify operators of potential problems in the network before a customer can experience them. To provide closed-loop automation, a service assurance platform continuously collects, analyzes, and acts on data gathered from the network. When combined with machine learning, a service assurance platform becomes an essential part of the NOC. Altogether, a service assurance platform can cut the number of alarms by 50%, a significant reduction considering that a provider may collect close to a million alarms each month.
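
One simple mechanism behind that kind of alarm reduction is time-window deduplication. The sketch below is a hypothetical Python illustration that collapses repeated alarms from the same device and condition; a real service assurance platform layers root-cause correlation and machine learning on top of basics like this.

```python
# Minimal sketch: suppress duplicate alarms from the same device/condition
# arriving within a short time window. The alarm data below is illustrative.
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=5)

alarms = [
    {"device": "olt-3", "condition": "LOS", "time": datetime(2024, 1, 1, 9, 0)},
    {"device": "olt-3", "condition": "LOS", "time": datetime(2024, 1, 1, 9, 2)},
    {"device": "olt-3", "condition": "LOS", "time": datetime(2024, 1, 1, 9, 4)},
    {"device": "rtr-1", "condition": "BER", "time": datetime(2024, 1, 1, 9, 3)},
]

def deduplicate(alarms):
    """Keep the first alarm per (device, condition) inside each window."""
    last_seen = {}
    kept = []
    for alarm in sorted(alarms, key=lambda a: a["time"]):
        key = (alarm["device"], alarm["condition"])
        if key not in last_seen or alarm["time"] - last_seen[key] > WINDOW:
            kept.append(alarm)
        last_seen[key] = alarm["time"]
    return kept

unique = deduplicate(alarms)
print(f"{len(alarms)} raw alarms -> {len(unique)} actionable alarms")
```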

A targeted network management solution provides an accessible path for network migration. While legacy equipment is proven to work, it may not be the best fit for digitization. Integrating a targeted network management solution into your NOC helps bridge the gap between new technologies and vendors and legacy equipment. It supports a multivendor environment, allowing the NOC to manage both new and legacy equipment from different vendors in the same ecosystem. Targeted network management also enables service providers to bring new services to market twenty times faster, thanks to significant improvements in onboarding new technologies and vendors into the network.

An automated NOC that combines service assurance and targeted network management provides a network well suited to the changing digital landscape. Service assurance keeps the network up and running by identifying critical issues, so that no matter where or how users access the network, they get a seamless experience. Targeted network management quickly onboards the new technologies and vendors that push toward digitalization. With both combined in a 24x7x365 NOC, service providers are prepared for whenever, wherever, and however a customer chooses to interact with the network.

For customers and businesses alike, the advantages of an automated NOC are exceptional. Customers no longer have to worry about issues accessing data from any device, anywhere, at any time of day. For businesses, the proactive nature of service assurance and the simpler network migration of targeted network management reduce operating expenses and mean time to repair. Digitization isn’t slowing down for anyone, and an automated NOC gives service providers a way to hop on the train.

Co-Creation is the Secret Sauce for Broadband Project Planning

Let’s face it—meeting rooms are boring. Usually bland, typically disheveled, and littered with odd remnants of past battles, today’s conference room is often where positive energy goes to die.

So we decided to redesign one of ours and rename it the Co-Creation Room, complete with wall-to-wall, floor-to-ceiling whiteboards. Sure, it’s just a small room, but I have noticed something: it is one of the busiest conference rooms we have. It’s packed. All the time. People come together willingly – agreeing upfront to enter a crucible of co-creation – where ideas are democratized and the conversation advances past the reductive (“ok, so what do we do?”) to the expansive (“hey, what are the possibilities?”).

This theme of co-creation takes center stage when we work with customers on their broadband network projects. These projects bring together an incredibly diverse mix of participants, aspirations, challenges, and constraints, which really brings home the necessity and power of co-creation.

Planning, funding, and designing wireline and wireless broadband networks means bringing together multiple stakeholders with varied perspectives and fields of expertise, and negotiating complex rules of engagement, all while planning and executing a challenging multi-variable task. Success demands a blend of expertise, resources and political will, meaning the motivation to carry initiatives forward with enough momentum to survive changes of leadership and priorities.

Many times, prospective customers seek to start by bolstering their in-house expertise with a project feasibility study. A good feasibility vendor should have knowledge of multi-vendor planning, engineering design, project and vendor management, supply chain logistics, attracting funds or investment, business modeling, and ongoing network maintenance and operations. Look for a partner with experience across many technologies and vendors, not just one.

As a Network Integrator, we bring all the pieces together. But we do more than just get the ingredients into the kitchen; our job is to make a complete meal. By democratizing creation, we expand the conversation and broker the kind of communication that gets diverse people working together productively.

The integration partner has to understand both the customer’s big picture and the nitty-gritty details. Our priority is to minimize project risk and drive things forward effectively. Many times, we have to do the Rosetta Stone trick and broker mutual understanding among groups with different professional cultures, viewpoints, and vocabularies. We then take that new shared understanding and harness it to co-create the best possible project outcome.

On a recent municipal broadband project, for example, we learned that city staff and network engineers don’t speak the same language. A network engineer isn’t familiar with the ins and outs of water systems, and a city public works director doesn’t know about provisioning network equipment. But by building a trusted partner relationship, we helped create the shared understanding the project needed. In the process, we redefined what co-creation really means to us.

So, when you come to Fujitsu, you will see the Co-Creation Room along with this room-sized decal:

Co-Creation: Where everyone gets to hold the pen.

Importance of Fiber Characterization

Fiber networks are the foundation on which telecom networks are built. In the early planning stages of network transformation or expansion, it is imperative that operators perform a complete and thorough assessment of the underlying fiber infrastructure to determine its performance capabilities as well as its limits. Industry experts predict that as many as one-third of existing fiber networks will require modification.

Front-end fiber analysis ensures key metrics are met and the fiber is performing at the optimum level needed to handle the greater bandwidth required to transport data-intensive applications over longer distances. This saves the service provider time and money and prevents delays in the final test and turn-up phase of the expansion or upgrade project.

[Figure: Fiber architecture showing fiber’s journey from the central office to real-world locations such as homes, businesses, and universities.]

[Figure: Full network diagram showing node locations, fiber types (including ELEAF and SMF-28), and distances between nodes.]

[Figure: Clean fiber compared with fiber contaminated by dust, oil and liquid.]


Potential Problems & Testing Options

Fiber networks are composed of fiber of multiple types, ages and quality, all of which significantly affect the infrastructure and its transmission capabilities. Additionally, the fiber may come from several different fiber providers. The net result is several potential problem areas in fiber transmission, including:

  • Aging fiber optics – Some fiber optic networks have been in operation for 25+ years. These legacy fiber systems weren’t designed to handle the sheer volume of data being transmitted on next-generation networks.
  • Dirty and damaged connectors – Dirty end faces are one of the most common problems at connectors. Contaminants such as oil, dirt, dust or static-charged particles are typical causes.
  • Splice loss – Fibers are generally joined by fusion splicing. Variations in fiber type (manufacturer) and in the splicing method used (fusion or mechanical) can all result in loss.
  • Bending – Excessive bending of fiber-optic cables may deform or damage the fiber. The light loss increases as the bend becomes more acute.  Industry standards define acceptable bending radii.

Fiber characterization testing evaluates the fiber infrastructure to make sure all the fiber, connectors, splices, laser sources, detectors and receivers are working at their optimum performance levels.  It consists of a series of industry-standard tests to measure optical transmission attributes and provides the operator with a true picture of how the fiber network will handle the current modernization as well as future expansions.  For network expansions that require new dark fiber, it is very important to evaluate how the existing fiber network interacts with the newly added fiber to make sure the fiber meets or exceeds the service provider’s expectations as well as industry standards such as TIA/ANSI and Telcordia.

There are five basic fiber characterization tests:

  • Bidirectional Optical Time-Domain Reflectometer (OTDR) – sends a light pulse down the fiber and measures the strength of the return signal as well as the time it took. This test shows the overall health of the fiber strand including connectors, splices and fiber loss.  Cleaning, re-terminating or re-splicing can generally correct problems.
  • Optical Insertion Loss (OIL) – measures the optical power loss that occurs when two cables are connected or spliced together. The insertion loss is the amount of light lost. Over longer distances, this loss can cause the signal strength to weaken (see the sketch after this list).
  • Optical Return Loss (ORL) – sends a light pulse down the fiber and measures the amount of light reflected back. Some light is reflected at every connector and splice; dirty or poorly mated connectors cause excess scattering and reflections that degrade the signal.
  • Chromatic Dispersion (CD) – measures the amount of dispersion on the fiber. In single-mode fiber, light at different wavelengths travels down the fiber at slightly different speeds, causing the light pulse to spread. When pulses are launched close together and spread too much, information is lost. Chromatic dispersion can be compensated for with dispersion-shifted fiber (DSF) or dispersion compensation modules (DCMs).
  • Polarization Mode Dispersion (PMD) – occurs in single-mode fiber and is caused by imperfections inherent in the fiber that produce polarization-dependent delays. The result is that light travels at different speeds, causing random spreading of optical pulses.
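
Two of these measurements reduce to simple formulas worth keeping at hand: insertion loss is IL(dB) = 10·log10(Pin/Pout), and chromatic dispersion broadening is approximately Δt = D × L × Δλ for dispersion coefficient D (ps/nm·km), span length L (km) and source spectral width Δλ (nm). The sketch below works both with illustrative numbers; the input values are assumptions, not field data.

```python
# Minimal sketch: two back-of-the-envelope fiber calculations.
# All input values are illustrative, not measured data.
import math

def insertion_loss_db(power_in_mw: float, power_out_mw: float) -> float:
    """Insertion loss in dB: 10 * log10(P_in / P_out)."""
    return 10 * math.log10(power_in_mw / power_out_mw)

def cd_broadening_ps(d_ps_nm_km: float, length_km: float, width_nm: float) -> float:
    """Approximate chromatic dispersion pulse broadening: D * L * delta-lambda."""
    return d_ps_nm_km * length_km * width_nm

# Example: 1.0 mW launched, 0.5 mW received -> about 3 dB of loss.
print(f"Insertion loss: {insertion_loss_db(1.0, 0.5):.2f} dB")

# Example: standard SMF (~17 ps/nm/km at 1550 nm) over 80 km with a
# 0.1 nm source width -> about 136 ps of pulse spreading.
print(f"CD broadening: {cd_broadening_ps(17.0, 80.0, 0.1):.0f} ps")
```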

Once the fiber characterization is complete, the service provider receives a detailed analysis of the condition of the fiber plant, including the location of splice points and pass-throughs as well as assignments of panels, racks and ports. They will also know whether any old fiber will be unable to support higher data rates now or in future upgrades. More importantly, by performing fiber characterization before transforming or expanding their telecom network, service providers can eliminate potential fiber infrastructure risks that could cause substantial delays during the final test and turn-up phases.