April 24, 2024

If it ain’t broke, don’t patch it!
SECURITY SESSIONS: Volume 4 No. 2

by William T. (Tim) Shaw, PhD, CISSP
It seems as if one of the major points of conflict and disagreement concerning cyber security standards and recommended practices derives from the “no man’s land” that exists between the reality of computer-based automation systems and the ideal world of the IT security standard. Many of the current recommendations for making automation systems secure, be they EMS/SCADA systems, plant DCS systems, or safety/shutdown systems, are derived from IT standards such as NIST 800-53 or the ISO 27001 recommendations (or the original ISO 17799 standard). The NERC CIP standards, the NRC’s RG 5.71 guidance, and even the recent recommendations in the gas pipeline world (from the TSA and INGAA) all have a basis in the IT world. There is nothing fundamentally wrong with most of the security practices used in the IT world. But applying them successfully to industrial automation and control systems in the real world can be quite a daunting (and sometimes frustrating) exercise. – Tim.


I have no issue with the goal of identifying best practices and creating recommendations and standards for cyber-securing our industrial automation systems. In fact, I’m all for seeing that it happens as soon as possible! There have already been enough examples of cyber security threats and incidents to make it obvious, even to the most reluctant instrumentation and controls engineer, that cyber security is a necessity – particularly since corporate connectivity and the insatiable demand for current data to feed business decision-making processes are not going away. Plus, the recent Stuxnet malware discovery has shown that even measures like ‘pulling the plug’ and ‘air gapping’ our mission-critical automation systems do not constitute an adequate cyber defense.

The problem with many current cyber security recommendations is that the groups promoting these standards never seem to include anyone with actual industrial automation expertise and plant operating experience. As you read through most of these standards you get a clear sense of the IT mentality that guided their creation. They are filled with suggested technical controls and countermeasures that make sense if you are dealing with servers running Windows, linked by Ethernet and TCP/IP networking, and remote users who are just trying to figure out how to type a letter after being subjected to the latest incomprehensible changes to the Microsoft Office applications. But many of these technical and administrative controls don’t work well with real-time industrial automation systems.

A SCADA, DCS or PLC-based automation system manufactured in the past few years will incorporate a lot of elements that are seemingly identical to what IT professionals deal with every day: PCs and servers running a Windows OS, Ethernet LANs, TCP/IP networking, web servers, etc. But those same systems will also include devices and equipment that IT people don’t typically see in their training – and that most never will.

The most obvious are process controllers – and I’m including RTUs and PLCs in this general term – with analog, pulse and contact I/O, performing real-time data acquisition, alarming and control functions. There will often be other ‘smart’ devices such as analyzers and ‘smart’ instrumentation that may interface through asynchronous serial communication circuits (i.e., EIA-232, EIA-422 or EIA-485) and ‘speak’ industrial protocols such as Modbus or DNP3. There may also be historians with proprietary databases and APIs, and other types of application servers, connected together using one or more variants of OPC or the industrial protocols previously mentioned.
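Just to give a flavor of what ‘speaking’ one of those protocols looks like, here is a minimal sketch, in Python with the pyserial package, of polling a few holding registers from a hypothetical Modbus RTU device on an EIA-485 line. The serial port, slave address and register range are illustrative assumptions, not values from any particular product.

```python
# Minimal sketch: poll holding registers from a hypothetical Modbus RTU
# slave over an EIA-485 serial line. Port, slave address, and register
# range are illustrative assumptions only.
import serial  # pyserial


def crc16_modbus(frame: bytes) -> bytes:
    """Compute the Modbus RTU CRC-16 and return it low byte first."""
    crc = 0xFFFF
    for byte in frame:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0xA001
            else:
                crc >>= 1
    return bytes((crc & 0xFF, crc >> 8))


def read_holding_registers(port: str, slave: int, start: int, count: int):
    """Send a Modbus function-code-03 request and return the register values."""
    request = bytes((slave, 0x03,
                     start >> 8, start & 0xFF,
                     count >> 8, count & 0xFF))
    request += crc16_modbus(request)

    with serial.Serial(port, baudrate=9600, bytesize=8,
                       parity=serial.PARITY_NONE, stopbits=1,
                       timeout=1.0) as line:
        line.write(request)
        # Expected reply: slave, 0x03, byte count, data bytes, 2 CRC bytes
        reply = line.read(3 + 2 * count + 2)

    if len(reply) < 5 or reply[1] != 0x03:
        raise IOError("short or unexpected reply: %r" % reply)
    if crc16_modbus(reply[:-2]) != reply[-2:]:
        raise IOError("CRC mismatch in reply")
    data = reply[3:3 + reply[2]]
    return [int.from_bytes(data[i:i + 2], "big") for i in range(0, len(data), 2)]


if __name__ == "__main__":
    # Hypothetical device: slave address 1, four registers starting at 0.
    print(read_holding_registers("/dev/ttyUSB0", slave=1, start=0, count=4))
```

Notice how much of the ‘protocol’ lives in byte counts, timeouts and CRCs rather than in anything an IT toolset would recognize as a session or a login.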

In a legacy SCADA system they may encounter low-bandwidth, bit-oriented legacy protocols that require special communications hardware. They will definitely run into analog leased and dial-up phone lines configured in multi-drop arrangements – and many more legacy protocols. Today, they are also likely to run into wireless technology being used in ways they haven’t seen before. A mesh network of instruments running the WirelessHART protocol in a plant, or spread-spectrum radio repeaters linking a city-wide network of PLCs back to a control center, are very different wireless technologies from an office WiFi (IEEE 802.11) access point.

The issue is that these things are NOT what IT professionals are used to dealing with, and they don’t conform to the notions of a ‘computer’ or a ‘network’ or a ‘database server’ as the IT world thinks of those elements. In many instances the suggested technical controls for cyber securing them are either not possible or not practical.

Consider that a protective relay, an analyzer and a PLC are all very powerful computer-based devices, and most even support Ethernet and IP-based communications. But they don’t run a Windows or Linux operating system; they don’t support complex passwords; they don’t have separate user accounts and login IDs; and they generally don’t respond well to being hit with a vulnerability scan. It is also very hard to find malware/virus scanning software that can be loaded onto, and run on, such devices.

One basic problem with many such devices is that they have a minimal IP “stack” implementation, and their code doesn’t handle communication exceptions very well, if at all. Similarly, their configuration settings may not include much beyond assigning an IP address and subnet mask. And yet, because these are ‘computer-based’ devices, most of the current cyber security recommendations for automation systems would suggest applying the same IT security mechanisms to them.
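As a small illustration of how gently such devices need to be treated, the sketch below (Python, standard library only) checks whether a hypothetical embedded device will accept a single TCP connection on one known port, with a short timeout, instead of sweeping it with a scanner. The address and port number are assumptions for illustration, not a recommendation for any specific product.

```python
# Minimal sketch: a gentle reachability check against a hypothetical
# embedded device, opening one TCP connection to one known port rather
# than sweeping it with a vulnerability scanner. The address and port
# are illustrative assumptions.
import socket


def device_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if the device accepts a single TCP connection."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True  # connected; close immediately without sending data
    except OSError:
        return False


if __name__ == "__main__":
    # Hypothetical PLC at 192.168.10.20 speaking Modbus/TCP (port 502).
    print("reachable" if device_reachable("192.168.10.20", 502) else "no response")
```

Even something this modest should be coordinated with operations, because a fragile stack may not tolerate an unexpected connection at the wrong moment.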

Other typical IT cyber security solutions – such as user account lock-out and time-out – would be considered dangerous if applied to an operator workstation on a plant DCS or EMS/SCADA system. One of the computers I use in my work is configured for the highest level of cyber security the system administrator could devise. If I turn away from that computer for just a minute, to answer a phone call for example, it automatically locks me out and requires that I log in yet again in order to return to my work.

A control room operator would find it totally unacceptable (not to mention unsafe) if you tried to configure that operator’s workstation in the same manner. If they have to log in at all, most operators do so at the start of their shift and then expect the workstations to remain up and operating until they end their shift. I’ve seen operator workstations that were powered up when the system was installed, and no one has logged in or out of them since. Having to go through a login procedure in order to make a control adjustment or change operational displays, particularly in the middle of a plant upset, would never be acceptable to plant operators. So applying typical IT security controls to an operator workstation, even though it looks like a PC, just doesn’t work.

Another general problem with applying IT cyber security solutions to industrial automation systems is that in the IT world, most computer and network equipment is considered obsolete – and usually replaced – after five years of service. This means that it is usually reasonably up to date and still supported by its vendors. On the other hand, in many industrial facilities and operational control centers, the automation equipment is well over a decade old, and some of it is several decades old. The vendor(s) of that hardware and software may no longer support the system, or may no longer exist at all.

The plant personnel are not generally able to make any significant changes to these systems (i.e., other than the typical user-configuration tasks such as adding I/O, editing graphic pages, defining calculations, etc.). They are normally loath to implement patches or software upgrades – even assuming that such measures are readily available – because of the possibility of breaking something that could easily have catastrophic consequences. Their mindset is very simple: “If it ain’t broke, don’t patch it!”

In some plants it’s not even a sure thing that the plant personnel could bring the systems back up and return them to fully operational status, were something to go seriously wrong. I have seen automation systems where the backup medium is a magnetic tape copy – usually made by the vendor when the system was initially commissioned. No one in the plant knows if it can actually be read or if the remaining legacy tape reader even works – and nobody is going to risk trying to find out!

Under such operating conditions, the procedures that would be considered recommended IT practices – such as making and testing routine system backups, evaluating and installing security patches, and even running and updating virus scanning software – might not be possible. For those older systems there is no longer a vendor providing patches, either because the vendor is gone (no longer in business) or has dropped ongoing legacy support for the product. Even for newer systems, there have been instances where vendors released patches that turned out to be inadequately tested (if tested at all) and downright dangerous. So, unless a plant has a separate test system, it may be preferable to avoid installing patches rather than risk having a patch cause unexpected outcomes and/or equipment damage.

One possible strategy for industrial automation systems that no longer have support available is to simply “wrap” them in a protective cocoon – usually accomplished using middleware – that doesn’t require modifying the systems themselves yet still improves security. By this I mean placing firewalls and intrusion detection/prevention systems between these old systems and any IP-based communication interface. Setting up a DMZ and placing a sacrificial data server between these systems and the corporate network can also help to isolate them. It is even possible to use more dramatic approaches, such as placing “data diodes” between these systems and any data requester, to ensure that data can only pass in one direction.
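To make the ‘one direction only’ idea a bit more concrete, here is a minimal software sketch (Python, standard library) of the kind of publisher that might sit on a sacrificial data server in the DMZ: it reads values from the automation side and pushes them outward as UDP datagrams, and it never opens a listening socket toward the corporate network. A real data diode enforces one-way flow in hardware; the addresses, port and read function here are purely illustrative assumptions.

```python
# Minimal sketch of the one-way "data diode" pattern in software: a
# publisher on a sacrificial DMZ server sends plant readings outward as
# UDP datagrams and never listens for inbound corporate traffic.
# Addresses, port, and the read_plant_values() stub are illustrative
# assumptions; a true data diode enforces one-way flow in hardware.
import json
import socket
import time

CORPORATE_HISTORIAN = ("10.1.1.50", 5005)   # hypothetical corporate-side receiver
POLL_INTERVAL_SECONDS = 10


def read_plant_values() -> dict:
    """Placeholder for whatever read-only interface the legacy system offers."""
    return {"unit1_flow": 123.4, "unit1_temp": 87.2, "timestamp": time.time()}


def publish_forever() -> None:
    # A UDP socket used only for sending; nothing here ever binds a port
    # or accepts a connection from the corporate side.
    sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while True:
        payload = json.dumps(read_plant_values()).encode("utf-8")
        sender.sendto(payload, CORPORATE_HISTORIAN)
        time.sleep(POLL_INTERVAL_SECONDS)


if __name__ == "__main__":
    publish_forever()
```

The point of the pattern is that nothing on the corporate side can ever initiate a conversation back into the legacy system; it can only receive what the wrapper chooses to send.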

Of course, another aspect of cyber security is having good policies and procedures (i.e., more than just “don’t touch the system”). By now we all know that Stuxnet got into systems by being brought in on USB thumb drives. Smart, properly communicated and enforced policies and procedures would have gone a long way toward preventing that sort of attack. None of this means we ought to throw up our hands and forget about making our automation systems, even the old ones, more cyber secure. What it does mean, however, is that we may have to look at some unconventional alternatives… but that will be the subject matter for a future column. – Tim

About the Author

Dr. Shaw is a Certified Information Systems Security Professional (CISSP) and has been active in industrial automation for more than 30 years. He is the author of Computer Control of BATCH Processes and CYBERSECURITY for SCADA Systems. Shaw is a prolific writer of papers and articles on a wide range of technical topics and has also contributed to several other books. He is currently Principal & Senior Consultant for Cyber SECurity Consulting, a consultancy practice focused on industrial automation security and technologies. Inquiries, comments or questions regarding the contents of this column and/or other security-related topics can be emailed to Tim@electricenergyonline.com.