April 20, 2024

Security Sessions: Serial Links Are Intrinsically Secure – Right?

by William T. (Tim) Shaw, PhD, CISSP / C|EH / CPT
Welcome to the latest installment of Security Sessions, a regular feature focused on security-related issues, policies and procedures. As expected, at least by every cyber security professional I know, another clever piece of malware recently made its way onto the scene. A variation on the Stuxnet malware (which is already so ‘last year’), this new invader has been given the name Duqu. Unlike Stuxnet, this new infection is primarily an information-gathering mechanism, at least as far as the researchers dissecting it have determined. So far, the number of confirmed Duqu infections is limited – but by limited, I mean that it has only spread to eight countries. Thanks to the Internet, cyber plagues can spread around the world in very little time, in much the way that jet travel can spread physical plagues. To me, that says the need for good cyber security has never been more obvious. – Tim.


In previous columns, we’ve discussed various security technologies and strategies for protecting industrial automation systems against cyber attack. I recently had an interesting discussion with a plant manager who has an in-plant data acquisition system (DAS) that collects information from a wide range of plant subsystems and equipment and, in turn, feeds this information into both a plant and a corporate data historian. The discussion had to do with the presumption that cyber attacks can’t occur over serial links. In fact, he made the statement that serial links are “intrinsically secure”.

Allow me to clarify what he meant by that statement because, in point of fact, pretty much ALL communications schemes in use today are “serial”. Ethernet is serial – both the wired and wireless versions – as are the FieldBus and Profibus standards. Likewise, ATM and SONET are serial. What he was actually referring to was the use of point-to-point, asynchronous, low-speed communications over EIA/TIA-232 or EIA/TIA-422 circuits using basic-capability industrial protocols such as Modbus or DNP3.0. We can extend that clarification to include multi-dropped communications channels using EIA/TIA-485. He probably would have extended his definition to include vendor-proprietary, synchronous, moderate-speed local area networks used by PLCs, such as Modbus+ and DataHighway+, as well. We never discussed the point to the level of considering non-Ethernet industrial communications bus standards, such as Fieldbus H1 and Profibus, but I’m reasonably certain he would have included them in his broad definition of “intrinsically secure serial links” too.

Essentially, this gentleman was telling me that if it isn’t running over Ethernet, using TCP/IP networking, it is “intrinsically secure”, and as such, nothing needs to be done about cyber security. Well, I know that if I could make a connection to one of those ‘intrinsically secure’ communication channels, I could potentially cause a range of problems: anything from causing false and misleading data to be displayed to an operator, to commanding a piece of plant equipment to operate in an unsafe or otherwise undesirable manner.

That statement is based on many years of installing and commissioning automation system expansions and upgrades and fond memories of the excitement generated when defective software, improper configuration settings, or just plain human stupidity caused strange and mysterious things to happen. I shudder to think of what we might have done had we been intentionally trying to cause damage.

Okay, I know that the usual assumption about attacking computer systems is that the attacker needs access to a communications circuit that will route them through intervening devices to the target system. But the fact is, one only needs to get into a device that has connectivity to that communications circuit – which is where the discussion started with my plant manager friend. His in-plant data acquisition system has numerous ‘serial’ Modbus links that are used to retrieve data from a whole host of subsystems and devices.

That DAS is, in turn, connected to the plant and corporate Ethernet LANs and has little in the way of cyber protections because, as the discussion indicated, the plant staff feel that an attacker can’t go any further than the DAS and that DAS data is not ‘sensitive’. Moreover, if corrupted, the system would shortly be updated with fresh values from the plant devices and subsystems. Essentially, he was saying that even if an experienced hacker were able to break into the system’s serial communication channels, allowing them to send specifically crafted malicious message traffic to the DAS, they could not cause any serious harm to the plant and could not reach through the DAS to attack other systems.

That presumption actually raises an interesting question, because I’ve personally seen several instances of cyber security assessments that don’t consider ‘serial’ links as an attack pathway. I’m not talking about a dial-up phone line that is used for remote system access, or even a serial connection running PPP or SLIP as the data link layer for a TCP/IP connection, which are obvious attack pathways. An example of what I am talking about, however, is two critical systems that exchange data via a dedicated serial connection. Is it possible to attack one system from the other and plant malware via a ‘serial’ Modbus link? Let’s think about that possibility.

Having run a couple of computer automation companies that provided DCS, PLC and SCADA systems, I know that we frequently had to develop customized serial drivers to communicate with other systems and smart devices. We had developed several dozen Modbus drivers – and half that many DNP3.0 drivers – by the time I decided that being a shepherd might actually be a superior career choice. Like any other sort of communications software, a Modbus driver expects all messages and responses to conform to the protocol specifications, but we didn’t always think about every possible way that messages could be accidentally or intentionally malformed.

Consider this: when a Modbus ‘master’ asks for 10 register values, if the programmer wasn’t on his/her toes, the logic might not check that only 10 values actually come back. Thus, if 10,000 values are returned rather than the expected 10, they obviously have to be stored somewhere. This is precisely how the classic “buffer overflow” attack used by hackers over TCP/IP networks operates. Trust me, in debugging Modbus (as well as other serial) drivers, we saw many instances where the computer with which we were communicating hung or crashed because we had a bug in our driver code and generated malformed commands or responses. We were not trying to cause a buffer overflow, but that’s what happened.
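The unchecked-length pattern described above can be sketched in a few lines of C. This is a hypothetical illustration, not code from any real driver: the frame layout is simplified (just the byte count of a Modbus read response followed by raw register bytes, ignoring CRC and byte ordering), and all function names are invented.

```c
#include <stdint.h>
#include <string.h>

#define EXPECTED_REGS 10  /* the master asked for exactly 10 registers */

/* Vulnerable pattern: trusts the byte count embedded in the response.
   An attacker-supplied count larger than the regs[] buffer overruns it. */
int parse_response_unchecked(const uint8_t *frame, uint16_t *regs)
{
    uint8_t byte_count = frame[0];        /* attacker-controlled */
    memcpy(regs, frame + 1, byte_count);  /* no bounds check!    */
    return byte_count / 2;
}

/* Defensive pattern: accept only a complete frame carrying exactly
   the number of registers that was requested. */
int parse_response_checked(const uint8_t *frame, size_t frame_len,
                           uint16_t *regs)
{
    if (frame_len < 1)
        return -1;                         /* no byte count at all */
    uint8_t byte_count = frame[0];
    if (byte_count != EXPECTED_REGS * 2)
        return -1;                         /* wrong register count */
    if (frame_len < (size_t)1 + byte_count)
        return -1;                         /* truncated frame      */
    memcpy(regs, frame + 1, byte_count);
    return byte_count / 2;                 /* registers stored     */
}
```

A response claiming far more data bytes than were requested is simply rejected by the checked version, while the unchecked version would happily copy all of them over whatever happens to follow regs[] in memory.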

Of course, that doesn’t mean that every Modbus driver has those kinds of bugs. As with any software, the robustness of a given program – particularly one that processes data inputs – is going to depend on how many checking and validation steps were put in the code, and at what critical points in the logic. In theory, if I could get a copy of your driver code, I could probably reverse-engineer a buffer overflow attack that would overwrite the Modbus driver code and infect it (i.e., replace it) with my own code – just as it’s done via TCP/IP networks in remote exploit attacks. At that point, I would have planted malware in that other system – malware that is running with the same access rights as the Modbus driver it infected! If that other system had TCP/IP and Ethernet connectivity, the malware could then use them to spread to other systems using the same techniques used by “worms”.
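To make the overwrite mechanism concrete, here is a deliberately contrived C sketch – all names are invented and the memory layout is chosen purely for demonstration – in which an over-long response copied into a fixed register buffer spills into an adjacent field of the driver’s state. A real remote exploit would overwrite return addresses or code pointers rather than a flag, but the underlying mechanism is the same.

```c
#include <stdint.h>
#include <stddef.h>

/* Contrived driver state: the register buffer sits directly in front
   of a control flag in the same structure (no padding on common ABIs,
   since 10 * 2 bytes is already 4-byte aligned). */
struct driver_state {
    uint16_t regs[10];      /* room for the 10 expected registers   */
    int32_t  write_enabled; /* adjacent field an overflow can reach */
};

/* Vulnerable copy: byte-wise, trusting the attacker-supplied length.
   Writing through a byte pointer to the whole struct keeps this
   demonstration well defined in C while modelling the overrun. */
void store_response(struct driver_state *st,
                    const uint8_t *payload, size_t len)
{
    unsigned char *dst = (unsigned char *)st; /* byte view of struct */
    for (size_t i = 0; i < len; i++)
        dst[i] = payload[i];                  /* no bounds check!    */
}
```

A well-behaved 20-byte payload fills only regs[]; a 24-byte payload also rewrites write_enabled, silently changing the driver’s behavior without crashing anything.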

I’ve used Modbus here as an example, but I could make the same points about any number of industrial serial protocols. Would an attacker be willing to go to those lengths? (Hint: Consider all we know about the Stuxnet Worm before you answer that question!) Admittedly, this notion is a bit theoretical – not the part about crashing systems with Modbus messages, which I know firsthand can definitely happen, but the part about passing malware in this manner. So, if someone is looking for a good doctoral thesis or research topic, I would be interested in discussing these concepts further and seeing the results!

Getting back to the discussion with the plant manager, it also reminded me about how people generally view cyber security. For far too many people involved in industrial automation, the discussion of security usually devolves into a discussion about reliability and the consequences of failure. This is because those factors have been the holy grail of automation engineers for many years. Historically, we designed systems to have full, automatic redundancy, or at the very least, be able to fail gracefully and non-disruptively. The problem here is that cyber attacks – as should be evident by the actions of the Stuxnet Worm – go well beyond the simple desire to shut a system down. In fact, that’s probably the least desirable option for a cyber attacker.

In the cyber security realm, we discuss assets being ‘compromised’ by a cyber attack. Isn’t that a great catchall term? It can mean anything from: ‘I caused your system to crash’ to: ‘I have full control over your system and all information it displays is suspect.’ If I craft an attack that merely causes your automation system to crash, knowing that for critical processes there is usually some level of backup safety scheme that prevents a catastrophe, I will have gained very little while you will have learned that you have cyber vulnerabilities that need to be eliminated.

But, if I craft an attack that causes your automation system to make periodic, random changes (i.e., start/stop equipment, change loop set-points, change tuning constants or other operational settings), I can disrupt your operations, probably cause you economic harm, and possibly cause injury or even loss of life. Yet my attack might go undetected – probably for an extended period – since a long list of other possibilities would probably be suspected long before anyone reached the conclusion that it must be a cyber attack. Sure, such an attack would require some pretty serious knowledge about your automation system, but what I just described is a good overall summary of what Stuxnet was designed to accomplish – so we know it’s all possible. Now, we are starting to see the next evolutionary offspring of Stuxnet – Duqu being the first.

Don’t be fooled, however; Duqu and even Stuxnet are not magic, no matter how clever their designs might be. They break into systems and spread by simply taking advantage of a combination of well-known – and some zero-day – vulnerabilities. Are there technologies and techniques that would stop them in their tracks – or at least render them ineffective? Yes, but that will have to be the subject matter for a future column... Tim.

About the Author

Dr. Shaw is a Certified Information Systems Security Professional (CISSP), a Certified Ethical Hacker (C|EH), a Certified Penetration Tester (CPT), and has been active in industrial automation for more than 35 years. He is the author of Computer Control of BATCH Processes and CYBERSECURITY for SCADA Systems. Shaw is a prolific writer of papers and articles on a wide range of technical topics, has contributed to several other books, and teaches several courses for the ISA. He is currently Principal & Senior Consultant for Cyber SECurity Consulting, a consultancy practice focused on industrial automation security and technologies. Inquiries, comments or questions regarding the contents of this column and/or other security-related topics can be emailed to tim@electricenergyonline.com.