Following her mechanical engineering studies, Sarah Fluchs embarked on a career in cybersecurity. Today, she is CTO at admeritia and co-authored the recently published “Top 20 Secure Coding Practices” for PLC programming. In our exclusive interview, she talks about particularly difficult challenges in her work, technical developments in the area of OT/IoT security, and suggests ways in which managing directors of SMEs can approach OT security.
Dear Ms. Fluchs, how did you get into industrial cybersecurity?
Short answer: I’ve always looked for an area to specialize in where you can and must have a big-picture perspective of the entire system rather than “just” focusing on a small highly optimized part of one.
Long answer: During my mechanical engineering studies I couldn’t really decide on one specialization. Why should one type of machine or piece of equipment be more interesting than another? A control technology professor once said to me, “Come join us, we build mathematical models of entire systems” – that made sense to me, and I found system theory and model design fascinating. The only problem was that I always had more fun understanding systems than building them. I couldn’t really imagine spending my whole life optimizing controllers. I first got the bug during my automation technology Master’s degree which, although highly interdisciplinary, had a major focus on communication technology (and partly also on security). That was when I understood: Security is about both having an overview of the entire system and continually understanding new systems that you haven’t designed yourself. At that time, I also worked in journalism and wrote a longer report on data security and privacy – this helped me choose the topic for my Master’s thesis. Purely by chance, I came across security for automation technology when I was looking for potential thesis topics at the Federal Office for Information Security (BSI). I realized then that it was what I had to do. Especially because it was still a relatively unexplored research area and one that offered a lot of potential in shaping its development. That is still the case now.
And: it doesn’t get much more interdisciplinary than that. Automation technology is already interdisciplinary and a cross-industry field. Industrial security also includes a great deal of network technology and IT as well as – from an engineer’s perspective – a multitude of “soft” disciplines like management, organizational structures, and even psychology.
Are there any anecdotes or stories from your professional life that haven’t yet been told (or not often enough)?
I can say with a great deal of certainty that I studied a subject in which I had the least talent.
Engineering studies were not on my agenda at all. I chose not to pursue physics in school and was probably the only first-semester mechanical engineering student in Aachen who had never heard of “Newton” before their first physics lecture.
Before my mechanical engineering studies, I’d studied mathematics for two semesters before changing direction because math was too narrow. “Mechanical engineering” is actually a very misleading title; it should be called “general natural science studies”.
At RWTH Aachen, before registering to study mechanical engineering, you had to complete a voluntary self-assessment that included exercises in mathematics, logic, technical understanding, and spatial thinking. Well, the self-assessment indicated that it would be better for me to study mathematics…
How do your customers typically approach you and your colleagues?
“We actually have quite a good setup, but…”
There is no reason for people to be ashamed or surprised about believing that they have a good security setup – nor is there any reason to be ashamed or surprised if it then turns out that this is not the case.
Industrial security is not a topic that most engineers have studied in detail, nor was it considered in the planning of most plants. So how could they know a) what “having a good setup” actually means and b) how to achieve it?
Security is a topic that is often underestimated because it is considered to be part of compliance. Security is often equated with information security management which is similar to quality management – and quality management is often something you have on paper but everyone knows that it works differently in reality. Security becomes an exciting topic (also for engineers), once you look at it from the technical perspective: What exactly has to work to ensure “the system works”?
“We don’t know where to start…”
See above: Security engineering as a discipline, especially as a subdiscipline for automated systems, is still in its infancy. So I really understand why engineers can find this topic a constant source of annoyance. Someone is always telling them to think about security – but nobody tells them how. Engineers are used to adopting an approach that is highly methodical, often model-driven, natural-science-related, and often standardized. Therefore, the lack of methods in security can be difficult to grasp and even frustrating. Where is the instruction manual (like the Dubbel guide to mechanical engineering) for security when you need it?
In addition: Engineers are usually tasked with looking at security as an add-on to their existing responsibilities. That also means they are expected to fit this additional work around their core work. However, this rarely works well if the results are to be satisfactory. It’s not only the methodology that’s missing, but also a data basis (asset register!) and time.
“Security people don’t understand us and our systems.”
In many cases, IT regards automation with incomprehension or even arrogance:
- “You’re doing something that we discontinued 20 years ago!”
- “Industrial Ethernet is also Ethernet, it can’t be that different.”
Statements frequently voiced in the heat of the moment on any given day.
Likewise, automation regards IT with incomprehension or even arrogance:
- “People who can’t even spell PLC shouldn’t be telling me how to do my work.”
Both perspectives are understandable, but incorrect. Both sides can learn from one another. However, automation tends to hand off too much to IT. Security is not possible without people who understand the systems – but they do have to feel responsible for “their security” (and be able to handle it themselves).
Can you tell us about a particularly difficult challenge that you and your team resolved?
For over a year, we have been working with an international pharmaceuticals group to resolve a challenge they are facing. The objective is to design and roll out a technical solution for patch management in automation-related networks – a solution that will function at all locations worldwide. We can safely say that we know all the solutions for this in the market. None of them are perfect. A lot of workarounds are necessary – which of course is not something you want in a global group.
The problem breaks down into three subproblems:
- How will I be informed about new patches and how can I get these patches?
- How do I decide which patch has to be installed – and how quickly?
- How do I roll out patches with reasonable effort and without downtime in systems that are not all networked and potentially operate 24/7?
So the topic of patch management is by no means a purely technical one and you can see why there are such big differences between security for “normal” IT and security for industry. Let’s look at the second subproblem: IT people often find the question itself difficult to understand. A security patch has to be installed, and this should happen as soon as possible. End of discussion.
In automation, however, informed non-patching can be a wise decision and the question as to what should be patched is a key (and the most complicated!) component of the patch solution. For example: If you roll out a security patch and by doing so release the control system manufacturer from liability, you haven’t gained anything. The liability question does, however, often depend on the precise software combination installed in a system – and you would have to have that information in the first place…
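The reasoning behind "informed non-patching" can be sketched as a coarse decision rule. The criteria, names, and outcomes below are hypothetical illustrations, not the customer's actual solution; real decisions also hinge on vendor compatibility lists and the liability questions mentioned above.

```python
# Hypothetical sketch of the second subproblem: deciding whether an OT
# patch should be installed at all, and how quickly. All criteria and
# return values are invented for illustration.

def patch_decision(vendor_approved: bool, exposure: str, severity: str) -> str:
    """Return a coarse recommendation for handling a security patch in OT."""
    if not vendor_approved:
        # Installing a patch the control system manufacturer has not
        # approved may release them from liability: informed non-patching
        # plus a compensating control can be the wiser choice.
        return "defer: apply mitigations, wait for vendor approval"
    if severity == "critical" and exposure == "network-reachable":
        return "patch at next possible maintenance window"
    # Everything else can usually wait for the regular maintenance cycle.
    return "schedule with regular maintenance"
```

The point is not the specific rule but that "do not patch (yet)" is a legitimate, documented outcome in automation, whereas classic IT patch processes rarely model it.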
As co-author, you have just published “Top 20 Secure Coding Practices” for PLC programming. For impatient readers: Can you provide 1-2 examples of how to program PLCs more securely?
My favorite practices are always those that use the inherent strengths of a PLC. No system in the field knows the process better than the PLC. If you want to know whether a change in the process is an anomaly, ask the PLC, not an anomaly detection tool. This also means that PLCs are the most important system – the last line of defense – for validating the integrity of target values and parameters transferred to and from field devices.
This can be checked quite concretely. Your sensor tells you that a valve that physically needs 3 seconds to close was closed in a millisecond. You can ask the PLC whether that makes sense. The control system tells the PLC that, as of today, triple the volume of chemicals is a good idea. You can ask the PLC whether that really is a good idea. PLCs can act as bodyguards for your processes.
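The two plausibility checks above can be sketched in a few lines. This is only an illustration of the idea (shown in Python rather than PLC code); the variable names and physical limits are invented, and the actual Top 20 practices are formulated independently of any language.

```python
# Illustrative plausibility checks of the kind a PLC can perform on
# incoming values. All names and limits are hypothetical examples.

VALVE_MIN_CLOSE_TIME_S = 3.0    # physical minimum closing time of the valve
MAX_DOSING_RATE_L_PER_H = 50.0  # engineered maximum chemical dosing rate

def valve_close_plausible(reported_close_time_s: float) -> bool:
    """Reject a 'valve closed' report faster than physics allows."""
    return reported_close_time_s >= VALVE_MIN_CLOSE_TIME_S

def setpoint_plausible(requested_rate_l_per_h: float) -> bool:
    """Reject setpoints outside the engineered process envelope."""
    return 0.0 <= requested_rate_l_per_h <= MAX_DOSING_RATE_L_PER_H
```

A report that the valve closed in one millisecond fails the first check; a setpoint of triple the usual dosing rate can fail the second. The PLC knows these physical limits because they are part of the process design, which is exactly the strength no external anomaly detection tool has.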
Apart from the already overused Triton case: Are there any other examples of cyberattacks in which PLCs were utilized?
To be honest, there have been very few attacks specifically targeting industrial IT. Far more attacks have been made on IT, with intended or random side effects on automation technology.
Targeted attacks on controls are complex and require process know-how. These aren’t usually direct attacks on controls, but rather attacks via engineering PCs (Stuxnet) or the control system (Oldsmar).
However, the question as to whether a PLC was attacked directly or indirectly is not really that important. If someone wants to impact the process in industrial IT – which is generally the absolute horror scenario we want to avoid – they will often try to do so via PLC. PLC weaknesses are well-known, widespread, and sometimes even necessary “features.” This is why it was necessary to start a discussion on what we mean by “secure” PLCs. This is what Secure Coding Practices aim to achieve. To be clear: Secure Coding Practices for PLCs alone do not offer effective protection against this kind of incident. I would not recommend that anyone start by implementing this if they haven’t previously given any thought to security. The Coding Practices ensure that PLCs are no longer the weakest link and are potentially even an extra layer of security: going from Achilles’ heel to bodyguard for the system.
What technical developments in the area of OT/IoT security do you currently find particularly interesting?
Looking at what security and safety experts are currently doing shows some exciting developments. In OT, we are increasingly finding that both sides are actually working on the same overarching objective: Ensuring resilience in technical systems experiencing unexpected external changes. Shaping this together in system engineering methods (without blending security and safety methods and creating interdependency!) is a massive undertaking. But in a few years, we will be wondering why we didn’t always do it like that.
Digitalization is another interesting topic because it offers so many still untapped synergies with security. It’s madness that for years we’ve been seeing one new asset inventory tool after another in industrial security, but we don’t make use of the many overlaps with Industry 4.0 developments. Asset information is not a security topic, it’s a digitalization topic! Yet security is one of the key drivers in our need for asset inventories.
If you were a Managing Director of an SME who had not done much in OT security up to now, where would you typically start?
Introducing and implementing a solid security engineering methodology. That is your most important tool for making all other decisions.
In the course of doing this, you will not only document and learn a lot about your systems, but also uncover operating bottlenecks and balancing acts you might not have known about previously – ultimately, this effort will give you a model of your systems from the security perspective (often with an unusually low level of detail) that will help you in the future. For example: You have a security incident in IT and have to quickly remove the cable that connects IT and automation from the firewall. How do you know what happens after that? This was a real case that occurred at one of our customers and they were able to answer the question in half an hour thanks to a function-based system model.
Incidentally, even though risk analysis is a core element of security engineering, I deliberately avoid saying “do a risk analysis.” That would be misleading because you will spend far more time understanding and modeling systems than on assessing risks.
Are there any largely unnecessary or ineffective security solutions that should be shifted to the end of the roadmap?
Always look closely if there are plans to introduce additional systems just for security. Viewed soberly, these “super smart security solutions”, like any other new system, initially add complexity and a potential new attack vector. This has to be weighed carefully against their security benefits. Many effective actions exist that do not require buying any new hardware or software.
Security monitoring, anomaly detection, and threat intelligence belong at the end of the roadmap. The crux of anomaly detection is that you have to know what is normal. The crux of security monitoring is that someone should be able to do something useful with the high volume of data generated – they should have the time and know-how and also know what to do in case of critical notifications, otherwise you will not have gained anything. The same goes for threat intelligence: It is useful to know which attackers are currently using which methods (and which ones may have already infiltrated your network), but only if you can really make use of this information.
With all these solutions that deliver additional information, ask yourself: What will I do with the information? How will it affect my decisions if I have this information? If you can’t answer these questions immediately then you likely still have a way to go before these solutions will help you.
What are the misjudgments or half-truths you keep hearing, even from experts?
“You always have to weigh up security vs. convenience.” That’s true if your security only consists of long passwords and unusable encryption tools, but these tend to play a secondary role in industrial security.
Especially in industrial security, the most effective actions show that the contrary is often correct: Something that helps security will often help achieve more efficient operations. Industrial security has a lot to do with understanding systems, maintaining an overview, concentrating on the most important aspects, and reducing complexity. Sometimes, engineers can even use security as a lever to finally address operational topics that have been a headache for a long time – for example, an expensive upgrade to a new operating system.
If you could send an e-mail to all CIOs worldwide, what would be your core message?
Security does not work if it operates as a central staff unit (where it is usually located in the organizational structure). Place your security officers in the line organization, and staff these roles with people who know how your business runs.
If security is going to work, it has to come from within – therefore from the people who work with and on the systems that are to be protected. Security is a killjoy topic anyway, so it doesn’t work “outside-in” and it also doesn’t work without a budget and decision-making authority, without knowing how the operational processes work – and importantly: it doesn’t work without knowledge of the systems.
Please remember: This article is based on our knowledge at the time it was written – but we learn more every day. Do you think important points are missing or do you see the topic from a different perspective? We would be happy to discuss current developments in greater detail with you and your company’s other experts and welcome your feedback and thoughts.
And one more thing: the fact that an article mentions (or does not mention) a provider does not represent a recommendation from CyberCompare. Recommendations always depend on the customer’s individual situation.