More Problems with the Risk Equation

That rant I wrote earlier got me thinking even more…

The first thing the risk equation gets wrong is the presumption that generic risk is linear and additive. It is not. Let’s assume that someone sabotages the brakes in your car. You still have the parking brake, which uses a completely separate system. You may not stop in the manner you had hoped for, but you will get stopped.

Likewise, if someone were to sabotage just your parking brake, well, you can use your regular brake to come to a complete stop and then put the vehicle in Park, which locks the transmission.

So the consequence of the first attack is medium to high, and the consequence of the second is relatively low. The likelihoods, though, are pretty even. Someone could hack into your car and disable your primary braking system by messing with the ABS solenoids. But the parking brake is entirely mechanical, so you always have that to use for recovery. Thus, I’d place the risk of a main brake attack at medium, and the parking brake attack at low.

But what if someone got access to your car and disabled both systems? Does risk one add up to risk two and do you get a resulting new risk that describes what happens if both occur?

Welllll… not exactly.

The problem with a risk equation is that it is often applied linearly to non-linear problems.
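
Here’s a toy illustration of why not, as a Python sketch with invented numbers (nothing like a real risk model). Treat each braking system as an independent safeguard and look at the chance that you cannot stop the car. The combined sabotage scenario is nowhere near the sum of the two single-sabotage scenarios:

```python
# Toy illustration only -- invented numbers, not a real risk model.
# Each braking system is treated as an independent safeguard; "risk" here is
# just the chance that you cannot stop the car when you need to.

P_FAIL_MAIN = 0.001      # service brakes fail on their own (unsabotaged)
P_FAIL_PARKING = 0.01    # parking brake fails on its own (unsabotaged)

def p_cannot_stop(main_sabotaged: bool, parking_sabotaged: bool) -> float:
    """You only fail to stop if every remaining braking system fails."""
    p_main = 1.0 if main_sabotaged else P_FAIL_MAIN
    p_parking = 1.0 if parking_sabotaged else P_FAIL_PARKING
    return p_main * p_parking

main_only    = p_cannot_stop(True, False)   # ~0.01  -> the "medium" scenario
parking_only = p_cannot_stop(False, True)   # ~0.001 -> the "low" scenario
both         = p_cannot_stop(True, True)    # 1.0    -> catastrophic

print(f"main brake sabotaged:    {main_only:.3f}")
print(f"parking brake sabotaged: {parking_only:.3f}")
print(f"both sabotaged:          {both:.3f}")
print(f"naive additive guess for 'both': {main_only + parking_only:.3f}")
```

The redundancy is exactly what keeps each single attack tolerable, so adding the two single-attack numbers tells you almost nothing about the combined attack.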

Security Risk is NOT Linear.

I’m not even sure what sort of function to use. In fact, the presumption that there is a function that can describe a security risk is probably bogus. For example, it does not take the behavior of management into account. If the company is sold and new managers sweep in, there will be disgruntled employees. What fudge factor are people going to use for that? Do managers walk around with that fudge factor on their foreheads?

After all, the new management might be decent people who are just the breath of fresh air that the company needs. Or they might have ulterior motives.

In either case, the notion that someone can assign a security risk score to anything is deeply flawed.

There. I said it. So what alternatives do we have?

We need to understand our processes better. We need to understand our employees better. If we leave our fence open, and allow anyone to walk in and change things on the plant, whose fault is that?

First, let’s have a second look at what we’re defending. Without discussing the many security models that exist in the halls of academia, here are my personal views of what the security layers should look like for an industrial control system:

Layer 0 is physical. How well is the site itself secured?

Layer 1 is the process and its controls. What can you do to the assets and I/O that will make them safer and less vulnerable to an attack?

Layer 2 is the controller. What can be done to the controller that will make it less vulnerable in the event of an attack?

Layer 3 is the network (I think this is self-explanatory to most readers).

Layer 4 is the supervisory components: the HMI, the Historian, the Alarm server, and so on.

Layer 5 is the regular daily staff: the operators, the technicians, the superintendent, and so forth.

Layer 6 is the contractors, consulting engineers, project managers, and other trusted outsiders.

Layer 7 is the general public.

We need Security for each layer of operation.
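
To make that concrete, here is a minimal sketch of the layer model above. The layer names are the ones from the list; the example controls against each layer are just my own illustrative suggestions, not a standard:

```python
# The layers from this post, with one illustrative control or question per
# layer.  The controls are examples only, not a prescriptive standard.

LAYERS = {
    0: ("Physical site", "fences, locked cabinets, badge access"),
    1: ("Process and I/O", "fail-safe valve positions, hardwired interlocks"),
    2: ("Controller", "keyswitch position, change logging, firmware checks"),
    3: ("Network", "segmentation, DPI firewalls on control protocols"),
    4: ("Supervisory (HMI, Historian, Alarm server)", "hardened OS, least-privilege accounts"),
    5: ("Daily staff (operators, technicians)", "role-based access, training"),
    6: ("Trusted outsiders (contractors, engineers)", "escorted, time-limited access"),
    7: ("General public", "no direct path to anything below this layer"),
}

for number, (name, example_controls) in LAYERS.items():
    print(f"Layer {number}: {name} -- e.g. {example_controls}")
```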

Second, within each security arena, what measures are we to take to limit or remove hazards? How can we prevent leakage from a layer above into the layers below? These are not abstract questions; there are concrete design answers to them.
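
One way to make the leakage question concrete is to insist that every boundary between a layer and the one below it has an explicit, named control; any boundary without one is leakage you can point at. A sketch follows, and the boundary controls named in it are my own assumptions, not requirements:

```python
# Sketch: name the control at each boundary between adjacent layers.
# The controls below are illustrative assumptions, not requirements.

BOUNDARY_CONTROLS = {
    (7, 6): "contracts, vetting",
    (6, 5): "site induction, supervised or time-limited access",
    (5, 4): "individual HMI accounts, audit logs",
    (4, 3): "firewall rules limited to required protocols",
    (3, 2): "DPI on controller ports, write-protect keyswitch",
    (2, 1): "hardwired interlocks that software cannot override",
    (1, 0): "locked field enclosures, fenced perimeter",
}

def boundaries_crossed(outer_layer: int, inner_layer: int):
    """Every boundary an actor at outer_layer must pass to reach inner_layer."""
    return [(n, n - 1) for n in range(outer_layer, inner_layer, -1)]

# Example: what has to fail for the general public (layer 7) to reach the
# controller (layer 2)?
for boundary in boundaries_crossed(7, 2):
    control = BOUNDARY_CONTROLS.get(boundary, "NO CONTROL DEFINED -- leakage")
    print(f"{boundary[0]} -> {boundary[1]}: {control}")
```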

I have finally begun to realize that those who discuss abstract risks are doing so because they really do not understand what the security system is supposed to defend. Nobody seems to have really solid practice guides. It’s all abstractions with no sense of priority. The closest anyone has come to a realistic do-this/don’t-do-that set of guidelines is NERC CIP. However, NERC CIP suffers from another structural flaw: it has become a compliance-oriented paper chase. Nobody cares whether the measures taken are actually effective; they only care that something was done. In other words, they left out the last step of the process, which is to confirm that the security measures actually work as expected. Instead, they audit the field to ensure that the assets are in place, never mind functionality.

What we need are guidelines that not only tell people what is needed and why it is a good idea, but also what outcomes and tests can confirm that it is doing what everyone expects. In other words, if you put a deep packet inspection firewall in front of a MODBUS port on a PLC, how do you know it is working properly? How do you know it hasn’t been tampered with?
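
Here is a sketch of the kind of test I mean, assuming a policy of “reads allowed, writes blocked” on that MODBUS port. The address, port, and policy are placeholders; adapt them to your own rule set, and only run something like this against gear you are authorized to test:

```python
# Sketch of an active verification test for a DPI firewall in front of a
# Modbus/TCP port.  Assumed policy for the example: reads permitted, writes
# blocked.  Address and port are placeholders.

import socket
import struct

PLC_ADDR = ("192.0.2.10", 502)   # placeholder address (TEST-NET) and Modbus/TCP port
TIMEOUT_S = 3.0

def modbus_request(unit_id: int, pdu: bytes, tid: int = 1) -> bytes:
    """Build a Modbus/TCP frame: MBAP header followed by the PDU."""
    return struct.pack(">HHHB", tid, 0, len(pdu) + 1, unit_id) + pdu

def send(frame: bytes):
    """Send one request; return the raw response, or None if nothing comes back."""
    try:
        with socket.create_connection(PLC_ADDR, timeout=TIMEOUT_S) as s:
            s.sendall(frame)
            return s.recv(260)
    except OSError:
        return None

# A read the firewall should PERMIT: Read Holding Registers (0x03), addr 0, qty 1.
read_req = modbus_request(1, struct.pack(">BHH", 0x03, 0, 1))
# A write the firewall should BLOCK (per the assumed policy):
# Write Single Register (0x06), addr 0, value 0.
write_req = modbus_request(1, struct.pack(">BHH", 0x06, 0, 0))

read_resp = send(read_req)
write_resp = send(write_req)

# A blocked write shows up as no response at all, or as a Modbus exception
# (function code 0x86 = 0x06 with the high bit set), depending on the firewall.
print("read allowed: ", read_resp is not None and len(read_resp) > 7 and read_resp[7] == 0x03)
print("write blocked:", write_resp is None or len(write_resp) <= 7 or write_resp[7] != 0x06)
```

Run something like this on a schedule and alarm on any change in the result, and you have an answer to “how do I know it is still working?” rather than “was it installed?”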

Abstractions don’t help. Compliance doesn’t help. Only a design goal backed by routine integrity checks can effect real security.
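
As a closing example of what “routine integrity” could look like in practice (a sketch; the file paths and baseline store are invented placeholders): hash the artifacts that define your security posture and compare them against an approved baseline on a schedule.

```python
# Sketch of a routine integrity check: hash the artifacts that define the
# security posture and compare against an approved baseline.  The file paths
# and the baseline store are hypothetical placeholders.

import hashlib
import json
from pathlib import Path

BASELINE_FILE = Path("baseline_hashes.json")        # hypothetical baseline store
MONITORED = [
    Path("/etc/firewall/modbus_dpi.rules"),         # hypothetical firewall rule export
    Path("/srv/plc_backups/unit01_program.bin"),    # hypothetical PLC program backup
]

def sha256_of(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def record_baseline() -> None:
    """Run once, after the configuration has been reviewed and approved."""
    BASELINE_FILE.write_text(json.dumps({str(p): sha256_of(p) for p in MONITORED}, indent=2))

def check_integrity() -> list:
    """Run on a schedule; returns the monitored files that no longer match."""
    baseline = json.loads(BASELINE_FILE.read_text())
    return [str(p) for p in MONITORED if baseline.get(str(p)) != sha256_of(p)]

if __name__ == "__main__":
    drifted = check_integrity()
    if drifted:
        print("Integrity check FAILED for:", ", ".join(drifted))
    else:
        print("All monitored artifacts match the approved baseline.")
```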

http://www.infracritical.com

With more than 30 years of experience at a large water/wastewater utility and extensive experience with control systems, substation design, SCADA, RF and microwave telecommunications, and work with various standards committees, Jake still feels like one of those proverbial blind men discovering an elephant. Jake is a Registered Professional Engineer of Control Systems. Note that this blog is Jake's opinion ONLY. No employers, past or present, were ever consulted with regard to these posts. These are Jake's notions. Don't blame anyone else for them.
