At a recent meeting of ICS Security “experts,” the discussion turned to risk-assessment standards.
I posed the question: Why are we so infatuated with the Risk Equation when it offers so little guidance? “Why not use consequences and defenses?” I asked. “Isn’t that how most Engineers and Operators think?”
“Risk is what they understand in the boardroom,” was the reply from our resident sage.
I didn’t feel like pursuing the question any further at the time. I could see that they were afraid to deviate from the standards they “knew” and didn’t care about what it might cost. And yet, as I left the meeting, the answer began to gnaw at me.
When assessing whether to make a potentially hazardous flight, pilots evaluate lots of things: the weather, the fields where they intend to land, the alternate airports, fuel budgets, navigation options, density altitudes, and so on. They don’t look at risk scores. Their bosses don’t look at risk scores either.
So who does use them? Well, perhaps insurance companies use them to monetize long-term risks, but I’m not aware of anyone else who does. Furthermore, the risk equation, while appropriate for safety, is poor for security because security violations are not easily described with statistical probabilities. They are often triggered by external or internal politics. In other words, they’re not random events.
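For those keeping score, the equation in question is usually cited in some variant of the classic form (the exact factors vary from standard to standard):

Risk = Threat × Vulnerability × Consequence

The threat and vulnerability terms are, in practice, likelihood estimates, which is precisely where the equation breaks down when the adversary is deliberate rather than random.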
Engineers and OT need to decide what sorts of protection are appropriate. In other words, what requires defense and from whom? There will need to be both detection and defensive measures. The detection measures are also useful for diagnostics. The result can be earlier recognition of a problem and a significantly shorter Mean Time To Repair (MTTR), so there is a return on the investment even if the system is never attacked.
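For anyone who wants to run the numbers, MTTR is typically computed as

MTTR = total downtime ÷ number of repair incidents

so anything that lets staff recognize and localize a fault sooner, whether the signal comes from a security sensor or an ordinary instrument, pulls that average down.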
When making the case to boardroom members, OT can simply point to these benefits.
The risk equation, even if it is well understood, indicates problems, not remedies.
Identifying blind spots in process and instrumentation systems, adding self-integrity-checking measurements, and training staff to recognize security issues all significantly improve process awareness. These systems also make it easier to track employee actions, so those who make process errors can be identified and targeted for additional training. Finally, preparing software and OT resources for rapid recovery after a hack means recovering from ordinary disasters sooner as well.
I’m pretty certain that these are also things boardroom members understand. So why are we still discussing that risk equation?