Artificial Intelligence in Control Systems

[Figure: Artificial neural network based fault diagnostics. Image: Abhisar Chouhan et al., JVE Journal, Creative Commons, 2020]

Many people are starting to wave their hands and declare that we need machine learning or perhaps even deep learning forms of neural networks for managing alarms and automating industrial processes. My question is why?

Many years ago, some professors came to our water utility looking for a way to estimate demand from the SCADA data collected over an entire year. Back in the mid-1990s that was not a trivial request, but I complied. We presented them with about a year and a half's worth of data. They took it back to the university and trained a neural network to estimate consumption.

The problem was that we couldn't tell what the model had actually trained on. Any time a new instrument was added or new construction came online, the model had to be retrained, and that was not a trivial process. There was also a lot the model simply didn't have. It didn't know the various hydraulic zones in the utility. It didn't even have a calendar. It must have been inferring the day of the week from the previous day's wastewater patterns, which were very consistent. There was a "signal" on Friday night, as more people went out to dinner or stayed up late, telling it the weekend was coming, so it had some idea of what day of the week it was. But federal holidays? It had no clue. A senior operator knew all of these things and could easily outperform the model.
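For what it's worth, that missing context is cheap to provide explicitly. Here's a minimal, hypothetical sketch (not the professors' actual model) of handing a demand estimator the calendar information that the 1990s network had to infer from sewage flows:

```python
from datetime import date

# Hypothetical sketch: explicit calendar features a demand model could be
# handed directly, instead of forcing it to infer the day of week from the
# previous day's wastewater pattern. Holiday list is illustrative only.

FEDERAL_HOLIDAYS_2024 = {
    date(2024, 1, 1),    # New Year's Day
    date(2024, 7, 4),    # Independence Day
    date(2024, 12, 25),  # Christmas Day
}

def calendar_features(d: date) -> list[float]:
    """One-hot day of week plus a holiday flag, appended to whatever
    SCADA-derived inputs (flows, levels, pressures) the model already uses."""
    day_of_week = [1.0 if d.weekday() == i else 0.0 for i in range(7)]
    is_holiday = 1.0 if d in FEDERAL_HOLIDAYS_2024 else 0.0
    return day_of_week + [is_holiday]

print(calendar_features(date(2024, 7, 4)))
# [0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0]  -> a Thursday, and a holiday
```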

So that idea didn’t take hold. Some said that the technology wasn’t ready. But having read up on it, I decided that what it really lacked was an understanding of what it was doing. And there was no way we were getting to a sentient neural network any time soon.

Nevertheless, that hasn't stopped the AI fan club from pushing the idea into places where it really has no business existing. Take alarm management. Many organizations are flummoxed when managing alarms: there are too many of them, and it's too damned difficult to figure out what an alarm really means. This is not a trivial issue. One of the contributing factors to the West Point wastewater treatment plant failure in King County, Washington State, was the lack of priorities on the alarms. The operators had poor situational awareness. The damage ran into the tens of millions, and raw sewage was dumped intermittently into Puget Sound for months while the plant was rebuilt.

What most operations really need is processing at the remote site to identify what triggered first, plus a few algorithms to determine the likely cause. Then send the summary alarms back to the operations center along with the status data that triggered them. Remember that the acronym SCADA begins with Supervisory. We need SUPERVISORY data.
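As a rough illustration (hypothetical tag names and rules, not a finished RTU program), the remote-site logic might look something like this: record which alarm tripped first, bundle the status data that came with it, and forward one prioritized summary instead of every raw event.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical sketch of first-out summarization at a remote site. Tag names
# and the cause rule are made up; the point is the shape of the logic.

@dataclass
class AlarmEvent:
    tag: str                 # e.g. "PUMP3_DISCHARGE_PRESS_LO"
    value: float
    timestamp: datetime

@dataclass
class SummaryAlarm:
    first_out: AlarmEvent                      # the event that started the cascade
    related: list[AlarmEvent] = field(default_factory=list)
    probable_cause: str = "unknown"

def summarize(events: list[AlarmEvent], window_s: float = 30.0) -> SummaryAlarm:
    """Group everything within a short window behind the first trip, then apply
    simple site-specific rules to suggest a probable cause."""
    ordered = sorted(events, key=lambda e: e.timestamp)
    first = ordered[0]
    related = [e for e in ordered[1:]
               if (e.timestamp - first.timestamp).total_seconds() <= window_s]
    cause = "unknown"
    if first.tag.endswith("_PRESS_LO") and any("FLOW_HI" in e.tag for e in related):
        cause = "possible line break downstream of the pump"   # illustrative rule
    return SummaryAlarm(first_out=first, related=related, probable_cause=cause)
```

The operations center then sees one summary with the first-out tag and its triggering values, not dozens of raw change-of-state messages.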

But no. Many decide to send everything up the pipe into the control room because they feel they have the bandwidth, so why not? Here's why not: the goal is to summarize and send relevant data. The control room is not meant to be a garbage dump of events that only some genius can sort through forensically days later.

Nevertheless, that's what many companies have in their SCADA systems. So when some slick salesperson shows up in front of the people with purchasing authority, carrying a really cool blinky-light thingy that is "artificially intelligent," the sale practically makes itself. It will sort out your alarms for you.

So how does it do this? It identifies patterns and sequences. It associates patterns with ultimate results, and then alerts you whenever it sees those sequences of patterns. The problem is that it doesn't know anything about how the stuff works. It just looks for things it has seen before. So if the failure is a rare one, the AI is useless. It can't reason. It doesn't know the context or the science behind what it sees.
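To make that limitation concrete, here is a deliberately naive sketch (hypothetical, not any vendor's product) of that style of correlation: it can only flag sequences it has already been shown, so a rare, never-before-seen failure produces exactly nothing.

```python
# Deliberately naive sketch of sequence-pattern alarm "intelligence": memorize
# event sequences paired with an outcome, alert when an exact match recurs.
# No physics, no context, no reasoning. Tag names are made up.

class PatternAlerter:
    def __init__(self) -> None:
        self.known: dict[tuple[str, ...], str] = {}

    def train(self, sequence: list[str], outcome: str) -> None:
        """Remember that this sequence of alarm tags preceded this outcome."""
        self.known[tuple(sequence)] = outcome

    def check(self, sequence: list[str]) -> str | None:
        """Alert only if the sequence has been seen before; novelty gets silence."""
        return self.known.get(tuple(sequence))

alerter = PatternAlerter()
alerter.train(["VIB_HI", "TEMP_HI", "TRIP"], "bearing failure")

print(alerter.check(["VIB_HI", "TEMP_HI", "TRIP"]))     # "bearing failure"
print(alerter.check(["SEAL_LEAK", "TEMP_HI", "TRIP"]))  # None: never seen it, no alert
```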

If you want a practical example of what I'm talking about, I highly recommend Janelle Shane's blog, https://aiweirdness.com. There you will find weird attempts at inventing breakfast cereals, playground equipment, recipes, poetry, and so on. Funny as they are, they demonstrate that an AI is very much an idiot savant. It doesn't understand what it does. It just mimics what it has seen in the past.

And that brings me to my contention: until an AI can report the confidence of its actions and explain its reasoning in a manner an operator can understand, I propose we refrain from using it in any critical application.
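To put a concrete shape on that bar (a hypothetical sketch, not a product requirement): every recommendation should carry a stated confidence and a rationale an operator can read, and anything below threshold should be handed back to the human.

```python
from dataclasses import dataclass

# Hypothetical sketch of the minimum I'd want before trusting an "AI"
# recommendation in a critical application: a stated confidence, a readable
# rationale, and an explicit deferral to the operator when it isn't sure.

@dataclass
class Recommendation:
    action: str
    confidence: float   # 0.0 to 1.0, as reported by the model
    rationale: str      # an explanation the operator can actually evaluate

def gate(rec: Recommendation, threshold: float = 0.9) -> str:
    if rec.confidence < threshold:
        return f"DEFER TO OPERATOR ({rec.confidence:.2f}): {rec.rationale}"
    return f"SUGGEST {rec.action} ({rec.confidence:.2f}): {rec.rationale}"

print(gate(Recommendation("close valve V-12", 0.55,
                          "sequence resembles 2 of 3 prior low-pressure events")))
# DEFER TO OPERATOR (0.55): sequence resembles 2 of 3 prior low-pressure events
```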

http://www.infracritical.com

With more than 30 years at a large water/wastewater utility and extensive experience with control systems, substation design, SCADA, RF and microwave telecommunications, and work with various standards committees, Jake still feels like one of those proverbial blind men discovering an elephant. Jake is a Registered Professional Engineer of Control Systems. Note that this blog is Jake's opinion ONLY. No employers, past or present, were ever consulted with regard to these posts. These are Jake's notions. Don't blame anyone else for them.