Complexity – The Difference Between a Widget and a Thingy

I was discussing this with Bob, and he noted an interesting word I had stumbled upon: thingy. Bob pointed out that a thingy is what people call things they don’t understand. A widget is a thing that you do understand. Our SCADA systems used to be composed of widgets that we could work on. Today, those widgets have been replaced by thingys. Those thingys are much more complicated than they need to be.

Complexity is a problem that has plagued many aspects of control systems. I’m going to go geezer for a bit, but there is a reason behind this.

When I first got into SCADA systems in the mid-1980s, an RTU was a very simple data-gathering device that reported over a modem at blazing speeds of around 1200 bits per second. The data were integrated, displayed, and alarmed through a sixteen-bit PDP-11 running at 15 MHz, with a whopping 2 MB of RAM and an 80 MB hard drive. The display terminals were specialized color character-graphics terminals built around an 8085 processor with about 32 KB of RAM.

That was enough to manage a system of nearly 70 RTUs and probably about 700 I/O points. Alarms were printed out on dot-matrix impact printers. If you wanted to know what the alarm history was, you went back through the printout.

It was crude, and it wasn’t particularly pretty, but it did the essential stuff. Later, we learned how to build history files of data in comma-separated value (CSV) files that we could import into a spreadsheet program. The data had a time resolution of about once every two minutes.
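For anyone who never saw one of those history files, the sketch below shows roughly what that kind of CSV logging amounts to. It is a minimal illustration only: the point names, the read_point() stand-in, and the two-minute interval are assumptions for the example, not a reconstruction of the actual system.

```python
# Minimal sketch of a CSV history file of the kind described above.
# Point names and values are illustrative; read_point() is a placeholder
# for however the real system fetched a current value.
import csv
import time
from datetime import datetime

POINTS = ["PUMP_1_FLOW", "TANK_3_LEVEL", "LINE_7_PRESSURE"]  # hypothetical point names

def read_point(name):
    """Stand-in for the real data source; returns a dummy value."""
    return 0.0

def log_history(path, interval_seconds=120, samples=5):
    """Append one CSV row per interval: a timestamp followed by each point's value."""
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        for _ in range(samples):
            row = [datetime.now().isoformat(timespec="seconds")]
            row += [read_point(p) for p in POINTS]
            writer.writerow(row)
            f.flush()
            time.sleep(interval_seconds)

if __name__ == "__main__":
    log_history("history.csv")
```

The resulting file opens directly in a spreadsheet, which was the whole point back then.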

That was then. Today, we have virtualized servers with tens of gigabytes of RAM, NAS systems with dozens of terabytes of storage, and displays driven by cheap graphics cards with maybe 500 MB of graphics memory; the HMI has about 2 GB of RAM on a 32-bit Windows platform. The historian is a monster with four processors, terabytes of storage, and a 64-bit server OS. It records rollups every 15 to 30 seconds for thousands of points. Alarm processors have histories going back years.

And how many points are we collecting from how many RTUs?  I think it’s about ten times as many points from less than three times as many RTUs.

So while the data volume has gone up by an order of magnitude, the machine performance has increased by roughly FIVE orders of magnitude. Are we really doing that much more with our data? Sure, the performance is cheap, but with that performance comes five orders of magnitude more software than ever before. That’s a lot of complexity.
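To put rough numbers on that, here is a quick back-of-the-envelope comparison using the figures in this post. The modern-side values are assumptions standing in for the vaguer quantities above: 32 GB of RAM for “tens of gigabytes,” 24 TB of storage for “dozens of terabytes,” and ten times the original ~700 points.

```python
# Back-of-the-envelope growth comparison using the figures from this post.
# The "new" numbers are assumptions: 32 GB RAM, 24 TB storage, ~7000 points.
import math

def orders_of_magnitude(old, new):
    """log10 of the ratio, i.e. how many powers of ten the value has grown."""
    return math.log10(new / old)

growth = {
    "data points":  orders_of_magnitude(700, 7_000),        # ~1.0
    "RAM (MB)":     orders_of_magnitude(2, 32_000),         # ~4.2
    "storage (MB)": orders_of_magnitude(80, 24_000_000),    # ~5.5
}

for name, oom in growth.items():
    print(f"{name:>12}: ~{oom:.1f} orders of magnitude")
```

However you pick the modern numbers, the shape is the same: the machine resources have grown four to five powers of ten while the data has grown about one.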

But it gets worse. In addition to that complexity, there are new layers that we really didn’t consider to be an essential part of the system before. Let’s call them thingy layers.

First, there is a new network layer. Older SCADA systems predated common use of TCP/IP. In today’s SCADA systems the network hardware is active and smart. We have managed switches and routers, firewalls, and DNS, DHCP, NTP, and many other servers. We have completely separate historian servers with their own complex software. Our operating systems have gone from single-user simplicity to multi-user, multi-threaded, multi-processor systems of systems.

There are also thingy processors. The hard drive has its own processor, as does the front-side bus of the CPU. The disks have RAID processors, and the USB devices have USB processors. Dumb radios and modems now have digital signal processors of their own. They have modes, features, and modulation methods that older radio systems didn’t have. Even our RTUs, which used to have about 32 kilobytes of ROM and 2 KB of RAM, now have multi-threaded, multi-protocol processors with flash memory measured in gigabytes. Yes, the RTU is also becoming a thingy.

And when you get to the phone company, well, what used to be a straight run of copper from the field to the control room is now modulated through a trunk into a switching center, back out to another switching center, and into your modem at the control room. There is a lot of stuff between the two ends there, too.

Each of these processors, every one of these peripherals, servers, and network devices, is a potential point of attack. The operating systems that used to fit on a 1.4 MB floppy disk and in kilobytes of RAM are now unable to boot without gigabytes of drive space and hundreds of megabytes of RAM.

In the field, we have remote processing capabilities now, with the ability to communicate on a local LAN with multiple PLCs. These PLCs have embedded operating systems and come with their own application IDEs. In the old days we didn’t worry about patching stuff because, aside from bugs, there weren’t many things that could go wrong with those RTUs. Today, that field thingy is so complex that we’re not even sure if a patch is safe to apply.

This is why we have such a problem with “lightly configured” devices that respond to too many things. It’s because they’re not widgets. They’re super-complex thingys. Nobody has the time to wrap their minds around these devices as completely as they would like to.

So, are these thingys really doing that much for us? I’m not so sure that they are. In fact, in frustration, I have even proposed that we consider ditching processors altogether and building primary control systems around small, comprehensible FPGA chips.

It’s worth a thought. But meanwhile I have more thingys that I don’t understand well enough…


http://www.infracritical.com

With more than 30 years of experience at a large water/wastewater utility and extensive experience with control systems, substation design, SCADA, RF and microwave telecommunications, and work with various standards committees, Jake still feels like one of those proverbial blind men discovering an elephant. Jake is a Registered Professional Engineer of Control Systems. Note that this blog is Jake's opinion ONLY. No employers, past or present, were ever consulted with regard to these posts. These are Jake's notions. Don't blame anyone else for them.