Is A.I. the ultimate solution for protecting critical infrastructure from sophisticated cyber-attacks?

Kirk: “Machine over man, Spock? It was impressive. It might even be practical.”

Spock: “But not desirable. Computers make excellent and efficient servants, but I have no wish to serve under them.” – From the Star Trek TOS episode “The Ultimate Computer”.


Gort, a robot, works alongside the computer aboard Klaatu’s spaceship in the 1951 film “The Day the Earth Stood Still”. With A.I., 1951’s science fiction is becoming today’s science fact. Can we be as trusting as Klaatu in handing all the technical work (patrolling the planets and responding to aggression) to his A.I. assistant?

One of the episodes from the 1960s TV series Star Trek was called “The Ultimate Computer”. In the story, a powerful new computer, the M5, enhanced with Artificial Intelligence (A.I.)[1], was created by computer genius Dr. Daystrom, who claimed his computer could take over command of a Federation starship. He claimed it could do a better job at managing ship functions, even taking on all the tasks performed by the human crew, including its captain. The M5 was installed on the Enterprise and taken out on a cruise for a trial run. The main part of the test was to evaluate the M5’s ability to defend the ship against attack from other vessels. The crew and its captain stood aside and watched as the M5 took full control and skillfully defended the ship with superior speed and tactics when confronted by an attack from a red team of two starships. The test went wrong when the M5 confused the exercise with a real attack on the ship and started to attack the exercise ships with deadly force. When the crew tried to regain control, it also defended itself from being disconnected[2]. Eventually Captain Kirk regained control of the ship by appealing to the M5’s sense of humanity, which its creator had imprinted on the computer during its creation. Today, the question of machine over man raised in that 1960s science fiction series has come to the fore as A.I. is being applied in real life, in areas such as answering questions and writing poetry.

This past week an article appeared by Jason Healey, an experienced former government advisor who helped formulate national cyber policies and a cybersecurity opinion leader, exploring the possibility of applying A.I. to help cyber defenders stay ahead of cyber offense[3]. Up to now the offensive use of cyber has always been ahead in the struggle to protect the technologies that monitor and control critical infrastructure. Mr. Healey describes an increasingly complex cyberspace environment where defenders with the required skillsets are few and the field of application of cybersecurity measures is vast. A.I. can fill these gaps, and according to Healey: “AI’s greatest help to the defence may be in reducing the number of cyber defenders required and the level of skills they need”.[4] This sounds like the argument made by Dr. Daystrom in promoting the M5 to operate a starship on its own.

The author uses two models: the US Cybersecurity Framework to explore the ways A.I. can be used for defense, and the Cyber Kill Chain promoted by Lockheed Martin to understand offensive cyber operations. There is one little problem with this analysis if one is concerned with protecting the technologies used to monitor and control the processes found in critical infrastructure: both models have flaws. First, the US Cybersecurity Framework poorly addresses process control environments, where protecting the operation of a power grid or petrochemical plant is the priority as opposed to protecting the data in the office (read my review of the new draft of the Cybersecurity Framework[5]). Under the category of protection, Healey sees an advantage for A.I. in the automatic patching of “software and associated dependencies”[6]. In counting this as an advantage for the defender he does not seem to appreciate the caveats raised when applying patches in a process control environment, where the application of a patch can result in unexpected and unpleasant consequences. In terms of identification, it would be most useful in this kind of environment of physical processes if A.I. could help identify and inform the senior plant engineer of anomalous process flows, data flows and equipment behavior. Instead Healey sees A.I. being used mostly for asset management and quickly finding software vulnerabilities. In a process control environment, however, vulnerable software (exploited by viruses and ransomware) is not the only concern. Much harm can be done by taking away an operator’s view and control of a physical process, as was done when a Ukrainian power grid operator watched someone move his mouse on a control screen and open breakers at 30 substations, putting a quarter of a million customers in darkness.
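To make the identification point concrete, here is a minimal sketch (my own illustration, not anything proposed in Healey’s article) of how a simple statistical check might flag anomalous process readings for review by the senior plant engineer. The tag name, window size and threshold are hypothetical.

```python
# Hypothetical illustration only: flag anomalous process readings for engineer review.
# The tag, window size and threshold are invented for this sketch.
from statistics import mean, stdev

def find_anomalies(readings, window=20, threshold=3.0):
    """Return (index, value) pairs that deviate strongly from the recent baseline."""
    anomalies = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(readings[i] - mu) / sigma > threshold:
            anomalies.append((i, readings[i]))
    return anomalies

# Example: a pump discharge pressure tag that suddenly spikes.
pressure_psi = [101.0 + 0.2 * (i % 5) for i in range(60)]
pressure_psi[45] = 140.0  # injected abnormal reading
for idx, value in find_anomalies(pressure_psi):
    print(f"ALERT: reading #{idx} = {value} psi deviates from recent baseline - notify plant engineer")
```

A real deployment would of course watch many tags and correlate process data with network and equipment behavior; the point is only that the alert should reach an engineer who can judge whether the deviation makes physical sense.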

The Cyber Kill Chain is then explored for A.I.’s application in offense. The application of this model is curious. For example, in the category of weaponization, instead of describing the work of Stuxnet, which had the ability to destroy equipment, examples of phishing and video deep fakes are used. Phishing and deep fakes are more about influencing behavior or deception than attacking something. One of the weaknesses of the Cyber Kill Chain model is its focus on cyber activities of malicious intent. It is weak, however, on the kinds of incidents that come from accidents, human error, failure to follow procedure and other non-malicious, unintentional acts.

The author further argues that A.I. can address a weakness of humans in meeting the challenge of working with “complex and diffuse tasks like defence at scale.”[7] This, I think, indicates a weakness in the argument stemming from a poor understanding of the process control environment found in critical infrastructure. There are in fact people who are adept at meeting the challenges of complex and diffuse tasks carried out every hour of the day in petrochemical plants, power generation and distribution systems, water supply and transportation systems. These sectors that support a society’s well-being are called by some critical infrastructure. These skilled people are engineers who know how things run and apply that knowledge to the design of the complex systems found in critical infrastructure. Without their expertise national power grids, cross-border pipelines, petrochemical plants, water supply and transportation systems could never be operated at large scale.

A point is missed when the author asserts that A.I. has the advantage of requiring fewer skilled people to handle responsibility for cybersecurity. This may not apply well in process control environments. In fact, what is needed is more (not fewer) people who understand both office IT cybersecurity and the engineering aspects of physical operations. There is an unsettling tendency for CISOs with office IT/computer science backgrounds, trained in data protection, to be given responsibility for industrial cybersecurity. Cross-training in both the engineering and computer science disciplines, which such a CISO is not likely to have, is needed to be equal to the challenges of protecting both the data used in the office and the physical operations in critical infrastructure.

Lastly, in all the articles, including Mr. Healey’s, proposing the use of A.I. to help humans deal with the challenges stemming from the downside of technology, such as the vulnerability of vital systems to cyber-attacks, answers are missing to two important questions concerning the application of A.I. to protect industrial control and automation environments. They were posed by the person who told us about Stuxnet, Ralph Langner, in his lecture “Brave New Industrie 4.0”: How will it be determined that an action imposed by A.I. on a system is authorized? And how will it be determined that the change makes good engineering sense?[8]
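As a thought experiment (my own sketch, not something proposed by Langner or Healey), Langner’s two questions can be pictured as a gate that sits between an A.I. and the control system: any proposed change must be authorized and must fall within limits a competent engineer would accept. The tag names, limits and approver roles below are invented for illustration.

```python
# Hypothetical sketch of Langner's two questions as a gate in front of the control system:
# (1) is the A.I.-proposed action authorized, and (2) does it make engineering sense?
# Tag names, limits and approver roles are invented for illustration.
from dataclasses import dataclass

ENGINEERING_LIMITS = {  # acceptable range per tag, as a competent engineer would set it
    "boiler_drum_pressure_setpoint_bar": (40.0, 110.0),
    "feedwater_pump_speed_pct": (20.0, 95.0),
}
AUTHORIZED_APPROVERS = {"senior_plant_engineer", "shift_supervisor"}

@dataclass
class ProposedChange:
    tag: str
    new_value: float
    approved_by: str  # role of the human who signed off on the A.I. proposal

def gate(change: ProposedChange) -> bool:
    """Reject the change unless it is both authorized and within engineering limits."""
    if change.approved_by not in AUTHORIZED_APPROVERS:
        print(f"REJECTED {change.tag}: approver '{change.approved_by}' is not authorized")
        return False
    limits = ENGINEERING_LIMITS.get(change.tag)
    if limits is None or not (limits[0] <= change.new_value <= limits[1]):
        print(f"REJECTED {change.tag}: value {change.new_value} fails the engineering-sense check")
        return False
    print(f"ACCEPTED {change.tag} -> {change.new_value} (approved by {change.approved_by})")
    return True

# An authorized but physically unreasonable change is still rejected.
gate(ProposedChange("boiler_drum_pressure_setpoint_bar", 250.0, "senior_plant_engineer"))
```

Whether such checks could ever be complete enough for a real plant is exactly the open question; the sketch only shows where Langner’s two questions would have to be answered.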

If A.I. does find broad and general application in defending critical systems and processes from cyber-attacks and incidents, it should be foreseen that there could be initial failures, especially when applied in industrial environments. The application of A.I. should be preceded by an understanding of the environment where it will be applied. Testing should start small, in a laboratory environment, and then be brought up to scale as confidence grows in the learning ability of A.I. In that Star Trek episode Captain Kirk asked Dr. Daystrom a telling question: why was the M1 computer not tried out first in running a starship instead of the M5? The reply was that the earlier versions were flawed but the M5 was now perfected.

One of the most difficult problems encountered in investigating a cyber-attack, especially where a state actor is suspected, is the issue of attribution. This is a topic that Mr. Healey has covered quite well in earlier writings[9]. Perhaps A.I. could help in determining the identity of the threat actor? Sadly, this intriguing possibility is not explored in the article. However, that still leaves the issue of what to do when attribution is confirmed and a state is identified as the perpetrator. Law enforcement, due to jurisdictional limits on cybercrime, is likely to drop the investigation of such a case. Until now, partly due to the attribution problem, no state has been punished or otherwise paid a significant penalty for a cyber-attack on critical infrastructure. If A.I. could provide attribution support to an international body that monitors and reports on violations of norms of state behavior in cyberspace, then that may be A.I.’s best contribution. For the defender, the number of attacks that require attention could be reduced as potential threat actors think twice about the risk of being identified. The best defense perhaps would stem from imposing some restraint on those on the offensive.


[1] “Artificial intelligence leverages computers and machines to mimic the problem-solving and decision-making capabilities of the human mind”, IBM, https://www.ibm.com/topics/artificial-intelligence, accessed 24 October 2023

[2] Reminds me of what happened in the crashes of the Boeing 737 MAX, when pilots failed to regain control from an automated flight control system (MCAS) that kept forcing the aircraft’s nose down.

[3] Jason Healey, The impact of artificial intelligence on cyber offence and defence, The Strategist, Australian Strategic Policy Institute, 18 October 2023, https://www.aspistrategist.org.au/the-impact-of-artificial-intelligence-on-cyber-offence-and-defence/

[4] Ibid.

[5] http://scadamag.infracritical.com/index.php/2023/08/25/having-a-framework-for-a-boat-does-not-guarantee-that-it-will-float-or-sail-well/

[6] Healey, The impact of artificial intelligence on cyber offence and defence, op. cit.

[7] Ibid.

[8] Ralph Langner, “Brave New Industrie 4.0”, S4 Conference video lecture, https://www.youtube.com/watch?v=ZrZKiy2KPCM, accessed 23 October 2023

[9] Jason Healey, Beyond Attribution: Seeking National Responsibility in Cyberspace, Atlantic Council, February 22, 2012, https://www.atlanticcouncil.org/in-depth-research-reports/issue-brief/beyond-attribution-seeking-national-responsibility-in-cyberspace/


NOTE: The views expressed within this blog entry are the author’s and do not represent the official view of any institution or organization with which he is affiliated. Vytautas Butrimas has been working in cybersecurity and security policy for over 30 years. Mr. Butrimas has participated in several NATO cybersecurity exercises, contributed to various international reports and trade journals, published numerous articles and has been a speaker at conferences and trainings on industrial cybersecurity and policy issues. He has also conducted cyber risk studies of the control systems used in industrial operations. He collaborates with the International Society of Automation (ISA) on the ISA 62443 Industrial Automation and Control System Security Standard and is co-chair of ISA 99 Workgroup 16 on Incident Management and a member of ISA 99 Workgroup 14 on security profiles for substations.