It seems that when a new technology becomes practical, there is always a rush by self-styled influencers to apply it to whatever problems they can think of. Those who question its applicability are dismissed as naysayers or even Luddites. Nevertheless, there is a history of overblown, oversold technologies. Remember blockchains? Cloud services? Fuzzy logic? I think you get the idea. These “tech leaders” are determined to be among the cool kids, with the bragging rights of having been there and done it. AI technology has a lot of potential, but it also has real limits. The hordes of cheerleaders and the gushy sales blather need a significant reset.
That is what is going on with AI today. AI concepts have been around for decades; only recently, however, has computing capacity made them practical for some tasks. The ones that seem to generate the most interest are the Large Language Models.
The name “Artificial Intelligence” is a bit of a misnomer. An LLM is actually a neural network of probabilities. It can write eloquently, but it has very little idea of what it is saying; it only “knows” what it was trained on. Large Language Models are pretty good at translation, though they require special training on idioms and cultural references. For example, unless trained specifically, one probably won’t understand the significance of “These Are Not The Droids You Are Looking For.” It may “recognize” that this is a line from a movie, but it may not understand the context in which the line is typically used, or the silliness it usually carries.
In fact, an AI can explain very little about why it does what it does. Worse, not only does the AI not “know” why its output looks the way it does, it is nearly impossible for anyone else to explain it either.
AI is not “just another tool” for an engineer. Those other tools are well-defined methods of analysis in a software package; one can find people who can explain exactly how they work. Nor is an AI a junior engineer. Junior engineers will usually tell you when there is something they do not fully understand; an AI will simply hallucinate an answer. Junior engineers have the self-awareness to express a level of confidence in their work. An AI does not.
Yet, despite these severe drawbacks, AI advocates argue that a neural network configured to talk and draw like an engineer could be an adequate replacement for a human engineer with little experience. It generates solutions that are, to put it bluntly, flawed. And even when the output is not flawed, the AI does not know why various decisions may or may not be appropriate. It only knows what was done in the past, not the science or reasoning behind it.
So if you are a professional engineer thinking of using an AI on your next project, remember this: it cannot take responsibility for its actions. You are essential for review. Are you willing to stand behind something that does not formally reason? Are you willing to approach your classically taught profession with post-modernist practice? For now, my answer is no. Someday, if the concerns I expressed above are addressed, I may change my mind.
The legal profession has already seen numerous shocking cases of professional misconduct in which AI was used inappropriately. The practice of engineering should heed these cases and apply AI technologies with far greater care. Our clients expect well-founded decisions and results they can depend on for their livelihoods and their lives. If you are a Professional Engineer, I think you should heed the legal profession’s negative experiences and avoid using AI for most design-related work.
