To AI or Not to AI in the World of OT Cybersecurity?
By Eric Knapp, Director of Cybersecurity Research and Senior Fellow at Honeywell Connected Enterprise
That is a question that, ironically, was easily answered using artificial intelligence (AI). When asked “To AI or not to AI?” ChatGPT answered, “The decision of whether or not to implement AI in a particular context or application depends on various factors and should be carefully considered.” Though I was disappointed that the language model didn’t pick up on the clear homage to The Bard, it did proffer some additional advice within a list of key considerations, which included, “Assess whether AI is suitable for the specific problem or task at hand. AI is excellent at handling complex, data-driven tasks such as natural language processing, image recognition and predictive analytics. If the problem can benefit from automation, pattern recognition or decision-making based on data, AI may be a good fit.”
But what about operational technology (OT)? Here, our artificial advisor adds some “unique considerations,” including safety and reliability, data quality and security. First, let’s talk about data quality. The AI itself freely admits that it relies heavily on data and that inaccurate or noisy data can lead to incorrect decisions. Interestingly, it recommends pre-processing and data cleaning, placing the onus squarely on the operator and accepting none of the fault itself. It’s a strong reminder that the quality of data going into AI directly impacts the quality of what comes out of AI. What it doesn’t mention is that AI also introduces errors of its own: even if your data is 100% clean, a percentage of the decisions made will still be (confidently) wrong. So, one consideration that ChatGPT did not point out, but probably should have, is accuracy. It asked us to think about data availability, cost-benefit analysis, ethical and regulatory considerations, scalability and a host of other considerations … but it never once suggested that we consider accuracy. Well, other than the standard ChatGPT disclaimer – “ChatGPT can make mistakes. Consider checking important information” – in the page footer.
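To see why accuracy deserves its own line item, consider a toy illustration. The sketch below uses entirely synthetic data invented for this example; the point is that even a model trained on perfectly labeled data will make a portion of its mistakes with high confidence, simply because real-world classes overlap.

```python
# A minimal sketch (synthetic data, hypothetical scenario) showing that a
# model trained on perfectly clean labels still makes confident mistakes.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Two overlapping classes of "sensor readings" -- the labels are 100% clean,
# but the classes themselves overlap, so no model can be 100% accurate.
normal = rng.normal(loc=0.0, scale=1.0, size=(1000, 2))
faulty = rng.normal(loc=1.5, scale=1.0, size=(1000, 2))
X = np.vstack([normal, faulty])
y = np.array([0] * 1000 + [1] * 1000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

proba = model.predict_proba(X_test)
pred = proba.argmax(axis=1)
confidence = proba.max(axis=1)

# Count predictions that are both highly confident (>90%) and wrong.
confident = confidence > 0.9
wrong = pred != y_test
print(f"Confidently wrong: {np.sum(confident & wrong)} of {len(y_test)} test points")
```

The labels in that sketch are perfectly clean, yet a fraction of the wrong predictions arrive with better than 90% confidence. Cleaning the data fixes the garbage-in problem; it does nothing about the garbage the model manufactures on its own.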
So what about using AI for OT cybersecurity? When asked about using AI in OT, it cheerfully advised us to “implement robust security measures to protect against potential AI-related vulnerabilities.” When asked to narrow the answer specifically to OT cybersecurity, it happily proclaimed that “integrating AI into the cybersecurity of OT systems can be a powerful strategy to enhance the protection of critical infrastructure and industrial processes” and listed a total of 16 areas where AI could help, from anomaly detection (which makes sense) all the way to network segmentation (which does not).
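Anomaly detection, at least, is a sensible fit. Here is a minimal sketch of the idea, using invented telemetry numbers rather than anything from a real plant:

```python
# A minimal sketch of anomaly detection on OT network telemetry.
# Feature names and values are hypothetical, invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)

# Baseline traffic: [packets/sec, mean packet size] for a steady OT process.
baseline = np.column_stack([
    rng.normal(120, 5, 5000),   # packets per second
    rng.normal(256, 10, 5000),  # mean packet size in bytes
])

detector = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# New observations: two look like the baseline, one is a burst.
new = np.array([[121.0, 255.0], [118.0, 260.0], [900.0, 64.0]])
print(detector.predict(new))  # 1 = normal, -1 = anomalous
```

A burst of small packets at many times the baseline rate stands out immediately. This is exactly the sort of data-driven pattern recognition ChatGPT said AI excels at, and here it is right.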
It forgot to mention the small issues of accuracy and potential AI vulnerabilities.
Thinking like an adversary, I already know enough from this brief exchange to cause some mischief. To avoid detection by AI-powered tools, all I need to do is manipulate the data set. High error rates are already an issue, but even if an AI system advances to the degree that it is 100% accurate and only ever offers good advice, an attacker can still turn that advice sour by poisoning the data, as the sketch below illustrates. There have been numerous stories in the past few weeks about manipulating AI tools in this fashion (including, ironically, using AI to find vulnerabilities in AI). Does this mean we should avoid using AI for cybersecurity in OT? No. It means we should avoid relying on it alone, because AI simply doesn’t live up to the standards of precision and quality demanded in industrial environments, where the consequences of AI hallucinations could be catastrophic.
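To make the poisoning scenario concrete, here is a hypothetical continuation of the anomaly-detection sketch above. The data and numbers are invented, but the mechanism is the point: if an adversary can drip-feed crafted “normal-looking” records into the training window, the retrained detector learns to accept the attack traffic.

```python
# A minimal sketch (hypothetical data) of training-data poisoning: injected
# records that resemble the attack teach the detector to accept it.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)
baseline = np.column_stack([
    rng.normal(120, 5, 5000),   # packets per second
    rng.normal(256, 10, 5000),  # mean packet size in bytes
])

# Attack traffic the defender would want flagged.
attack = np.array([[900.0, 64.0]])

clean_model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)
print("Clean model:", clean_model.predict(attack))  # -1: flagged as anomalous

# Poison: the adversary slowly injects samples resembling the attack into
# the data the model retrains on, shifting the learned "normal."
poison = np.column_stack([
    rng.normal(900, 20, 500),
    rng.normal(64, 4, 500),
])
poisoned_model = IsolationForest(contamination=0.01, random_state=0).fit(
    np.vstack([baseline, poison])
)
print("Poisoned model:", poisoned_model.predict(attack))  # 1: now "normal"
```

Nothing about the attack traffic changed; only the training data did. That is the asymmetry defenders have to plan for.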
My advice? Anyone who isn’t using AI tools to some degree is missing out on the amazing benefits that they provide. In cybersecurity, we use AI in a lot of ways, and I believe we should continue to leverage this new tech. But we should never depend solely on programmatic decision-making. Use AI where it can benefit us, while still minimizing the new risks that AI presents. Let AI analyze the haystack, but then further examine everything that it tells you is a needle. Put a human mind between the artificial one and the consequences of an inappropriate response.
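In code terms, that “human between the artificial mind and the consequences” pattern might look something like the following sketch. Everything here is a hypothetical placeholder (the detector, the asset names, the quarantine action); the one non-negotiable part is that the model only nominates, and a person approves.

```python
# A minimal sketch of human-in-the-loop gating: the AI nominates actions,
# but only an analyst's explicit approval executes them. All names and
# functions here are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Finding:
    asset: str
    reason: str
    score: float  # model confidence, not ground truth

def ai_triage(events) -> list[Finding]:
    """Hypothetical stand-in for an AI detector: returns candidate needles."""
    return [Finding(e["asset"], e["why"], e["score"]) for e in events if e["score"] > 0.8]

def quarantine(asset: str):
    print(f"Quarantining {asset}")  # the consequential action

review_queue = ai_triage([
    {"asset": "PLC-07", "why": "unusual write burst", "score": 0.93},
    {"asset": "HMI-02", "why": "new outbound peer", "score": 0.85},
])

for finding in review_queue:
    # The AI analyzed the haystack; a human confirms each "needle."
    answer = input(f"{finding.asset}: {finding.reason} ({finding.score:.0%}) -- quarantine? [y/N] ")
    if answer.strip().lower() == "y":
        quarantine(finding.asset)
```

The AI still does what it is good at, reducing a mountain of events to a short review queue, while the decision that carries operational consequences stays with a human.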