The increasing reliance on artificial intelligence to manage critical infrastructure presents a growing risk, one that could lead to widespread disruptions as early as 2028. A new report from Gartner warns that misconfigured AI within cyber-physical systems (CPS) could trigger shutdowns of essential services in a major economy, a scenario typically associated with cyberattacks or natural disasters.
This isn’t a future threat of malicious actors exploiting AI vulnerabilities, but rather a concern that AI systems, operating as intended, could fail in unpredictable ways. As organizations rapidly integrate AI into the control of vital infrastructure, the potential for errors stemming from flawed data, incorrect settings, or even routine updates is escalating. The stakes are particularly high because even minor errors in these systems can quickly cascade into large-scale failures.
Gartner defines Cyber-Physical Systems (CPS) as “engineered systems that orchestrate sensing, computation, control, networking and analytics to interact with the physical world (including humans).” This broad definition encompasses operational technology (OT), industrial control systems (ICS), and the Industrial Internet of Things (IIoT), essentially any system where software directly interacts with and controls physical processes.
The core issue, according to Gartner, isn’t necessarily AI “hallucinations” – the generation of incorrect or nonsensical outputs – but rather a lack of nuanced understanding. AI systems may struggle to recognize subtle changes that experienced human operators would immediately identify as potential problems. “The next great infrastructure failure may not be caused by hackers or natural disasters but rather by a well-intentioned engineer, a flawed update script, or a misplaced decimal,” cautioned Wam Voster, VP Analyst at Gartner, in the report.
The Expanding Role of AI in Critical Infrastructure
The trend towards automation and AI-driven control is accelerating across numerous sectors, including energy, transportation, water management, and manufacturing. Gartner highlights that AI is now the “single most important area of investment and innovation” for leaders in CPS security. This push for greater efficiency and responsiveness is understandable, but it also introduces new vulnerabilities.
Unlike traditional software bugs that might crash a server, errors in AI-driven control systems can have direct physical consequences. These could range from equipment failures and forced shutdowns to destabilized supply chains, impacting essential services and potentially causing widespread disruption. The speed at which these systems operate also exacerbates the risk; errors can propagate rapidly before human intervention is possible.
Beyond Cyberattacks: The Risk of Internal Failure
Whereas cybersecurity threats to critical infrastructure are well-documented, Gartner’s warning focuses on a different, and perhaps more insidious, risk. The firm emphasizes that the problem isn’t necessarily about external attacks, but about internal failures within the AI systems themselves. This means that even with robust cybersecurity measures in place, infrastructure could still be vulnerable to disruption due to misconfiguration or unintended consequences of AI algorithms.
The report points to the increasing practice of allowing machine learning systems to make real-time decisions without sufficient oversight. Changes to settings, software updates, or the introduction of flawed data can all trigger unpredictable responses, potentially leading to catastrophic outcomes. This is particularly concerning as organizations strive to optimize performance and reduce operational costs through automation.
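One common safeguard against exactly this failure mode is a safety envelope: a deterministic layer that vets every AI-proposed setpoint before it reaches physical actuators. The sketch below is illustrative only; the class, parameter names, and limits are invented, not drawn from the Gartner report or any specific control platform.

```python
# Hypothetical sketch: a deterministic safety envelope that vets
# AI-proposed setpoints before they reach physical actuators.
# All names and limits here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class SafetyEnvelope:
    min_value: float
    max_value: float
    max_step: float  # largest change allowed per control cycle

    def vet(self, current: float, proposed: float) -> float:
        """Clamp an AI-proposed setpoint to the safe operating envelope."""
        # Limit the rate of change so a flawed model output cannot
        # swing the plant abruptly in a single control cycle.
        step = max(-self.max_step, min(self.max_step, proposed - current))
        candidate = current + step
        # Keep the final value inside absolute physical limits.
        return max(self.min_value, min(self.max_value, candidate))

# A "misplaced decimal" (420.0 instead of 42.0) is contained by the envelope:
envelope = SafetyEnvelope(min_value=0.0, max_value=100.0, max_step=5.0)
print(envelope.vet(current=40.0, proposed=420.0))  # moves at most one step: 45.0
```

The design choice is that the guardrail is plain, auditable code with no learned components, so it remains predictable even when the model upstream is not.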
Preparing for the Inevitable: A Call for Proactive Measures
The Gartner report serves as a stark warning to CIOs and infrastructure operators. A reactive approach to AI security is no longer sufficient. Organizations must proactively assess the risks associated with AI-driven control systems and implement robust safeguards to prevent misconfigurations and unintended consequences. This includes investing in skilled personnel who understand both AI and the intricacies of the infrastructure they manage, as well as developing rigorous testing and validation procedures.
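The rigorous validation procedures described above can take a very simple form in practice: checking a proposed configuration update against known physical limits before it is ever applied. The following sketch is a minimal illustration under assumed parameter names and ranges; it is not from the report or any particular vendor's tooling.

```python
# Hypothetical sketch: pre-deployment validation of a control-system
# configuration update. Parameter names and limits are invented
# for illustration.

PLANT_LIMITS = {
    "pump_speed_rpm": (0, 3000),
    "valve_open_pct": (0, 100),
    "temp_setpoint_c": (10, 90),
}

def validate_config(update: dict) -> list[str]:
    """Return human-readable problems; an empty list means safe to apply."""
    problems = []
    for key, value in update.items():
        if key not in PLANT_LIMITS:
            problems.append(f"unknown parameter: {key}")
            continue
        lo, hi = PLANT_LIMITS[key]
        if not (lo <= value <= hi):
            problems.append(f"{key}={value} outside [{lo}, {hi}]")
    return problems

# A flawed update script that shifted a decimal is rejected, not applied:
print(validate_config({"temp_setpoint_c": 650}))  # flags the out-of-range value
print(validate_config({"valve_open_pct": 50}))    # [] — safe to apply
```

Gating every settings change, update script, and model retrain behind checks like this is one concrete way to turn the "proactive assessment" Gartner recommends into routine engineering practice.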
Companies like Nozomi Networks are already focusing on AI-native operational technology (OT) security capabilities, offering platforms designed to protect cyber-physical systems. However, technology alone is not enough. A fundamental shift in mindset is required, one that prioritizes safety and reliability over pure efficiency.
As AI continues to permeate critical infrastructure, the potential for disruption will only increase. The challenge lies in harnessing the benefits of AI while mitigating the risks, ensuring that these powerful technologies serve to enhance, rather than endanger, the essential services that modern society depends on. The next few years will be crucial in determining whether organizations can successfully navigate this complex landscape.
What steps will organizations take to proactively address these emerging AI-related risks to critical infrastructure?