The increasing reliance on artificial intelligence to manage critical infrastructure presents a growing risk, with a new report from Gartner predicting that misconfigured AI could shut down national critical infrastructure in a G20 country by 2028. This isn’t a scenario involving malicious actors, but rather a failure stemming from the complexities of AI systems operating within cyber-physical systems (CPS).
Gartner defines CPS as “engineered systems that orchestrate sensing, computation, control, networking and analytics to interact with the physical world (including humans).” This broad category encompasses operational technology (OT), industrial control systems (ICS), industrial automation and control systems (IACS), the Industrial Internet of Things (IIoT), robots, drones, and Industry 4.0 technologies. The core concern isn’t AI “hallucinations” or deliberate attacks, but the potential for these systems to miss subtle changes that experienced human operators would readily identify, leading to cascading failures.
The rapid adoption of AI in these systems is accelerating the risk. Operators are increasingly granting machine learning systems the authority to make real-time decisions. While this offers efficiency gains, it also introduces vulnerabilities. A seemingly minor alteration in settings, a flawed software update, or even inaccurate data input can trigger unpredictable responses with potentially devastating consequences, Gartner warns. Unlike traditional software bugs that might crash a server, errors in AI-driven control systems can directly impact the physical world, causing equipment failures, forcing shutdowns, or destabilizing entire supply chains.
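One widely used safeguard is to interpose a deterministic safety layer between the model and the actuators, so that no single erroneous output can push equipment outside engineered limits. The sketch below is illustrative only; the class, limits, and setpoint values are hypothetical and not drawn from the Gartner report.

```python
from dataclasses import dataclass

@dataclass
class SafetyEnvelope:
    """Independently engineered bounds that an AI controller may not exceed.
    All limit values here are hypothetical illustrations."""
    min_setpoint: float   # e.g., minimum safe turbine speed (RPM)
    max_setpoint: float   # e.g., maximum safe turbine speed (RPM)
    max_step: float       # largest change allowed in a single control cycle

    def clamp(self, current: float, proposed: float) -> float:
        """Constrain an AI-proposed setpoint before it reaches actuators."""
        # Limit the rate of change first, then the absolute range.
        step = max(-self.max_step, min(self.max_step, proposed - current))
        bounded = current + step
        return max(self.min_setpoint, min(self.max_setpoint, bounded))

envelope = SafetyEnvelope(min_setpoint=1000.0, max_setpoint=3600.0, max_step=50.0)

# An erroneous model output proposes an abrupt jump to 18,000 RPM;
# the envelope converts it into a bounded, gradual adjustment instead.
safe_value = envelope.clamp(current=1800.0, proposed=18000.0)
print(safe_value)  # 1850.0: one max_step above the current setpoint
```

The value of the pattern is that the envelope is deterministic and independently auditable: even a wildly wrong model output degrades into a small, bounded adjustment rather than a physical excursion.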
The Challenge of Subtle Changes
The Gartner report highlights a critical difference between AI and human oversight. Experienced operational managers develop an intuitive understanding of system behavior and can detect anomalies that might escape the notice of an AI. “The next great infrastructure failure may not be caused by hackers or natural disasters but rather by a well-intentioned engineer, a flawed update script, or a misplaced decimal,” cautioned Wam Voster, VP Analyst at Gartner, in the report. This underscores the importance of robust testing and validation procedures as AI takes on more responsibility for critical infrastructure management.
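The "misplaced decimal" Voster describes is exactly the kind of fault a pre-deployment validation gate can catch. A minimal sketch, assuming hypothetical parameter names and an arbitrary 25% change threshold:

```python
def validate_update(baseline: dict[str, float], proposed: dict[str, float],
                    max_relative_change: float = 0.25) -> list[str]:
    """Flag parameters whose proposed values deviate sharply from the
    running baseline, e.g. an order-of-magnitude typo in an update script."""
    issues = []
    for name, old in baseline.items():
        if name not in proposed:
            issues.append(f"{name}: missing from update")
            continue
        new = proposed[name]
        if old != 0 and abs(new - old) / abs(old) > max_relative_change:
            issues.append(f"{name}: {old} -> {new} exceeds ±{max_relative_change:.0%}")
    return issues

baseline = {"pump_pressure_bar": 4.2, "valve_open_pct": 65.0}
proposed = {"pump_pressure_bar": 42.0, "valve_open_pct": 66.0}  # decimal slipped

for issue in validate_update(baseline, proposed):
    print(issue)
# pump_pressure_bar: 4.2 -> 42.0 exceeds ±25%
```

A check like this cannot judge whether a new value is correct, only whether it is surprising, which is often enough to force a human review before deployment.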
The issue extends beyond simply preventing errors; it's about ensuring AI systems can adapt to changing conditions and unexpected events. Critical infrastructure is rarely static. Maintenance, upgrades, and even seasonal variations can introduce subtle shifts in system behavior. An AI trained on a specific dataset might struggle to interpret these changes correctly, leading to incorrect decisions. This is particularly concerning in complex systems where the interplay between different components is not fully understood.
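One practical response to such shifts is to monitor live inputs for drift away from the distribution the model was trained on, and to distrust the model when the gap grows too large. A simplified sketch using a z-score on the window mean; production systems use richer statistics, and the threshold here is an assumption:

```python
import statistics

def input_drift(training_values: list[float], live_values: list[float],
                z_threshold: float = 3.0) -> bool:
    """Return True if live sensor readings have drifted far enough from
    the training distribution that the model's output should be distrusted."""
    mu = statistics.mean(training_values)
    sigma = statistics.stdev(training_values)
    live_mu = statistics.mean(live_values)
    # z-score of the live window mean against the training distribution
    z = abs(live_mu - mu) / (sigma / len(live_values) ** 0.5)
    return z > z_threshold

# Seasonal shift: coolant temperatures trend a few degrees above training data.
training = [20.1, 19.8, 20.3, 20.0, 19.9, 20.2, 20.1, 19.7]
live = [23.4, 23.1, 23.6, 23.2]

if input_drift(training, live):
    print("Input drift detected: route decisions to a human operator")
```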
Nozomi Networks Leads in AI-Powered OT Security
As AI integration into CPS expands, the need for robust cybersecurity measures is paramount. Nozomi Networks has emerged as a leader in AI-native operational technology (OT) security, according to a December 2025 Gartner report on AI Vendor Races. Gartner noted that Nozomi's early investment in embedding machine learning into its CPS security product, starting in 2013, has given it a significant operational advantage in using AI to support CPS discovery, analysis, and alerting (Nozomi Networks). The company's AI engine uses a range of techniques to enrich asset profiles, establish baseline behavior, and provide actionable insights for complex cyber-physical systems across sectors such as energy, manufacturing, and transportation.
Implications for CIOs and Infrastructure Operators
The Gartner report serves as a wake-up call for CIOs and infrastructure operators. A proactive approach to AI risk management is essential. This includes implementing rigorous testing and validation procedures, developing robust monitoring systems, and ensuring that human operators retain the ability to override AI decisions when necessary. Organizations need to invest in training programs to equip their workforce with the skills needed to understand and manage AI-driven systems. The focus must shift from simply deploying AI to ensuring its safe and reliable operation within complex critical infrastructure environments.
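Operator override can be enforced in software rather than left to policy, for example by gating every AI-proposed action on both a safety check and a confidence threshold. A minimal sketch; the threshold and action name are illustrative assumptions:

```python
from enum import Enum

class Disposition(Enum):
    EXECUTE = "execute automatically"
    ESCALATE = "hold for operator approval"
    BLOCK = "reject and alarm"

def dispatch(action: str, model_confidence: float,
             within_safe_envelope: bool,
             auto_threshold: float = 0.95) -> Disposition:
    """Decide whether an AI-proposed action runs, waits for a human,
    or is blocked outright."""
    if not within_safe_envelope:
        return Disposition.BLOCK      # hard limit: never auto-execute
    if model_confidence >= auto_threshold:
        return Disposition.EXECUTE    # routine, high-confidence case
    return Disposition.ESCALATE       # anything ambiguous goes to a human

print(dispatch("reduce_feedwater_flow", model_confidence=0.81,
               within_safe_envelope=True))
# Disposition.ESCALATE
```

The design choice worth noting is that the safe default is escalation, not execution: the AI must earn autonomy on each decision rather than hold it by default.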
The potential for widespread disruption is significant. Gartner predicts that by 2028, misconfigured AI in cyber-physical systems will shut down national critical infrastructure in a G20 country (Gartner). This isn't a distant threat; it's a rapidly approaching reality that demands immediate attention. As AI continues to permeate critical infrastructure, the focus must be on building resilience and mitigating the risks associated with these powerful, yet potentially fragile, systems.
Looking ahead, the development of industry standards and regulatory frameworks for AI in CPS will be crucial. Establishing clear guidelines for testing, validation, and monitoring will help to ensure the safe and reliable deployment of AI in critical infrastructure. Continued research into explainable AI (XAI) – AI systems that can provide clear explanations for their decisions – will also be essential for building trust and accountability. The conversation around AI safety needs to move beyond hypothetical risks and focus on the practical challenges of deploying these technologies in the real world.
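Explainability need not wait for new standards; even simple techniques such as permutation importance can make a control model's decisions inspectable. The sketch below runs against a stand-in scoring function invented purely for illustration, not any real vendor model:

```python
import random

def model_score(pressure: float, temperature: float, vibration: float) -> float:
    """Stand-in for a trained model's shutdown-risk score (illustrative only)."""
    return 0.7 * vibration + 0.2 * pressure + 0.1 * temperature

def permutation_importance(rows: list[dict], n_shuffles: int = 30) -> dict:
    """Estimate each feature's influence: shuffle one feature at a time and
    measure how much individual predictions move on average."""
    baseline = [model_score(**r) for r in rows]
    importance = {}
    for feature in rows[0]:
        total = 0.0
        for _ in range(n_shuffles):
            values = [r[feature] for r in rows]
            random.shuffle(values)
            for r, v, base in zip(rows, values, baseline):
                total += abs(model_score(**{**r, feature: v}) - base)
        importance[feature] = total / (n_shuffles * len(rows))
    return importance

rows = [{"pressure": random.random(), "temperature": random.random(),
         "vibration": random.random()} for _ in range(200)]

for feature, score in sorted(permutation_importance(rows).items(),
                             key=lambda kv: -kv[1]):
    print(f"{feature}: {score:.3f}")
# vibration ranks highest: it dominates the stand-in model's risk score
```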
What are your thoughts on the increasing role of AI in critical infrastructure? Share your insights and concerns in the comments below.