Global cybersecurity agencies are sounding the alarm about emerging risks within the artificial intelligence (AI) and machine learning (ML) supply chain. A new joint advisory, released this month, warns organizations to carefully assess the security of third-party AI tools, models, and data as adoption of these technologies accelerates. The guidance highlights the potential for vulnerabilities to be exploited if vendors aren’t properly vetted and the integrity of AI components isn’t consistently verified.
The collaborative effort was spearheaded by the Canadian Centre for Cyber Security, working alongside the Australian Signals Directorate’s Australian Cyber Security Centre (ASD’s ACSC) and agencies from Japan, New Zealand, South Korea, Singapore, the United Kingdom, and the United States. This international cooperation underscores a growing recognition of the need to proactively address the cybersecurity challenges posed by increasingly sophisticated AI technologies.
While AI and ML offer significant benefits – improving decision-making, automating processes, and enhancing customer experiences – poorly managed supply chains can introduce serious security risks. Many AI systems rely on pre-trained models, external software libraries, and large datasets sourced from third parties, creating potential entry points for attackers. According to the guidance, these dependencies can be exploited if organizations fail to adequately assess the security posture of their AI suppliers.
Understanding the AI Supply Chain Threat
The advisory is primarily aimed at organizations that develop, deploy, or maintain AI and ML systems, but also offers valuable insights for procurement teams. It encourages a thorough evaluation of vendors, data sources, and software components throughout the procurement and deployment process. Establishing clear security requirements for suppliers and implementing robust monitoring practices to detect tampering or malicious modifications are also key recommendations.
The guidance emphasizes the importance of understanding how vulnerabilities can emerge within the AI lifecycle. Attackers could potentially compromise systems by manipulating training data, injecting malicious code into models, or exploiting vulnerabilities in third-party libraries. The document stresses that organizations must consider these risks when building or procuring AI systems.
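One practical defense against tampered third-party components is to verify every downloaded model or dataset against a vendor-published cryptographic digest before it is loaded. The sketch below is a minimal illustration of that idea using Python's standard library; the file contents and digests are stand-ins, not part of the advisory itself.

```python
import hashlib
import tempfile

def verify_artifact(path: str, expected_sha256: str, chunk_size: int = 8192) -> bool:
    """Return True only if the file's SHA-256 digest matches the published value."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Stream in chunks so large model files don't need to fit in memory.
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256.lower()

# Demo with a stand-in "model" file; in practice the expected digest would
# come from the vendor over a separate, trusted channel.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"pretend model weights")
    path = f.name

good = hashlib.sha256(b"pretend model weights").hexdigest()
print(verify_artifact(path, good))      # intact artifact
print(verify_artifact(path, "0" * 64))  # digest mismatch: quarantine, don't load
```

A failed check should block integration entirely rather than merely log a warning, since a single tampered model file can compromise every downstream system that loads it.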
The Australian Signals Directorate highlighted key enhancements in the publication, noting guidance on revising risk management to address AI/ML specific threats and system behaviors, quarantining and testing AI data before integration, and validating model outputs to ensure systems operate as expected. These measures aim to strengthen data and model integrity and reduce exposure to emerging threats, as shared by the National Cyber Security Centre on LinkedIn.
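The "validate model outputs" recommendation can be made concrete with a known-answer gate: before a newly integrated model is accepted, run a fixed set of inputs through it and check the outputs against expected ranges. The sketch below is a hypothetical illustration of that pattern; the model, test cases, and thresholds are invented, and a real deployment would use a held-out evaluation set with domain-specific bounds.

```python
# Known-answer validation gate: a model that drifts outside expected
# output ranges on reference inputs is rejected before deployment.

def model(x: float) -> float:
    # Stand-in for a vendor-supplied scoring model (hypothetical).
    return max(0.0, min(1.0, 0.5 + 0.1 * x))

KNOWN_CASES = [
    # (input, expected lower bound, expected upper bound)
    (0.0, 0.4, 0.6),
    (5.0, 0.9, 1.0),
    (-5.0, 0.0, 0.1),
]

def validate_outputs(predict, cases) -> list[str]:
    """Return failure descriptions; an empty list means the model passed."""
    failures = []
    for x, lo, hi in cases:
        y = predict(x)
        if not (lo <= y <= hi):
            failures.append(f"input {x}: output {y} outside [{lo}, {hi}]")
    return failures

failures = validate_outputs(model, KNOWN_CASES)
print("PASS" if not failures else failures)
```

Running the same gate on every model update also gives a simple tamper signal: a swapped or poisoned model that behaves differently on the reference inputs fails the check even when its file appears valid.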
International Collaboration on AI Security
This joint advisory builds on previous efforts to address AI security risks. In December 2025, the U.S. Cybersecurity and Infrastructure Security Agency (CISA) and the ASD’s ACSC, in collaboration with federal and international partners, published guidance for critical infrastructure owners and operators integrating AI into operational technology (OT) systems. Industrial Cyber reported on this guidance, which outlined four key principles for secure AI integration in OT environments.
In October 2025, a new publication highlighted the importance of AI and ML supply chain security, as noted by the Australian Cyber Security Centre. The Canadian Centre for Cyber Security also joined this effort, releasing guidance on supply chain risks and mitigations for AI and ML just eight days ago, according to Cyber.gc.ca.
By coordinating across multiple countries, the participating agencies demonstrate a unified front in addressing the evolving cybersecurity landscape. This collaborative approach reflects a shared understanding that securing AI supply chains is crucial for maintaining trust and resilience in these technologies as they become increasingly integrated into various industries and government operations.
Looking ahead, supply chain security will undoubtedly remain a critical focus as AI adoption continues to expand. Organizations must prioritize proactive risk management, vendor vetting, and continuous monitoring to safeguard their AI systems and protect against potential threats. The ongoing international collaboration signals a commitment to addressing these challenges collectively and ensuring the responsible development and deployment of AI technologies.
What are your thoughts on the evolving AI security landscape? Share your insights and concerns in the comments below.