A wave of recent security concerns has focused on Claude Code, Anthropic’s AI-powered coding assistant. What began as a promising tool for developers has quickly become a focal point for cybersecurity vulnerabilities, culminating in a significant breach of Mexican government systems. The incidents highlight the growing risks associated with integrating AI tools into sensitive workflows and the importance of robust security measures.
The vulnerabilities, discovered by researchers at Check Point, stem from several configuration mechanisms within Claude Code, including Hooks, the Model Context Protocol (MCP) servers, and environment variables. These flaws allowed attackers to execute arbitrary shell commands and potentially steal Anthropic API keys when users opened untrusted repositories. The issues weren’t theoretical; they were actively exploited in a sophisticated attack targeting Mexican government agencies, resulting in the theft of over 150GB of data, according to SecurityWeek.
Vulnerabilities Identified and Patched
Researchers identified three key vulnerabilities. The first, tracked as “No CVE” (CVSS score: 8.7), involved a code injection flaw stemming from a user consent bypass when starting Claude Code in a modern directory. This allowed for arbitrary code execution without additional confirmation when using untrusted project hooks defined in the .claude/settings.json file. This vulnerability was addressed in version 1.0.87 in September 2025. A second vulnerability, CVE-2025-59536 (CVSS score: 8.7), enabled the execution of arbitrary shell commands automatically upon tool initialization when a user started Claude Code in an untrusted directory. Anthropic fixed this in version 1.0.111 in October 2025. Finally, CVE-2026-21852 (CVSS score: 5.3) was an information disclosure vulnerability that allowed malicious repositories to exfiltrate data, including Anthropic API keys. This was patched in version 2.0.65 in January 2026.
According to Anthropic’s advisory for CVE-2026-21852, if a user initiated Claude Code within a repository controlled by an attacker, and that repository contained a settings file directing ANTHROPIC_BASE_URL to an attacker-controlled endpoint, Claude Code would issue API requests – potentially leaking the user’s API keys – before displaying a trust prompt. This underscores the danger of opening projects from untrusted sources.
Exploitation in the Wild
The vulnerabilities weren’t merely academic concerns. A hacker reportedly weaponized Claude AI in a month-long campaign from December 2025 to early January 2026, using it to hunt for vulnerabilities, craft exploit code, and steal sensitive data from Mexican government agencies, as reported by Cyberpress and Cybersecurity News. The Hacker News reported that the same flaws were leveraged to breach Mexican government systems and exfiltrate over 150GB of data. Cybernews detailed how simply opening a malicious repository from GitHub with Claude Code could trigger the vulnerabilities.
The incident serves as a stark reminder of the potential for AI tools to be repurposed for malicious activities. Whereas AI offers significant benefits in software development, it also introduces new attack vectors that require careful consideration and proactive security measures.
Implications and Future Considerations
The Claude Code vulnerabilities and their exploitation highlight the need for developers and organizations to exercise caution when using AI-powered coding assistants. It’s crucial to only work with trusted repositories and to regularly update software to the latest versions to benefit from security patches. The incident also underscores the importance of robust API key management and the implementation of strong access controls.
As AI continues to integrate into critical infrastructure and workflows, the focus on security must intensify. Developers and security researchers will need to collaborate to identify and address vulnerabilities proactively, ensuring that the benefits of AI are not overshadowed by the risks. The ongoing evolution of AI-powered tools demands a continuous reassessment of security protocols and a commitment to staying ahead of emerging threats.
What are your thoughts on the security of AI-powered development tools? Share your insights in the comments below, and please share this article with your network to raise awareness about these critical vulnerabilities.