
On April 25, 2026, a coordinated group of Discord-based security researchers uncovered a critical vulnerability in Anthropic’s Mythos platform, gaining unauthorized access to internal API endpoints and training data pipelines through a misconfigured OAuth 2.0 redirect URI validation flaw. The breach, which exposed non-public model weights and experimental safety alignment protocols, was traced to a legacy staging environment inadvertently linked to production identity providers, allowing attackers to forge service-to-service tokens using stolen client secrets from a public GitHub repository. This incident highlights systemic risks in AI infrastructure where rapid model iteration outpaces security hygiene, particularly in platforms relying on complex federated authentication across cloud, edge, and researcher sandbox environments.

The Anatomy of the Mythos OAuth Bypass

The exploit chain began with a hardcoded client_secret for Anthropic’s internal Mythos developer portal, accidentally committed to a public repository by a contractor in January 2026. While the key was rotated within 48 hours, attackers leveraged the window to enumerate redirect URIs associated with the OAuth application. They discovered that the validation logic only checked for domain ownership via DNS TXT records—not path specificity—allowing them to register a subdomain like mythos-staging.attacker-controlled.com and point it to an IP hosting a malicious callback server. By initiating an auth flow with response_type=token and a crafted state parameter, they obtained bearer tokens granting access to Mythos’ internal /v1/admin/model-weights and /v1/experimental/alignment-logs endpoints. Unlike traditional API keys, these tokens inherited the scope of the service account used for CI/CD pipelines, enabling read access to Triton Inference Server logs and FP8-quantized checkpoints of Claude 3.5 Mythos variants.
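The flaw described above can be sketched in a few lines. This is a hypothetical reconstruction, not Anthropic's actual code: the validator confirms only that the redirect URI's registrable domain was verified via a DNS TXT record, ignoring subdomain and path, so an attacker who verifies ownership of their own domain passes the same check as a legitimate callback.

```python
from urllib.parse import urlparse

# Domains whose ownership was proven via DNS TXT records. The attacker can
# trivially add their own domain here, since they control its DNS.
VERIFIED_DOMAINS = {"anthropic.com", "attacker-controlled.com"}

def domain_only_check(redirect_uri: str) -> bool:
    """Flawed: accepts ANY subdomain and ANY path under a verified domain."""
    host = urlparse(redirect_uri).hostname or ""
    registrable = ".".join(host.split(".")[-2:])  # naive eTLD+1 extraction
    return registrable in VERIFIED_DOMAINS

# A legitimate callback and the attacker's callback both pass validation:
assert domain_only_check("https://mythos.anthropic.com/oauth/callback")
assert domain_only_check("https://mythos-staging.attacker-controlled.com/steal")
```

Because the check never consults a registered-callback allowlist, the attacker simply points their verified subdomain at a server that captures the bearer token from the implicit-grant fragment.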


Forensic analysis by the researchers, shared via encrypted Signal channels and later published on a private GitHub audit log, revealed that the tokens bypassed Anthropic’s internal API gateway because the staging environment used a separate Kong plugin configuration that omitted JWT audience validation—a detail absent from their public-facing API docs. This misconfiguration allowed tokens issued for mythos-staging.anthropic.com to be accepted by the production API gateway at api.anthropic.com due to a shared Redis-backed token cache lacking namespace isolation.
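The missing audience check is easy to illustrate. The sketch below uses plain dictionaries rather than signed JWTs (no real crypto, and the claim values are illustrative): a token minted with `aud` set to the staging host should be rejected by a gateway serving `api.anthropic.com`, but a configuration that skips audience verification, as the staging Kong plugin reportedly did, accepts it.

```python
import time

def make_claims(aud: str) -> dict:
    """Stand-in for a decoded JWT payload; field values are illustrative."""
    return {"aud": aud, "exp": time.time() + 300, "sub": "ci-cd-service-account"}

def gateway_accepts(claims: dict, expected_aud: str, verify_aud: bool = True) -> bool:
    """Simplified gateway-side token check: expiry, then (optionally) audience."""
    if claims["exp"] < time.time():
        return False
    if verify_aud and claims["aud"] != expected_aud:
        return False
    return True

staging_token = make_claims("mythos-staging.anthropic.com")

# Misconfigured gateway (audience validation omitted): cross-environment token accepted.
assert gateway_accepts(staging_token, "api.anthropic.com", verify_aud=False)
# Correctly configured gateway rejects the same token.
assert not gateway_accepts(staging_token, "api.anthropic.com", verify_aud=True)
```

The shared, non-namespaced Redis token cache compounded this: once cached, the staging token resolved identically for production lookups, so even gateways that would have re-validated claims never got the chance.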

Why This Matters Beyond Anthropic

The Mythos breach is not an isolated failure but a symptom of how AI labs manage identity sprawl in multi-tenant, hybrid-cloud environments. As companies like Anthropic, OpenAI, and Google DeepMind rush to deploy specialized model variants—reasoning engines, coding assistants, and multimodal agents—their internal developer platforms become attack surfaces where OAuth, SAML, and just-in-time access controls intersect insecurely. Unlike traditional SaaS, AI platforms often grant broad scopes to service accounts to accommodate dynamic workflow chaining (e.g., a model calling a retrieval system which calls a safety classifier), amplifying the blast radius of credential leaks.


This incident feeds directly into the growing tension between AI innovation and infrastructure security. While model cards and system prompts receive public scrutiny, the underlying plumbing—identity federation, token lifecycle management, and environment isolation—remains opaque. As one senior SRE at a foundation model provider told me on condition of anonymity:

"We treat our models like crown jewels but our internal developer portals like college project repos. The Mythos leak wasn't sophisticated—it was a reminder that secrets hygiene in AI ops is still circa 2018."

Further, the exploit underscores risks in the emerging “AI developer experience” market. Platforms like Weights & Biases, Hugging Face Inference Endpoints, and Replicate are increasingly offering managed fine-tuning and deployment tools that mirror Mythos’ architecture—centralized control planes with delegated access to GPU clusters. If identity validation flaws like this can occur at a lab with Anthropic’s resources, similar vulnerabilities likely exist in lesser-resourced startups relying on third-party auth providers without deep token introspection capabilities.

Enterprise Mitigation and the Open-Source Response

In the aftermath, Anthropic confirmed it has implemented stricter redirect URI validation using full-path matching, added Redis namespace isolation for token caches, and begun scanning public repositories for leaked client secrets using GitHub's secret scanning API with custom patterns for ANTHROPIC_MYTHOS_* prefixes. It also shortened the lifespan of implicit grant tokens from 24 hours to 15 minutes for admin-facing endpoints, a move aligned with NIST SP 800-63B recommendations for high-risk applications.
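Two of those mitigations are worth sketching. Again this is a hypothetical illustration, not Anthropic's code: full-path matching reduces redirect validation to exact membership in a pre-registered allowlist, and environment-prefixed cache keys give the Redis token cache the namespace isolation it lacked.

```python
# Exact-match allowlist: scheme, host, and full path must all match a
# pre-registered entry (the entry below is illustrative).
ALLOWED_REDIRECTS = {
    "https://mythos.anthropic.com/oauth/callback",
}

def full_path_match(redirect_uri: str) -> bool:
    return redirect_uri in ALLOWED_REDIRECTS

assert full_path_match("https://mythos.anthropic.com/oauth/callback")
assert not full_path_match("https://mythos-staging.attacker-controlled.com/steal")
assert not full_path_match("https://mythos.anthropic.com/oauth/callback/../evil")

def cache_key(env: str, token_id: str) -> str:
    # Prefix token-cache keys per environment so a token cached by staging
    # can never be resolved by a production lookup.
    return f"{env}:tokens:{token_id}"

assert cache_key("staging", "abc123") != cache_key("prod", "abc123")
```

Exact matching deliberately trades flexibility for safety: dynamic callback paths must be carried in the `state` parameter rather than the redirect URI itself.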


Meanwhile, the open-source community has responded with tooling to detect similar misconfigurations. A new OWASP-AI project, oauth-ai-linter, now includes rules to flag OAuth configs allowing subdomain wildcard redirects or missing audience validation in AI platform integrations. Early adopters include engineers at Stability AI and Mistral who’ve integrated the linter into their GitHub Actions pipelines to preemptively catch misconfigurations in Hugging Face Space deployments.
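A rule of the kind described might look like the following. The config shape and finding messages here are invented for illustration; they are not the actual oauth-ai-linter API.

```python
def lint_oauth_config(config: dict) -> list[str]:
    """Flag wildcard/subdomain redirects and disabled audience validation."""
    findings = []
    for uri in config.get("redirect_uris", []):
        if "*" in uri:
            findings.append(f"wildcard redirect URI: {uri}")
    if not config.get("verify_audience", False):
        findings.append("JWT audience validation disabled")
    return findings

risky = {
    "redirect_uris": ["https://*.example.com/callback"],
    "verify_audience": False,
}
assert len(lint_oauth_config(risky)) == 2

safe = {
    "redirect_uris": ["https://app.example.com/callback"],
    "verify_audience": True,
}
assert lint_oauth_config(safe) == []
```

Running such a check in CI catches the misconfiguration before a deployment, rather than after a token leaks.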

For enterprises consuming AI APIs, the takeaway is clear: treat internal developer portals as high-risk zones. Enforce short-lived tokens, mandate mutual TLS for service-to-service calls within VPCs, and implement runtime policy enforcement using tools like Open Policy Agent (OPA) to validate token scopes against intended use cases—especially when models interact with external tools or data sources.
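The scope-enforcement idea can be reduced to a small policy function. This sketch is in the spirit of an OPA policy but written in plain Python; the endpoint paths and scope names are hypothetical: a request is allowed only when the token's scopes cover the scope the endpoint requires, and unknown endpoints are denied by default.

```python
# Hypothetical mapping from endpoint to the scope it requires.
ENDPOINT_SCOPES = {
    "/v1/admin/model-weights": "weights:read",
    "/v1/chat/completions": "inference:invoke",
}

def authorize(token_scopes: set[str], path: str) -> bool:
    """Deny by default: unknown paths and insufficient scopes both fail."""
    required = ENDPOINT_SCOPES.get(path)
    return required is not None and required in token_scopes

# A narrowly scoped inference token cannot touch admin endpoints:
assert authorize({"inference:invoke"}, "/v1/chat/completions")
assert not authorize({"inference:invoke"}, "/v1/admin/model-weights")
assert not authorize({"inference:invoke"}, "/v1/unknown")
```

Had the leaked CI/CD tokens been constrained this way, read access to weights endpoints would have required a scope those pipelines never held.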

The 30-Second Verdict

The Mythos breach wasn’t a zero-day in a transformer layer—it was a preventable identity misconfiguration exploited through basic OAuth hygiene failures. Yet its significance lies in what it reveals: as AI systems grow more capable, their security foundations are not keeping pace. Until AI labs apply the same rigor to identity infrastructure as they do to model alignment, the next leak won’t just expose weights—it could compromise the very mechanisms meant to keep those weights safe.

Sophie Lin - Technology Editor

Sophie is a tech innovator and acclaimed tech writer recognized by the Online News Association. She translates the fast-paced world of technology, AI, and digital trends into compelling stories for readers of all backgrounds.
