
Claude Mythos 5: Anthropic’s AI Finds Zero-Day Bugs


Key Takeaways

  • Claude Mythos 5 is the first AI model to cross 10 trillion parameters, using Mixture-of-Experts architecture that activates roughly 1 trillion parameters per token.
  • Anthropic’s model autonomously found thousands of zero-day vulnerabilities, including a 27-year-old flaw in OpenBSD that survived decades of expert review.
  • Project Glasswing gives $100 million in compute credits to 50+ partners — Apple, Google, Microsoft, Nvidia — for defensive vulnerability patching only.
  • The UK’s AI Safety Institute confirmed Mythos solves 73% of expert-level cyberattack tasks — the first AI model ever to do so.
  • Mythos Preview is NOT publicly available; access is restricted to verified Project Glasswing partners.

How dangerous is an AI that finds software vulnerabilities faster than any human security team? Anthropic’s answer, for now, is: dangerous enough to withhold from general release. Claude Mythos 5, the company’s first 10-trillion-parameter model, emerged from a March 26 data leak — when roughly 3,000 internal Anthropic documents became publicly accessible via a CMS misconfiguration. The model has since undergone independent cybersecurity evaluation by UK government researchers. The results confirm both the breakthrough and the risk.

This article covers what Mythos can do, who gets access through Project Glasswing, and what it means for software security teams globally.

What Makes Claude Mythos 5 Different From All Previous AI?

Scale alone doesn’t explain the leap. Claude Mythos 5 carries approximately 10 trillion parameters, making it the largest known AI model to date. But it uses a Mixture-of-Experts (MoE) architecture in which only roughly 1 trillion parameters activate per token. The result: the knowledge capacity of a 10T model at a fraction of the compute cost of a dense equivalent.
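The routing behavior described above, where each token activates only a small subset of the model's experts, can be sketched in a few lines of NumPy. The dimensions, router design, and top-k choice below are illustrative toy values, not Anthropic's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def moe_forward(x, expert_weights, router_weights, k=2):
    """One token through a toy top-k Mixture-of-Experts layer.

    x: (d,) token embedding
    expert_weights: (n_experts, d, d), one weight matrix per expert
    router_weights: (n_experts, d), the gating network
    Only the k highest-scoring experts run for this token, so the
    active parameter count is a fraction of the layer's total.
    """
    scores = router_weights @ x                    # (n_experts,) router logits
    top = np.argsort(scores)[-k:]                  # indices of the k best experts
    gate = np.exp(scores[top] - scores[top].max())
    gate /= gate.sum()                             # softmax over chosen experts only
    return sum(g * (expert_weights[i] @ x) for g, i in zip(gate, top))

d, n_experts = 8, 16
x = rng.standard_normal(d)
experts = rng.standard_normal((n_experts, d, d))
router = rng.standard_normal((n_experts, d))
out = moe_forward(x, experts, router)
print(out.shape)  # prints (8,)
```

Here only 2 of 16 expert matrices are multiplied per token, which is the same cost-saving principle behind activating ~1T of 10T parameters.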

Prior frontier models — including Claude Opus 4.6, which still leads most general-purpose benchmarks — use dense transformer architectures. Mythos’s MoE design lets Anthropic encode substantially more specialized knowledge while keeping inference costs viable for enterprise partners. The leaked internal documents described it as “by far the most powerful AI model we’ve ever developed” and a “step change” in capabilities.

| Model | Parameters | Architecture | General Release | Expert Cyber Tasks |
|---|---|---|---|---|
| Claude Opus 4.6 | Undisclosed | Dense transformer | Yes | Standard |
| GPT-5.4 | Undisclosed | Dense + Thinking mode | Yes | Standard |
| Claude Mythos Preview | ~10 trillion (MoE) | Active ~1T per token | No (restricted) | 73% success rate |

How Did Mythos Find a 27-Year-Old Bug Nobody Had Caught?


OpenBSD has a reputation as one of the most rigorously hardened operating systems. Security researchers have audited it for decades. Mythos found a flaw dating to the 1998 SACK implementation — a vulnerability that survived 27 years of human review. The flaw allows a remote attacker to repeatedly crash any OpenBSD host responding over TCP.

That was not the only discovery. Over several weeks, Anthropic used Mythos Preview to identify thousands of zero-day vulnerabilities across every major operating system and web browser, along with other widely deployed software. Multiple flaws had gone undetected for years before the model surfaced them.

The UK’s AI Safety Institute (AISI) independently evaluated Mythos and published their findings. Mythos succeeded on 73% of expert-level cyberattack tasks — a category no prior AI model could complete at all. It reached 85% on apprentice tasks and approximately 95% on beginner tasks. According to the AISI evaluation report, it became the first model ever to complete a 32-step cyberattack chain end-to-end, succeeding in 3 out of 10 attempts on a simulated enterprise network.
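One rough way to read the 3-in-10 chain result: if we assume, purely as an illustrative simplification, that each of the 32 steps succeeds independently with the same probability p, then p must satisfy p^32 = 0.3:

```python
# Implied per-step reliability for a 32-step chain that succeeds
# end-to-end 3 times in 10, under the simplifying (and unverified)
# assumption of independent, identically reliable steps.
p = 0.3 ** (1 / 32)
print(f"per-step success rate: {p:.3f}")  # per-step success rate: 0.963
```

In other words, even a model that is right roughly 96% of the time at each step fails most long chains, which is why end-to-end completion was previously out of reach.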

What Is Project Glasswing and Who Gets Access?

Because Anthropic judged Mythos’s offensive capabilities too significant for public release, the company launched Project Glasswing — a $100 million compute credit program giving verified defenders restricted access to the model for patching, not attacking.

Launch partners include AWS, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks. An additional 40+ organizations maintaining critical software infrastructure can apply for access. All use is restricted to defensive work only.

  • $100M in Mythos Preview usage credits committed by Anthropic
  • $4M in direct donations to open-source security organizations
  • Pricing: $25 per million input tokens / $125 per million output tokens
  • Available via: Claude API, Amazon Bedrock, Google Cloud Vertex AI, and Microsoft Foundry
  • General availability: None — Anthropic states it will not make Mythos Preview publicly available
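At the listed rates, partners can estimate job costs with simple arithmetic. The token counts in the example below are illustrative assumptions, not Anthropic figures:

```python
# Back-of-envelope cost for a Mythos Preview job at the published
# Project Glasswing rates: $25 per million input tokens,
# $125 per million output tokens.
INPUT_RATE = 25 / 1_000_000    # USD per input token
OUTPUT_RATE = 125 / 1_000_000  # USD per output token

def job_cost(input_tokens: int, output_tokens: int) -> float:
    """Total USD cost of one request at Glasswing pricing."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Hypothetical example: feeding a 2M-token codebase and getting
# a 100k-token vulnerability report back.
print(f"${job_cost(2_000_000, 100_000):,.2f}")  # $62.50
```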

For security teams in Southeast Asia — where many organizations run software stacks built on the same open-source components Mythos can scan — the practical question is whether the Project Glasswing access criteria will expand beyond the initial 50 partners and when. Get the latest model and security coverage in our AI section.

Can AI-Powered Attacks Be Contained Once Models Like Mythos Proliferate?

The AISI evaluation was precise about scope. The attack ranges used lacked live defenders, endpoint detection, or real-time incident response. Mythos can autonomously attack weakly-defended enterprise systems; it has not been tested against hardened networks with active security operations centers.

But the structural concern raised by Anthropic, the Council on Foreign Relations, and Fortune — quoting CEO Dario Amodei — is not about Mythos specifically. It is about proliferation. If Anthropic built this capability in 2026, hostile state-sponsored actors and criminal groups will have equivalent tools within months to years. The real challenge, as Amodei noted, is not finding vulnerabilities but fixing them at the pace AI can discover them.

For enterprise security implications and patch management strategy, see our Security section.

Common Questions — Claude Mythos 5 and Project Glasswing

Q: What is Claude Mythos 5?

A: Claude Mythos 5 is Anthropic’s largest and most capable AI model to date, with approximately 10 trillion parameters in a Mixture-of-Experts architecture. It was leaked from Anthropic’s content management system in March 2026 and officially announced in April. It is not publicly available due to its cybersecurity exploitation capabilities.

Q: What is Project Glasswing?

A: Project Glasswing is a $100 million Anthropic initiative that provides selective access to Claude Mythos Preview for defensive cybersecurity work. Partners including Apple, Google, Microsoft, Nvidia, and CrowdStrike use the model to find and patch vulnerabilities in their systems before attackers can exploit them.

Q: Can Claude Mythos autonomously hack into systems?

A: According to the UK AISI evaluation, Mythos can execute multi-step cyberattacks on weakly-defended systems and completed a 32-step attack chain in a controlled simulation. However, the tests used no live defenders or endpoint detection. Its capabilities against hardened enterprise networks with active defense teams have not been publicly demonstrated.

Q: How does Claude Mythos compare to Claude Opus 4.6?

A: Claude Opus 4.6 is the standard production model for most Claude applications, with a 1 million token context window and broad general-purpose capabilities. Claude Mythos Preview is a restricted research model with approximately 10 trillion parameters, available only to Project Glasswing partners at $25 per million input tokens for defensive cybersecurity use.

Conclusion

Claude Mythos 5 marks a clear inflection in what AI can do with software security. A model that finds 27-year-old vulnerabilities in hardened systems and completes multi-step attack chains autonomously is not a lab curiosity — it signals a capability that adversaries will race to replicate. Project Glasswing is Anthropic’s attempt to deploy this power defensively first, at scale, with verified partners. Whether that head start proves sufficient is the critical question for every security team in 2026 and beyond. Follow the latest AI developments in our AI section.

Last Updated: April 16, 2026

TouchEVA

Founder and lead writer at Hubkub. Covers software, AI tools, cybersecurity, and practical Windows/Linux workflows.
