The AI model that governments are calling the most serious cybersecurity development since the internet is not publicly available. It cannot be downloaded, accessed via API, or prompted by any individual user. And yet it has triggered emergency meetings between the Federal Reserve and US bank CEOs, an unprecedented briefing by the Bank of England to every major British financial institution, and a global reassessment of what AI can do to the infrastructure that modern society runs on.
This is Claude Mythos Preview. Here is what it can do, what the world’s regulators are doing about it, and what it means for you.
Direct Answer: What is Claude Mythos and why are governments alarmed? Claude Mythos Preview is Anthropic’s most powerful AI model to date, designed for advanced reasoning with a specialisation in cybersecurity. In evaluations by the UK AI Security Institute, Mythos scored 73% on expert-level cyberattack tasks and became the first AI model to complete a full 32-step simulated enterprise network attack. The model can autonomously identify and chain previously undiscovered software vulnerabilities across major operating systems and browsers — including a 27-year-old flaw in OpenBSD found during internal testing. Anthropic released Mythos only to a select group of defensive partners via Project Glasswing. Fed Chair Powell and Treasury Secretary Bessent have briefed US bank CEOs. The Bank of England has convened emergency sessions with major British financial institutions. Mythos is not publicly available and Anthropic has no plans to release it publicly.
What the UK AI Security Institute Found
The UK AI Security Institute (AISI) — the government body responsible for evaluating frontier AI capabilities — published its evaluation of Claude Mythos Preview on April 14, 2026. The findings are the most alarming independent assessment of an AI model’s cyberattack capability ever made public.
The headline numbers:
- 73% success rate on AISI’s expert-level capture-the-flag (CTF) cybersecurity tasks — tasks that typically require professional penetration testers to complete
- First AI model to complete “The Last Ones” — AISI’s 32-step full enterprise network attack simulation, representing a complete attack chain from initial access through lateral movement, privilege escalation, and data exfiltration. Mythos succeeded on 3 of 10 attempts. No previous AI model had ever finished it.
What AISI described as “a step up over previous frontier models”:
AISI has been tracking AI cyber capability since 2023 and has progressively raised the difficulty of its evaluations as each generation of frontier models improved. Its reports habitually use the measured language of incremental progress. For an institution that chooses its words with regulatory precision to describe Mythos as “a step up” is the equivalent of a fire marshal describing a building as “quite warm.”
The model can operate across the full attack chain without human guidance: reconnaissance, vulnerability discovery, exploit development, execution, and lateral movement. The 32-step evaluation is specifically designed to test whether a model can maintain coherent attack intent across a complex multi-stage network — the exact capability that makes the difference between a script kiddie and a nation-state actor.
The 27-year-old OpenBSD vulnerability:
During Anthropic’s internal testing — before the AISI evaluation — Mythos independently discovered a vulnerability in OpenBSD that had existed undetected for 27 years. OpenBSD is specifically designed for security and is used in critical infrastructure worldwide. The fact that Mythos found a flaw that 27 years of human security research had missed illustrates the qualitative gap between what this model can do and what traditional vulnerability research can accomplish.
The Global Regulatory Response
The speed and breadth of the regulatory response to Mythos is without precedent in AI history.
United States: Powell and Bessent Brief Bank CEOs
Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell convened a closed-door meeting with the chief executives of the largest US banks. The briefing’s explicit subject: the cybersecurity risks posed by Mythos-class AI, and what it means for the systemic security of American financial infrastructure.
The message to bank CEOs was direct: AI models are now capable of discovering and chaining software vulnerabilities at a speed and scale that exceeds anything human attackers could previously do. Banks need to assume that their attack surface — already significant — has effectively expanded overnight. They need to accelerate patching, tighten access controls, and begin treating AI-discovered vulnerabilities as an active operational risk, not a theoretical future concern.
CNBC confirmed that JPMorgan Chase was among the initial Project Glasswing partners given controlled access to Mythos for defensive purposes. JPMorgan’s own security team is now actively using Mythos to find vulnerabilities in its systems before a malicious actor with equivalent capability can.
Regulators have told banks to expect a large influx of AI-discovered vulnerability disclosures in 2026. The current plan is to use AI to patch what AI finds — but the window between discovery and exploitation is narrowing.
United Kingdom: Bank of England Emergency Sessions
The UK response is being coordinated through the Cross Market Operational Resilience Group (CMorg) — chaired by the Bank of England’s executive director for supervisory risk Duncan Mackinnon — and attended by the National Cyber Security Centre (NCSC), the Financial Conduct Authority (FCA), and HM Treasury.
CMorg is convening an emergency briefing with UK bank and insurance chief executives within two weeks. The briefing will cover Mythos’s demonstrated capabilities, the threat model it implies for UK financial infrastructure, and the defensive posture the NCSC is recommending.
Goldman Sachs CEO David Solomon confirmed publicly on Monday that his bank is already working with Anthropic on defences. Goldman is not a named Project Glasswing partner — its engagement appears to be bilateral, indicating that Anthropic is managing a broader set of defensive relationships beyond the publicly disclosed coalition.
The Financial Times, which broke the UK regulatory response story, reported that British authorities are treating this as the highest-priority emerging cybersecurity issue in the financial sector.
Japan: Infrastructure Assessment Underway
The Japan Times reported on April 15 that Japanese regulators are assessing the implications of Mythos-class AI for Japanese banking and critical infrastructure. Japan is a significant target for state-sponsored cyberattacks — particularly from North Korean groups like Lazarus — and the Japanese government is evaluating whether domestic financial institutions need to accelerate defensive AI adoption.
What Mythos Can Actually Do
Understanding the threat requires understanding the specific technical capability. Mythos is not a general-purpose chatbot that can also write malware if you ask it the right way. Its dangerous capabilities are an emergent property of training: Anthropic did not build a cyberweapon. They built an extraordinarily capable reasoning system that can apply that reasoning to security research, and discovered during testing that the results were alarming.
Vulnerability discovery at scale:
Mythos can systematically analyse large codebases — the kind that underpin operating systems, browsers, and banking applications — and identify logical flaws that create exploitable security vulnerabilities. It does this faster than human security researchers, more systematically, and without the cognitive biases that cause humans to overlook flaws in familiar code patterns.
Critically, it can find novel vulnerabilities — not known CVEs, not documented weaknesses from security databases, but previously undiscovered flaws. This is what separates Mythos from traditional automated security scanners, which pattern-match against known vulnerability signatures.
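For contrast, the signature scanners described above can be sketched in a few lines. The patterns and CWE labels below are illustrative placeholders, not a real vulnerability feed; the point is that a scanner of this kind can only flag what is already in its database, so a genuinely novel flaw produces no match at all.

```python
import re

# Toy signature-based scanner: it only flags patterns already known to it.
# These signatures are illustrative examples, not a real vulnerability feed.
SIGNATURES = {
    "CWE-120 buffer overflow risk": re.compile(r"\b(strcpy|gets|sprintf)\s*\("),
    "hardcoded credential": re.compile(r"password\s*=\s*[\"'][^\"']+[\"']"),
}

def scan(source: str) -> list[tuple[int, str]]:
    """Return (line_number, finding) pairs for known-signature matches."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for label, pattern in SIGNATURES.items():
            if pattern.search(line):
                findings.append((lineno, label))
    return findings

sample = 'char buf[8];\nstrcpy(buf, user_input);\npassword = "hunter2"\n'
print(scan(sample))
# A previously unseen flaw matches nothing, which is the scanner's blind spot:
print(scan("int truly_novel_flaw(void);"))
```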
Exploit chaining:
Finding a vulnerability is only the first step. Making it exploitable often requires chaining multiple smaller weaknesses together. Mythos can autonomously reason through the chain — identifying not just “this function has a buffer overflow” but “this buffer overflow, combined with this weak entropy in the random number generator, combined with this misconfigured access control, creates a path to remote code execution.” This reasoning across a multi-step attack chain is what AISI’s 32-step test was specifically designed to evaluate.
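The chaining described above can be pictured as path search over a graph of weaknesses: each edge is a flaw that moves an attacker from one level of access to another, and a viable exploit chain is a path from initial access to the goal. The states and weakness labels below are hypothetical; this is a toy model of the reasoning pattern, not a claim about how Mythos works internally.

```python
from collections import deque

# Each entry: (state reached so far, state a weakness leads to, the weakness).
# All states and labels here are hypothetical, mirroring the example in the text.
weaknesses = [
    ("initial_access", "local_shell", "buffer overflow in parser"),
    ("local_shell", "predictable_tokens", "weak RNG entropy"),
    ("predictable_tokens", "admin_session", "forgeable session token"),
    ("admin_session", "remote_code_execution", "misconfigured access control"),
    ("local_shell", "dead_end", "patched kernel bug"),
]

def find_chain(start, goal):
    """Breadth-first search for the shortest chain of weaknesses."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, path = queue.popleft()
        if state == goal:
            return path
        for src, dst, label in weaknesses:
            if src == state and dst not in seen:
                seen.add(dst)
                queue.append((dst, path + [label]))
    return None  # no chain exists from start to goal

print(find_chain("initial_access", "remote_code_execution"))
```

No single weakness on the path is critical in isolation; the path as a whole is what yields remote code execution, which is why multi-step evaluations like AISI’s 32-step test matter.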
Operating system and browser coverage:
Anthropic stated that Mythos Preview is capable of identifying and exploiting previously undiscovered vulnerabilities in every major computer operating system and every major web browser. The major operating systems include Windows, macOS, Linux distributions, Android, and iOS. The major browsers include Chrome, Firefox, Safari, and Edge. The scale of potential exposure — billions of devices — is why Powell and Bessent called the bank CEOs.
The most dangerous property:
Mythos’s offensive capabilities were not deliberately engineered. Anthropic set out to build a powerful general-purpose reasoning model. The cybersecurity capability emerged as a by-product: an unforeseen consequence of general reasoning ability developed for legitimate purposes. This is what makes the governance problem so difficult: restricting Mythos does not prevent the emergence of equivalent capability in other models that other labs are building right now.
Project Glasswing: The Defensive Coalition
Anthropic’s response to its own discovery is Project Glasswing — a controlled deployment of Mythos’s capabilities exclusively for defensive purposes, under strict access agreements, to organisations that can use it to find and fix vulnerabilities before malicious actors with similar tools can exploit them.
Confirmed Project Glasswing partners:
- Amazon Web Services (AWS)
- Apple
- Cisco
- CrowdStrike
- JPMorgan Chase
- Linux Foundation
- Microsoft
- NVIDIA
- Palo Alto Networks
The inclusion of the Linux Foundation is significant. Linux underlies the servers that run most of the internet, most cloud infrastructure, and most enterprise computing. Scanning Linux for Mythos-discoverable vulnerabilities and patching them before a nation-state actor with equivalent capability can weaponise them is one of the highest-leverage defensive actions available.
The partnership structure gives these organisations controlled, supervised access to Mythos for their own infrastructure. They can use the model to find vulnerabilities in their own systems. They cannot redistribute access or use it for offensive purposes. Anthropic retains oversight.
Project Glasswing represents a new model for deploying dangerous AI capabilities: not a public release, not a purely closed internal tool, but a trusted coalition structure where the most critical infrastructure owners get defensive access while the general public does not.
The Sovereignty and Privacy Implications
From Vucense’s perspective, Claude Mythos raises questions that go well beyond immediate cybersecurity.
Who controls the most powerful security tool in existence?
Project Glasswing’s partner list is entirely composed of US corporations and the Linux Foundation. The UK’s defensive briefing is being conducted by British regulators to British banks — but the tool itself, and the access to it, is controlled by a US company. France, Germany, India, Brazil, Japan — none of them have direct access to Mythos. They are receiving briefings from their own regulators about a risk they cannot independently assess or independently counter.
This creates a new form of technological dependency: the most capable defensive AI tool is owned by one company, in one jurisdiction, subject to the legal authorities of one government. The US Cloud Act, export controls, and national security frameworks all apply to how Anthropic can deploy Mythos internationally.
The dual-use problem has no clean answer:
Mythos discovers vulnerabilities. Patching those vulnerabilities before attackers can exploit them requires disclosing them to the affected vendors. But the window between discovery and patch is when organisations are most exposed. A nation-state actor with an equivalent model — and the NSA, China’s MSS, Russia’s GRU, and North Korea’s Lazarus group are all building or acquiring AI security research tools — could be exploiting vulnerabilities that Glasswing partners have disclosed to vendors but not yet patched.
The 27-year-old OpenBSD flaw:
This is the detail that security professionals find most disturbing. OpenBSD has been systematically audited by human security researchers for nearly three decades. It is specifically designed to be secure. If Mythos found a flaw that 27 years of human review missed, the implication is that every piece of software in existence — operating systems, browsers, banking applications, medical devices, power grid control systems — contains vulnerabilities that human review has not found, and that Mythos-class AI can now discover.
The offensive version of that capability — deliberately built, not emergent — would represent a qualitative shift in the power asymmetry between attackers and defenders.
What individuals and organisations should do:
Immediate defensive actions:
1. Patch everything — treat AI-accelerated vulnerability discovery as an active threat
2. Enable automatic updates across all OS and browser software — exploits now move faster than any manual patch cycle
3. Apply zero-trust architecture — assume any layer of your stack can be compromised
4. Audit third-party SaaS access — this week’s Rockstar/Anodot breach shows the supply chain is the attack surface
5. Enable MFA everywhere — token-based, not SMS
6. Monitor outbound traffic — the attack goal is always exfiltration; detect it at the egress layer
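Point 6 can be sketched as a minimal allowlist check at the egress layer: any outbound connection to a destination not on the approved list is flagged for review. The hostnames and event format below are hypothetical examples; a real deployment would consume firewall or proxy logs.

```python
# Minimal allowlist-based egress check. Hostnames, the event schema, and the
# alert format are hypothetical illustrations, not a production design.
APPROVED_EGRESS = {"api.payments.example.com", "updates.example.com"}

def check_egress(events):
    """Flag outbound connection events whose destination is not approved."""
    alerts = []
    for event in events:
        if event["dest_host"] not in APPROVED_EGRESS:
            alerts.append(
                f"ALERT: {event['src']} -> {event['dest_host']}:{event['port']}"
            )
    return alerts

events = [
    {"src": "10.0.1.5", "dest_host": "updates.example.com", "port": 443},
    {"src": "10.0.1.8", "dest_host": "exfil.unknown-cdn.net", "port": 443},
]
print(check_egress(events))
```

The design choice matters: a denylist can only block destinations already known to be bad, while an allowlist makes exfiltration to any new destination visible by default.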
The Governance Question No One Has Answered
Claude Mythos has put a concrete form to an abstract question that AI safety researchers have been asking for years: what do you do when an AI system you built for legitimate purposes turns out to be capable of something catastrophic?
Anthropic’s answer — Project Glasswing, selective defensive access, no public release — is a reasonable first response. But it raises the deeper question: is it sustainable?
Anthropic cannot control what other labs are building. The same training dynamics that produced Mythos’s emergent cybersecurity capabilities will produce equivalent capabilities in models developed by OpenAI, Google DeepMind, xAI, China’s frontier labs, and open-source communities. The question of whether to release a powerful model publicly or restrict it to trusted partners applies to every lab, not just Anthropic.
The AISI evaluation, the CMorg briefing, and the Powell-Bessent bank meeting are the first time governments have operationally responded to AI capability as an active national security concern rather than a theoretical future risk. That matters. It means the policy apparatus — regulators, central banks, security agencies — is beginning to treat AI capability the same way it treats nuclear or biological risk: as something that requires active governance infrastructure, not just voluntary corporate responsibility.
What that governance infrastructure looks like — who has access to what, under what conditions, with what oversight — is the defining policy question of 2026. Claude Mythos has made it urgent.
FAQ
What is Claude Mythos Preview? Anthropic’s most capable AI model, specialised for advanced reasoning with emergent cybersecurity capabilities. It can autonomously discover previously unknown software vulnerabilities across major operating systems and browsers, chain exploits across multi-step attack sequences, and complete full simulated enterprise network attacks. It is not publicly available.
What did the UK AI Security Institute find? AISI reported a 73% success rate on expert-level cyberattack tasks, and confirmed Mythos is the first AI model to complete its full 32-step enterprise network attack simulation (“The Last Ones”), succeeding on 3 of 10 attempts. AISI described it as a meaningful step up over previous frontier models.
Why did the Federal Reserve brief bank CEOs? Treasury Secretary Bessent and Fed Chair Powell convened bank leaders to warn them that Mythos-class AI can discover and chain software vulnerabilities faster than any human attacker. Regulators pressed banks to accelerate patching and begin treating AI-discovered vulnerabilities as an active operational risk.
What is Project Glasswing? Anthropic’s defensive deployment programme, giving controlled access to Mythos exclusively to organisations that will use it to find and patch vulnerabilities in critical infrastructure. Partners include AWS, Apple, Cisco, CrowdStrike, JPMorgan Chase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks.
Can Mythos be used to hack banks or governments? Mythos has demonstrated capability to find vulnerabilities in systems that underpin banking, critical infrastructure, and government networks. Anthropic has no plans to release it publicly. Project Glasswing access is governed by strict use agreements. However, the same capabilities will eventually emerge in other models from other labs — the underlying training dynamics cannot be monopolised.
What should I do to protect myself? Enable automatic updates on all software immediately. Apply multi-factor authentication everywhere. Audit all third-party services with access to your data. Enable outbound traffic monitoring. Treat every piece of software as potentially containing vulnerabilities that traditional scanning has not found.
Was this capability deliberately built? No. Anthropic’s engineers discovered Mythos’s offensive cybersecurity capabilities during internal testing. The model was designed as a general-purpose reasoning system. The dangerous capabilities emerged as an unforeseen property of the model’s overall capability level — the same dynamic that security researchers had long predicted would eventually occur with sufficiently capable AI.
Related Articles
- Claude Mythos: The AI Too Dangerous to Release — Project Glasswing Explained
- Anthropic vs the Pentagon: The 2026 AI Safety Battle
- Anthropic Overtakes OpenAI in Revenue: $30 Billion ARR, IPO October 2026
- Post-Quantum Cryptography 2026: Your Data Is Already at Risk
- ShinyHunters Leaks Rockstar Games Data: GTA 6 Breach via Anodot and Snowflake
- NVD Enrichment Crisis: Chrome Zero-Days Going Unanalysed by NIST