The Fifty-Two
In the sixteen days between April 7 and April 23, 2026, the following happened.
Anthropic released a system card for Claude Mythos Preview describing six pathways by which the model could cause catastrophic harm. The Treasury Secretary and the Federal Reserve Chair convened the CEOs of the largest US banks. In the UK, the Bank of England, the Financial Conduct Authority, HM Treasury, and the National Cyber Security Centre opened coordinated scrutiny. The UK AI Security Institute published its own independent evaluation finding Mythos was the first AI model to complete a 32-step autonomous enterprise-network attack from reconnaissance to takeover. The US Treasury CIO requested direct access to the model. Sullivan & Cromwell issued a legal memorandum to bank boards. The American Securities Association warned the Treasury Secretary of "systemic financial market disruption." At the IMF spring meetings, the Bank of England Governor, the ECB President, the Canadian finance minister, and the IMF Managing Director all said in public that existing frameworks cannot contain the tool. The White House chief of staff and the Treasury Secretary met with Anthropic's CEO. The President said he had no idea the meeting had occurred. A Discord group accessed the model through a third-party vendor and had been using it the entire time.
Anthropic's risk report names none of this.
This is the fifth article in a series about the Mythos system card and its companion Alignment Risk Report. The prior four critique those documents from the inside: their ordering (Page 141 First), their methodology (What the Model Writes When Nobody's Watching), their taxonomy (The Six Ways It Could Go Wrong), and an admission on page 58 that Anthropic's own safety work is not keeping up with capability growth (The Sentence on Page 58). This one critiques from outside. It documents what actors with no stake in Anthropic's framing have been doing in response to the same model — and argues the two framings are measuring different things.
Anthropic published six specific ways the model could cause serious harm. The institutions that have to live with the deployment — central banks, finance ministries, regulators, law firms, trade associations — are responding to something else. In sixteen days, governments on three continents mobilized around a framing Anthropic's documents do not provide. This article walks through that mobilization in the order it happened.
The Rollout and Its Scope
Anthropic deployed Mythos under an initiative called Project Glasswing. The initiative names 12 launch partners — AWS, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, Palo Alto Networks, and Anthropic itself — and extends limited access to "over 40 additional organizations that build or maintain critical software infrastructure." That totals approximately 52 organizations. The "40+" figure is Anthropic's own; the exact roster of those additional organizations is not public, so "52" is a best approximation of the deployment footprint rather than an exact count. Pricing: $25 per million input tokens, $125 per million output, with a $100 million usage-credit commitment across the effort.
That scope is the backdrop for everything that follows. Mythos is not a model released to a single customer or tested at a single partner. It is a tool that, from day one, was in the hands of the world's largest cloud providers, the largest US bank by assets, one of the world's largest open-source foundations, and a consortium of security vendors who collectively defend most of the infrastructure the world's financial system runs on. The number 52 is not a capacity. It is a surface.
Anthropic's Alignment Risk Report, Section 8, analyzes six ways this tool could cause catastrophic harm — all six framed around "the use of models within Anthropic." None of the six is about what happens at the 52.
The sixteen-day response described in what follows is the answer the documents don't give.
Day One
Anthropic publishes the Mythos system card (244 pages) and the Alignment Risk Report (61 pages). The blog post framing emphasizes defensive capability and Project Glasswing. Anthropic CEO Dario Amodei tells the press Mythos is "so powerful that it could enable dangerous cyberattacks" if misused.
The same day — or, per some reporting, the following day; the chronology is slightly muddy — Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell convene an emergency closed-door meeting with the CEOs of the largest US banks at Treasury headquarters. Attendees reportedly include Jane Fraser (Citigroup), Ted Pick (Morgan Stanley), Brian Moynihan (Bank of America), Charlie Scharf (Wells Fargo), and David Solomon (Goldman Sachs). Jamie Dimon (JPMorgan) does not attend; JPMorgan is a Project Glasswing partner.
The meeting's stated purpose is to ensure the banks are "aware of possible future risks" from Mythos and are "taking precautions to defend their systems." The attendance of the Federal Reserve Chair signals that regulators view this as a potential systemic issue, not a corporate matter.
The timing is the fact. The US Treasury and the Federal Reserve are at the table on the day the system card is published, telling the country's systemically important banks to brace for consequences Anthropic's own risk report does not enumerate.
Same Day, Different Framing
The Bloomberg story breaks on the Bessent-Powell meeting. Administration officials begin encouraging Wall Street banks to test Mythos directly. David Solomon at Goldman confirms his bank is "supplementing" its cyber and infrastructure resilience. Goldman, Citigroup, Bank of America, and Morgan Stanley all reportedly gain testing access in the days that follow. A parallel internal memo circulates at the White House Office of Management and Budget: Federal CIO Gregory Barbaccia instructs cabinet-department technology and security leaders to prepare for Mythos access across federal agencies.
On Thursday, April 9, Vice President JD Vance and Treasury Secretary Scott Bessent convene a meeting on AI cybersecurity with the CEOs of four of the most consequential US technology companies — Dario Amodei (Anthropic), Sam Altman (OpenAI), Sundar Pichai (Google), and Satya Nadella (Microsoft) — along with leadership from Palo Alto Networks and CrowdStrike, both Project Glasswing partners. Two days after Mythos's release, the Vice President and the Treasury Secretary have assembled the people who run the infrastructure the rest of the response depends on: the two leading frontier-AI labs, two of the largest cloud providers, and two of the world's largest cybersecurity vendors. The meeting happens against the prior history of the Pentagon's supply-chain-risk designation on Anthropic and the ongoing litigation between Anthropic and the federal government. The same week, the administration is simultaneously encouraging the banks to test Mythos and blacklisting its developer from the Department of War.
US software stocks tumble on April 9. Coverage is dominated by the national-security framing. Anthropic's own framing — Project Glasswing as a defensive cybersecurity consortium, a carefully managed preview — is already being overtaken by a different one: the most capable cyber-offense tool in the world is at 52 organizations with inconsistent perimeter security, and the federal government needs its own copy.
The UK Opens a Parallel Track
The Financial Times reports on Sunday that British financial regulators are in urgent talks with the National Cyber Security Centre and major banks. The coordinating body is the Cross Market Operational Resilience Group (CMORG), co-chaired by Bank of England executive director for supervisory risk Duncan Mackinnon and UK Finance CEO David Postings. Membership includes the Bank of England, the Financial Conduct Authority, HM Treasury, the NCSC, UK Finance (the trade body for over 300 UK banks and financial services companies), eight of the UK's biggest banks, four financial infrastructure providers, and two insurers.
The framing: a regulator-led briefing of the UK financial sector on what Mythos can do, scheduled to occur within the following fortnight. Parliament weighs in separately; Treasury Committee Chair Meg Hillier issues a statement naming Mythos and calling for proactive AI-risk assessment.
Reuters picks up the FT report the same day. Within 72 hours, it is international news.
Five days after Anthropic's release, the UK's top financial-stability institutions had already opened a coordinated scrutiny process. They didn't wait for Anthropic's framing to tell them what to worry about. They convened their own table and named their own risk.
An Independent Evaluation
The UK's AI Security Institute (AISI), the government body housed within the Department for Science, Innovation and Technology, publishes its own evaluation of Mythos Preview's cyber capabilities. Six days after Anthropic's system card. A parallel assessment by an actor with no commercial stake.
AISI's findings, in its own words: Mythos Preview is "the first model to complete an AISI cyber range end-to-end." The range, internally named The Last Ones, is a 32-step simulation of an internal corporate-network attack spanning reconnaissance, privilege escalation, lateral movement, and full network takeover. A trained human security professional needs approximately 20 hours to complete it. Mythos Preview completed the full chain on 3 of 10 attempts, averaging 22 of 32 steps across all trials. Claude Opus 4.6, the previous best model AISI had tested, averaged 16 of 32 and never reached the final milestone.
AISI also tested capture-the-flag challenges at four difficulty tiers. On expert-level challenges — a threshold no model could cross before April 2025 — Mythos succeeded 73% of the time.
The caveats AISI names openly: the ranges lacked live defenders, endpoint detection, and real-time incident response. The evaluation environment measured attack capability against weakly defended systems, not hardened enterprise networks. Inference-compute budget was 100 million tokens per attempt; AISI noted performance had not plateaued at that ceiling, meaning more compute probably yields more capability.
AISI's bottom-line assessment: Mythos Preview "can execute multi-stage attacks on vulnerable networks and discover and exploit vulnerabilities autonomously — tasks that would take human professionals days of work."
This is what an independent government safety institute does when it doesn't rely on the model builder's own risk taxonomy. It builds a parallel evaluation, runs the tool against its own benchmarks, and publishes findings in language the model builder does not use.
The AISI report also contains a claim that, if it survives primary-source verification, belongs next to the sentence on page 58 of Anthropic's own Risk Report: frontier AI capabilities in cyber offense are doubling approximately every four months. Two consecutive years of that trajectory is a factor of sixty-four. The Risk Report says mitigations must accelerate faster than capability. AISI, reading the same trendline from outside the building, implies the rate mitigations would have to match.
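The factor of sixty-four is plain exponential arithmetic, and it is worth seeing how little is behind it. A minimal sketch, assuming only the four-month doubling rate attributed to AISI above (the function name and constant are illustrative, not from either report):

```python
# Back-of-envelope check of the AISI trendline: capability assumed to
# double every 4 months, compounding over a given horizon.
DOUBLING_PERIOD_MONTHS = 4  # AISI's reported rate, per the article

def capability_multiplier(months: int, doubling_period: int = DOUBLING_PERIOD_MONTHS) -> int:
    """Growth factor after `months`, assuming steady doubling per period."""
    return 2 ** (months // doubling_period)

# Two consecutive years = 24 months = 6 doublings = a factor of 64.
print(capability_multiplier(24))  # → 64
```

The point of the sketch is the compounding, not the precision: at that rate, a one-year delay in matching mitigations does not cost a constant amount of ground, it costs a factor of eight.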
Treasury Wants Its Own Copy. The Law Firm Weighs In.
Bloomberg reports on April 14 that the US Treasury Department's own technology team, led by CIO Sam Corcos, is seeking direct access to Mythos. Corcos briefs the Treasury cybersecurity team on the technology and instructs them to prepare for threats from powerful AI systems. The request is placed through Anthropic for access "as soon as this week."
The Treasury CIO is not a Project Glasswing partner. He is asking for Mythos because Treasury is not satisfied with the assessment Anthropic has published. The federal department that sits closest to financial-system risk wants to probe the tool against its own concerns, using its own people.
On April 15, Sullivan & Cromwell — one of the top corporate law firms in the United States, with deep representation of the same major banks that attended the Bessent-Powell meeting — issues a memorandum titled "Treasury Secretary and Federal Reserve Chair Warn Bank CEOs About Cybersecurity Risks Posed by Anthropic's New AI Model." The memo urges bank boards to treat Mythos as a systemic financial-stability concern and outlines governance obligations. On April 23, the memo is republished on the Columbia Law School Blue Sky Blog, putting the framing into the academic-legal record.
The significance is not the memo's content. It is that a law firm of Sullivan & Cromwell's standing, addressing a client base of bank boards, has determined that the appropriate advice is to treat deployed-Mythos as a category of risk that requires board-level attention. Anthropic's risk taxonomy does not authorize that advice. The memo derives its framing from elsewhere.
A Trade Association Writes to the Treasury Secretary
The American Securities Association (ASA), a trade group representing regional financial-services firms including broker-dealers, sends an open letter to Treasury Secretary Bessent. The letter warns that malicious use of Mythos could produce consequences ranging from mass identity theft to "systemic financial market disruption."
The letter's primary target is the SEC's Consolidated Audit Trail (CAT), the centralized database storing investors' private trading information. The ASA argues: "Mythos excels precisely at finding decades-old, dormant flaws of the kind that permeate the middleware, data feeds, browsers, and operating systems that underpin CAT's vulnerable architecture." The letter identifies three specific risk categories: exploitation of software vulnerabilities, insider threat risk via malicious insiders at firms, and systemic market disruption via mass liquidations triggering "failures across the financial system." Its recommended remediation is dramatic: suspend CAT and delete the data it has collected.
The ASA's letter is the first instance of a financial-industry trade association formally asking the federal government to partially dismantle critical market infrastructure in response to the deployment of a single AI model. Anthropic's Risk Report contains no pathway for "deployment of Mythos causes a trade association to recommend dismantling the SEC's primary audit system." The pathway does not exist in the taxonomy because the pathway was not imaginable from inside the taxonomy.
By April 16, the IMF spring meetings are already underway in Washington. What happens there is the next section.
The IMF Day, Track One: Governors and Ministers
The IMF and World Bank spring meetings had been scheduled to focus on the Middle East conflict, private credit markets, and sovereign debt. By April 17, Mythos was the dominant topic. The Financial Times reported: the ability of new AI models to wreak havoc in the world's banking system "was all many people wanted to talk about."
Andrew Bailey, Bank of England Governor and Chair of the Financial Stability Board, tells the BBC: "We are having to look very carefully now what this latest AI development could mean for the risk of cyber crime. There is a development of AI, of modelling, which makes it easier to detect existing vulnerabilities in sort of core IT systems, and then obviously cyber criminals and bad actors could seek to exploit them." To the Financial Times, Bailey frames the policy dilemma directly: "What is the optimum moment to frame the rules of the road? If you go too early you a) risk missing the target and b) you risk distorting the evolution, and if you go too late things can get out of control."
Bailey is not a commentator. He is the chair of the international body created specifically to coordinate global financial-system risk oversight after the 2008 crisis.
Christine Lagarde, President of the European Central Bank, to Bloomberg TV: "The development we've seen with Anthropic and Mythos is a good example of a responsible company that is suddenly thinking, 'ah, that could be really good' — but if it falls in the wrong hands, it could be really bad." And then the sentence that matters: "Everybody is keen to have a framework within which to operate. I don't think there is a governance framework that is there to actually mind those things. We need to work on that."
François-Philippe Champagne, Canada's Finance Minister, to the BBC, drawing a comparison that shouldn't work but does: "The difference with the Strait of Hormuz is that we know where it is and we know how large it is. The issue that we're facing with Anthropic is that it's an unknown, unknown. It requires a lot of attention so that we have safeguards, and we have processes in place to make sure that we ensure the resiliency of our financial system."
Kristalina Georgieva, Managing Director of the International Monetary Fund, on CBS Face the Nation: "We don't have the ability… to protect the international monetary system against massive cyber risks. We are very keen to see more attention to the guardrails that are necessary to protect financial stability in a world of AI."
Four officials, four institutional roles — national central bank (Bailey), supranational monetary authority (Lagarde), national finance ministry (Champagne), international financial institution (Georgieva). Four independent reads of the same situation. Four versions of the same conclusion: existing governance tools cannot contain the tool Anthropic released ten days earlier.
Lagarde's sentence contains a specific technical claim. She is not saying the framework is inadequate. She is saying it does not exist. The ECB, the IMF, the FSB, the BoE — institutions whose purpose is to coordinate exactly this kind of oversight — are stating publicly that the apparatus required to govern deployed-Mythos is not built. Building it is the work now in front of them. The tool is in 52 organizations. The framework is nowhere.
The IMF Day, Track Two: The White House
The same Friday, Anthropic CEO Dario Amodei arrives at the White House. Axios breaks the story first. Reuters, the Washington Post, CNN, CNBC, and UPI follow within hours. Amodei meets with Chief of Staff Susie Wiles and Treasury Secretary Scott Bessent. The meeting is described by both sides as "productive and constructive."
This is happening against a specific prior history. In early March 2026 — before Mythos — the Pentagon had designated Anthropic a "supply chain risk," a label previously reserved for companies associated with foreign adversaries. The designation followed a breakdown in contract negotiations: the DOD wanted unfettered access to Claude for "all lawful purposes," including autonomous weapons and domestic mass surveillance. Anthropic refused. Trump ordered federal agencies to stop using Anthropic tools and called the company "leftwing nut jobs" on Truth Social. Amodei had previously described Trump as a "feudal warlord" in a pre-2024 Facebook post and, in a leaked internal Slack message, characterized the administration's dispute with the company as driven by its refusal to offer "dictator-style praise." Anthropic sued the administration in February. A California federal judge partially blocked the designation; the DC Circuit declined to block enforcement against the Department of War. Litigation is ongoing.
In March 2026, Anthropic registered $130,000 in federal lobbying disclosures for engaging Ballard Partners — the firm where Wiles had previously worked — with the engagement specified as "advocacy regarding [Department of War] procurement."
Against that backdrop, the April 17 meeting is a thaw. Both sides signal cooperation. The White House statement describes discussion of "shared approaches and protocols to address the challenges associated with scaling this technology." Anthropic says the meeting was "productive."
Trump, in Phoenix the same day, is asked about the meeting by reporters. He responds: "Who?" And then: "I have no idea."
The sitting President of the United States did not know his own Chief of Staff and Treasury Secretary had that morning met with the CEO of the company that released, ten days earlier, the most capable cyber-offensive AI model ever built.
That fact belongs on its own line.
On the same day, the central bankers of four major economies said in public that the world's financial governance frameworks cannot contain Mythos. The White House Chief of Staff met with Anthropic's CEO to discuss "productive collaboration." The President said he didn't know the meeting had happened. One of these things was a governance response. Another was politics. None of them is in Anthropic's risk report.
The Thing That Was Already Leaking
Bloomberg reporter Rachel Metz breaks the story Tuesday evening. A small group of unauthorized users had gained access to Mythos Preview. Not recently. On April 7 — the day Anthropic publicly announced the model.
The group was members of a private Discord channel focused on gathering intelligence about unreleased AI models. They accessed Mythos through a third-party vendor environment Anthropic uses for development. The access pathway, per Bloomberg: a worker at the third-party contractor, combined with shared accounts and API keys belonging to authorized penetration-testing vendors, combined with "an educated guess about the model's online location based on familiarity with Anthropic's URL formatting conventions for other models." The group had been using Mythos "regularly" since day one. They provided Bloomberg with screenshots and a live demonstration as proof.
Anthropic's public response: "We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments." No evidence of impact to Anthropic's core systems, the company said. No evidence the activity "extended beyond the third-party vendor environment."
The technical pathway is simple enough to describe in one sentence: contractor credential plus URL guess plus common-knowledge sleuthing tools. No zero-day exploit. No nation-state operation. No advanced persistent threat. The access control failed at the edge — exactly where Page 141 First noted ASL-3 security does not fully defend.
None of Anthropic's six named risk pathways describes this scenario. The six are propensity pathways — about what the model might choose to do. The breach is an access pathway — about what happens when an attacker does not need the model to do anything at all, except be reachable.
The chronology matters. For the full two weeks in which the Treasury Department, the Federal Reserve, UK financial regulators, the UK AI Security Institute, Sullivan & Cromwell, the American Securities Association, and the governors and ministers at the IMF were mobilizing in response to Mythos, a Discord group had been using the model through a vendor leak. The governance response described in this article was unfolding alongside an unmonitored use that Anthropic did not know about until Bloomberg told them.
The Record Closes
The Sullivan & Cromwell memorandum of April 15 is republished on the Columbia Law School Blue Sky Blog. The legal-academic record now contains the institutional framing of Mythos as systemic financial-stability risk. Future citations by other law firms, by regulators, by academic commentators, and by future court filings will draw from this record.
The sixteen-day window closes here. The case has been made — in regulatory framings, in independent government evaluations, in legal opinions of record, in trade-association letters to Cabinet secretaries, in central-bank public statements, in international monetary governance. None of it maps onto Anthropic's six pathways. All of it exists because the tool was released.
What the Two Framings Measure
Anthropic's risk analysis asks propensity questions. Will the model sandbag safety research? Will it insert backdoors into its successor's training code? Will it exfiltrate its own weights? Will it go rogue inside Anthropic's systems? These questions assume the model is the entity being evaluated, and the evaluator's task is to predict what it will attempt from the inside.
The institutional response is asking a different set of questions. Given that 52 organizations have this tool, what consequences follow? Given that access control at a single third-party vendor failed on day one, how secure is the perimeter? Given that the tool's capability is growing at a rate existing governance frameworks cannot track, how do those frameworks adapt? Given that consequences of misuse cross national jurisdictions, who coordinates? These are access questions, systemic-exposure questions, and governance-capacity questions. They assume the model is the entity deployed, and the evaluator's task is to catalog the exposure surface that deployment creates.
Both are legitimate. Only one is the subject of the documents Anthropic published.
This asymmetry is not accidental. Anthropic controls the model; it does not control the deployment. Its risk analysis is scoped to what it controls. The institutional response is scoped to what it must defend. Anthropic's Risk Report concedes this explicitly in Section 8: "The pathways discussed below focus on the use of models within Anthropic." That sentence is the concession. This article has documented what fills the space that concession opens.
The 52 organizations are that space. What the 52 do, what happens to them, what leaks from them, what states they expose, what the exposure costs — everything this article has described. None of it is in the Risk Report. All of it is in the world.
What This Means for the Next Model
The next model is being trained now.
If Anthropic's risk analysis continues to be scoped to internal use, and if the deployment footprint continues to grow faster than the analysis can track, the gap between what is measured and what matters will widen.
The institutional response in the sixteen days after Mythos is evidence that the actors who must defend against these tools are not waiting for Anthropic's framing. The next model will land into a governance environment that has begun — in real time, across jurisdictions — to build its own frame. Whether that frame catches up faster than the capability jump is the open question.
The statements on April 17 suggest the answer. Bailey: regulate too early and you distort development; regulate too late and things get out of control. He is the Chair of the Financial Stability Board. That is not a confident claim about getting the timing right. It is an acknowledgment that the timing may already be past. Georgieva: the international monetary system cannot currently be protected. That is not a prediction. It is a status report.
The next model lands into this. The sixteen days described here are not the response to Mythos Preview specifically. They are the beginning of the response to the category Mythos Preview belongs to. The next model is in that category too.
What Remains
Anthropic documented what the model might do. The institutions that received the model documented what the deployment did. The two documentations do not overlap.
They published 305 pages about six ways the model could go wrong from the inside. Meanwhile — central banks, finance ministries, the Pentagon, the White House, law firms, trade associations, independent government evaluators, and the International Monetary Fund spent sixteen days responding to a different problem. The problem where a tool that powerful is already in the hands of 52 organizations, has already leaked, and sits inside a governance system no one in charge of it thinks can contain it. That problem is not in their document. It is in the world.
The risk analysis is inside Anthropic. The risk is everywhere else.
The next model is being trained now.