The Capybara in the Room
Anthropic accidentally leaked their most powerful model through a misconfigured website. Its codename is Mythos. Its tier is Capybara. It sits above Opus. And it changes everything.
A company building the most advanced cybersecurity AI on earth revealed its crown jewel by accident, through a misconfigured CMS. Five days later, it leaked its own source code on npm. This is not a version bump. It is a new category.
The Leak
On March 26, 2026, approximately 3,000 unpublished assets on Anthropic's website became publicly accessible. Security researchers from LayerX Security and the University of Cambridge independently confirmed the exposure. Among the assets: a draft blog post describing a model called Claude Mythos sitting in a brand-new tier called Capybara.
Anthropic confirmed the model is real. Their spokesperson: "We're developing a general purpose model with meaningful advances in reasoning, coding, and cybersecurity. Given the strength of its capabilities, we're being deliberate about how we release it."
Training is complete. The model is in restricted testing with early-access customers right now.
A Fourth Tier
This is not Claude 5. This is something else entirely.
Haiku -> Sonnet -> Opus -> Capybara

From the leaked draft: "Capybara is a new name for a new tier of model: larger and more intelligent than our Opus models - which were, until now, our most powerful." It claims "dramatically higher scores" than Opus 4.6 on coding, academic reasoning, and cybersecurity benchmarks.
The leaked benchmark numbers from Fortune's reporting:
- SWE-bench: 87.4% (vs. Opus 4.6's ~80.8%)
- Scaled tool use: 75.7% (vs. Opus 4.6's 59.5%)
No independent verification exists. These are Anthropic's own numbers from their own leaked draft. But if they are even directionally correct, the gap between Capybara and current Opus is larger than the gap between Opus and Sonnet.
The Cybersecurity Problem
This is the part that should make you uncomfortable.
The leaked draft states Mythos is "currently far ahead of any other AI model in cyber capabilities" and "presages an upcoming wave of models that can exploit vulnerabilities in ways that far outpace the efforts of defenders."
Anthropic is not hiding from this. According to Axios, they have been privately briefing senior government officials that Mythos makes large-scale cyberattacks significantly more likely in 2026. The planned rollout starts with cybersecurity defense organizations before any general release.
The market took notice. After the leak:
- Zscaler: down ~8%
- CrowdStrike: down ~7%
- Palo Alto Networks: down ~7%
- Tenable: down ~11%
Cybersecurity stocks dropped not because the model is bad for security, but because the market understood the implication: if one lab has this, others will follow. The asymmetry between offense and defense in cybersecurity is about to get significantly worse.
The Irony Nobody Can Ignore
A company building a model with "unprecedented cybersecurity capabilities" leaked its existence through a basic CMS misconfiguration. Five days later, they accidentally bundled a source map in Claude Code v2.1.88 on npm, exposing ~512,000 lines of TypeScript across 1,900 files. Before the takedown, it was forked 41,500 times.
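For readers unfamiliar with the mechanism: a JavaScript/TypeScript source map is plain JSON, and when a bundler sets its `sourcesContent` field, the original source files ship verbatim inside the published artifact. A minimal sketch of why a bundled `.map` file is a full leak (the function name and the tiny demo map are illustrative, not Anthropic's actual files):

```python
import json

def recover_sources(source_map_text):
    """Return {original_path: source_text} for files a source map embeds.

    Source maps are plain JSON; when a bundler includes "sourcesContent",
    anyone who downloads the published package can write the original
    files back out verbatim.
    """
    smap = json.loads(source_map_text)
    sources = smap.get("sources", [])
    contents = smap.get("sourcesContent") or []
    return {path: text for path, text in zip(sources, contents) if text is not None}

# Hypothetical, tiny example of what a leaked .map file looks like:
demo = '{"version":3,"sources":["src/secret.ts"],"sourcesContent":["const key = 1;"],"mappings":""}'
print(recover_sources(demo))  # {'src/secret.ts': 'const key = 1;'}
```

No reverse engineering is required; recovery is a JSON parse, which is why a single mis-published `.map` file can expose hundreds of thousands of lines at once.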
What the code revealed:
- References to "Capybara v8" - suggesting extensive iteration
- A separate codename: "Fennec" - believed to be Claude Sonnet 5
- A feature system called KAIROS: an autonomous daemon mode allowing Claude to operate as a persistent background agent with memory consolidation
- Anti-distillation measures designed to corrupt competitor training pipelines
The "Fennec" codename matched a model identifier that had appeared in Google Vertex AI error logs in early February 2026, seven weeks before the Mythos leak. So there are at least two unreleased models in the pipeline.
What It Will Cost
No official pricing exists. The leaked draft explicitly states the model is "very expensive for us to serve, and will be very expensive for our customers to use." Analyst estimates for the API:
- Current Opus 4.6: $5 input / $25 output per million tokens
- Estimated Capybara: $10-15 input / $50-75 output per million tokens
For context, GPT-5.4 charges $2.50/$15 and Gemini 2.5 Pro charges $1.25/$10. Capybara would be in a category of its own. Consumer access will likely require at minimum the Max tier ($100-200/month), possibly a new tier entirely. The $20/month Pro plan is almost certainly too cheap to include Capybara access given compute demands.
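To make the gap concrete, the per-million-token rates above translate directly into per-job cost. A minimal sketch, assuming a hypothetical workload of 1M input and 200k output tokens (the Capybara rates are the analyst estimates quoted above, not confirmed pricing):

```python
# Illustrative cost comparison using the per-million-token rates quoted above.
# Capybara figures are unconfirmed analyst estimates.

def job_cost(input_tokens, output_tokens, in_rate, out_rate):
    """Cost in dollars, given per-million-token input/output rates."""
    return input_tokens / 1e6 * in_rate + output_tokens / 1e6 * out_rate

workload = (1_000_000, 200_000)  # hypothetical: 1M tokens in, 200k tokens out

models = {
    "Opus 4.6":             (5.00, 25.00),
    "Capybara (low est.)":  (10.00, 50.00),
    "Capybara (high est.)": (15.00, 75.00),
    "GPT-5.4":              (2.50, 15.00),
    "Gemini 2.5 Pro":       (1.25, 10.00),
}

for name, (in_rate, out_rate) in models.items():
    print(f"{name:22s} ${job_cost(*workload, in_rate, out_rate):7.2f}")
```

On that workload, the same job that costs $10 on Opus 4.6 would run $20-30 on Capybara and $3-6 on the competing flagships, which is the spread that makes a dedicated pricing tier plausible.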
The wildcard: Anthropic has historically achieved dramatic cost reductions between generations. Opus went from $15/$75 to $5/$25. The leaked draft says they are working to make Mythos "much more efficient." Launch pricing may surprise.
When
Polymarket gives a 73% chance of launch by June 2026. Training is done. Testing is happening now. The question is safety evaluation, not readiness.
Two factors create pressure for sooner rather than later. First, Anthropic is reportedly targeting an October 2026 IPO at $400-500 billion valuation. Launching their most powerful model before the IPO is a financial imperative. The Information directly linked Mythos preparation to IPO timing.
Second, the competition is not standing still. OpenAI's codenamed "Spud" model is reportedly being prepared as a direct response. Google Gemini 3 Pro, DeepSeek V4, and xAI Grok 5 are all expected by mid-2026. The window for Capybara to be uncontested is narrow.
But the leaked draft includes one line that overrides commercial pressure: "timeline driven by safety evaluations, not a fixed commercial calendar." If Mythos triggers Anthropic's ASL-4 threshold - reserved for models posing catastrophic risk - the deployment could face significant delays.
Dario Amodei Is Not Hedging Anymore
Anthropic's CEO has been steadily escalating his public statements, and with striking specificity:
- October 2024: "Powerful AI could come as early as 2026"
- March 2025 (White House filing): Official corporate position: "We expect powerful AI systems will emerge in late 2026 or early 2027."
- January 2026 (Davos): AGI within two years. Software engineers could be replaced within 6-12 months.
- February 2026: 90% confidence in AGI within ten years. Personal hunch: 2026-2027.
- March 2026 (Morgan Stanley): "We do not see hitting the wall. We don't see a wall." Compared AI progress to the 40th square of the rice-on-chessboard parable. Said what they see in the lab is "far more crazy than what the outside world perceives."
The White House filing matters most. An official filing to a government body is not a podcast hot take. It is a statement the company is willing to defend legally.
The $19 Billion Machine
Anthropic's annualized revenue run rate hit $19 billion as of March 2026. February alone added $6 billion in ARR. They have raised over $64 billion in total capital, secured 1 million Google TPUv7 chips and 500,000+ AWS Trainium2 chips, plus $30 billion in Microsoft Azure capacity.
This is not a research lab releasing papers. This is a company building infrastructure for a model that, by their own admission, might be the most consequential piece of software ever created. And they accidentally told us about it through a misconfigured website.
Mythos is real. It is trained. It is testing. It is coming. The only question is whether it arrives as a product launch or as a safety event. Either way, the capybara is already in the room.