The Financial Times reported that Anthropic is investigating unauthorized access to an AI model called "Mythos." Details are thin. The full article sits behind a paywall, and attempts to pull the actual reporting have come up empty. We're left with a headline, some speculation, and not much else.

Here's what caught attention: Anthropic's public model registry includes Claude 3 and 3.5 variants (Opus, Sonnet, Haiku). No "Mythos" appears in their API documentation or product pages, which suggests an internal codename or research prototype that hasn't shipped. With no public benchmarks or specs to back it up, the FT's description of the model as "powerful" is doing a lot of heavy lifting.

Hacker News caught the marketing angle fast. The thread hit over 200 upvotes within hours, and one of the top-voted comments read: "If your model is so good people are willing to steal it, that's the best ad you could ask for." Another user pointed out the timing: Anthropic competes with OpenAI and Google in a market where buzz matters more than benchmarks, and a mystery model valuable enough to breach is attention without a product announcement.

The breach could be real. It could be nothing. We can't tell from outside the paywall. Either way, Anthropic just got thousands of people talking about a model nobody can verify exists.