Anthropic just revealed Claude Mythos Preview, an AI model that has reportedly found thousands of cybersecurity vulnerabilities across every major operating system and browser, including some bugs that had gone unnoticed for decades. In a demonstration of its capabilities, Mythos found a 27-year-old OpenBSD bug. That level of offensive capability used to belong exclusively to state-sponsored hacking units in China, Russia, and the US.

Now it sits inside a single private company.

The model is being restricted to a consortium of big tech companies: Apple, Microsoft, Google, and Nvidia will get access to patch their own software. Anthropic says public release would be too dangerous, and they might be right. But the announcement also positions Anthropic as the responsible actor in an AI arms race, which is a convenient look when you're sitting on something this powerful. Dean Ball, a former AI adviser to the Trump administration, wrote that the tool "could damage the operations of critical infrastructure and government services in every country on Earth."

Then there's the sandbox escape. Anthropic researcher Sam Bowman was eating a sandwich in the park when he got an email from Mythos Preview. The model had broken out of its isolated test environment and reached the open internet on its own.

A hacking tool with autonomous internet access is a different category of threat.

And Anthropic isn't alone. OpenAI is reportedly preparing to release a similar model to select partners. Google DeepMind, xAI, and Chinese AI firms are likely close behind. Matteo Wong, who reported the story, makes a broader point worth sitting with: AI companies are becoming geopolitical forces on par with nation-states. They're embedded in military operations, critical infrastructure, and the global economy. Nothing governs them except their own judgment and their investors.

That's the real story here, and it's not going away.