OpenAI is rolling out GPT-5.5 Cyber, a tool that can perform penetration testing, vulnerability exploitation, and malware reverse engineering. Access is restricted to "critical cyber defenders" who apply through OpenAI's website, submitting credentials and planned use cases.
When Anthropic launched its competing tool Mythos under a similar gated-access model (Mythos has since been deployed by the NSA despite Pentagon blacklisting efforts), Sam Altman dismissed the move as "fear-based marketing." Now OpenAI is following the same playbook.
The reversal hasn't gone unnoticed. Hacker News commenters see it as competitive posturing, a way for companies to signal whose model is "most dangerous." The cynicism isn't unwarranted. An unauthorized group already bypassed Anthropic's restrictions on Mythos. That breach likely involved circumventing application-level access controls or authentication, not defeating the model's safety alignment. The same attack vectors (credential stuffing, phishing approved users, API vulnerabilities) would apply equally to Cyber's restricted rollout.
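The distinction between breaking the gate and breaking the model is worth making concrete. The sketch below is hypothetical (the key names and `gate` function are invented for illustration, not any vendor's real implementation); it shows why an application-level credential check is only as strong as the secrecy of the credential. The model behind the gate never sees who is asking, only whether the key matches.

```python
# Hypothetical sketch of application-level access gating, NOT any vendor's
# actual implementation. The allow-list and key names are invented.

APPROVED_KEYS = {"key-defender-001", "key-defender-002"}  # vetted applicants

def gate(api_key: str) -> bool:
    """Application-level access control: pass/fail on the credential alone.

    The check authenticates the credential, not the person holding it.
    A key obtained by phishing an approved user passes identically to
    the legitimate user's own request, which is why credential theft
    bypasses this layer without touching the model's safety alignment.
    """
    return api_key in APPROVED_KEYS

assert gate("key-defender-001") is True   # legitimate approved user
assert gate("key-attacker-999") is False  # unknown credential is refused
# A phished copy of "key-defender-001" would also return True:
# the gate cannot distinguish the thief from the defender.
```

This is why the breach described above did not need to defeat alignment training: stealing or stuffing a valid credential moves the attacker past the only layer that was checking.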
OpenAI says it's consulting with the U.S. government to make Cyber more widely available. That's vague. Which agencies? For what purpose? Is this a regulatory conversation or a sales pitch? The dual-use nature of penetration testing tools makes broad release a hard sell regardless. The real question is whether application-based gating stops determined attackers or merely slows them down while generating headlines.