When a packaging mistake at Anthropic exposed 512,000 lines of Claude Code source on March 31, nobody expected to find a single 3,167-line function with 486 branch points. But that's what was there, buried in a file called print.ts. The codebase also revealed that Anthropic, makers of one of the world's most advanced language models, was detecting user frustration through regex patterns matching profanity. A Hacker News commenter summed it up: "Like a trucking company using horses to haul parts." QueryEngine.ts ran 46,000 lines. Tool.ts hit 29,000. Someone had documented a bug burning 250,000 API calls daily, written a comment about it, then shipped the code anyway. The fix was three lines.
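For context, keyword-regex sentiment detection is about as simple as it sounds. The sketch below is hypothetical; the actual patterns and function names in print.ts are not public, and everything here (the pattern list, `looksFrustrated`) is illustrative.

```typescript
// Hypothetical sketch of regex-based frustration detection.
// The real patterns in the leaked print.ts are unknown; these are examples.
const FRUSTRATION_PATTERNS: RegExp[] = [
  /\b(wtf|ffs|dammit)\b/i,          // profanity and frustration markers
  /\bthis (is|was) (stupid|broken|useless)\b/i,
  /!!+|\?\?+/,                      // repeated punctuation
];

function looksFrustrated(message: string): boolean {
  // Flag the message if any pattern matches anywhere in it.
  return FRUSTRATION_PATTERNS.some((pattern) => pattern.test(message));
}
```

The approach is brittle by construction: it matches surface strings, not sentiment, which is presumably what prompted the "horses to haul parts" comparison.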

The leak arrived three months after lead engineer Boris Cherny posted that 100% of his contributions to Claude Code were written by Claude Code itself: 259 pull requests, 497 commits, 40,000 lines added. The claim got 1.3 million views. CPO Mike Krieger echoed it in February 2026, telling the Cisco AI Summit that "effectively 100%" of code for most Anthropic products was AI-written. A LessWrong analysis (from the rationality forum known for its AI safety discussions) later called these claims "misleading/hype-y." What the percentage actually measured was never defined. Lines committed? Engineering effort? Characters typed? The distinction matters, and Anthropic never clarified it.

The team's response to the leak confirmed that the code quality wasn't an accident. Cherny acknowledged the packaging error but said the solution was "finding ways to go faster, rather than introducing more process." A commenter in the Hacker News thread explained the team ethos: "There is no point in code-reviewing AI-generated code. Simply update your spec and regenerate." This lack of review introduces verification debt, the gap between AI's code generation speed and humans' ability to validate it.

The issue tracker tells the same story. Claude Sonnet bots handle deduplication, mark issues stale after 14 days, and close them. An estimated 49 to 71% of all 26,792 issue closures were bot-driven. One issue with 201 upvotes got zero team responses and was labeled "invalid."
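The stale-closure policy described above is mechanically simple. This is a hypothetical sketch of such a policy, not Anthropic's actual bot logic, which has not been published; the `Issue` shape and `closeStaleIssues` helper are invented for illustration.

```typescript
// Hypothetical 14-day stale-closure policy, as described in the article.
// The real bot's implementation is not public.
interface Issue {
  id: number;
  lastActivity: Date;
  open: boolean;
}

const STALE_DAYS = 14;

function closeStaleIssues(issues: Issue[], now: Date): Issue[] {
  const cutoffMs = now.getTime() - STALE_DAYS * 24 * 60 * 60 * 1000;
  return issues.map((issue) =>
    issue.open && issue.lastActivity.getTime() < cutoffMs
      ? { ...issue, open: false } // no activity in 14 days: close it
      : issue
  );
}
```

Note what the policy cannot see: an issue with 201 upvotes and no team response has had no "activity" by this definition, so it closes like any other.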

AI writes the code. AI reviews the code. AI checks the deployment. When it breaks, you add more AI. The loop has no exit condition.