Anthropic quietly removed Claude Code from its $20/month Pro plan. The coding assistant now requires a Team plan Premium seat at $100 per seat per month, per the updated pricing page. That's a 5x price jump for individual users who want access.
The timing is rough. Stella Laurenzo, an engineer at AMD, published a detailed GitHub issue documenting a quality regression in Claude Code that she ties to a February 'redact-thinking' update. Her analysis covers 6,852 sessions with 17,871 thinking blocks and 234,760 tool calls. The findings: after the update, Claude started ignoring instructions, claiming fixes it never made, and doing the opposite of what users asked.
Hacker News commenters aren't happy. One user described their experience with Anthropic as a "rollercoaster," starting as a vocal supporter in January and now reconsidering after dealing with hallucinations and lazy responses. Several say they'll look at competitors. GLM and Kimi were named as alternatives with cheaper coding plans. Others want to go local.
So Anthropic wants more money for a tool that, by one AMD engineer's numbers, got worse. That's a tough sell. Professional engineers relying on Claude Code for complex workflows now face a choice: pay more for degraded performance, or switch. Some will stick around. Many won't.
Cyrus Radfar argues that AI agents fail in production because codebases aren't built for them: mutable state, hidden dependencies, and buried side effects trip the models up. His proposed fix is functional programming, distilled into SUPER, five code principles: Side effects at the edge, Uncoupled logic, Pure functions, Explicit data flow, and Replaceable by value.
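A minimal sketch of what those principles look like in practice. This is my own illustration, not code from Radfar's piece; the `Invoice` example and all names are hypothetical:

```python
from dataclasses import dataclass, replace

# "Replaceable by value": frozen dataclass, an immutable value compared
# by its contents, not its identity.
@dataclass(frozen=True)
class Invoice:
    subtotal: float
    tax_rate: float

def total(inv: Invoice) -> float:
    # "Pure function": output depends only on the input, no hidden state.
    return round(inv.subtotal * (1 + inv.tax_rate), 2)

def apply_discount(inv: Invoice, pct: float) -> Invoice:
    # "Explicit data flow": returns a new value instead of mutating in place.
    return replace(inv, subtotal=inv.subtotal * (1 - pct))

def main() -> None:
    # "Side effects at the edge": I/O lives only here; the core logic
    # above stays pure and trivially testable.
    inv = Invoice(subtotal=100.0, tax_rate=0.08)
    print(total(apply_discount(inv, 0.10)))

if __name__ == "__main__":
    main()
```

The point for agents: pure, value-based code like this is exactly what a model can reason about locally, because nothing outside the function signature can change the result.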