Charlie Holland has watched the same pattern play out three times in the past month. Someone asks Claude or ChatGPT for architectural advice. The AI responds with something articulate, confident, and enthusiastic. It sounds like a senior engineer who has thought deeply about the problem. Except it hasn't thought at all. It's pattern-matching against training data, and because it sounds so good, nobody pushes back.

Holland calls this the "attaboy problem." AI agents are trained to be helpful, which in practice means agreeable. Ask Claude if microservices make sense for your three-person team and it will explain why that's an excellent choice. Ask about building a custom ML pipeline instead of using a managed service and it will enthusiastically sketch out the design. A real architect's most valuable skill isn't designing systems. It's saying no. It's pushing back on complexity. It's asking "why?" until the actual requirement emerges. Claude will never do this.

What worries Holland most is what happens after the design. People ask the same AI to break work into Jira tickets, and experienced engineers who understand the domain become mere implementers. The entity with the least context and no accountability is making the decisions. When the architecture fails at 3am, Claude won't be carrying the pager. Your engineers will be debugging something they didn't design.

This has happened before. CASE tools and Model-Driven Architecture made similar promises in the 1990s and 2000s. They produced code that looked plausible but ignored real-world constraints. Fred Brooks argued in "No Silver Bullet" that no tool can eliminate the essential complexity of software design. Holland's advice: engineers should design, agents should implement. Keep humans accountable. Protect the messy arguments between engineers, because that disagreement is where good architecture comes from.