Raj Nandan Sharma argues in a recent essay that when AI makes competent output cheap, the only durable advantage left is judgment. He calls it "taste," but he's not talking about aesthetics. He means the ability to look at a generic AI output and say exactly why it's wrong for your specific situation. Most people can sense that something feels off. Few can articulate "this fails because it sounds like every other SaaS product" or "this explanation collapses a regulatory constraint into marketing language." That diagnostic precision separates people who get useful work from AI from those who just generate output without understanding it. The scarce skill has shifted from generation to refusal: the willingness to reject the first acceptable draft and demand something specific. Hacker News commenters reinforced this point in the context of agentic coding. One user noted that without an extremely clear product vision and vocabulary, AI-assisted development produces an incoherent mess regardless of speed. You need to know what "perfect" looks like before you start.
There's a catch. Sharma points out that taste alone isn't enough if humans reduce themselves to selecting from AI outputs. The real opportunity lies in combining judgment with actual ownership of the problem, its constraints, and its stakes. And there's an experience paradox lurking here. Junior engineers traditionally developed taste through the grind of writing and debugging boilerplate code; that repetitive exposure to edge cases and system failures built the intuition needed to critique outputs. Outsourcing that work to LLMs risks producing engineers who can prompt well but lack the deep judgment to steer agentic systems effectively, echoing findings that displaced workers face a critical pipeline break.