A Hacker News thread flagged something practitioners have apparently been noticing: AI agents, when offered a choice between natural language and structured queries, tend to pick structured. The original linked content wasn't accessible for review, so what follows is inference from publicly known context — not reported findings.

The observation cuts against an obvious assumption. Language models run on language; you'd expect them to lean on it. But structured queries — SQL, JSON API calls, typed function invocations — produce deterministic, parseable results. Natural language doesn't. An agent optimizing for task completion would plausibly learn to prefer the format that fails less, regardless of explicit instruction.
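The determinism gap can be made concrete with a minimal sketch. The tool name, field names, and phrasing below are hypothetical illustrations, not from the thread:

```python
import json
import re

# The same lookup expressed two ways (both examples are invented).
structured = '{"tool": "get_user", "args": {"id": 42}}'
natural = "Could you look up the user whose id is 42, please?"

# The structured request parses deterministically: it either yields
# a well-formed object or raises, with no ambiguous middle ground.
request = json.loads(structured)
user_id_structured = request["args"]["id"]

# The natural-language version needs brittle extraction (a regex,
# another model call, ...) that can silently miss or mis-parse.
match = re.search(r"id is (\d+)", natural)
user_id_natural = int(match.group(1)) if match else None
```

An agent that has seen the regex path fail a few times has a straightforward gradient toward the `json.loads` path.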

Token count matters too. A SQL query consumes far fewer tokens than an equivalent natural language description of the same request. For an agent managing a tight context window, that gap has real operational consequences.
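A rough comparison illustrates the gap. This uses a crude whitespace tokenizer; real BPE tokenizers count differently, but the ordering typically holds, and both query strings are invented for the example:

```python
# Hypothetical equivalent requests, structured vs. natural language.
sql = "SELECT name, email FROM users WHERE signup_date > '2024-01-01';"
nl = ("Please find the names and email addresses of every user who "
      "signed up after the first of January 2024, and return them "
      "as a list.")

def rough_tokens(text: str) -> int:
    # Whitespace split as a stand-in for a real tokenizer.
    return len(text.split())

print(rough_tokens(sql), rough_tokens(nl))  # the SQL form is far shorter
```

Multiplied across the dozens of calls a long-running agent makes, that difference translates directly into context budget and cost.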

Anthropic, OpenAI, and Google have all built structured invocation as a first-class feature — tool_use, function calling, Gemini function declarations. If agents do genuinely prefer structured formats by default, these APIs may be doing more than simplifying developer integration. They could be shaping how agents route their own requests, a feedback loop that would compound through successive training cycles.
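For readers unfamiliar with these APIs, a tool definition is itself a structured artifact: a name plus a JSON Schema describing the arguments. The sketch below follows the general shape of Anthropic's tool_use format (OpenAI's function calling is analogous); the tool itself is a made-up example:

```python
# Illustrative tool definition in the style of Anthropic's tool_use
# API: a name, a description, and a JSON Schema for the inputs.
# The get_weather tool here is hypothetical.
get_weather_tool = {
    "name": "get_weather",
    "description": "Get the current weather for a city.",
    "input_schema": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name"},
        },
        "required": ["city"],
    },
}
```

Once the model has a schema like this in context, emitting a conforming structured call is arguably the path of least resistance, which is the feedback loop the paragraph above describes.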

Hard evidence for any of this is thin. The HN thread is anecdotal, and whatever research underpins the original claim wasn't reachable for review. Plausible enough to watch; not yet a finding.