A Hacker News thread asked a simple question: should you get your credits back when an AI model gives you bad output?

Most answers boiled down to: it's complicated.

Defining what counts as a mistake is genuinely hard. Code that runs but takes an approach you don't like. A UI that works but gets the colors wrong. Quality judgments often hinge on user preference rather than objective criteria. And verification cuts both ways: a new study identifies "cognitive surrender," a pattern where users uncritically accept AI-generated answers without checking them, so plenty of bad output never gets flagged in the first place. One commenter drew a parallel to employment: when you introduce a bug at work, do you hand part of your salary back? Under that logic, users should feel lucky to get usable output at all.

Verification is where the economics fall apart. Someone would have to assess each claim, and not just anyone: skilled human reviewers who understand both the code and the domain it serves. That's expensive, and the cost would land on everyone through higher token prices. Stores bake return costs into their prices. Same idea here.
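To see why, here's a back-of-envelope sketch in Python. Every number in it is a hypothetical assumption picked for illustration (request volume, per-request revenue, dispute rate, reviewer pay, time per claim), not data from the thread or from any provider.

```python
# Back-of-envelope: what human review of refund claims would cost.
# All figures below are hypothetical assumptions, chosen only to
# illustrate the shape of the economics.

requests_per_month = 1_000_000   # assumed monthly request volume
token_cost_per_request = 0.50    # assumed token revenue per request ($)
dispute_rate = 0.03              # assumed share of requests disputed
reviewer_hourly_rate = 80.0     # assumed skilled-reviewer cost ($/hr)
minutes_per_claim = 15           # assumed review time per claim

claims = requests_per_month * dispute_rate
review_cost = claims * (minutes_per_claim / 60) * reviewer_hourly_rate
token_revenue = requests_per_month * token_cost_per_request

# Review costs get spread across all users, disputed requests or not.
price_increase = review_cost / token_revenue

print(f"Claims to review per month: {claims:,.0f}")
print(f"Review cost: ${review_cost:,.0f} against ${token_revenue:,.0f} in token revenue")
print(f"Implied across-the-board price increase: {price_increase:.0%}")
```

Under these made-up numbers, reviewing a 3% dispute rate costs more than the token revenue itself. That's the point: the refund process can easily cost more than the product.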

Don't hold your breath waiting for refunds anyway.

OpenAI's terms explicitly state that "AI output may not always be accurate" and disclaim all warranties. Anthropic's agreement says services are provided "as is" without warranties regarding "accuracy, reliability, or correctness of any AI outputs." Google and Microsoft have similar language. Microsoft's Copilot terms go furthest: customers are "solely responsible" for all uses of AI outputs and must implement their own "human review of AI-generated content."

They already decided. You pay regardless of quality. That's the deal.