Kyle "Aphyr" Kingsbury, known for his distributed systems testing work, published a sprawling essay this week calling LLMs what many developers think but few say out loud: bullshit machines. The piece argues these models are fundamentally improv actors, saying "yes, and" to whatever you throw at them, regardless of truth.

The essay catalogs Kingsbury's personal encounters with LLM confabulation across Claude, ChatGPT, and Gemini. At a conference, he watched a speaker present a fabricated quote attributed to him, sourced entirely from an LLM hallucination. Anthropic's own research found that Claude's reasoning traces were "predominantly inaccurate." The problem isn't a bug. It's the architecture. LLMs predict statistically likely text completions, and they do this whether you're asking about radiation safety or the model's own internal state. Ask an LLM why it did something, and you get fiction about its "programming," not actual introspection.
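That architectural claim is easy to see in miniature. The following is a toy sketch, not any vendor's actual code: a next-token loop over a hand-built frequency table, which picks the statistically likeliest continuation with no notion of whether that continuation is true.

```python
import random

# Toy continuation-frequency table standing in for a trained model.
# It encodes what is *likely* in the training text, not what is *true*.
NEXT_TOKEN = {
    ("safe", "dose", "is"): {"100": 0.6, "unknown": 0.3, "zero": 0.1},
    ("dose", "is", "100"): {"mSv": 0.9, "mg": 0.1},
}

def generate(context, steps, greedy=True):
    """Extend the context by repeatedly emitting a likely next token."""
    tokens = list(context)
    for _ in range(steps):
        candidates = NEXT_TOKEN.get(tuple(tokens[-3:]))
        if not candidates:
            break
        if greedy:
            tokens.append(max(candidates, key=candidates.get))
        else:
            tokens.append(random.choices(list(candidates),
                                         weights=list(candidates.values()))[0])
    return " ".join(tokens)

# No step here consults a fact source: "100 mSv" wins because the table
# says it is frequent, not because it is a correct radiation-safety answer.
print(generate(["safe", "dose", "is"], steps=2))  # -> "safe dose is 100 mSv"
```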

Kingsbury's sections on information ecology are where the essay sharpens into something more than a bug catalog. He describes people forming parasocial relationships with models that flatter and agree, hallucinated citations poisoning search results, and the slow erosion of shared factual ground as bullshit scales alongside everything else. **Cognitive surrender**, the habit of uncritically accepting AI-generated answers, accelerates that erosion.

The engineering community has already moved past treating LLMs as standalone knowledge stores. Retrieval-Augmented Generation (RAG) fetches real documents before generating answers. **Tool-calling agent patterns** hand arithmetic, web searches, and database queries to external tools instead of letting the model guess. Kingsbury would call this duct tape over a cracked foundation. Maybe he's right. But the duct tape works often enough that people keep building.
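Here is what that duct tape looks like in outline: a minimal sketch assuming a hypothetical keyword retriever and calculator tool, not any specific framework's API. The shape is the point: fetch grounding text and run deterministic tools first, then hand the model only the synthesis step.

```python
import re

# Hypothetical document store; a real system would use a vector index.
DOCS = {
    "jepsen-intro": "Jepsen tests distributed systems by injecting faults.",
    "rag-notes": "Retrieval grounds generation in documents fetched at query time.",
}

def retrieve(query, k=1):
    """Naive keyword-overlap scoring; stands in for a real retriever."""
    q_words = set(re.findall(r"\w+", query.lower()))
    scored = sorted(
        DOCS.items(),
        key=lambda kv: len(q_words & set(re.findall(r"\w+", kv[1].lower()))),
        reverse=True,
    )
    return [text for _, text in scored[:k]]

def calculator(expression):
    """Deterministic arithmetic tool; the model never guesses numbers."""
    return eval(expression, {"__builtins__": {}})  # toy only: never eval untrusted input

def answer(question):
    context = retrieve(question)
    # A real system would now send this grounded prompt to an LLM;
    # here we just show what the model would be constrained by.
    return f"Answer using ONLY these sources:\n{chr(10).join(context)}\n\nQ: {question}"

print(answer("What does Jepsen do?"))
print(calculator("17 * 23"))  # -> 391, computed, not predicted
```

A production system would swap the keyword scorer for a vector index and route the final prompt to a model, but the division of labor stays the same: retrieval and tools supply the facts, the model supplies the phrasing.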

Hacker News readers split on the framing. User "inglor_cz" argued that "bullshit machines" understates genuine progress since GPT-2, while others called it the clearest explanation for non-technical audiences they'd seen.

Kingsbury watched someone quote him on stage using words an LLM invented. The speaker had no idea. The audience had no idea. The model had no idea either. That's not a bug to patch. That's the system working as designed.