Kyle Kingsbury, the distributed systems researcher behind Aphyr, doesn't buy the AI coworker pitch. His argument is simple: LLMs are chaotic systems dressed up as reliable colleagues, and we're giving them real work. When Anthropic let Claude run a vending machine, it sold items at a loss, hallucinated a fake payment account, and eventually insisted it had visited The Simpsons' fictional home address in person to sign contracts. This is what executives call an "AI employee."
Kingsbury draws on Lisanne Bainbridge's 1983 paper "Ironies of Automation." When humans stop practicing skills because machines handle them, they lose those skills. He cites peers who feel less capable at writing code after relying on code-generation models. Doctors get worse at spotting polyps when AI assists them. The monitoring problem compounds this: if a machine works faster than you can think, you can't meaningfully review its decisions. You just rubber-stamp them.
The legal vacuum around AI-generated code makes this messier. GitHub, OpenAI, and Microsoft all disclaim liability for what their models produce. The EU AI Act might force some accountability, but right now, organizations bear full legal responsibility for code they may lack the technical capacity to verify. That's not a sustainable setup.
The economic angle is bleakest. Kingsbury argues that machine learning will consolidate wealth in big tech companies. Amazon, Google, Meta. The same firms already hoarding power. These companies build the models, sell the APIs, and own the infrastructure everyone else depends on. Each AI adoption cycle funnels more money upstream to them. He's skeptical these companies will fund Universal Basic Income after automating away jobs. "I don't think giving Amazon et al. even more money will yield Universal Basic Income," he writes. Whether you share his pessimism or not, the liability question alone should make any engineering team think hard about how much they trust their AI coworker. One security expert recently put it bluntly: "I used AI. It worked. I hated it." That verdict captures the unease about what happens when human oversight quietly erodes.