Anil R. Doshi and Oliver Hauser ran an experiment with roughly 300 writers. Some got GPT-4 assistance; others worked alone. Independent judges rated the AI-assisted stories as more creative. Good news for AI, right? Not exactly. Those assisted stories were also noticeably more similar to one another, shrinking the diversity of the collective output. Publishing in Science Advances, the researchers called it a "social dilemma." Individual writers improved. The collective body of work got blander.

Bright Simons extends this finding in The Ideas Letter with an uncomfortable argument. LLMs don't actually think. They remember how humans thought together. The intelligence in these systems comes from the accumulated social complexity of civilization, not from transformer architecture or compute scale. When companies like IBM, Duolingo, and Klarna replace workers with AI, they thin out the human discourse that future models need in order to improve. Klarna has already walked part of this back, rehiring human customer service agents after quality suffered.

This isn't speculation. Ilia Shumailov and colleagues at the University of Oxford showed in Nature that models trained on AI-generated data degenerate over successive generations, a failure mode they call "model collapse." Minority viewpoints vanish. Rare knowledge disappears. Output narrows to a statistical average that's fluent and empty. Meanwhile, Stack Overflow saw a 13.5% traffic drop in early 2023 as developers turned to ChatGPT instead of posting publicly. An analysis by Originality.ai found AI-generated content among top websites surged from 2.6% to over 10% in just six months. Less human conversation means less fresh data for the next generation of models.
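You don't need a transformer to see the mechanism. Here is a minimal sketch in Python (a toy resampling simulation, not Shumailov's actual setup): treat a corpus as a frequency distribution over "facts," and let each model generation train only on samples of the previous generation's output. Anything rare enough to fall below the sampling floor is gone for good.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy "human corpus": one dominant viewpoint plus a long tail of
# rare facts, each appearing just once.
VOCAB = 1000                      # distinct facts/viewpoints
counts = np.ones(VOCAB)
counts[0] = 9000                  # the majority view
probs = counts / counts.sum()

CORPUS_SIZE = 10_000

for generation in range(20):
    # Each "model generation" trains only on samples of the previous
    # generation's output: resample facts in proportion to frequency.
    sample = rng.choice(VOCAB, size=CORPUS_SIZE, p=probs)
    new_counts = np.bincount(sample, minlength=VOCAB)
    probs = new_counts / CORPUS_SIZE
    surviving = int((new_counts > 0).sum())
    print(f"gen {generation:2d}: {surviving:4d} of {VOCAB} facts survive")
```

Run it and the survivor count falls generation after generation: the majority view never wobbles, but the tail drains away. Real model collapse involves far more machinery than this, but the arithmetic of training on your own output is the same.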

The companies that win won't be the ones that replace humans fastest. They'll be the ones that use AI to generate more human interaction, not less. Automate away the social reasoning that produces training data, and the models stop getting better. We're eating our seed corn, and it tastes like productivity.