Cal Newport has one question for the AI industry: who asked for any of this? Elizabeth Lopatto wrote in The Verge that, sometime after the 2008 financial crisis, Silicon Valley forgot what normal people want. Tech companies stopped identifying customer needs and started inventing the future, expecting consumers to tag along. Now tech leaders brag about absurd productivity metrics, like writing 37,000 lines of code in a day. LLMs have more real utility than NFTs or the metaverse, Newport concedes. But that doesn't excuse the industry's failure to articulate what these tools are actually for.

Most people use ChatGPT as a verbose Google. Maybe they format an itinerary occasionally. Useful? Sure. Life-changing? Not yet; certainly less impactful than the iPod was twenty years ago. Yet unlike the iPod, users can't escape the constant AI noise: GPT-5.5 benchmarks, dark predictions about automation. Media outlets can't even agree on whether AI is destroying the entry-level job market or saving it. Last summer, publications blamed AI for shrinking opportunities for college grads. Then hiring rebounded, and suddenly AI got credit for the expansion. The public is getting whiplash.

Hacker News commenters point to where the actual demand lives: corporations looking to cut costs, and what some call the surveillance-spam-slop industry. Genuine consumer appetite? Thin. And there's a deeper problem lurking. Research led by Ilia Shumailov at Oxford demonstrates that training models on AI-generated data causes model collapse: quality degrades as synthetic content pollutes training sets. Researchers at Rice dubbed the same self-consuming dynamic Model Autophagy Disorder (MAD). Either way, the content AI companies produce at scale could poison future models. Reddit and Stack Overflow are now licensing human-written data at premium prices because authentic human output is getting scarce. The industry is building a business that eats its own foundation; some Reddit communities are even experimenting with model poisoning.
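The collapse mechanism is easy to see in miniature. Here is a minimal sketch, not code from either paper but in the spirit of the one-dimensional Gaussian case Shumailov and colleagues analyze: each generation fits a Gaussian to the previous generation's output, then trains its successor purely on synthetic samples.

```python
import numpy as np

rng = np.random.default_rng(42)

# Generation 0: "human" data drawn from a standard normal distribution.
samples = rng.normal(loc=0.0, scale=1.0, size=20)

for gen in range(1, 101):
    # "Train" a model on the current dataset: fit a Gaussian by
    # estimating its mean and standard deviation.
    mu, sigma = samples.mean(), samples.std()
    # The next generation trains only on the fitted model's synthetic
    # output; no fresh human data ever re-enters the loop.
    samples = rng.normal(loc=mu, scale=sigma, size=20)
    if gen % 20 == 0:
        print(f"gen {gen:3d}: mu = {mu:+.4f}, sigma = {sigma:.4f}")

# Typical result: sigma shrinks generation after generation as
# estimation error compounds, and the "model" forgets the original
# distribution. This is a toy version of model collapse.
```

With only 20 samples per generation the collapse is quick; larger training sets slow the drift, but without fresh human data re-entering the loop, nothing reverses it.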