Hacker News is doing something useful right now. Someone started a thread asking people to list all the bad things AI companies have done that we've collectively forgotten.
And it's working. Comments are piling up.
Someone brings up how Clearview AI scraped billions of photos from social media without consent. That story dominated headlines for about two weeks in 2020, then vanished. The company is still around. Still selling to law enforcement.
Another user recalls the content moderators. Workers in Kenya paid under $2 an hour to label violent and sexual content so companies could have clean training data. OpenAI, Meta, and others used these services. The reporting came and went. The practice didn't.
There's also the broader pattern of training on copyrighted work, getting caught, then quietly updating terms of service retroactively. Or launching 'beta' products that were really just unpaid user testing at scale. Gary Marcus recently flagged fraud claims behind Medvi, a company whose $1.8B valuation dominated headlines briefly before the story vanished.
AI moves fast enough that last month feels like ancient history. A new model drops, the press covers what's shiny, and whatever happened before evaporates. This thread is a countermeasure. A community-maintained record of what actually occurred before PR teams got to rewrite the story.
For anyone building with AI agents, this matters more than you'd think. You're choosing platforms and providers to depend on. Knowing which ones have a pattern of cutting corners or treating users as test subjects isn't trivia. It's due diligence.
The thread is worth your time. If only to remember that the companies promising to automate your workflow haven't always been straight with you.