For over a month, acme.com couldn't stay online. The cause wasn't a hack or server failure. It was AI scrapers.

Starting February 25th, the site suffered intermittent outages lasting hours. High ping times, dropped packets, the works. The admin spent weeks troubleshooting with their ISP, Sonic, before the real issue surfaced during an anxious 1 a.m. debugging session: nearly all incoming traffic was AI agents reading the site.

The fix was blunt: close port 443. The outages stopped immediately. But about 10% of legitimate traffic uses HTTPS, so this is just a band-aid. The admin noted that "the LLM companies are not picking on me in particular, they are pounding every site on the net." PortableApps.com had been facing similar problems with Chinese LLM scrapers for two weeks.
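"Closing port 443" amounts to a single firewall rule dropping inbound HTTPS. A minimal sketch of what that looks like, assuming a Linux host with a standard nftables `inet filter` table and `input` chain (the article doesn't say what the admin actually ran):

```shell
# Blunt fix: silently drop all inbound HTTPS connections (TCP port 443).
# Assumes an existing nftables "inet filter" table with an "input" chain.
nft add rule inet filter input tcp dport 443 drop

# Equivalent rule with legacy iptables:
iptables -A INPUT -p tcp --dport 443 -j DROP
```

This drops scraper and legitimate HTTPS traffic alike, which is exactly why it's a band-aid: the ~10% of real visitors on HTTPS are collateral damage.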

The uncomfortable question from Hacker News: if an individual launched a DDoS attack, they'd face serious consequences under the Computer Fraud and Abuse Act. But when AI companies do essentially the same thing by ignoring rate limits, there's no accountability. Scrapers from OpenAI, Anthropic, Google, and Common Crawl have all been tied to similar incidents. Small sites are paying the infrastructure costs of training AI models, and they have almost no recourse. One developer reported reducing infrastructure costs by serving scraper traffic through a virtual filesystem.
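The crawlers named above do publish User-Agent strings (GPTBot, ClaudeBot, CCBot, Google-Extended), so one partial defense is filtering on them at the application layer. A minimal sketch, assuming a hypothetical `should_block` check wired into a request handler; the crawler names are real published tokens, but the function is an illustration, not the site admin's actual setup:

```python
# Known AI-crawler User-Agent tokens published by OpenAI, Anthropic,
# Common Crawl, and Google. Matching is case-insensitive and substring-based,
# since real User-Agent headers embed these tokens in longer strings.
AI_CRAWLERS = ("GPTBot", "ClaudeBot", "CCBot", "Google-Extended")

def should_block(user_agent: str) -> bool:
    """Return True if the User-Agent matches a known AI crawler token."""
    ua = user_agent.lower()
    return any(token.lower() in ua for token in AI_CRAWLERS)

# Example checks against typical User-Agent headers:
print(should_block("Mozilla/5.0 AppleWebKit/537.36 (compatible; GPTBot/1.2)"))
print(should_block("Mozilla/5.0 (X11; Linux x86_64; rv:124.0) Firefox/124.0"))
```

The obvious limitation: this only stops crawlers that identify themselves honestly. Scrapers that spoof browser User-Agents, like the traffic that took acme.com down, sail right through, which is why the admin ended up closing the port instead.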