Daniel Stenberg has a problem most open-source maintainers would envy. The curl project now gets legitimate, high-quality AI-generated security reports at roughly one every 20 hours. The era of garbage "AI slop" vulnerability submissions is over. What's replaced it is something harder to complain about but just as exhausting: a relentless stream of real bugs that need real fixes.

Stenberg shared data on Mastodon and LinkedIn showing what he calls a "high volume high quality flood" of reports powered by AI tooling. Other open-source projects report the same pattern. The curl project currently has six pending security advisories queued for its next release, and Stenberg says 2026 could see more published curl vulnerabilities than any previous year.

Better AI security tools don't lighten the load; they amplify it. Instead of filtering out obvious nonsense, Stenberg and his team now triage and address real findings at a pace that makes backlog management a daily concern. And curl is a well-resourced project with corporate sponsors. Smaller projects run by volunteers face the same flood with a fraction of the capacity.

This is the sustainability crisis nobody predicted. When every AI scanner can find real vulnerabilities across millions of open-source codebases, the bottleneck shifts from detection to remediation. The bugs are real. The fixes require human time, review, and coordination. Nobody has built infrastructure to handle this at scale yet, a challenge that [echoes the concerns driving projects like Cal.com to abandon open source](/news/2026-04-16-cal-com-is-going-closed-source).

Stenberg will present the full picture at foss-north on April 28 in a talk titled "Open Source AI reality," and will join an Anchore panel on third-party software risk on April 21. [Meanwhile, Anthropic has committed $100M to Project Glasswing](/news/2026-04-07-project-glasswing-anthropic-defenders-first), a collaborative initiative to help defenders gain advantage against AI-augmented cyber threats.