Software developer Ivan Castellanos published a post on March 15 titled "When Is Enough?" — a direct attack on OpenAI CEO Sam Altman and the AI industry's handling of job displacement. The argument is unambiguous: executives driving automation profit from denying its consequences, and no external force is making them honest about it.
Castellanos pairs the labor argument with copyright. Training AI models on books and artwork without creator consent, he writes, amounts to theft under any fair reading of the law. A slower approach to data acquisition would at least have given workers and creators time to adapt — which, he argues, is exactly why the major AI labs did not take it.
The post cites no data and breaks no new legal ground. But its core claims have a real-world correlate: active litigation in the United States and Europe is pressing the same questions about training-data rights. Courts in both jurisdictions are working through whether fair use in the United States, or text-and-data-mining exceptions in Europe, shield AI companies from liability for ingesting copyrighted material at scale. Several cases have survived motions to dismiss, meaning at least some judges have found the underlying arguments worth hearing.
For Agent Wars readers, the Castellanos post is one data point in a visible pattern: <a href="/news/2026-03-15-hollywood-ai-oscars-deepfakes-jobs">developer and creator communities</a> are getting louder, and that pressure has already pushed cases into discovery. Whether courts or legislators translate it into binding constraints on how AI labs acquire training data is the live question, and one that will directly affect the cost and legal exposure of every agent platform built on top of those models.