At least eight organizations across the UK, Australia, and the United States are racing to establish a universally recognized "AI-free" or "human-made" certification label for creative content, BBC News reports. Contenders include Not By AI (notbyai.fyi), ai-free.io, no-ai-icon.com, aifreecert, Books by People, and Australia's Proudly Human — each operating with different definitions, fee structures, and levels of verification. Some labels are freely downloadable with no checks whatsoever; aifreecert employs professional analysts and AI-detection software for paid vetting. As generative AI displaces human workers across publishing, film, music, and advertising, demand for a credible trust signal is real — so is the scramble to supply one.
Publishing, film, and music are where the stakes are sharpest. UK publisher Faber and Faber has begun applying informal <a href="/news/2026-03-15-uk-society-of-authors-human-authored-logo">'Human Written' stamps</a> to select titles following requests from authors including Sarah Hall, who described AI's use of copyrighted training data as "creative larceny at scale." In film, the 2024 Hugh Grant thriller Heretic included closing-credit disclosures that no generative AI was used in production, a precedent extended by UK distributor The Mise en scène Company, whose CEO Paul Yates argues there is an "economic premium" on human-made content. At the other end of the spectrum, Bollywood studio Intelliflicks openly produces films entirely with generative AI, and the viral band Velvet Sundown, later revealed to be fully AI-generated, has become a cautionary example of what undisclosed use does to consumer trust.
The more immediate obstacle is fragmentation. AI research scientist Sasha Luccioni argues that AI is already embedded so deeply in everyday creative tools, from grammar checkers to photo editors to autocomplete, that a binary AI/AI-free classification is technically unworkable, and that spectrum-based certification is the only honest alternative. Dr Amna Khan of Manchester Metropolitan University says competing definitions will leave consumers more confused than before, and that a single universal standard is the only way to build durable trust. Alan Finkel, whom Proudly Human lists as its chief executive and who is best known as Australia's former Chief Scientist, argues that any self-certification scheme carries no weight without full external verification at every stage, from manuscript to final edition.
The Fair Trade analogy, while rhetorically useful, may undersell the difficulty. Fair Trade certifies observable, auditable physical supply chains, tracked by third-party auditors who visit real locations. AI integration is invisible and recursive — embedded inside the tools creators already use and impossible to fully audit after the fact. With no neutral governing body, no interoperability between competing schemes, and no enforcement mechanism beyond periodic sampling or self-reporting, the AI-free certification landscape currently resembles the fragmented pre-consolidation era of ethical trade labels. That era took decades and significant institutional investment to resolve, and only partially at that.