In February 2019, OpenAI announced it wouldn't release the full GPT-2 text generation model, claiming it was too dangerous. The organization, co-founded by Elon Musk and Sam Altman with early backing from Peter Thiel and Reid Hoffman, said the 1.5-billion-parameter model could generate fake news, spam, and impersonation content at scale. Media outlets ran wild. The Guardian published "AI Can Write Just Like Me. Brace for the Robot Apocalypse." Metro UK went with "Elon Musk-Founded OpenAI Builds Artificial Intelligence So Powerful That It Must Be Kept Locked Up for the Good of Humanity."

The embargo lasted about as long as you'd expect. GPT-2's architecture was never secret: OpenAI published the research paper and a small version of the model on day one, and Hugging Face integrated GPT-2 into its Transformers library within weeks. By August 2019, independent researchers had replicated the full-size model from the published details. In September, Salesforce released CTRL, a fully open-sourced 1.6-billion-parameter model. The Allen Institute for AI had already released Grover, a model designed specifically to both generate and detect machine-written fake news. Robert Frederking, principal systems scientist at Carnegie Mellon's Language Technologies Institute, was blunt: "A lot of people are wondering if you actually achieve anything by embargoing your results when everybody else can figure out how to do it anyway."
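The punchline is how trivially accessible the model is today. Below is a minimal sketch using Hugging Face's Transformers pipeline API: the "gpt2-xl" checkpoint is the full 1.5-billion-parameter model OpenAI eventually published, while the prompt and sampling settings are illustrative assumptions, not anything prescribed by the original release.

```python
# A minimal sketch of loading the once-embargoed model via Hugging Face
# Transformers. Requires: pip install transformers torch
from transformers import pipeline

# "gpt2-xl" is the full 1.5B-parameter checkpoint OpenAI eventually released
# in November 2019; swap in "gpt2" (124M) for a faster download.
generator = pipeline("text-generation", model="gpt2-xl")

# The prompt and sampling settings here are illustrative choices.
output = generator(
    "In February 2019, OpenAI announced",
    max_new_tokens=50,  # cap the length of the generated continuation
    do_sample=True,     # sample tokens rather than greedy-decode
    temperature=0.9,    # moderate randomness, typical for a demo
)
print(output[0]["generated_text"])
```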

Looking back, GPT-2 established what tech observers now recognize as the OpenAI playbook. Claim your model is unprecedentedly powerful. Suggest it poses unique dangers. Watch the breathless coverage roll in. Then release the model anyway once competitors inevitably catch up. The notion that OpenAI possessed uniquely dangerous technology requiring special containment was marketing dressed up as ethics. Anyone with sufficient compute and the publicly available research papers could build something comparable. The question worth asking: was "responsible release" ever really about responsibility?