Sam Altman woke up at 3:45 AM to someone throwing a Molotov cocktail at his house. It bounced off the wall. Nobody got hurt. But the attack, which Altman linked to a recent New Yorker piece questioning his trustworthiness, pushed him to write a sprawling blog post about what it's like to sit at the center of the AI storm.

The New Yorker article by Ronan Farrow detailed internal conflicts at OpenAI, including secret memos from then-Chief Scientist Ilya Sutskever alleging Altman "exhibited a consistent pattern of lying." Those tensions led to Altman's brief ouster in 2023. He now admits he handled that conflict badly. "I am not proud of being conflict-averse," he wrote, "which has caused great pain for me and OpenAI."

He also revisited his falling-out with Elon Musk, saying he's proud he resisted Musk's push for unilateral control over OpenAI. And he described the AI race as having a "ring of power" dynamic, where the prospect of controlling AGI makes people act irrationally. His proposed fix is sharing the technology broadly and keeping democratic systems in charge, rather than a handful of tech executives.

That's a nice sentiment. Hacker News commenters were quick to point out the irony, given OpenAI's steady shift away from open source.

Altman acknowledged the fear around AI is justified, calling it "the largest change to society in a long time, and perhaps ever." He stressed the need for safety work that goes beyond aligning models to include policy and societal resilience.

Attacks like this don't come out of nowhere. On 4chan's tech boards and in certain X communities, posters have been ramping up rhetoric framing AI labs as existential threats. Security researchers tracking these spaces say threats against lab employees spike after every major model release. The jump from angry forum posts to a firebomb at someone's house is the kind of escalation that should make everyone in AI nervous.

Altman closed with a plea to "de-escalate the rhetoric and tactics and try to have fewer explosions in fewer homes, figuratively and literally." Someone threw a firebomb at his house because of a magazine article about whether he can be trusted. His response was to write about who should control artificial general intelligence. Whether that's deflection or conviction probably depends on how much you already trust Sam Altman.