OpenAI wants legal immunity if its models help cause mass casualties. Illinois bill SB 3444, which OpenAI actively supports and has hired the lobbying firm Fierce Government Relations to push, would shield AI developers from liability for "critical harms." Those include incidents in which 100 or more people die or suffer serious injury, or in which property damage reaches at least $1 billion. As long as the company didn't act intentionally or recklessly and published safety reports on its website, it walks away clean.
The bill applies to "frontier models," defined as any AI trained with more than $100 million in compute. That captures every major lab: OpenAI, Google, Anthropic, xAI, and Meta. OpenAI spokesperson Jamie Radice framed the support as focusing "on what matters most: Reducing the risk of serious harm from the most advanced AI systems while still allowing this technology to get into the hands of the people and businesses."
If someone uses a frontier model to build a chemical weapon, or if a model autonomously commits acts that would be criminal for a human, the lab that built it would face no legal consequences under the bill, so long as it met those conditions. OpenAI's Caitlin Niedermeyer testified that the bill avoids "a patchwork of inconsistent state requirements," echoing broader industry calls for federal standards over state-by-state rules.
The bill probably won't pass. Scott Wisor, policy director for the Secure AI Project, told WIRED that 90 percent of Illinois residents polled oppose liability exemptions for AI companies. Illinois also has a track record of aggressive tech regulation, including being the first state to limit AI in mental health services. OpenAI started as a nonprofit focused on safe, responsible AI development. Now it's paying lobbyists to make sure it can't be sued when its products cause catastrophic harm.