The Citizen Lab at the University of Toronto, working with Clemson University's Media Forensics Hub (whose team originally identified the coordinated accounts), has published research exposing "PRISONBREAK," a coordinated AI-enabled influence operation that used more than 50 inauthentic X profiles to push regime-change narratives at Iranian audiences. The network was created in 2023 but stayed largely dormant until January 2025, then escalated in direct coordination with the Israel Defense Forces' June 2025 strikes on Iranian targets, including the bombing of Evin Prison, Tehran's notorious political detention facility. After ruling out alternative explanations, the researchers attribute the operation with high confidence to an unidentified Israeli government agency or to a private subcontractor working under close state supervision.

What made PRISONBREAK technically notable was its use of generative AI throughout the operation: synthetic profile pictures, fabricated BBC Persian news screenshots, and automated or semi-automated synchronized posting. Most strikingly, a deepfake video of the Evin Prison bombing appeared on X within roughly one hour of the actual IDF airstrike, and it fooled multiple international news outlets before BBC Persian flagged it as fabricated. The timing implies either foreknowledge of the strike or a pre-staged contingency pipeline, a level of coordination that goes well beyond simple content scheduling.

The Citizen Lab report places PRISONBREAK inside a documented seven-year pattern of covert synthetic-persona campaigns run globally by firms staffed with Israeli intelligence alumni. Lead author Alberto Fittarelli and Citizen Lab director Ron Deibert explicitly reference Team Jorge, whose AIMS platform manages over 30,000 automated fake profiles and whose corporate vehicle, Demoman International, appears on Israel's own Defense Ministry export promotion website, and Archimedes Group, a Tel Aviv firm that Facebook banned in 2019 after it ran coordinated influence operations across Africa and Southeast Asia. The most recent precedent is STOIC, which received at least $2 million from Israel's Ministry of Diaspora Affairs to run an AI-persona sockpuppet campaign in the United States and Canada after the October 7, 2023 attacks: the first documented case of Israeli government funds directly commissioning AI-generated persona operations at scale.

Citizen Lab calls PRISONBREAK a "kinetic" influence operation, one timed to run alongside active military strikes rather than independently of them. The deepfake video posted within an hour of the IDF strike is the clearest evidence of that. What the report doesn't resolve is who built the pipeline: whether PRISONBREAK used off-the-shelf AI tools, custom software, or a private contractor's proprietary platform. X's own systems apparently flagged nothing; detection came only when the Clemson team spotted the coordinated accounts. Two years ago, generating a convincing deepfake and deploying it within an hour of a live airstrike would have required significant resources. Now, as <a href="/news/2026-03-14-datacenters-become-warfare-targets-as-iran-strikes-aws-facilities-in-gulf-states">AI has become central to modern military operations</a>, it's tradecraft.
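The report doesn't describe the Clemson team's detection method, but a common first-pass heuristic for spotting coordinated inauthentic behavior of this kind is to look for identical text posted by multiple distinct accounts inside a short time window. A minimal sketch of that idea, with entirely hypothetical account names, timestamps, and thresholds:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical post records: (account, ISO timestamp, text).
posts = [
    ("acct_01", "2025-06-23T14:02:10", "Evin is burning. The regime is falling."),
    ("acct_02", "2025-06-23T14:02:41", "Evin is burning. The regime is falling."),
    ("acct_03", "2025-06-23T14:03:05", "Evin is burning. The regime is falling."),
    ("acct_04", "2025-06-23T18:30:00", "Unrelated post about football."),
]

def flag_coordinated(posts, window_seconds=300, min_accounts=3):
    """Flag any text posted by >= min_accounts distinct accounts
    within a window_seconds sliding window."""
    by_text = defaultdict(list)
    for account, ts, text in posts:
        by_text[text].append((datetime.fromisoformat(ts), account))
    flagged = []
    for text, entries in by_text.items():
        entries.sort()  # chronological order
        for anchor_time, _ in entries:
            # Accounts posting this exact text within the window after anchor_time.
            accounts = {a for t, a in entries
                        if 0 <= (t - anchor_time).total_seconds() <= window_seconds}
            if len(accounts) >= min_accounts:
                flagged.append((text, sorted(accounts)))
                break
    return flagged

print(flag_coordinated(posts))
```

Real detection pipelines add much more (near-duplicate matching rather than exact string equality, account-creation-date clustering, shared profile-image analysis), but burst timing like this is typically the starting signal.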