Apple told Elon Musk's xAI in January that Grok would be booted from the App Store if it didn't fix its deepfake problem. The company found both X and Grok in violation of its guidelines after the AI app generated nude and sexualized images, according to a letter Apple sent to senators that NBC News obtained.
This isn't new territory for Apple. In 2022, the company pushed Lensa AI's developer, Prisma Labs, to strengthen content filters and age gates after that app started generating NSFW content, citing App Store Review Guidelines 1.1 on objectionable content and 2.3.10 on age ratings. Apple has banned smaller apps built for non-consensual sexual imagery outright. But Grok is different. It's baked into X, a major social platform owned by one of the world's richest men.
Lawmakers have been pressing Apple on what it's actually doing to police AI-generated content. The letter to senators was Apple's answer: we threatened removal, and we meant it.
Grok sits inside X, a platform with hundreds of millions of users, and Apple was still willing to apply the same rules it enforces on tiny developers to Musk's operation. That's the real story here. Meta, Google, and Microsoft are all racing to ship AI image generators. If Apple holds this line, every one of those apps will need real content filters, not just lip service. The App Store has always been a gatekeeper. Now it's becoming a de facto AI regulator, one removal threat at a time.