Mozilla just said no to Chrome's Prompt API. The proposal, which would let web developers directly access local language models through the browser, got slapped with a "position: negative" label in Mozilla's public standards-positions repository on GitHub. The API, authored by Domenic Denicola and developed by the Web Machine Learning Community Group, is already in experimental form in Chrome and Edge. But Mozilla isn't having it.

The objection centers on what this API actually does. Unlike the narrower APIs built for translation or proofreading, the Prompt API is general-purpose. Developers handle their own prompt engineering, sending arbitrary instructions to whatever language model the system provides. That flexibility is the point. It's also exactly what bothers Mozilla: a raw interface to on-device AI models creates a broad attack surface.
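To make the shape of that concern concrete, here is a minimal sketch of how a page might use the experimental API, based on the names in the Web Machine Learning Community Group's explainer (`LanguageModel.create()`, `LanguageModel.availability()`); these identifiers are still in flux and may change as the proposal evolves. Note there is no constraint on what the prompt string contains:

```javascript
// Hedged sketch of the experimental Prompt API surface. `LanguageModel` is
// the global described in the explainer; it only exists in browsers that
// ship the experiment behind a flag, so we feature-detect first.
async function summarize(text) {
  if (typeof LanguageModel === "undefined") {
    // API not exposed in this environment (e.g., Firefox, or Node).
    return null;
  }
  // Availability can be "unavailable", "downloadable", "downloading",
  // or "available", reflecting whether the on-device model is ready.
  const availability = await LanguageModel.availability();
  if (availability === "unavailable") return null;

  // A session wraps the system-provided local model. The prompt is an
  // arbitrary string: this open-endedness is what Mozilla objects to.
  const session = await LanguageModel.create();
  return session.prompt(`Summarize in one sentence:\n${text}`);
}
```

The contrast with a purpose-built API (say, a dedicated translation interface) is that nothing here scopes what the developer can ask the model to do.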

Apple's WebKit hasn't taken a public position yet. Their track record says they won't be eager. Apple likes narrow, purpose-built APIs, not general-purpose tools that hand web developers the keys to system-level AI. They're building on-device AI with Apple Intelligence and Core ML, but those capabilities stay in native apps. A raw LLM interface exposed to every website is not their style.

This is a fight over where AI runs. If browsers become the interface to local models, web apps get AI without cloud API calls. Faster and more private, yes. But browsers also become responsible for managing powerful, unpredictable technology. With Mozilla drawing a line here, this standard won't sail through the process unchanged.