Anthropic is rolling out identity verification for some Claude users, requiring government-issued photo ID and possibly a live selfie through partner Persona Identities. The company says this is about safety, abuse prevention, and legal compliance. Verification takes under five minutes and accepts passports, driver's licenses, and national identity cards from most countries. What Anthropic hasn't said: which specific capabilities will trigger the requirement.
Anthropic promises your verification data won't be used to train their models. But Persona's own privacy policy tells a different story: Persona uses uploaded ID images and selfies to train its own AI systems, citing "legitimate interests" rather than user consent as the legal basis. Your passport photo becomes their training data.
Persona doesn't just verify your ID. They capture behavioral biometrics including hesitation detection and copy-paste tracking, along with device information, geolocation, and NFC chip data from passports. They cross-reference all of it against government databases, credit agencies, and mobile network providers. Then there are the 17 subprocessors that may handle your data, including OpenAI, AWS, and Google Cloud Platform. Your Claude verification could flow through infrastructure belonging to Anthropic's direct competitors.
Anthropic is the data controller here, and they've set contractual limits on how Persona uses verification data. But those limits apparently don't extend to Persona's own AI training. Hacker News commenters have already flagged this gap. When a company built on safety branding asks for your government ID, the trust chain matters. Right now that chain has some weak links. And because data held by third-party processors is generally easier for governments to obtain than data you hold yourself, every upload effectively feeds a warrant-free data pipeline.