Anthropic has started requiring some Claude users to submit government-issued photo IDs and live selfies in an effort to shut out access from US adversaries including China, Russia, and North Korea, according to a report by Juro Osawa in The Information published April 21, 2026. The policy escalates a months-long effort by the AI lab to enforce its geographic restrictions — and arrives as evidence mounts that Chinese firms have been routing around those rules at scale.
From quiet rollout to hard border
The identity checks first surfaced publicly in mid-April when screenshots of the verification screen spread on X, making Claude the first major consumer AI chatbot to demand passport-grade verification to access certain capabilities. Anthropic framed the step as necessary to "prevent abuse, enforce our usage policies, and comply with legal obligations," according to statements reported by the South China Morning Post.
The Information's reporting today adds a sharper national-security frame: Anthropic is specifically trying to block users connected to countries the US government treats as adversaries. It is also an acknowledgement, in effect, that Claude's official ban in China, Hong Kong, and Macau has not kept the product out of those markets.
The workaround economy
Despite the ban, Claude has "quietly flourished" in China through businesses that resell or relay API access, The Information reported. The South China Morning Post profiled one such service, AICodeMirror, which claims more than 10,000 registered users and over 200 institutional clients. VPN-based access and third-party wrappers have been the norm for Chinese developers who view Claude, and particularly Claude Code, as a top tool for software engineering tasks.
The new ID requirement is expected to sharply narrow that gray market. Early reporting suggests China-issued national ID cards are not accepted by Anthropic's verification partner, meaning users without a passport could be locked out entirely. Black-market vendors are already advertising workarounds, according to SCMP.
Part of a wider frontier-model crackdown
The verification push follows a coordinated move earlier this month in which OpenAI, Anthropic, and Google began sharing intelligence through the Frontier Model Forum to detect and disrupt attempts by Chinese AI firms to distill their models. Anthropic has said it documented 16 million unauthorized API exchanges tied to three named Chinese companies.
Implications
For enterprises, the change turns identity verification into a gating function for certain Claude capabilities — a shift that will ripple into procurement, compliance reviews, and data-residency conversations. For developers in restricted regions, it tightens an already narrow door. And for the broader AI industry, it sets a precedent: frontier labs are now willing to demand biometric identity checks to enforce export-style controls, even at the cost of user friction and a vocal backlash on social platforms.
The open question is enforcement. If relay platforms and forged-document markets can stay a step ahead of verification, Anthropic's wall will leak. If they cannot, Claude could become the first major US AI product to meaningfully wall itself off from one of the world's largest developer populations.