Goldman Sachs has stopped letting its Hong Kong staff use Anthropic's Claude, according to reporting from the Financial Times that was picked up by Bloomberg and Reuters on April 28 and 29. Bankers in the territory had previously been able to access Claude through Goldman's internal AI platform, but in recent weeks the option has disappeared from their tooling.
The move is narrow on paper but loud in implication. It is the first time a tier-one Wall Street bank has been reported to carve out a single frontier-model vendor on geographic grounds, and it lands squarely on top of an already brittle US-China AI relationship.
A contract reading, not a security breach
People familiar with the decision told reporters that Goldman did not pull Claude in response to an incident. Instead, the bank consulted with Anthropic about the terms of their commercial agreement and then took what was described as a strict interpretation: Goldman concluded that, under the contract, its Hong Kong-based employees should not be able to access any Anthropic products.
Neither Goldman nor Anthropic has issued an on-the-record statement detailing the precise clause at issue. What is clear from the public reporting is that other frontier models remain available to the same Hong Kong desks. ChatGPT and Gemini are still on Goldman's internal platform, which means the bank is not retreating from generative AI in the territory — it is retreating from one specific vendor.
The China backdrop
The context is hard to ignore. Anthropic recently tightened its identity-verification policy to block users in China, Russia, North Korea and other adversary jurisdictions from accessing Claude through any subsidiary or affiliate. Hong Kong is not mainland China, but it is also not treated as fully separate by either side of the current export-control conversation.
US AI labs have spent the past quarter publicly worrying about distillation — the idea that heavy enterprise use of a frontier model inside China could be turned into training signal for a domestic competitor. The State Department's recent global cable on Chinese model distillation, and the White House memo accusing Chinese labs of industrial-scale capability extraction, both flow into the same risk calculus that Goldman appears to be applying inside its own contract.
Implications
For Anthropic, the Goldman pullback is reputationally awkward more than financially material. Hong Kong is not a meaningful slice of revenue, and the bank still pays for Claude in other regions. The bigger risk is that other multinationals with sensitive Asia-Pacific footprints — banks, law firms, defence-adjacent contractors — read the FT story and decide that the cleanest legal posture is to mirror Goldman.
For enterprise buyers, the takeaway is sharper: AI procurement is now a geographic decision, not just a model-quality decision. Legal teams that treated Claude, GPT-5.5 and Gemini as interchangeable line items will increasingly have to maintain per-jurisdiction allowlists, and vendors will have to write contracts that survive a tightening export-control regime.
— Michael Ouroumis