Hong Kong, April 29, 2026 — Goldman Sachs has blocked its bankers in Hong Kong from using Anthropic’s Claude AI models, according to people familiar with the matter. The restriction took effect a few weeks ago.
Employees in the region can no longer access Anthropic’s models through Goldman’s internal systems. The move signals growing caution among global banks about AI use in regulated markets.
Compliance concerns drive the decision
Goldman’s Hong Kong office handles sensitive client data and cross-border transactions. Regulators in the region have tightened rules on data privacy and AI oversight in recent months.
Industry analysts note that banks face conflicting pressures. They want to adopt AI for efficiency, but they must also meet strict compliance standards. The restriction on Claude suggests Goldman is prioritizing regulatory alignment over productivity gains.
Anthropic did not respond to a request for comment. Goldman Sachs declined to discuss internal policies.
Broader industry trend
Goldman is not alone. Several major banks have limited or banned employee use of third-party AI tools. JPMorgan Chase restricted ChatGPT access in 2023. Citigroup followed with similar measures last year.
The concern is data leakage. AI models process user inputs on external servers. If a banker uploads confidential deal information, that data could leave the bank’s control.
For investors, the takeaway is that AI adoption in finance will likely remain uneven. Banks with strong compliance cultures may move more slowly than fintech rivals.
Hong Kong’s regulatory environment
Hong Kong’s Securities and Futures Commission has not issued specific AI rules. But the Office of the Privacy Commissioner for Personal Data published guidelines in 2025 on AI data handling.
Those guidelines require companies to assess risks before deploying AI tools. They also mandate transparency about how AI models use personal data.
Data from the Hong Kong Monetary Authority shows that banks in the city spent $1.2 billion on compliance technology in 2025. That figure is expected to rise this year.
Impact on Anthropic
For Anthropic, the restriction is a setback. The company has positioned Claude as a safe alternative to competitors like OpenAI’s ChatGPT. But Goldman’s decision suggests that even Claude faces hurdles in highly regulated sectors.
Anthropic has built a strong reputation for safety research, and its models incorporate techniques such as Constitutional AI to reduce harmful outputs. But compliance in finance requires more than safety features; it demands full control over where data is processed and stored.
The implication is that Anthropic may need to offer on-premise or private cloud deployments to win banking clients. The company has not announced such plans.
What’s next
Goldman’s restriction applies only to Hong Kong for now. Bankers in other regions may still use Claude. But the move could set a precedent. If other banks follow, Anthropic’s growth in the financial sector could slow.
Industry watchers expect more banks to review their AI policies in the coming months. The balance between innovation and regulation will shape the next phase of AI adoption in finance.