When AI procurement turns into geopolitics: the Anthropic–Pentagon–OpenAI shockwave
- Deborah Nas
- Mar 2
- 5 min read
Updated: Mar 3

Over one weekend, a dispute about AI safeguards escalated into something much bigger: a public clash between the US defence establishment, President Trump and two of the world’s most influential AI companies, Anthropic (Claude) and OpenAI (ChatGPT).
This is not just Silicon Valley drama. It is a preview of how AI will be governed in practice: through contracts, supply-chain labels, political pressure and the strategic race between blocs.
What happened (and why it moved so fast)
The trigger was a contract fight about how Claude could be used by the US military. Anthropic insisted that contracts should explicitly prohibit two use cases: mass domestic surveillance and fully autonomous weapons that can kill without meaningful human control. The Pentagon wanted broader permission for “all lawful uses” without specific carve-outs.
A deadline was set for 5:01 pm ET on Friday February 27, 2026.
When Anthropic did not move, the administration escalated publicly:
President Trump ordered federal agencies to stop using Anthropic’s technology.
The Pentagon moved to designate Anthropic a “supply chain risk”.
OpenAI announced a Pentagon deal within hours, presenting it as upholding similar safety principles through “technical safeguards” and close operational oversight.
Then a second shockwave followed: users revolted.
Why the consumer backlash matters
Within days, a “QuitGPT” boycott narrative spread on social media, urging people to cancel paid ChatGPT subscriptions and switch to Claude.
The immediate market signal was hard to ignore: Claude’s app climbed to #1 in the App Store, briefly overtaking ChatGPT. That is rare in a category where habits are sticky and switching costs are real.
That tension, between what users value and how vendors position themselves politically, is now part of AI strategy: you can no longer separate product adoption from geopolitical positioning.
What does “supply chain risk” mean in this context?
A supply-chain-risk designation is normally used to protect sensitive defence systems from compromise, for example risks related to foreign ownership, control or influence. It gives the Pentagon authority to restrict or exclude vendors from defence contracts.
What made this case explosive is that it was applied to a US AI company after a dispute over ethics and safeguards. Anthropic argues the designation is legally unsound and says it will challenge it in court.
The broader implication is the chilling effect: if “risk” labels can be triggered by contract disagreements, vendors and customers will price political volatility into their choices.
The key question: why OpenAI “yes” and Anthropic “no”?
OpenAI’s message is that the difference is control, not principles. Sam Altman stated that OpenAI’s deal includes prohibitions on domestic mass surveillance and insists on human responsibility for the use of force, including when autonomous weapon systems are involved. OpenAI also said it will build technical safeguards and deploy engineers to support safe deployment in classified environments.
Anthropic’s position was that guardrails should be explicit and contractual, not only operational or “best effort”, because once deployed, incentives drift and edge cases multiply.
Three scenarios for what is really going on (all speculative)
These are not mutually exclusive. Real-world outcomes often combine them.
The “not actually equivalent” scenario
In this scenario, OpenAI’s contract is being framed as “same guardrails, different setup”, but the operational reality is looser.
That can happen in several ways:
Different definitions. “No mass surveillance” sounds clear, until you define what “mass” means, what counts as “surveillance”, and whether public data at scale is included.
Different enforcement. A safeguard written in policy is not the same as a safeguard enforced through technical constraints, audit logs, review rights and sanctions for misuse.
Different escalation paths. When the customer is a defence organisation, pressure to “make it work” is constant. If escalation routes favour speed, the system drifts towards broader use.
If this scenario is true, the key risk is that the market learns a cynical lesson: ethics are flexible when strategic contracts are on the table.
The “signal politics” scenario
In this scenario, the main purpose is deterrence. Anthropic pushed back. The administration responded with a highly visible punishment that doubles as a warning to others: do not negotiate too hard, do not create friction, do not set terms that constrain the state.
The “supply chain risk” label is powerful here because it reframes the story. It turns a governance dispute into a security decision, which is politically harder to contest and easier to justify.
If this scenario is true, it changes how every major supplier should behave:
expect contract language to become a political battleground
expect “risk” tools to be used as leverage
expect public rhetoric to be part of procurement strategy
The “money, access and influence” scenario
This scenario is about credibility, not just legality. Reports have highlighted major political donations connected to OpenAI leadership, including a $25 million donation to MAGA Inc linked to Greg Brockman, which has fuelled suspicion in public discourse.
Even if donations were not decisive, they create a perception problem at the worst possible moment. When procurement happens under intense political pressure, perceived proximity to power can become a strategic advantage. It can also become a reputational liability for the entire sector.
If this scenario is true, it accelerates a trend we already see: AI firms behaving like defence contractors and political actors, whether they admit it or not.
What this means for organisations outside the US
If you are a company, university, hospital or government agency using frontier AI, here’s a practical checklist:
1) Plan for portability
Keep prompts, workflows, evaluation sets and system instructions under your control.
Design your architecture so you can swap models without rebuilding the whole product.
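A minimal sketch of what that separation can look like in practice, assuming a Python codebase and an in-house abstraction (the ChatModel interface and adapter names below are hypothetical, not any vendor’s actual SDK):

```python
from dataclasses import dataclass
from typing import Protocol


@dataclass
class ChatResult:
    text: str
    provider: str


class ChatModel(Protocol):
    """Vendor-neutral boundary: prompts, system instructions and evaluation
    sets stay on your side of this interface."""

    def complete(self, system: str, prompt: str) -> ChatResult: ...


class ClaudeAdapter:
    """Hypothetical adapter: only this class knows the Anthropic API specifics."""

    def complete(self, system: str, prompt: str) -> ChatResult:
        # Call the vendor SDK here (omitted) and map its response to ChatResult.
        raise NotImplementedError


class OpenAIAdapter:
    """Hypothetical adapter: swapping vendors means swapping adapters,
    not rebuilding the product."""

    def complete(self, system: str, prompt: str) -> ChatResult:
        # Call the vendor SDK here (omitted) and map its response to ChatResult.
        raise NotImplementedError
```

The point is that nothing outside the adapters knows which vendor is answering.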
2) Build multi-vendor resilience
Not for “nice-to-have” redundancy, but for political and regulatory shocks.
Test the fallback quarterly, not once.
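Continuing the sketch above (same hypothetical interface), resilience is then a thin routing layer plus the discipline of actually exercising the secondary path, rather than a one-off integration project:

```python
import logging

logger = logging.getLogger("model_routing")


class FallbackRouter:
    """Sends requests to a primary model and fails over to a secondary one,
    e.g. when a vendor is blocked, blacklisted or suddenly unavailable."""

    def __init__(self, primary: ChatModel, secondary: ChatModel):
        self.primary = primary
        self.secondary = secondary

    def complete(self, system: str, prompt: str) -> ChatResult:
        try:
            return self.primary.complete(system, prompt)
        except Exception:
            # Log every failover so the quarterly fallback test leaves a trace.
            logger.warning("primary model failed, routing to secondary")
            return self.secondary.complete(system, prompt)
```

A quarterly fallback test can then be as simple as deliberately routing a slice of your evaluation set through the secondary path and comparing the results.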
3) Contract for governance
Audit rights, logging, incident reporting, red-team access, clear use restrictions where needed.
Define what happens if the vendor becomes unavailable due to regulation, blacklisting or export controls.
4) Separate capability from legitimacy
A model can be excellent technically and still become unacceptable for your stakeholders.
Track reputational risk the same way you track cybersecurity risk.
5) Do a “policy shock” tabletop exercise
What happens if your primary vendor is blocked in one jurisdiction?
What happens if your customer demands broader use than you can accept?
And watch China
While Silicon Valley and Washington fight in public, Beijing is positioning itself as a stable, predictable partner for AI development and governance. China is tightening its governance toolkit, including new draft rules targeting emotionally interactive AI services and stronger lifecycle responsibilities for providers.
More broadly, analysts describe a shift towards frameworks that embed principles like human control, transparency and sovereignty into technical requirements, alongside export-control and cybersecurity tools that can be used strategically.
For Europe, this creates a strategic dilemma. If the US model is increasingly politicised through contracts and supplier labels, and the Chinese model is increasingly standardised through regulation and state leverage, Europe needs its own credible path: resilient AI infrastructure, clear procurement principles and the ability to enforce safeguards without becoming dependent on any single supplier or bloc.
Looking for a keynote that puts the latest AI developments into context? Get in touch. I translate the noise into clear implications for your industry, your organisation and your people.


