The US military is still using Claude — but defense-tech clients are fleeing

The US military continues to use Anthropic's AI model, Claude, for targeting in the ongoing conflict with Iran, even as the company distances itself from defense contracts following government directives. Meanwhile, several defense contractors are actively replacing Claude over supply-chain risk concerns, reflecting a muddled landscape of military reliance on contested technology amid geopolitical tensions.
Key Points
- Anthropic's AI, Claude, is actively used by the US military in targeting decisions amid ongoing conflict with Iran.
- Confusion stems from President Trump's directive ordering civilian agencies to stop using Anthropic, which left a six-month grace period in place for Pentagon use.
- The Pentagon is using Claude in conjunction with Palantir's Maven system, which assists in real-time target prioritization.
- Several defense contractors, including Lockheed Martin, have begun replacing Claude with alternative models because of the supply-chain risks associated with continuing to use it.
- The Secretary of Defense's intended designation of Anthropic as a supply-chain risk has not been formalized, leaving the legal status of the military's use of its technology in a grey area.
Relevance
- The scenario echoes historical trends where military contractors reassess partnerships with tech firms amid ethical concerns and regulatory pressures.
- The situation highlights the growing tension between defense needs and government regulation, a defining theme of 2025's IT landscape and its focus on responsible AI use.
- The unfolding conflict illustrates the broader stakes of AI in military applications, drawing attention to accountability and the regulatory frameworks governing how defense agencies deploy such technology.
The episode sits at a critical intersection of military needs and ethical considerations in technology: demand for advanced AI tools persists, but a pivot toward compliant alternatives may become the norm in defense operations.
