
Major technology companies backing artificial intelligence developer Anthropic have raised concerns after the United States Department of Defense signalled it may classify the firm as a potential supply chain risk, a designation that could restrict the use of its technology across federal programmes and defence contractors.
The move emerged from a dispute between Anthropic and defence officials over the safeguards governing how the company’s AI systems may be used in military contexts. Anthropic maintains strict internal policies limiting applications tied to autonomous weapons, large-scale surveillance and other sensitive operations, while Pentagon officials have sought broader contractual terms that would allow the technology to be deployed for any lawful defence purpose. The disagreement escalated when the department signalled it could formally categorise the company as a supply chain risk, a step that carries significant procurement consequences within the federal contracting system.
If applied broadly, the designation would mean government contractors and agencies could be discouraged or prevented from integrating Anthropic’s AI models into projects linked to defence or national security. In practice, such a classification can trigger internal compliance reviews and procurement restrictions across the federal supply chain, forcing contractors to avoid technologies deemed operational or security risks. For a rapidly growing AI developer seeking government partnerships, the label could sharply limit its participation in public-sector technology deployments.
The potential classification has drawn a response from a technology industry group that includes major Anthropic backers such as Amazon and Nvidia. Representatives from the group expressed concern that the designation could disrupt the wider AI ecosystem by creating uncertainty around how advanced models are evaluated for government use. Industry participants worry that treating a leading AI developer as a supply chain risk could discourage collaboration between technology companies and defence institutions at a time when governments are investing heavily in artificial intelligence capabilities.
The dispute highlights the increasingly complex relationship between AI developers and national security agencies. Governments view advanced AI systems as strategically important for intelligence analysis, cyber operations and defence planning, while technology firms are attempting to set ethical boundaries around deployment. The outcome may therefore influence how future AI contracts are structured, shaping procurement rules and governance standards as artificial intelligence becomes embedded within critical national infrastructure.