We are facing a point of no return in the history of artificial intelligence. Between the end of February and the beginning of March 2026, the relationship between Silicon Valley and the US military apparatus suffered a tectonic fracture. The catalyst for this earthquake has a name: Anthropic.
CEO Dario Amodei’s flat refusal to bow to the Pentagon’s ultimatum (the department now renamed the War Department by the Trump administration) has triggered a chain reaction, forcing the tech giants to drop the mask and reveal where they stand on the new geopolitical and military chessboard.
Anthropic’s ‘Red Line’ and Government Retaliation
Anthropic has never been an out-and-out pacifist company. Its models were already integrated into the classified networks of US intelligence. However, Amodei drew two impassable ‘red lines’ to protect democratic values: no use of AI for domestic mass surveillance and no use in fully autonomous weapons.
Defence Secretary Pete Hegseth’s response was lightning fast and, according to many critics, purely punitive: a three-day ultimatum followed by the designation of Anthropic as a ‘supply chain risk’. An unprecedented move for an American company, designed to cut Anthropic off not only from direct contracts, but from any defence contractor.
The Total Alignment: Elon Musk and xAI
While Anthropic was being blacklisted, Elon Musk’s company xAI immediately seized the opportunity. By embracing the Pentagon’s ‘all lawful use’ clause without reservation, Grok was quickly approved for classified military networks.
Musk, a close ally of the Trump administration, did not just cash in on the commercial advantage: he lashed out at Amodei on X, accusing Anthropic of ‘hating Western civilisation’. For xAI, the priority is American technological-military supremacy, without the ethical filters it considers an obstacle.
OpenAI’s Tightrope Walk
OpenAI sits squarely in the middle. The company led by Sam Altman signed a lucrative agreement with the Pentagon just hours after Anthropic’s ban, triggering a wave of controversy.
Officially, OpenAI claims to have maintained its ethical guardrails against mass surveillance and lethal autonomous weapons, guaranteeing that its models will run only on secure, supervised clouds. However, its acceptance of the vague ‘legal purposes’ standard is seen by many as a convenient loophole. Critics, including many dissenting employees, accuse OpenAI of opportunism: caving in on substance to grab contracts while maintaining only an ethical façade.
Google’s Return to the Front Line
If OpenAI is walking a tightrope, Google has decided to march in lockstep. Gone are the days of 2018, when internal protests forced the company to abandon Project Maven; today Mountain View is on the front line.
Google Cloud, with its ‘Gemini for Government’, was the first system to land on the GenAI.mil military platform. CEO Sundar Pichai defended the choice, pointing to the operational efficiency provided to the military. Despite the resumption of internal protests, with over 200 Google and OpenAI employees signing an open letter in solidarity with Anthropic, the company seems determined not to leave this multi-million dollar market to competitors.
One thing is certain: Anthropic’s refusal has torn away Silicon Valley’s veil of hypocrisy. There is no longer a shared AI ethic. On one side are those who, like Anthropic, believe technology must be subordinated to unbreakable democratic principles, even at the cost of government sanctions. On the other stands a pragmatic, aligned bloc (Google, OpenAI, xAI) ready to shape the new military-industrial-technological complex of the 21st century.
And this seems to be only the beginning.