
Anthropic Stands Firm Against Pentagon on AI Safeguards as Deadline Approaches

The conflict between the Trump administration and the artificial intelligence company Anthropic has reached a critical point. Military officials are demanding that Anthropic modify its ethical policies by Friday, or face serious repercussions for its business.

Anthropic's CEO, Dario Amodei, firmly stated just a day before the deadline that his company “cannot in good conscience accede” to the Pentagon’s ultimatum, which includes allowing unrestricted use of its technology.

While Anthropic, the creator of the chatbot Claude, has the financial stability to withstand the loss of a defense contract, the ultimatum issued by Defense Secretary Pete Hegseth carries broader implications. This demand comes at a pivotal moment in the company’s rapid ascent from a lesser-known computer science research lab in San Francisco to one of the world’s most valuable startups.

If Amodei stands his ground, military officials have indicated that they will not only terminate Anthropic’s contract but may also classify the company as a supply chain risk. This designation is typically reserved for foreign adversaries and could jeopardize Anthropic’s vital partnerships with other businesses.

Conversely, if Amodei were to concede to the Pentagon's demands, he would risk losing the trust of the AI community. Many top researchers have joined Anthropic because of its commitment to developing advanced AI responsibly, and abandoning its safeguards could undermine that commitment.

Anthropic has requested specific assurances from the Pentagon that Claude will not be used for mass surveillance of U.S. citizens or in fully autonomous weapons systems. However, after months of private negotiations spilled into public view, Anthropic said in a Thursday statement that what was presented as a compromise contained legal language that could allow those safeguards to be ignored.

Adding fuel to the fire, Sean Parnell, the Pentagon’s chief spokesman, took to social media to assert that “we will not let ANY company dictate the terms regarding how we make operational decisions,” and emphasized that Anthropic has until 5:01 p.m. ET on Friday to comply with the demands or face consequences.

Emil Michael, the defense undersecretary for research and engineering, criticized Amodei on social media, accusing him of having a “God-complex” and suggesting that he is willing to jeopardize national safety for personal control over the U.S. military.

This message, however, has not garnered widespread support in Silicon Valley. A growing number of tech employees from Anthropic’s major competitors, OpenAI and Google, expressed their solidarity with Amodei in an open letter released late Thursday.

OpenAI, Google, and Elon Musk's xAI also have contracts to provide their AI models to the military. The open letter highlighted that “the Pentagon is negotiating with Google and OpenAI to try to get them to agree to what Anthropic has refused,” implying an attempt to create division among the companies out of fear that one might concede.

Concerns regarding the Pentagon’s approach have also been raised by lawmakers from both parties, as well as a former leader of the Defense Department’s AI initiatives. Retired Air Force General Jack Shanahan noted that while targeting Anthropic may generate attention, it ultimately harms everyone involved.

Shanahan previously faced significant backlash from tech workers during the first Trump administration while leading Project Maven, an initiative aimed at using AI to analyze drone footage for targeting. The backlash from Google employees was so intense that the company decided not to renew the contract and promised not to engage in AI-driven weaponry.

Reflecting on his past experiences, Shanahan expressed sympathy for Anthropic’s position, stating that Claude is already being utilized across various government sectors, including classified areas, and that Anthropic's concerns are reasonable. He emphasized that the AI models driving chatbots like Claude are not yet suitable for national security applications, particularly in the realm of fully autonomous weapons.

Parnell asserted that the Pentagon intends to use Anthropic’s model only for lawful purposes, arguing that restricting how the technology is used would let the company jeopardize critical military operations. He reiterated that the military has no intention of employing AI for illegal mass surveillance of Americans or of developing fully autonomous weapons that operate without human oversight.

During a meeting between Hegseth and Amodei, military officials warned that they could classify Anthropic as a supply chain risk, terminate its contract, or invoke the Defense Production Act, a Cold War-era law that would grant the military broader authority to use the company’s products without its consent.

In response, Amodei argued that these threats are inherently contradictory: one would brand the company a security risk, while the other would treat Claude as essential to national security. He said he hoped the Pentagon would reconsider, given Claude's value to the military, but that if it did not, Anthropic was prepared to facilitate a smooth transition to another provider.
