Anthropic, a leading artificial intelligence company, has submitted two sworn declarations to a California federal court, vehemently challenging the Pentagon’s assertion that the firm poses an “unacceptable risk to national security.” Filed late Friday afternoon, these declarations contend that the government’s case is built upon technical misunderstandings and claims that were conspicuously absent during months of pre-dispute negotiations. The move marks a critical juncture in the escalating legal battle between the Department of Defense (DoD) and Anthropic, coming just days before a pivotal hearing scheduled for Tuesday, March 24, before Judge Rita Lin in San Francisco.
The conflict centers on the Pentagon’s unprecedented decision to issue a supply-chain risk designation against Anthropic, a measure typically reserved for foreign entities or those with demonstrable ties to hostile nations. Anthropic views this designation, the first ever applied to an American company, as retaliation for its publicly stated ethical positions on AI safety, particularly regarding the use of its technology in autonomous weapons and mass surveillance, and argues that it violates the company’s First Amendment rights. The government, in a 40-page filing earlier this week, firmly rejected this framing, characterizing Anthropic’s refusal to permit unrestricted military use of its AI as a business decision, not protected speech. The DoD maintains that the designation was a straightforward national security imperative, not a punitive measure for the company’s views. The standoff highlights the increasingly complex relationship between rapidly advancing AI technology and national defense, and its resolution is likely to set a significant precedent for future government-tech partnerships.
A Chronology of Escalation: From Partnership to Legal Confrontation
The dispute’s origins trace back to late February, when President Trump and Defense Secretary Pete Hegseth publicly announced the termination of ties with Anthropic. This dramatic severance followed Anthropic’s steadfast refusal to grant the military unfettered access and use of its sophisticated AI technology, specifically citing ethical “red lines” concerning its application in autonomous weapons systems and broad-scale surveillance of American citizens. These red lines, articulated by Anthropic, are integral to its corporate ethos and public commitment to responsible AI development.
The immediate aftermath of the public split saw a flurry of contradictory statements and actions, now brought into sharp relief by Anthropic’s recent court filings. On March 4, the day after the Pentagon formally finalized its supply-chain risk designation against Anthropic, Under Secretary Emil Michael — a key figure in the DoD’s engagement with Anthropic — reportedly emailed Anthropic CEO Dario Amodei. In this email, Michael indicated that the two sides were "very close" to an agreement on the very issues the government now cites as the foundation of Anthropic’s purported national security threat: its positions on autonomous weapons and mass surveillance. This private communication stands in stark contrast to Michael’s subsequent public declarations.
On March 5, Amodei published a statement on Anthropic’s website describing "productive conversations" with the Pentagon, signaling ongoing dialogue and the potential for resolution. The very next day, however, Michael posted a starkly different message on X (formerly Twitter), asserting, "there is no active Department of War negotiation with Anthropic." A week later, Michael hardened the DoD’s public stance in an interview with CNBC, stating unequivocally that there was "no chance" of renewed talks with the AI firm. These inconsistent communications form a critical part of Anthropic’s argument that the government’s narrative has shifted, potentially to retroactively justify a designation that the facts did not fully support at the time it was issued.
Key Declarations: Technical Safeguards and Policy Misrepresentations
Anthropic’s latest court submission includes two crucial declarations from high-ranking company officials: Sarah Heck, Head of Policy, and Thiyagu Ramasamy, Head of Public Sector. Their testimonies aim to dismantle the Pentagon’s claims by offering both policy and technical counter-arguments, leveraging their extensive experience at the intersection of technology and national security.
Sarah Heck’s Perspective: Policy and Misrepresentation
Sarah Heck, a former National Security Council official who served under the Obama administration and previously held a role at Stripe before joining Anthropic, brings significant experience in government relations and policy work. She was personally present at the pivotal February 24 meeting where CEO Dario Amodei met with Defense Secretary Hegseth and Under Secretary Emil Michael, providing her with direct insight into the failed negotiations.
In her detailed declaration, Heck directly confronts what she identifies as a fundamental misrepresentation in the government’s filings: the claim that Anthropic demanded an "approval role" over military operations. Heck unequivocally states, "At no time during Anthropic’s negotiations with the Department did I or any other Anthropic employee state that the company wanted that kind of role." This refutation directly challenges the narrative that Anthropic sought to exert control over military decision-making, framing it instead as a concern over the ethical deployment of its technology.
Furthermore, Heck asserts that another critical concern raised by the Pentagon in its court filings—the possibility of Anthropic remotely disabling or altering its technology mid-operation—was never brought up during the months of negotiations. This omission, she argues, deprived Anthropic of any opportunity to address or clarify this technical point before it became a core component of the government’s legal argument. This suggests a pattern where the government’s official concerns in court differ from those articulated during direct discussions, raising questions about the sincerity and basis of the initial designation.
Heck’s declaration also details the timeline discrepancies, particularly the March 4 email from Under Secretary Michael. Her implication is pointed: if Anthropic’s stance on autonomous weapons and mass surveillance truly rendered it an "unacceptable risk to national security," why was a senior Pentagon official indicating that the two parties were "very close" to an agreement on precisely those issues the day after the designation was finalized? While Heck stops short of accusing the government of using the designation as a bargaining chip, the chronology she presents strongly implies that possibility, leaving a significant question mark over the DoD’s motivations and consistency.
Thiyagu Ramasamy’s Expertise: Technical Safeguards and Security
Thiyagu Ramasamy, Anthropic’s Head of Public Sector, provides crucial technical expertise to the company’s defense. Before joining Anthropic in 2025, Ramasamy spent six years at Amazon Web Services (AWS), where he managed AI deployments for various government customers, including those in highly classified environments. His background includes building the team responsible for integrating Anthropic’s Claude models into national security and defense settings, including the significant $200 million contract with the Pentagon announced last summer—a contract now imperiled by the current dispute.
Ramasamy’s declaration directly addresses the government’s fears that Anthropic could interfere with military operations by remotely disabling or altering its AI technology. He categorically states that this scenario is "not technically possible." He explains that once Claude models are deployed within a government-secured, "air-gapped" system operated by a third-party contractor, Anthropic loses all access. Such "air-gapped" systems are physically and logically isolated from external networks, preventing unauthorized remote access. Ramasamy emphasizes that there is "no remote kill switch, no backdoor, and no mechanism to push unauthorized updates." Any modification or "operational veto" would require the Pentagon’s explicit approval and active intervention to install, effectively making the government the sole arbiter of changes post-deployment. This technical explanation directly counters the notion of Anthropic maintaining surreptitious control over deployed systems.
Ramasamy further clarifies that Anthropic cannot even access, let alone extract, the data that government users enter into these air-gapped systems, addressing concerns about data privacy and potential exfiltration.
He also disputes the government’s claim that Anthropic’s employment of foreign nationals poses a security risk. Ramasamy notes that Anthropic employees undergo rigorous U.S. government security clearance vetting, the same stringent background check process required for access to classified information. He asserts that, "to my knowledge," Anthropic is the only AI company whose cleared personnel have actively built AI models specifically designed for classified environments. This point underscores Anthropic’s commitment to robust security protocols and its alignment with government security standards, directly challenging the blanket assertion of risk based on workforce composition.
Broader Implications for AI, National Security, and the Tech Industry
The legal confrontation between Anthropic and the Department of Defense extends far beyond the immediate parties, carrying profound implications for AI development, national security, and the future of public-private partnerships in critical technological sectors. This is not merely a contract dispute; it raises fundamental questions about control, ethics, and the role of private enterprise in defense.
The Precedent of the Supply-Chain Risk Designation
The application of a supply-chain risk designation to an American company is unprecedented. Typically, such designations are used to flag foreign-owned or foreign-controlled companies, especially those from adversarial nations, due to concerns about espionage, sabotage, or undue influence over critical infrastructure. By applying this label to Anthropic, the DoD is setting a new, potentially broad precedent. This could signal a more aggressive stance by the U.S. government in scrutinizing domestic tech companies that develop dual-use technologies with military applications, particularly when those companies seek to impose ethical constraints on their use. Other AI firms, defense contractors, and even unrelated tech companies could face similar scrutiny if their operational policies or ethical stances are perceived to conflict with national security objectives, creating a chilling effect on innovation and open dialogue about responsible technology use.
AI Ethics and the First Amendment
Anthropic’s lawsuit boldly invokes the First Amendment, arguing that the designation is retaliation for its publicly stated views on AI safety. This introduces a complex legal question: where does a company’s right to free speech end and its commercial obligations begin, especially when dealing with critical national security technologies? If the courts side with Anthropic, it could empower tech companies to articulate and enforce ethical guidelines for their products without fear of government reprisal, potentially fostering a more responsible approach to AI development. Conversely, if the government prevails, it could reinforce the notion that national security concerns can override a company’s stated ethical positions, potentially leading to greater government oversight or even coercion in the development of cutting-edge technologies. The outcome will significantly influence how the tech industry balances innovation, ethics, and national defense.
The Future of Government-Tech Partnerships
The dispute underscores a growing tension between the rapid pace of technological advancement in the private sector and the often more deliberate, risk-averse approach of government procurement and deployment, especially in defense. The DoD relies heavily on private sector innovation for its technological edge, particularly in AI. However, this case reveals a fundamental disconnect regarding control and ethical boundaries. Anthropic’s insistence on "red lines" reflects a broader industry movement towards responsible AI, acknowledging the profound societal implications of these powerful tools. If such disagreements lead to the breakdown of partnerships and legal battles, it could deter other innovative companies from engaging with the government, potentially slowing down critical advancements for national defense. The resolution of this case may necessitate a re-evaluation of how the government and tech sector establish frameworks for collaboration, ensuring both national security and ethical development.
Technical Control and Trust in AI Deployment
Ramasamy’s detailed explanation of "air-gapped" systems and the absence of a "kill switch" addresses a core technical concern that plagues many discussions about AI in warfare: the fear of unintended consequences or external manipulation. His testimony highlights the technical safeguards that can be implemented to ensure that deployed AI operates autonomously within defined parameters, without external interference from the developer. This clarity is vital for building trust in AI systems, especially in sensitive military applications. The court’s acceptance or rejection of this technical defense will impact how future contracts are structured and how trust in AI’s reliability and security is established within government operations.
Looking Ahead: A Pivotal Hearing
The upcoming hearing on Tuesday, March 24, before Judge Rita Lin will be a decisive moment in this high-stakes legal battle. The court will weigh Anthropic’s arguments of First Amendment retaliation and technical misunderstanding against the Pentagon’s assertion of a national security imperative. The outcome of the hearing, and potentially the broader lawsuit, will not only determine the fate of Anthropic’s contract and its designation but also send a powerful message across the AI industry, the defense sector, and the broader landscape of technology governance. It will help define the boundaries of corporate ethics in national security contexts and shape the dialogue on responsible AI development in an increasingly complex geopolitical environment. The implications for how the United States procures and deploys cutting-edge AI, and how private sector innovation can coexist with national defense needs, hang in the balance.
