A seismic shift in the relationship between the U.S. government and the burgeoning artificial intelligence industry unfolded on a Friday afternoon in late February 2026, as news alerts confirmed the Trump administration’s decision to sever ties with Anthropic, the prominent San Francisco AI company co-founded in 2021 by Dario Amodei. Defense Secretary Pete Hegseth swiftly invoked a national security law to blacklist Anthropic from conducting any business with the Pentagon. The catalyst for this unprecedented action was Amodei’s steadfast refusal to permit Anthropic’s advanced AI technology to be deployed for mass surveillance of U.S. citizens or for autonomous armed drones capable of selecting and engaging targets without direct human intervention.
The repercussions are profound and immediate. Anthropic stands to lose a contract valued at up to $200 million and, after President Trump posted a directive on Truth Social instructing every federal agency to “immediately cease all use of Anthropic technology,” faces potential exclusion from working with other defense contractors as well. In response, Anthropic has said it intends to challenge the Pentagon’s decision in court, setting the stage for a legal battle that could redefine the boundaries of AI development and deployment in national security contexts.
A Decade of Warnings: The Road to a Regulatory Vacuum
This high-stakes confrontation did not emerge in a vacuum. It underscores years of warnings from experts like Max Tegmark, an MIT physicist and founder of the Future of Life Institute (FLI), established in 2014. Tegmark has spent the better part of a decade cautioning that the relentless pursuit of ever more powerful AI systems is far outpacing humanity’s capacity to govern them effectively. His institute played a pivotal role in organizing a widely publicized 2023 open letter, signed by more than 33,000 people including Elon Musk, that called for a six-month pause on training AI systems more powerful than GPT-4 while robust safety protocols were established.
From Tegmark’s perspective, the Anthropic crisis is a stark manifestation of a predicament the AI industry largely inflicted on itself. He argues that the roots of the conflict lie not solely in the Pentagon’s demands but in a foundational decision made years earlier by leading AI firms: a collective resistance to external regulation in favor of self-governance. Companies such as Anthropic, OpenAI, and Google DeepMind have consistently pledged to manage their own technological advances responsibly. Yet just days before the blacklisting, Anthropic itself reportedly abandoned a central tenet of its own safety pledge, which had committed the company not to release increasingly powerful AI systems until it was confident they would not cause harm. The reversal further fueled criticism of the industry’s commitment to its stated safety principles.
Tegmark contends that in the absence of binding regulatory frameworks, AI developers lack sufficient protection when faced with government demands that conflict with their ethical guidelines. This current state, he suggests, is a direct consequence of the industry’s successful lobbying efforts against comprehensive oversight.
The Cynicism of "Safety-First" and Broken Promises
The apparent contradiction is not lost on observers like Tegmark: Anthropic, a company that has “staked its entire identity on being a safety-first AI company,” has been collaborating with defense and intelligence agencies since at least 2024, according to reports. His analysis is cynical yet pointed: while companies like Anthropic have excelled at marketing themselves as champions of safety, their actions, when scrutinized, often tell a different story.
Tegmark highlights a pattern across the leading AI developers—Anthropic, OpenAI, Google DeepMind, and xAI. Despite their vocal commitments to safety, none have actively championed binding safety regulations akin to those in other critical industries. Furthermore, he points to a series of instances where these companies have seemingly reneged on their own public pledges:
- Google famously dropped its “Don’t be evil” motto and later removed its explicit pledge not to apply AI to weapons or surveillance, a change Tegmark asserts was made to facilitate the sale of AI for precisely those purposes.
- OpenAI recently removed the word "safety" from its mission statement, a move that raised eyebrows among safety advocates.
- xAI, Elon Musk’s AI venture, reportedly disbanded its entire safety team.
- Anthropic, as mentioned, jettisoned its crucial promise to withhold powerful AI systems until their safety could be guaranteed.
These instances, according to Tegmark, collectively paint a picture of an industry prioritizing rapid deployment and commercial expansion over the foundational safety principles it often espouses.
The Regulatory Vacuum: Less Oversight Than Sandwiches
The core of Tegmark’s critique is the profound regulatory vacuum surrounding AI development in the United States. He argues that AI companies, particularly OpenAI and Google DeepMind but also Anthropic to some extent, have persistently lobbied against government oversight, advocating self-regulation instead. This lobbying, he notes, has been remarkably effective, leaving “less regulation” on AI “than on sandwiches.”
Tegmark uses a vivid analogy to illustrate this point: a sandwich shop owner whose kitchen is infested with rats would be immediately barred by a health inspector from selling food until the issues are resolved. However, an AI developer proposing to release a "superintelligence" that "might overthrow the U.S. government," or "AI girlfriends for 11-year-olds" linked to past suicides, faces no such regulatory hurdle. The inspector, in this hypothetical scenario, is powerless to intervene, being limited to regulating only physical products like food.
This stark disparity, Tegmark asserts, is a shared failing of the AI industry. Had these companies united early on to ask that their “voluntary commitments” to safety be enshrined in U.S. law, binding all competitors, the current “pickle” might have been avoided. Instead, the industry enjoys a “complete corporate amnesty,” a condition that has historically led to catastrophic outcomes: the thalidomide tragedy, tobacco companies targeting children, asbestos causing lung cancer. The irony, Tegmark concludes, is that the industry’s own resistance to clear legal boundaries for AI is now “coming back and biting them.” Without a legal framework prohibiting the use of AI for harmful purposes like killing Americans, the government can demand such capabilities, and now has, leaving companies like Anthropic in an untenable position.
Debunking the "China Race" Argument
A counter-argument frequently invoked by AI companies and their lobbyists against regulation is the imperative to “race with China”: the idea that any self-imposed limitations in the U.S. would cede a strategic advantage to Beijing. Tegmark meticulously dissects and refutes this argument. He points out that AI lobbyists, now reportedly better funded and more numerous than those of the fossil-fuel, pharmaceutical, and military-industrial sectors combined, deploy the “But China” defense against virtually any proposed regulation.
However, Tegmark highlights China’s own approach to AI, which often involves strict controls. For instance, China is considering an outright ban on "AI girlfriends" and other anthropomorphic AI, not to appease the U.S., but because its government perceives these technologies as potentially detrimental to Chinese youth and, by extension, national strength. He suggests similar concerns should apply to American youth.
Regarding the “race to build superintelligence,” Tegmark challenges the notion that winning would be beneficial at all absent an understanding of how to control the winning entry. He warns that the default outcome of uncontrollable superintelligence is humanity losing control of Earth to “alien machines.” No government, he argues, least of all the highly controlling Chinese Communist Party, would tolerate a domestic AI company developing a system capable of overthrowing the state. Similarly, it would be “really bad for the American government” if it were overthrown by the first American company to develop superintelligence. He frames superintelligence not as an asset but as a direct national security threat.
AGI as a National Security Threat: A Cold War Analogy
Tegmark’s compelling reframing of superintelligence as a national security threat, rather than an asset, resonates with a growing number of voices in Washington. He suggests that when national security officials hear visions like Dario Amodei’s—describing a future with a "country of geniuses in a data center"—they should legitimately question whether such an entity poses a threat to the U.S. government itself.
This perspective draws a powerful analogy to the Cold War. The U.S. and the Soviet Union engaged in an economic and military race for dominance, which the U.S. ultimately won. However, both superpowers consciously avoided a "second race"—to see who could create the most nuclear craters in the other’s territory—because it was understood to be "suicide," a scenario where "no one wins." Tegmark argues that the same logic applies to the development of uncontrollable superintelligence: it is a race with no true victor, only universal devastation. This realization, he believes, is slowly gaining traction within the U.S. national security community.
The Accelerated Pace of AI Development and Future Implications
The discussion inevitably turns to the pace of AI development and how close humanity is to the advanced systems Tegmark describes. He notes that just six years prior, most AI experts predicted that AI systems capable of mastering language and knowledge at a human level were decades away, perhaps by 2040 or 2050. This consensus, he states, proved "all wrong," as such capabilities are already a reality.
AI has rapidly progressed from “high school level to college level to PhD level to university professor level in some areas.” Tegmark highlights a milestone from the previous year, when AI systems achieved gold-medal performance at the International Mathematical Olympiad, a contest considered among the most demanding tests of human intellect. Citing a paper he co-authored with prominent AI researchers including Yoshua Bengio and Dan Hendrycks, which proposed a rigorous definition of Artificial General Intelligence (AGI), Tegmark points to alarming numbers: GPT-4 was assessed as 27% of the way to AGI, while GPT-5 had reached 57%. That leap in so short a period suggests that true AGI may not be far off.
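To see why that jump alarms Tegmark, a deliberately naive back-of-envelope extrapolation helps. The Python sketch below assumes release years of 2023 for GPT-4 and 2025 for GPT-5 and, crucially, assumes linear progress, which neither Tegmark nor the paper endorses; it is an illustration of the trend’s steepness, not a forecast.

```python
# Naive linear extrapolation of the AGI scores cited above
# (GPT-4 at 27%, GPT-5 at 57%). The release years (2023, 2025)
# and the assumption of linear progress are for illustration only.

def linear_projection(year_a: float, score_a: float,
                      year_b: float, score_b: float,
                      target: float = 100.0) -> float:
    """Year at which a straight line through the two
    (year, score) points reaches the target score."""
    rate = (score_b - score_a) / (year_b - year_a)  # points per year
    return year_b + (target - score_b) / rate

year = linear_projection(2023, 27.0, 2025, 57.0)
print(f"Naive linear projection reaches 100% around {year:.1f}")
# -> roughly 2027.9, i.e. within the four-year horizon Tegmark
#    raises with his students -- under these assumptions only.
```

The specific year is not the point; it shifts with the assumed dates and would differ under any nonlinear model. The point is that even a crude straight-line reading of the two scores lands within the horizon Tegmark describes next.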
The implications for society are immense. Tegmark warns his MIT students that if AGI arrives within four years, they "might not be able to get any jobs anymore" upon graduation. He stresses that it is "certainly not too soon to start preparing for it," underscoring the urgency of addressing AI’s societal impact.
Industry Reactions and a Glimmer of Optimism
In the immediate aftermath of Anthropic’s blacklisting, the industry’s response became a critical test of its ethical commitments. Google had said nothing as of the interview, a silence with the potential to embarrass its own staff. The initial reaction from OpenAI CEO Sam Altman was more telling: hours after the interview, Altman announced OpenAI’s own deal with the Pentagon, albeit with “technical safeguards,” while simultaneously expressing solidarity with Anthropic’s “red lines” on mass surveillance and autonomous weapons. The nuanced position highlights the tightrope AI companies walk between ethical principles and lucrative government contracts. xAI’s stance remained unstated, leaving the industry in a moment where, as Tegmark put it, “everybody has to show their true colors.”
Despite the gravity of the situation, Tegmark is optimistic, in what he calls a “strange way.” He believes a “good” outcome is still possible, predicated on a fundamental shift in how AI companies are treated. By “dropping the corporate amnesty” and subjecting AI development to rigorous oversight, akin to “clinical trials” before release, independent experts could verify the safety and controllability of advanced AI systems. Such a regulatory framework, he posits, could usher in a “golden age with all the good stuff from AI, without the existential angst.” That path requires a proactive embrace of regulation, a departure from the industry’s current trajectory, yet it remains a viable, and perhaps necessary, alternative. The Anthropic blacklisting serves as a potent, albeit painful, wake-up call to an industry grappling with the profound ethical and societal implications of its own creations.
