
Pentagon-Anthropic Dispute Further Exposes Government Push for Autonomous Weapons and AI Surveillance

The clash between the Pentagon and artificial intelligence company Anthropic, which centers on the limits of AI use, has been making headlines for weeks. The fallout escalated quickly. After the company refused to drop its restrictions on using AI for autonomous war machines and mass surveillance of Americans, the Department of Defense (DOD) designated the firm, one of its major contractors, a “supply-chain risk.” Anthropic responded by suing the administration. Now more details about the conflict are emerging.

Pentagon officials say the disagreement arose during a debate over how AI might be used in President Donald Trump’s proposed “Golden Dome for America” missile-defense program.

The Pentagon is increasingly pursuing systems that rely on greater machine autonomy, including swarms of armed drones and other automated combat platforms.

Meanwhile, the department’s insistence on virtually unrestricted, yet “lawful,” AI use, including for large-scale data analysis, has fueled fears of government surveillance. Critics warn the technology could enable monitoring of civilian populations at an unprecedented scale, resembling the Chinese model.

The Golden Dome

Golden Dome is envisioned as a network of sensors and interceptors designed to react within seconds to incoming threats. The project remains largely conceptual, though Congress has begun allocating billions of dollars for its development.

Last Friday, U.S. Defense Undersecretary Emil Michael, the Pentagon’s chief technology officer, said during the All-In podcast, “This is part of the debate I had with Anthropic, which is we need AI for things like Golden Dome.”

Michael described a scenario involving a Chinese hypersonic missile traveling at extreme speed.

“If you have a Chinese hypersonic missile coming in, you may have 90 seconds to respond,” he said.

A human operator might not react quickly enough.

“A human anti-missile operator may not be able to discriminate with their own eyes what they’re going after,” Michael argued.

In that situation, an automated system could respond faster.

“But an autonomous counterattack would be a low-risk because it’s in space and you’re just trying to hit something that’s trying to get you,” he said.

Anthropic says it lacks confidence in the system’s reliability and safety for such a critical role. In its lawsuit, the company argued that it has not even tested its flagship AI model, Claude, for that type of use.

Dystopian Future of War

The Golden Dome concept reflects a broader shift in Pentagon policy.

“Drone-on-drone warfare, robot-on-robot warfare — those things are the future, for sure,” Michael declared confidently.

Likely seeking to market the coming age of autonomous killing machines in reassuring terms, he offered a humane example.

“Who could oppose if you have a military base, you have a bunch of soldiers sleeping, that you have a laser that can take down drones autonomously?” he asked rhetorically.

Yet the example obscures the broader reality of what the Defense Department is building. Systems designed to defend a sleeping base represent only a narrow use case within a rapidly expanding ecosystem of AI-enabled military tools and agents.

This rather dystopian vision is spelled out in the Pentagon’s official policy documents. For example, a January 9 memorandum on AI implementation outlines the objective of achieving “AI dominance” and moving toward an “AI-first” military force capable of embedding advanced algorithms into combat systems and battlefield decision-making.

The push stems from a broader White House policy framework. The Pentagon’s acceleration of military AI forms part of President Trump’s sweeping “America’s AI Action Plan.” That plan mirrors long-standing proposals from global technology and governance circles, including those within the World Economic Forum and United Nations.

Military programs already underway illustrate the shift. Under the Pentagon’s Replicator Initiative, launched in 2023 and expanded in subsequent years, the military is working to deploy thousands of autonomous platforms across air, sea, and land.

Michael said the U.S. military needs technology partners willing to support this trajectory.

“I need a reliable, steady partner that gives me something, that’ll work with me on autonomous, because someday it’ll be real and we’re starting to see earlier versions of that,” he said. “I need someone who’s not going to wig out in the middle.”

“All Lawful Use” Demand

Communications with Anthropic grew heated after Michael took over the military’s AI portfolio last August. He said he began reviewing the company’s contracts with the Pentagon, some of which dated back to Joe Biden’s administration. Michael concluded that Anthropic’s terms of service were too restrictive.

“I need to have the terms of service be rational relative to our mission set,” he said.

Negotiations lasted about three months. Pentagon officials presented scenarios involving missile defense, drone warfare, and other potential military uses of AI. Anthropic sometimes offered narrow exceptions.

“They’re like, ‘OK, we’ll give you an exception for that,’” Michael said. “Well, how about this drone swarm? ‘We’ll give you an exception for that.’”

Michael said that approach could not work for long-term military planning.

“I was like, exceptions [don’t] work. I can’t predict for the next 20 years … all the things we might use AI for,” he said.

The Pentagon ultimately insisted that companies working with the military must allow “all lawful use” of their AI systems.

That position reflects language in the aforementioned Pentagon memorandum. Quoting President Trump’s Executive Order 14179 on removing “ideological bias or engineered social agendas” from AI, the memo calls for replacing what it describes as “utopian idealism” with “hard-nosed realism.”

The directive requires officials to insert “any lawful use” language into AI contracts within 180 days and to deploy models “free from usage policy constraints that may limit lawful military applications.”

The document also signals a willingness to tolerate mistakes in exchange for speed. “We must accept that the risks of not moving fast enough outweigh the risks of imperfect alignment,” it states.

Anthropic refused to accept those terms.

Beyond autonomous weapons, the dispute also revived concerns about the surveillance potential of military AI.

Anthropic insisted that its models should not be used for mass domestic surveillance. Pentagon officials rejected that restriction. During the podcast, Michael framed the disagreement as a dispute over data collection.

“They didn’t want us to bulk collect public information on people using their AI system,” he said. The remark raises an obvious question: Why would the Pentagon need to bulk-collect information on Americans at all?

In late February, Michael told Bloomberg that the DOD had assured Anthropic “in writing” that it operates within existing legal frameworks. Specifically, he cited the National Security Act of 1947 and the Foreign Intelligence Surveillance Act (FISA). But those laws are themselves unconstitutional. Among other things, they allow government agencies to bypass traditional warrant requirements by purchasing large volumes of personal data from commercial brokers, including location histories, purchasing behavior, and demographic profiles.

AI sharply magnifies the power of such data. Systems capable of analyzing billions of records can merge disparate datasets and generate detailed profiles of individuals in seconds.
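To make that linkage concrete, the sketch below is a purely illustrative toy example, with fabricated data and hypothetical field names rather than any actual government or broker system, showing how records bought from separate sources could be joined on a shared identifier to assemble a single profile; real systems would perform the same kind of merge across billions of rows.

```python
# Toy illustration only: fabricated records and made-up field names,
# loosely resembling the categories of broker data named above
# (location histories, purchasing behavior, demographic profiles).
import pandas as pd

locations = pd.DataFrame({
    "ad_id": ["A1", "A2"],                    # shared advertising identifier
    "last_seen_city": ["Denver", "Austin"],   # location history
})
purchases = pd.DataFrame({
    "ad_id": ["A1", "A2"],
    "recent_purchase": ["camping gear", "baby formula"],  # purchasing behavior
})
demographics = pd.DataFrame({
    "ad_id": ["A1", "A2"],
    "age_band": ["35-44", "25-34"],           # demographic profile
})

# Joining the separate datasets on the shared identifier collapses them
# into one consolidated profile per person, the cross-dataset linkage
# the article describes AI systems performing at scale.
profile = locations.merge(purchases, on="ad_id").merge(demographics, on="ad_id")
print(profile)
```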

That process received a boost last May, when the White House contracted Palantir — a CIA-seeded data company co-founded by Trump megadonor Peter Thiel — to link all government data on citizens.

It is also worth noting that Anthropic, rightfully admired for resisting Pentagon demands, remains deeply embedded in the Pentagon’s machinery. The company’s Claude model is integrated into military systems through a partnership with none other than Palantir. Reports indicate the DOD has used the system in unlawful operations, including unconstitutional strikes on Iran and the raid that captured Venezuelan President Nicolás Maduro.
