
Tech Workers Push Back Against DOD Demands for AI in Domestic Mass Surveillance and Autonomous Weapons

In the wake of mounting Defense Department pressure on AI developer Anthropic to loosen restrictions on how its artificial intelligence systems may be used, hundreds of AI employees at rival tech giants say the DOD’s demands go too far. They argue that certain lines should not be erased, even in the name of “national security.”

An open letter titled “We Will Not Be Divided” has pushed that internal resistance into public view. As of Monday morning, 691 current Google employees and 96 current OpenAI employees had signed the appeal urging their leadership to refuse the demands of the DOD (also called the Department of War). The signers are openly challenging the Pentagon’s insistence on “lawful” AI uses that, apparently, include domestic mass surveillance and fully autonomous weapons.

The Letter

The signatories accuse the Pentagon of targeting Anthropic, writing:

The Department of War is threatening to

  1. Invoke the Defense Production Act to force Anthropic to serve their model to the military and “tailor its model to the military’s needs”
  2. Label the company a “supply chain risk”

Why is that happening? Apparently, the tech company is opposing the DOD’s use of its models for spying on Americans and deploying AI in autonomous weapons systems. The letter says the department’s actions are

in retaliation for Anthropic sticking to their red lines to not allow their models to be used for domestic mass surveillance and autonomously killing people without human oversight.

They cite Axios reporting on February 23 that the Pentagon is now negotiating with Google and OpenAI “to try to get them to agree to what Anthropic has refused.” The report says that Elon Musk’s xAI has already agreed to the DOD’s terms.

The letter claims the Pentagon is attempting to “divide each company with fear that the other will give in.” It explains its purpose directly:

This letter serves to create shared understanding and solidarity in the face of this pressure from the Department of War.

The employees are not calling for a total break with the military. Instead, they urge company leadership to “put aside their differences and stand together to continue to refuse” demands that would eliminate specific safeguards.

DOD Memo

Central to the concern is DOD’s January 9 memorandum. It ordered the military to become an “AI-first” force, declaring that “AI-enabled warfare … will re-define the character of military affairs over the next decade.” The memo frames the shift as “a race” and insists:

We must accept that the risks of not moving fast enough outweigh the risks of imperfect alignment.

Most critically, it directs officials to insert “‘any lawful use’ language” into any DOD contract for AI services. Finally, it orders that models be deployed “free from usage policy constraints that may limit lawful military applications.”

Anthropic Draws Its Red Lines

Anthropic is one of the department’s existing AI contractors; its Claude model was reportedly used during the raid that captured Venezuelan President Nicolás Maduro. The company responded with a level of public resistance rarely seen from a Silicon Valley firm.

The standoff came after more than two months of negotiations with the Defense Department over two limits on AI use — mass domestic surveillance and fully autonomous weapons systems. CEO Dario Amodei revealed last Thursday:

They have threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us a “supply chain risk” — a label reserved for US adversaries, never before applied to an American company — and to invoke the Defense Production Act to force the safeguards’ removal.

“These threats do not change our position: we cannot in good conscience accede to their request,” said the executive.

Domestic Surveillance

Making his case for the restrictions, Amodei wrote that using AI systems for “mass domestic surveillance is incompatible with democratic values.” He also warned that “AI-driven mass surveillance presents serious, novel risks to our fundamental liberties.” And he further argued that the law has not kept pace with AI’s expanding capabilities. That has left room for practices that may be technically legal yet deeply troubling:

For example, under current law, the government can purchase detailed records of Americans’ movements, web browsing, and associations from public sources without obtaining a warrant, a practice the Intelligence Community has acknowledged raises privacy concerns and that has generated bipartisan opposition in Congress. Powerful AI makes it possible to assemble this scattered, individually innocuous data into a comprehensive picture of any person’s life — automatically and at massive scale.

The warning lands amid broader concerns about federal data consolidation. Last May, President Donald Trump’s administration contracted with Palantir Technologies to help link disparate federal datasets into a centralized database on Americans. The debate also unfolds as Congress approaches another FISA reauthorization deadline, with the administration reportedly seeking an extension that would not impose warrant requirements on surveillance queries of Americans’ communications. Additionally, the administration expanded its “domestic terrorism” directive. Issued last September, it broadened policing and surveillance authorities into a wide range of activity and speech.

Autonomous Killing

As for the second problematic AI use case, Amodei acknowledged that partially autonomous systems already shape modern battlefields. He noted that systems used in Ukraine, where drones and AI-assisted targeting tools help identify and track enemy positions, play what he calls a “vital” role in “the defense of democracy.” Yet, in his view, existing models are nowhere near reliable enough to be allowed to pull the trigger on their own. Per the statement:

Today, frontier AI systems are simply not reliable enough to power fully autonomous weapons. We will not knowingly provide a product that puts America’s warfighters and civilians at risk. 

He added that fully autonomous systems “need to be deployed with proper guardrails, which don’t exist today.” The DOD refused the company’s offer to collaborate on improving the systems, said the executive.

The warning comes as AI-assisted targeting has been expanding in recent conflicts. For instance, reporting from Israel’s war in Gaza described the use of AI systems such as “Lavender” and “The Gospel,” which helped generate target lists at scale. Investigations by +972 Magazine and other outlets alleged that these systems accelerated strike approvals and lowered thresholds for human review, resulting in mass civilian casualties.

Unrestricted Access — Or Else

Defense Secretary Pete Hegseth swiftly responded with open hostility.

“This week, Anthropic delivered a master class in arrogance and betrayal,” he wrote on Friday. The secretary called the company’s stance “a textbook case of how not to do business with the United States Government or the Pentagon.”

“Our position has never wavered and will never waver,” he declared, adding that

the Department of War must have full, unrestricted access to Anthropic’s models for every LAWFUL purpose in defense of the Republic.

He accused Anthropic and its CEO of “duplicity,” dismissing their objections:

Cloaked in the sanctimonious rhetoric of “effective altruism,” they have attempted to strong-arm the United States military into submission — a cowardly act of corporate virtue-signaling that places Silicon Valley ideology above American lives.

He claimed the company was attempting to “seize veto power over the operational decisions of the United States military.”

Hegseth also announced that he was designating Anthropic a “Supply-Chain Risk to National Security,” writing:

Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.

The company will have up to six months to continue services during a transition “to a better and more patriotic service.”

Hegseth called the decision “final” and concluded:

America’s warfighters will never be held hostage by the ideological whims of Big Tech.

Anthropic replied on the same day, saying, in part:

We have tried in good faith to reach an agreement with the Department of War, making clear that we support all lawful uses of AI for national security aside from the two narrow exceptions above.

“No amount of intimidation or punishment from the Department of War will change our position,” Amodei promised. He added that the company would challenge any formal designation in court.

