Anthropic sues US over blacklisting; White House calls firm "radical left, woke"

March 11, 2026

Here's something that caught my attention: Anthropic is now suing the U.S. government over a blacklisting that feels pretty heavy-handed. According to Jon Brodkin writing in TechCrunch, the AI firm argues that the Trump White House retaliated after it refused to allow its Claude AI models to be used for autonomous warfare or mass surveillance. The lawsuit claims that, despite previous agreements, the administration abruptly blacklisted Anthropic, labeling it a "Supply-Chain Risk" and cutting off its military contracts. And here's where it gets interesting: Anthropic says the move violates its First Amendment right to speak out about AI safety concerns, and that the designation process wasn't even carried out properly, according to Brodkin. So what does this actually mean? At bottom, it's a fight over free speech, government retaliation, and how far the state can go in controlling AI technology. And honestly, it could set a major precedent for how tech firms push back when they're targeted by political decisions.

Anthropic sued the Trump administration yesterday in an attempt to reverse the government's decision to blacklist its technology. Anthropic argues that it exercised its First Amendment rights by refusing to let its Claude AI models be used for autonomous warfare and mass surveillance of Americans and that the government blacklisted it in retaliation.

"When Anthropic held fast to its judgment that Claude cannot safely or reliably be used for autonomous lethal warfare and mass surveillance of Americans, the President directed every federal agency to 'IMMEDIATELY CEASE all use of Anthropic's technology'—even though the Department of War had previously agreed to those same conditions," Anthropic said in a lawsuit in US District Court for the Northern District of California. "Hours later, the Secretary of War [Pete Hegseth] directed his Department to designate Anthropic a 'Supply-Chain Risk to National Security,' and further directed that 'effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.'"

Anthropic said the First Amendment gives it "the right to express its views—both publicly and to the government—about the limitations of its own AI services and important issues of AI safety." Anthropic further argued that the process for designating it a supply chain risk did not comply with the procedures mandated by Congress. The supply chain risk designation is supposed to be used only to protect against risks that an adversary may sabotage systems used for national security, the lawsuit said.

