Judge offers clearest look yet at stakes in Anthropic vs Pentagon

Anthropic News

Business Insider tells the global tech, finance, stock market, media, economy, lifestyle, real estate, AI and innovative stories you want to know.

On Tuesday, lawyers for Anthropic and the Department of Justice met in a San Francisco courtroom to argue over the AI company's request to block the Pentagon from labeling it a national security risk.

Before the hearing began, Judge Rita Lin read prepared remarks that broke down the complex case, and what's at stake, in unusually clear terms. In the process, she ripped into the Pentagon's actions, saying they look like an effort to "cripple" a company that went public with a contract dispute.

We're sharing her remarks in full because they get to the heart of a fight that could change the AI landscape. A ruling is expected any day now. Here is what she said, in full:

"Good afternoon to both of you. Yesterday, I disclosed a list of questions that I asked counsel to be prepared to answer today. Before we go down that list, I thought it might be helpful for the attorneys to hear kind of a general overview of my tentative thoughts on the case so far, and you're welcome to sit down for that if you'd like. Then, I'll invite you back up to address the questions.

"I will say that I do think this case touches on an important debate. On the one hand, Anthropic is saying that its AI product, Claude, is not safe to use for autonomous lethal weapons and domestic mass surveillance. Anthropic's position is that if the government wants to use its technology, the government has to agree not to use it for those purposes. On the other hand, the Department of War is saying that military commanders have to decide what is safe for its AI to do, not a private company.

"It's a fascinating public policy debate, and it's not my role to decide who's right in that debate; that is Secretary Hegseth's call. The Department of War decides what AI product it wants to use and buy. And everyone, including Anthropic, agrees that the Department of War is free to stop using Claude and look for a more permissive AI vendor.

"I don't see that as being what this case is about. I see the question in this case as being a very different one, which is whether the government violated the law when it went beyond that.

"After Anthropic went public with this contracting dispute, defendants seemed to have a pretty big reaction to that. They took three actions that are the subject of this lawsuit. First, the president announced that every federal agency, not just the Department of War, would immediately ban Anthropic from ever having another government contract. So that would include the National Endowment for the Arts using Claude to design its website: not allowed.

"Second, Secretary Hegseth announced that anyone who wants to do business with the US military has to sever their commercial relationship with Anthropic. So, if a company uses Claude to run a customer service chatbot, now it can't do any defense work.

"Third, the Department of War designated Anthropic as a 'supply chain risk.' That label applies to adversaries of the US government who may sabotage its technology systems. It's typically directed at foreign intelligence, terrorists, or other hostile actors.

"What is troubling to me about these three actions is that they don't really seem to be tailored to the stated national security concern. If the worry is about the integrity of the operational chain of command, DOW could just stop using Claude. It looks like defendants went further than that because they were trying to punish Anthropic.

"One of the amicus briefs used the term 'attempted corporate murder.' I don't know if it's murder, but it looks like an attempt to cripple Anthropic. And specifically, my concern is whether Anthropic is being punished for criticizing the government's contracting position in the press.

"Defendants say they were doing this because Anthropic's 'sanctimonious rhetoric' was an attempt to 'strong-arm the government.' DOW's records say that it designated Anthropic as a supply chain risk because it was 'hostile in the press.'

"So it looks like DOW is punishing Anthropic for trying to bring public scrutiny to this contracting dispute, which, of course, would be a violation of the First Amendment. So I have a lot of concern about that, and I would like to hear more from the government about that.

"I also have a lot of questions about, one, whether Congress gave defendants the authority to do this in the first place, and two, whether defendants violated Anthropic's due process rights by not giving them notice and an opportunity to respond.

"The questions I put out yesterday really go more to those latter two topics. So, I want to start going through those questions, but I will just say that at the end of the questions, I'll give both parties an opportunity to address the court. You can give me your reaction to the tentative thoughts that I gave you, and you can also just let me know anything else you think is important that I know about the case before I take it under submission.

"So let me just invite counsel back up to the podiums, and we'll just go through the questions."



Similar News: You can also read news stories similar to this one that we have collected from other news sources.

• Anthropic and Pentagon head to court in legal spat over supply chain risk label: Artificial intelligence company Anthropic is asking a federal judge on Tuesday to temporarily halt the Pentagon's "unprecedented and stigmatizing" designation of the company as a supply chain risk.

• Anthropic and Pentagon head to court as AI firm seeks end to 'stigmatizing' supply chain risk: SAN FRANCISCO (AP): Artificial intelligence company Anthropic is asking a federal judge on Tuesday to temporarily halt the Pentagon's "unprecedented and stigmatizing" designation.

• Judge says it looks like the Pentagon tried to 'cripple Anthropic' (Business Insider)

• Anthropic sues Pentagon to remove 'stigmatizing' AI supply-chain risk label: San Francisco-based AI company Anthropic is asking a federal judge on Tuesday to temporarily halt the Pentagon's "unprecedented and stigmatizing" designation of the company as a supply chain risk.

• Pentagon's 'Attempt to Cripple' Anthropic Is Troubling, Judge Says: During a hearing Tuesday, a district court judge questioned the Department of Defense's motivations for labeling the Claude AI developer a supply-chain risk.

• Judge suggests Pentagon's 'supply-chain risk' label for Anthropic attempts to 'cripple' company: A judge said it appears as though the government's decision to label Anthropic a 'supply-chain risk' is an 'attempt to cripple' the company.


