A US judge did not mince her words in a ruling that described the US administration’s designation of Anthropic as a supply chain risk as ‘arbitrary and capricious’.
Anthropic has won the first round of the court case it has taken against the US administration's ban on the use of its products in government, with US district judge Rita F Lin yesterday (26 March) issuing a preliminary injunction pausing the planned ban on all use of Claude products. The administration now has seven days to appeal the judgement.
Anthropic drew the ire of the US administration after a standoff with the Pentagon, where Anthropic refused to change its safeguards related to using its AI for fully autonomous weapons, or for mass surveillance of US citizens.
Anthropic confirmed on 5 March that it received a letter from the Department of Defense saying it had been designated a ‘supply chain risk’ by the US administration, and said it had no choice but to challenge the decision in the courts.
Many in Silicon Valley supported Anthropic's relatively principled stand, and general users sent Claude to the top of Apple's US free-download chart at the time – beating OpenAI's ChatGPT for the first time.
The US ‘supply chain risk’ designation was seen by most as a way of punishing Anthropic for not bowing to government pressure, and now a district judge has backed that premise and granted a temporary injunction on the ban.
“These broad measures do not appear to be directed at the government’s stated national security interests,” the judge said in her ruling. “If the concern is the integrity of the operational chain of command, the Department of War [sic] could just stop using Claude. Instead, these measures appear designed to punish Anthropic.
“One of the amicus briefs described these measures as ‘attempted corporate murder’. They might not be murder, but the evidence shows that they would cripple Anthropic. The record supports an inference that Anthropic is being punished for criticising the government’s contracting position in the press.”
She continued: “Punishing Anthropic for bringing public scrutiny to the government’s contracting position is classic illegal First Amendment retaliation. Moreover, Defendants’ [US government] designation of Anthropic as a ‘supply chain risk’ is likely both contrary to law, and arbitrary and capricious.”
“We’re grateful to the court for moving swiftly, and pleased it agrees Anthropic is likely to succeed on the merits,” an Anthropic spokesperson told SiliconRepublic.com. “While this case was necessary to protect Anthropic, our customers and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI.”