US court won’t pause Anthropic ban, but wants case expedited

A Washington DC appeals court has declined to pause the US administration’s Anthropic ban, but recommended that the case be expedited.

Anthropic won its first round in court on 26 March, when a district judge granted a temporary injunction against the US administration’s decision to designate the Claude creator a ‘supply chain risk’, something normally reserved for foreign actors.

However, last night the Pentagon succeeded in a related but distinct case, as a Washington DC appeals court declined to pause the effective ‘ban’ on government use of Anthropic products. The court did, though, recognise the likely damage caused to Anthropic, and recommended the case be expedited.

The court substantially sided with the US administration in its order, saying: “In our view, the equitable balance here cuts in favour of the government. On one side is a relatively contained risk of financial harm to a single private company. On the other side is judicial management of how, and through whom, the Department of War [sic] secures vital AI technology during an active military conflict. For that reason, we deny Anthropic’s motion for a stay pending review on the merits.”

However, the court also recognised the potential harms that were being done to Anthropic and recommended the case be expedited: “Nonetheless, because Anthropic raises substantial challenges to the determination and will likely suffer some irreparable harm during the pendency of this litigation, we agree with Anthropic that substantial expedition is warranted.”

Anthropic’s legal team had requested that expedition as an alternative, should its motion for a stay prove unsuccessful, and the AI company welcomed that element of the order.

“We’re grateful the court recognised these issues need to be resolved quickly and remain confident the courts will ultimately agree that these supply chain designations were unlawful,” an Anthropic spokesperson told SiliconRepublic.com.

“While this case was necessary to protect Anthropic, our customers and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI.”

The judgement also found that “Anthropic’s petition raises novel and difficult questions, including what counts as a supply-chain risk under section 4713 and what qualifies as an urgent national-security interest justifying the use of truncated statutory procedures”, and that will be the fundamental question as the case proceeds.

When granting the temporary injunction against the ban last month, US district judge Rita F Lin found that: “These broad measures do not appear to be directed at the government’s stated national security interests. If the concern is the integrity of the operational chain of command, the Department of War [sic] could just stop using Claude. Instead, these measures appear designed to punish Anthropic.”

It’s a view held by many. Anthropic drew the ire of the US administration after a standoff with the Pentagon, in which Anthropic refused to change its safeguards against using its AI for fully autonomous weapons or for mass surveillance of US citizens. This relatively ethical stance in the face of huge pressure from the US administration has earned the company many defenders, and indeed a slew of new customers.

Project Glasswing

Anthropic again flexed its ethics and safety chops this week as it declined to release its powerful new Claude Mythos model to the public, as many fear the consequences of it falling into the hands of bad actors.

Instead, its Project Glasswing will bring together leading businesses, including Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JP Morgan Chase, the Linux Foundation, Microsoft, Nvidia and Palo Alto Networks, allowing them to access the Mythos preview (released on 7 April) to boost their cyber defences.

According to Anthropic, its unreleased Claude Mythos has already found thousands of high-severity vulnerabilities, including some in every major operating system and web browser.

Anthropic’s Mythos preview is highly capable of generating exploits. In its research, the company noted that Mythos developed working exploits in 181 of several hundred attempts, while Opus 4.6 had a near 0pc success rate.

“AI models have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities,” said Anthropic, which has promised to share learnings from Project Glasswing to benefit the wider industry.
