Tech

Google clamps down on Antigravity ‘malicious usage’, cutting off OpenClaw users in sweeping ToS enforcement move


Google caused controversy among some developers over the weekend and into Monday, February 23, after restricting their use of its new Antigravity “vibe coding” platform, alleging “malicious usage.”

Some users who had been running the open-source autonomous AI agent OpenClaw in conjunction with agents built on Antigravity, as well as those who had connected OpenClaw agents to their Gmail accounts, claimed on social media that they lost access to their Google accounts.

According to Google, these users had been using Antigravity to access a larger number of Gemini tokens via third-party platforms like OpenClaw, overwhelming the system for other Antigravity customers.

This move has cut off several users, underscoring the architectural and trust issues that can arise with OpenClaw. The timing of Google’s crackdown is particularly pointed. Just one week ago, on February 15, OpenAI CEO Sam Altman announced that OpenClaw creator Peter Steinberger had joined OpenAI to lead its “next generation of personal agents.” While OpenClaw remains an open-source project under an independent foundation, it is now financially backed and strategically guided by Google’s primary rival.


By cutting off OpenClaw’s access to Antigravity, Google isn’t just protecting its server load; it is effectively severing a pipeline that allows an OpenAI-adjacent tool to leverage Google’s most advanced Gemini models.

Varun Mohan, a Google DeepMind engineer and the founder and former CEO of Windsurf, said in an X post that the company noticed “malicious usage” that led to service degradation.

“We’ve been seeing a massive increase in malicious usage of the Antigravity backend that has tremendously degraded the quality of service for our users. We needed to find a path to quickly shut off access to these users that are not using the product as intended. We understand that a subset of these users were not aware that this was against our ToS [Terms of Service] and will get a path for them to come back on but we have limited capacity and want to be fair to our actual users,” the post said. 

A Google DeepMind spokesperson told VentureBeat that the move is not to permanently ban the use of Antigravity to access third-party platforms, but to align its use with the platform’s terms of service.   


Unsurprisingly, Google’s move has caused a furor among OpenClaw users, including creator Peter Steinberger, who announced that OpenClaw will remove Google support as a result.

Infrastructure and connection uncertainty

OpenClaw emerged as a way for individual users to run shell commands and access local files, fulfilling a major promise of AI agents: efficiently running workflows for users.

But, as VentureBeat has frequently pointed out, it can often run into security and guardrail issues. There are companies building ways for enterprise customers to access OpenClaw securely and with a governance layer, though OpenClaw is so new that we should expect more announcements soon.

However, Google’s move was not framed as a security issue but rather as one of access and runtime, further showing that there is still significant uncertainty when users want to bring something like OpenClaw into their workflows.


This is not the first time developers and power users of agentic AI have found their access curtailed. Last year, Anthropic throttled access to Claude Code after the company claimed some users were abusing the system by running it 24/7.

What this does highlight is the disconnect between companies like Google and OpenClaw users. OpenClaw offered many interesting possibilities for creating workflows with agents. However, because it is continually evolving, users may inadvertently run afoul of ToS or rate limits. 

Mohan said Google is working to bring the banned users back, but whether this means the company will amend its ToS or figure out a secure connection between OpenClaw agents and Antigravity models remains to be seen. 

For developers, the message is clear: the era of “bring your own agent” to a frontier model is ending. Providers are now prioritizing vertically integrated experiences where they can capture 100% of the telemetry and subscription revenue, often at the expense of the open-source interoperability that defined the early days of the LLM boom.


Affected users

Several users said on both Y Combinator’s Hacker News forum and X that they no longer had access to their Google accounts after running OpenClaw instances with certain Google products.

Google’s move mirrors a broader industry shift toward “walled garden” agent ecosystems. Earlier this year, Anthropic introduced “client fingerprinting” to ensure that its Claude Code environment remains the exclusive interface for its models, effectively locking out third-party wrappers like OpenClaw.

Some have said they will no longer use Google or Gemini for their projects. Right now, people who still want to keep using Antigravity will need to wait until Google figures out a way for them to use OpenClaw and access Gemini tokens in a manner Google deems “fair.” 

Google DeepMind reiterated that it had only cut access to Antigravity, not to other Google applications. 


Conclusion: the enterprise takeaway

For enterprise technical decision-makers, the “Antigravity Ban” serves as a definitive case study in the risks of agentic dependency. As the industry moves from chatbots to autonomous agents, the following realities must now dictate strategy:

  • Platform fragility is the new normal: The sudden lockout of $250/month “Ultra” users proves that even high-paying enterprise customers have little leverage when a provider decides to change its “fair use” definitions. Relying on OAuth-based third-party wrappers for core business logic is now a high-risk gamble.

  • The rise of local-first governance: With OpenClaw moving toward an OpenAI-backed foundation and Google/Anthropic tightening their clouds, enterprises should prioritize agent frameworks that can run “local-first” or within VPCs. The “token loophole” that OpenClaw exploited is being closed; future agentic scale will require direct, high-cost API contracts rather than subsidized consumer seats.

  • Account portability as a requirement: The fact that users “lost access to their Google accounts” underscores the danger of bundling development environments with primary identity providers. Decision-makers should decouple AI development from core corporate identity (SSO) where possible to avoid a single ToS violation paralyzing an entire team’s communications.

Ultimately, the Antigravity incident marks the end of the “Wild West” for AI agents. As Google and OpenAI stake their claims, the enterprise must choose between the stability of the walled garden or the complexity (and cost) of truly independent, self-hosted infrastructure.


Take a peek into Apple's efforts to bring Mac mini assembly and chip fabrication stateside


Apple is working to bring more manufacturing to the United States, including chip fabrication and Mac mini assembly, but it’s a slow-moving project.

TSMC is building several fabs near Phoenix, Arizona

There is increasing pressure to bring more of Apple’s manufacturing and assembly stateside. However, even with $600 billion in investments, what can be done in the US is insignificant compared to the global supply chain.
The Wall Street Journal got special access to various facilities in the United States to examine how Apple is repatriating its supply chain. Executives like COO Sabih Khan joined tours of the TSMC Arizona plant, the Foxconn Houston facility, and others.


Nikon’s Nikkor Z 70-200mm f/2.8 VR S II is official, and improves on first-gen version in several key areas



  • New Nikkor Z 70-200mm f/2.8 VR S II weighs just 998g
  • It promises quieter, faster autofocus and six stops of stabilization
  • Available from March, costing £2,999 / $2,999 / AU$5,399

Nikon has announced the Nikkor Z 70-200mm f/2.8 VR S II, a second-gen overhaul of its telephoto zoom promising class-leading weight savings, a faster autofocus system and a redesigned optical formula – all while retaining the f/2.8 maximum aperture that makes this type of lens so useful in low light. The new lens will be available from March 2026, priced at $2,999 / £2,999 / AU$5,399.

The headline figure is its weight. At just 998g (with the tripod collar removed), the new lens is 362g lighter than the original Nikkor Z 70-200mm f/2.8 VR S and, according to Nikon, the lightest lens among 70-200mm f/2.8 options for full-frame mirrorless cameras.


Today’s NYT Mini Crossword Answers for Feb. 24


Looking for the most recent Mini Crossword answer? Click here for today’s Mini Crossword hints, as well as our daily answers and hints for The New York Times Wordle, Strands, Connections and Connections: Sports Edition puzzles.


Need some help with today’s Mini Crossword? I thought 5-Down was very tricky, and not really representative of the clue, either. Read on for all the answers. And if you could use some hints and guidance for daily solving, check out our Mini Crossword tips.

If you’re looking for today’s Wordle, Connections, Connections: Sports Edition and Strands answers, you can visit CNET’s NYT puzzle hints page.


Read more: Tips and Tricks for Solving The New York Times Mini Crossword

Let’s get to those Mini Crossword clues and answers.


The completed NYT Mini Crossword puzzle for Feb. 24, 2026.


NYT/Screenshot by CNET

Mini across clues and answers

1A clue: Goosebumps-inducing
Answer: SCARY

6A clue: Buddy, informally
Answer: HOMIE

7A clue: Rub off, as pencil markings
Answer: ERASE


8A clue: Enjoys a quiet weekend morning, perhaps
Answer: LAZES

9A clue: David Szalay novel that won the 2025 Booker Prize
Answer: FLESH

Mini down clues and answers

1D clue: Section of a bookcase
Answer: SHELF

2D clue: Color similar to salmon that’s also named for a sea creature
Answer: CORAL


3D clue: Leave speechless
Answer: AMAZE

4D clue: Gets out of bed
Answer: RISES

5D clue: “Uff-da!”
Answer: YEESH



How Copyright Litigation Over Anne Frank’s Diary Could Impact The Fate Of VPNs In The EU


from the copyright-gone-mad dept

“The Diary of a Young Girl” is a Dutch-language diary written by the young Jewish writer Anne Frank while she was in hiding for two years with her family during the Nazi occupation of the Netherlands. Although the diary and Anne Frank’s death in the Bergen-Belsen concentration camp are well known, few are aware that the text has a complicated copyright history – one that could have important implications for the legal status and use of Virtual Private Networks (VPNs) in the EU. TorrentFreak explains the copyright background:

These copyrights are controlled by the Swiss-based Anne Frank Fonds, which was the sole heir of Anne’s father, Otto Frank. The Fonds states that many print versions of the diary remain protected for decades, and even the manuscripts are not freely available everywhere.

In the Netherlands, for example, certain sections of the manuscripts remain protected by copyright until 2037, even though they have entered the public domain in neighboring countries like Belgium.

A separate foundation, the Netherlands-based Anne Frank Stichting, wanted to publish a scholarly edition of Anne Frank’s writing, at least in those parts of the world where her diary was in the public domain:

To navigate these conflicting laws, the Dutch Anne Frank Stichting published a scholarly edition online using “state-of-the-art” geo-blocking to prevent Dutch residents from accessing the site. Visitors from the Netherlands and other countries where the work is protected are met with a clear message, informing them about these access restrictions.

However, the Anne Frank Fonds was unhappy with this approach, and took legal action. Its argument was that such geo-blocking could be circumvented with VPNs, and so its copyrights in the Netherlands could be infringed upon by those using VPNs. The lower courts in the Netherlands dismissed this argument, and the case is now before the Dutch Supreme Court. Beyond the specifics of the Anne Frank scholarly edition, there are important issues regarding the use of VPNs to get around geo-blocking. Because of the potential knock-on effect the ruling in this case will have on EU law, the Dutch Supreme Court has asked for guidance from the EU’s top court, the Court of Justice of the European Union (CJEU).
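Mechanically, geo-blocking of this kind maps a visitor’s IP address to a country and refuses service accordingly, which is exactly what a VPN sidesteps: the site sees the VPN server’s address, not the user’s. A minimal sketch, with an entirely invented IP-prefix table (real deployments resolve addresses against a GeoIP database):

```python
# Minimal sketch of IP-based geo-blocking. The prefix table below is
# made up for illustration; real sites query a GeoIP database.
GEOIP = {
    "145.53.": "NL",   # hypothetical Dutch ISP range
    "81.164.": "BE",   # hypothetical Belgian range
    "185.220.": "DE",  # hypothetical range of a VPN server in Germany
}
BLOCKED = {"NL"}  # countries where the work is still under copyright

def country_of(ip):
    """Crude longest-prefix lookup against the toy GeoIP table."""
    for prefix, cc in GEOIP.items():
        if ip.startswith(prefix):
            return cc
    return "??"

def may_access(ip):
    return country_of(ip) not in BLOCKED

print(may_access("145.53.10.7"))   # Dutch visitor: blocked -> False
print(may_access("185.220.4.2"))   # same user via German VPN -> True
```

The sketch makes the legal question concrete: the publisher only ever sees the VPN’s address, so circumvention happens entirely on the user’s side.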


The CJEU has yet to rule on the issues raised. But one of the court’s advisors, Advocate General Rantos, has published a preliminary opinion, as is normal in such cases. Although that advice is not binding on the CJEU, it often provides some indication as to how the court may eventually decide. On the main issue of whether the ability of people to circumvent geo-blocking is a problem, Rantos writes:

the fact that users manage to circumvent a geo-blocking measure put in place to restrict access to a protected work does not, in itself, mean that the entity that put the geo-blocking in place communicates that work to the public in a territory where access to it is supposed to be blocked. Such an interpretation would make it impossible to manage copyright on the internet on a territorial basis and would mean that any communication to the public on the internet would be global.

Moreover:

As the [European] Commission pointed out in its written observations, the holder of an exclusive right in a work does not have the right to authorise or prohibit, on the basis of the right granted to it in one Member State, communication to the public in another Member State in which that right has ceased to have effect.

Or, more succinctly: “service providers in the public domain country cannot be subject to unreasonable requirements”. That’s a good, common-sense view. But perhaps just as important is the following comment by Rantos regarding the use of VPNs to circumvent geo-blocking:

as the Commission points out in its observations, VPN services are legally accessible technical services which users may, however, use for unlawful purposes. The mere fact that those or similar services may be used for such purposes is not sufficient to establish that the service providers themselves communicate the protected work to the public. It would be different if those service providers actively encouraged the unlawful use of their services.

That’s an important point at a time when VPNs are under attack from some governments because of concerns about possible copyright infringement by those using them.


The hope has to be that the CJEU will agree with its Advocate General’s sensible and fair analysis, and will rule accordingly. But there is another important aspect to this story. The basic issue is that the Anne Frank Stichting wants to make its scholarly edition of Anne Frank’s diary available as widely as possible. That seems a laudable aim, since it will increase understanding and appreciation of the young woman’s remarkable diary by publishing an academically rigorous version. And yet the Anne Frank Fonds has taken legal action to stop that move, on the grounds that it would represent an infringement of its intellectual monopoly in some parts of Frank’s work, in some parts of the world. The current dispute is another clear example of how copyright has become for some an end in itself, more important than the things that it is supposed to promote.

Follow me @glynmoody on Mastodon and on Bluesky. Republished from Walled Culture.

Filed Under: anne frank, anne frank’s diary, cjeu, copyright, diary of anne frank, geoblocking, netherlands, public domain, vpns

Companies: anne frank fonds, anne frank stichting



NASA’s moon rocket is about to leave the launchpad, but it ain’t going skyward


The four astronauts preparing to end a five-decade gap in crewed lunar flights will have to wait until at least April before they can begin the Artemis II mission.

During the SLS rocket’s second wet dress rehearsal last weekend, NASA discovered an issue with the flow of helium to the rocket’s upper stage.

Engineers decided that to fix the problem, the massive rocket, which is currently on the launchpad at the Kennedy Space Center in Florida, will have to be transported back to the Vehicle Assembly Building (VAB). That four-mile rollback to the VAB is expected to take place on Tuesday, February 24.

On Monday, NASA confirmed that as a result of the latest issue, the rocket will no longer be launching on the recently announced March 6 target date, adding that the Artemis II mission will now lift off “no earlier than April 2026.”


NASA added: “The quick work to begin preparations for rolling the rocket and spacecraft back to the VAB potentially preserves the April launch window, pending the outcome of data findings, repair efforts, and how the schedule comes to fruition in the coming days and weeks.”

The Artemis II crew members — NASA’s Victor Glover, Reid Wiseman, and Christina Koch, along with the Canadian Space Agency’s Jeremy Hansen — left quarantine on Saturday evening and remain at NASA’s facility in Houston, Texas.

NASA originally targeted February 8 for the launch, but another issue in the first wet dress rehearsal prompted a delay, with NASA then announcing March 6 as a possible launch date. But that, too, has now been disregarded, with the team currently looking to launch in April.

The much-anticipated mission will involve the crew performing detailed tests on the Orion spacecraft’s systems while flying around the moon, with a smooth journey paving the way for a crewed lunar landing in the Artemis III mission, which could take place before the end of this decade.


Interested in following the 10-day mission when it finally gets underway? NASA recently shared a fascinating video revealing exactly how the flight is expected to unfold.


Summer Game Fest runs from June 5-8

Published

on

It’s getting to be that time of year again. Summer Game Fest returns June 5-8. The Live Kickoff show will once again be hosted by Geoff Keighley and takes place on June 5 at 5PM ET. This is where we’ll see all of those juicy reveals and trailers.

The opening event will be streamed globally on just about every digital platform, including YouTube, Twitch, X and even Steam. Those in the Los Angeles area will be able to pick up tickets for the live show sometime in the spring.

The kickoff event is just the beginning. There’s something called Play Days, which is an expo in downtown LA produced by iam8bit. This invite-only event promises “immersive exhibits and hands-on experiences from the industry’s leading publishers and developers.” Coverage of this will be shared across digital and social platforms.

There is, of course, another livestream scheduled for immediately after the kickoff. Day of the Devs: SGF Edition should provide us with even more trailers and reveals, this time for indie games.


Finally, there’s a “thought leadership event” on June 8 that’s primarily for developers and publishers. Game Business Live “brings together top industry voices on one stage for insightful discussions on key changes, challenges and opportunities shaping the global video game industry.”

We’ll be covering the event live and will have all of those trailers ready to go. After all, that’s pretty much the main reason people watch these things.


Water, power, and transparency: Amazon’s $12B data center deal signals a new era of accountability


Inside an Amazon data center. (Amazon Photo / Noah Berger)

Amazon on Monday announced a $12 billion data center project in Louisiana in which the company vowed to pay its own way for energy and other infrastructure.

The deal highlights the unwritten expectations now placed on tech giants to cover upfront power costs and other impacts. Such pledges have become commonplace as leaders at the state and national levels move to codify these commitments with new laws.

Amazon’s Louisiana project includes a deal with Southwestern Electric Power Company (SWEPCO) to pay for “energy infrastructure and upgrades required to serve the data centers, which also strengthens overall grid reliability for all SWEPCO customers. In addition, Amazon has invested in solar energy projects in Louisiana, bringing up to 200 [megawatts] of new carbon-free energy onto the grid,” the company said.

Amazon is also pledging to use “only verified surplus water” — which refers to water that is otherwise deemed unneeded by the community where the data centers are based.

Data centers use water to cool the electronics that produce heat while computing. Amazon expects to cool its machines mostly with air, tapping into water cooling for less than 13% of the year, during the peak of summer heat.


The company will also spend up to $400 million to improve water infrastructure, plus an additional $250,000 earmarked for the Amazon Northwest Louisiana Community Fund. The philanthropic dollars will help pay for STEM education, sustainability efforts, health and other local needs.

“Amazon is making a long-term commitment to Louisiana because our state delivers — prime sites, strong infrastructure and a skilled, hard-working workforce ready to support the next generation of technological innovation,” Louisiana Gov. Jeff Landry said in a statement.

New rules of engagement

Amazon’s deal in Louisiana comes amid mounting pushback to data centers from local communities and lawmakers.

On Monday, Sen. Bernie Sanders called again for a moratorium on data center deployments, citing Denver’s move to temporarily ban new facilities. The Vermont senator called out data centers’ environmental impacts, as well as AI’s threat to jobs and overall risks to humanity.


Washington state, where Amazon is based, is among the areas pursuing legislation to control the impact of data centers on local communities, including their use of energy and water to run the computer hubs that underpin the internet and support the growing use of artificial intelligence.

The measure, House Bill 2515, passed the House last week and is now being considered by the Senate. The legislation includes public reporting requirements about sustainability impacts and projected energy use, bringing heightened transparency to a sector that has often expanded and operated in secrecy.

Meanwhile, tech companies like Amazon and fellow Seattle-area hyperscaler Microsoft are adjusting their approach as they spend heavily to build out the infrastructure needed to power their AI ambitions.

Microsoft last month made a good neighbor pledge for all of its new data centers, vowing to pay the company’s full power costs, reject local property tax breaks, replenish more water than it uses, train local workers, and invest in AI education and community programs.


“This sector worked one way in the past, and needs to work in some different ways going forward,” Microsoft President Brad Smith told GeekWire.

Pursuit of clean power

Amazon has committed to spending $200 billion this year on capital expenditures worldwide, predominantly for its Amazon Web Services cloud business. Microsoft expects to shell out up to $140 billion in capital expenses this fiscal year.

Both companies are racing to secure clean energy for their expansions. Beyond wind, solar and batteries, their strategies include new and existing nuclear facilities — and Microsoft is even eyeing fusion energy, an unproven but potentially transformative technology.

Data centers are expected to drive roughly half of the growth in U.S. energy demand by 2030, according to new data from the International Energy Agency. Solar will provide much of the supply, but so will natural gas, which contributes to the continued warming of the planet.


A new report from BloombergNEF found that Amazon, Microsoft, Meta and Google made nearly half of the world’s new clean energy deals last year. Amazon alone — which tied with Meta in making the most power purchase agreements — paid for nearly 10 gigawatts of energy globally. That’s about one-third of the power demand annually in California.

Overall, the volume of power purchase agreements declined for the first time in a decade as corporations in other sectors stepped back from the deals.

Since 2023, Amazon has annually bought enough clean energy to match its electricity use worldwide.

Last week, Microsoft announced that it, too, hit that benchmark in 2025. That doesn’t mean the companies are literally using only climate-friendly power — depending on when and where they operate, their data centers and operations will require fossil fuels while still supporting clean energy use globally.



Anthropic says DeepSeek, Moonshot, and MiniMax used 24,000 fake accounts to rip off Claude


Anthropic dropped a bombshell on the artificial intelligence industry Monday, publicly accusing three prominent Chinese AI laboratories — DeepSeek, Moonshot AI, and MiniMax — of orchestrating coordinated, industrial-scale campaigns to siphon capabilities from its Claude models using tens of thousands of fraudulent accounts.

The San Francisco-based company said the three labs collectively generated more than 16 million exchanges with Claude through approximately 24,000 fake accounts, all in violation of Anthropic’s terms of service and regional access restrictions. The campaigns, Anthropic said, are the most concrete and detailed public evidence to date of a practice that has haunted Silicon Valley for months: foreign competitors systematically using a technique called distillation to leapfrog years of research and billions of dollars in investment.

“These campaigns are growing in intensity and sophistication,” Anthropic wrote in a technical blog post published Monday. “The window to act is narrow, and the threat extends beyond any single company or region. Addressing it will require rapid, coordinated action among industry players, policymakers, and the global AI community.”

The disclosure marks a dramatic escalation in the simmering tensions between American and Chinese AI developers — and it arrives at a moment when Washington is actively debating whether to tighten or loosen export controls on the advanced chips that power AI training. Anthropic, led by CEO Dario Amodei, has been among the most vocal advocates for restricting chip sales to China, and the company explicitly connected Monday’s revelations to that policy fight.


How AI distillation went from obscure research technique to geopolitical flashpoint

To understand what Anthropic alleges, it helps to understand what distillation actually is — and how it evolved from an academic curiosity into the most contentious issue in the global AI race.

At its core, distillation is a process of extracting knowledge from a larger, more powerful AI model — the “teacher” — to create a smaller, more efficient one — the “student.” The student model learns not from raw data, but from the teacher’s outputs: its answers, reasoning patterns, and behaviors. Done correctly, the student can achieve performance remarkably close to the teacher’s while requiring a fraction of the compute to train.

As Anthropic itself acknowledged, distillation is “a widely used and legitimate training method.” Frontier AI labs, including Anthropic, routinely distill their own models to create smaller, cheaper versions for customers. But the same technique can be weaponized. A competitor can pose as a legitimate customer, bombard a frontier model with carefully crafted prompts, collect the outputs, and use those outputs to train a rival system — capturing capabilities that took years and hundreds of millions of dollars to develop.
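The teacher-student mechanic can be shown with a toy sketch. Everything here is invented for illustration — the one-parameter-pair “teacher” stands in for a frontier model’s API, and the numbers are arbitrary — but it captures the key point: the student never sees ground-truth data, only the teacher’s outputs, yet it recovers the teacher’s behavior.

```python
import math
import random

# Toy "teacher": a fixed model mapping a feature x to a probability.
# (Stand-in for a frontier model; the parameters 3.0 and -1.0 are arbitrary.)
def teacher(x):
    return 1.0 / (1.0 + math.exp(-(3.0 * x - 1.0)))

# Step 1: harvest the teacher's outputs ("soft labels") by querying it,
# just as a distillation campaign queries an API at scale.
random.seed(0)
inputs = [random.uniform(-2.0, 2.0) for _ in range(500)]
soft_labels = [teacher(x) for x in inputs]

# Step 2: train a "student" of the same form to mimic those outputs,
# using plain gradient descent on the cross-entropy loss.
w, b = 0.0, 0.0
lr = 0.5
for _ in range(2000):
    gw = gb = 0.0
    for x, t in zip(inputs, soft_labels):
        p = 1.0 / (1.0 + math.exp(-(w * x + b)))
        gw += (p - t) * x
        gb += p - t
    w -= lr * gw / len(inputs)
    b -= lr * gb / len(inputs)

# The student converges toward the teacher's parameters (3.0, -1.0)
# without ever seeing the data the teacher was trained on.
print(f"student: w={w:.2f}, b={b:.2f}")
```

Scaled up to billions of parameters and millions of prompts, this same loop is why harvested model outputs are treated as extracted intellectual property.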

The technique burst into public consciousness in January 2025 when DeepSeek released its R1 reasoning model, which appeared to match or approach the performance of leading American models at dramatically lower cost. Databricks CEO Ali Ghodsi captured the industry’s anxiety at the time, telling CNBC: “This distillation technique is just so extremely powerful and so extremely cheap, and it’s just available to anyone.” He predicted the technique would usher in an era of intense competition for large language models.


That prediction proved prescient. In the weeks following DeepSeek’s release, researchers at UC Berkeley said they recreated OpenAI’s reasoning model for just $450 in 19 hours. Researchers at Stanford and the University of Washington followed with their own version built in 26 minutes for under $50 in compute credits. The startup Hugging Face replicated OpenAI’s Deep Research feature as a 24-hour coding challenge. DeepSeek itself openly released a family of distilled models on Hugging Face — including versions built on top of Qwen and Llama architectures — under the permissive MIT license, with the model card explicitly stating that the DeepSeek-R1 series supports commercial use and allows for any modifications and derivative works, “including, but not limited to, distillation for training other LLMs.”

But what Anthropic described Monday goes far beyond academic replication or open-source experimentation. The company detailed what it characterized as deliberate, covert, and large-scale intellectual property extraction by well-resourced commercial laboratories operating under the jurisdiction of the Chinese government.

Anthropic traces 16 million fraudulent exchanges to researchers at DeepSeek, Moonshot, and MiniMax

Anthropic attributed each campaign “with high confidence” through IP address correlation, request metadata, infrastructure indicators, and corroboration from unnamed industry partners who observed the same actors on their own platforms. Each campaign specifically targeted what Anthropic described as Claude’s most differentiated capabilities: agentic reasoning, tool use, and coding.
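The correlation Anthropic describes amounts to clustering: accounts that share any identifying attribute — an IP address, a payment method — get linked into one group. A minimal sketch with hypothetical account records, using union-find to form the clusters:

```python
from collections import defaultdict

# Hypothetical account records: shared attributes (IPs, payment methods)
# become edges linking otherwise separate accounts.
accounts = [
    {"id": "acct1", "ip": "10.0.0.1", "card": "pay-A"},
    {"id": "acct2", "ip": "10.0.0.2", "card": "pay-A"},  # shares card with acct1
    {"id": "acct3", "ip": "10.0.0.2", "card": "pay-B"},  # shares IP with acct2
    {"id": "acct4", "ip": "10.0.0.9", "card": "pay-C"},  # unrelated
]

def cluster(accounts):
    """Connected components over shared attributes (union-find)."""
    parent = {a["id"]: a["id"] for a in accounts}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(x, y):
        parent[find(x)] = find(y)

    # Index accounts by each attribute, then link all accounts sharing one.
    by_attr = defaultdict(list)
    for a in accounts:
        by_attr[("ip", a["ip"])].append(a["id"])
        by_attr[("card", a["card"])].append(a["id"])
    for ids in by_attr.values():
        for other in ids[1:]:
            union(ids[0], other)

    groups = defaultdict(set)
    for a in accounts:
        groups[find(a["id"])].add(a["id"])
    return sorted(sorted(g) for g in groups.values())

print(cluster(accounts))  # [['acct1', 'acct2', 'acct3'], ['acct4']]
```

Transitive links are what make the method powerful: acct1 and acct3 never share an attribute directly, yet the chain through acct2 places all three in one operation.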

DeepSeek, the company that ignited the distillation debate, conducted what Anthropic described as the most technically sophisticated of the three operations, generating over 150,000 exchanges with Claude. Anthropic said DeepSeek’s prompts targeted reasoning capabilities, rubric-based grading tasks designed to make Claude function as a reward model for reinforcement learning, and — in a detail likely to draw particular political attention — the creation of “censorship-safe alternatives to policy sensitive queries.”


Anthropic alleged that DeepSeek “generated synchronized traffic across accounts” with “identical patterns, shared payment methods, and coordinated timing” that suggested load balancing to maximize throughput while evading detection. In one particularly notable technique, Anthropic said DeepSeek’s prompts “asked Claude to imagine and articulate the internal reasoning behind a completed response and write it out step by step — effectively generating chain-of-thought training data at scale.” The company also alleged it observed tasks in which Claude was used to generate alternatives to politically sensitive queries about “dissidents, party leaders, or authoritarianism,” likely to train DeepSeek’s own models to steer conversations away from censored topics. Anthropic said it was able to trace these accounts to specific researchers at the lab.

Moonshot AI, the Beijing-based creator of the Kimi models, ran the second-largest operation by volume at over 3.4 million exchanges. Anthropic said Moonshot targeted agentic reasoning and tool use, coding and data analysis, computer-use agent development, and computer vision. The company employed “hundreds of fraudulent accounts spanning multiple access pathways,” making the campaign harder to detect as a coordinated operation. Anthropic attributed the campaign through request metadata that “matched the public profiles of senior Moonshot staff.” In a later phase, Anthropic said, Moonshot adopted a more targeted approach, “attempting to extract and reconstruct Claude’s reasoning traces.”

MiniMax, the least publicly known of the three but the most prolific by volume, generated over 13 million exchanges — more than three-quarters of the total. Anthropic said MiniMax’s campaign focused on agentic coding, tool use, and orchestration. The company said it detected MiniMax’s campaign while it was still active, “before MiniMax released the model it was training,” giving Anthropic “unprecedented visibility into the life cycle of distillation attacks, from data generation through to model launch.” In a detail that underscores the urgency and opportunism Anthropic alleges, the company said that when it released a new model during MiniMax’s active campaign, MiniMax “pivoted within 24 hours, redirecting nearly half their traffic to capture capabilities from our latest system.”

How proxy networks and ‘hydra cluster’ architectures helped Chinese labs bypass Anthropic’s China ban

Anthropic does not currently offer commercial access to Claude in China, a policy it maintains for national security reasons. So how did these labs access the models at all?


The answer, Anthropic said, lies in commercial proxy services that resell access to Claude and other frontier AI models at scale. Anthropic described these services as running what it calls “hydra cluster” architectures — sprawling networks of fraudulent accounts that distribute traffic across Anthropic’s API and third-party cloud platforms. “The breadth of these networks means that there are no single points of failure,” Anthropic wrote. “When one account is banned, a new one takes its place.” In one case, Anthropic said, a single proxy network managed more than 20,000 fraudulent accounts simultaneously, mixing distillation traffic with unrelated customer requests to make detection harder.
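The resilience Anthropic describes comes from treating accounts as disposable. As a purely illustrative sketch (Anthropic has published no implementation details, and every name here is hypothetical), the "no single point of failure" pattern amounts to round-robin load balancing over a pool of credentials, where a banned account is marked dead and a fresh one is slotted in:

```python
import itertools

class HydraPool:
    """Toy illustration of the 'hydra cluster' pattern described above:
    traffic is spread round-robin across a pool of accounts, and a banned
    account is simply replaced, so no single ban disrupts the operation."""

    def __init__(self, accounts):
        self.accounts = list(accounts)
        self._rebuild_cycle()

    def _rebuild_cycle(self):
        # Iterate forever over the accounts that are still alive.
        live = [a for a in self.accounts if not a.get("banned")]
        self._cycle = itertools.cycle(live)

    def next_account(self):
        # Round-robin: each request goes out under a different credential.
        return next(self._cycle)

    def replace(self, banned_acct, fresh_acct):
        # "When one account is banned, a new one takes its place."
        banned_acct["banned"] = True
        self.accounts.append(fresh_acct)
        self._rebuild_cycle()
```

The point of the sketch is the asymmetry it creates for defenders: banning any one account costs the operator almost nothing, which is why Anthropic's countermeasures focus on correlating behavior across accounts rather than blocking them individually.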

The description suggests a mature and well-resourced infrastructure ecosystem dedicated to circumventing access controls — one that may serve many more clients than just the three labs Anthropic named.

Why Anthropic framed distillation as a national security crisis, not just an IP dispute

Anthropic did not treat this as a mere terms-of-service violation. The company embedded its technical disclosure within an explicit national security argument, warning that “illicitly distilled models lack necessary safeguards, creating significant national security risks.”

The company argued that models built through illicit distillation are “unlikely to retain” the safety guardrails that American companies build into their systems — protections designed to prevent AI from being used to develop bioweapons, carry out cyberattacks, or enable mass surveillance. “Foreign labs that distill American models can then feed these unprotected capabilities into military, intelligence, and surveillance systems,” Anthropic wrote, “enabling authoritarian governments to deploy frontier AI for offensive cyber operations, disinformation campaigns, and mass surveillance.”


This framing directly connects to the chip export control debate that Amodei has made a centerpiece of his public advocacy. In a detailed essay published in January 2025, Amodei argued that export controls are “the most important determinant of whether we end up in a unipolar or bipolar world” — a world where either only the U.S. and its allies possess the most powerful AI, or one where China achieves parity. He specifically noted at the time that he was “not taking any position on reports of distillation from Western models” and would “just take DeepSeek at their word that they trained it the way they said in the paper.”

Monday’s disclosure is a sharp departure from that earlier restraint. Anthropic now argues that distillation attacks “undermine” export controls “by allowing foreign labs, including those subject to the control of the Chinese Communist Party, to close the competitive advantage that export controls are designed to preserve through other means.” The company went further, asserting that “without visibility into these attacks, the apparently rapid advancements made by these labs are incorrectly taken as evidence that export controls are ineffective.” In other words, Anthropic is arguing that what some observers interpreted as proof that Chinese labs can innovate around chip restrictions was actually, in significant part, the result of stealing American capabilities.

The murky legal landscape around AI distillation may explain Anthropic’s political strategy

Anthropic’s decision to frame this as a national security issue rather than a legal dispute may reflect the difficult reality that intellectual property law offers limited recourse against distillation.

As a March 2025 analysis by the law firm Winston & Strawn noted, “the legal landscape surrounding AI distillation is unclear and evolving.” The firm’s attorneys observed that proving a copyright claim in this context would be challenging, since it remains unclear whether the outputs of AI models qualify as copyrightable creative expression. The U.S. Copyright Office affirmed in January 2025 that copyright protection requires human authorship, and that “mere provision of prompts does not render the outputs copyrightable.”


The legal picture is further complicated by the way frontier labs structure output ownership. OpenAI’s terms of use, for instance, assign ownership of model outputs to the user — meaning that even if a company can prove extraction occurred, it may not hold copyrights over the extracted data. Winston & Strawn noted that this dynamic means “even if OpenAI can present enough evidence to show that DeepSeek extracted data from its models, OpenAI likely does not have copyrights over the data.” The same logic would almost certainly apply to Anthropic’s outputs.

Contract law may offer a more promising avenue. Anthropic’s terms of service prohibit the kind of systematic extraction the company describes, and violation of those terms is a more straightforward legal claim than copyright infringement. But enforcing contractual terms against entities operating through proxy services and fraudulent accounts in a foreign jurisdiction presents its own formidable challenges.

This may explain why Anthropic chose the national security frame over a purely legal one. By positioning distillation attacks as threats to export control regimes and democratic security rather than as intellectual property disputes, Anthropic appeals to policymakers and regulators who have tools — sanctions, entity list designations, enhanced export restrictions — that go far beyond what civil litigation could achieve.

What Anthropic’s distillation crackdown means for every company running a frontier AI model

Anthropic outlined a multipronged defensive response. The company said it has built classifiers and behavioral fingerprinting systems designed to identify distillation attack patterns in API traffic, including detection of chain-of-thought elicitation used to construct reasoning training data. It is sharing technical indicators with other AI labs, cloud providers, and relevant authorities to build what it described as a more holistic picture of the distillation landscape. The company has also strengthened verification for educational accounts, security research programs, and startup organizations — the pathways most commonly exploited for setting up fraudulent accounts — and is developing model-level safeguards designed to reduce the usefulness of outputs for illicit distillation without degrading the experience for legitimate customers.
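Anthropic has not published how its classifiers work, but the "synchronized traffic across accounts" with "identical patterns" and "coordinated timing" it describes suggests one obvious family of heuristics: grouping requests by time window and prompt template, then flagging any group spanned by suspiciously many accounts. A minimal sketch, assuming a request log of `(account_id, timestamp, prompt_template)` tuples (all names here are hypothetical, not Anthropic's API):

```python
from collections import defaultdict

def flag_synchronized_accounts(requests, bucket_seconds=60, min_accounts=5):
    """Toy behavioral-fingerprinting heuristic, illustrative only.

    Buckets requests by (time window, prompt template) and flags any
    bucket hit by at least `min_accounts` distinct accounts -- i.e. many
    accounts sending the same template at the same time, the signature
    of coordinated distillation traffic rather than organic usage.
    """
    buckets = defaultdict(set)
    for account_id, timestamp, template in requests:
        window = int(timestamp // bucket_seconds)
        buckets[(window, template)].add(account_id)
    # Keep only buckets where enough distinct accounts coincided.
    return {key: accts for key, accts in buckets.items()
            if len(accts) >= min_accounts}
```

A real system would need far more signal (payment methods, infrastructure indicators, request metadata) to avoid flagging legitimate bursts, which is presumably why Anthropic pairs traffic analysis with account verification and cross-industry indicator sharing.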


But the company acknowledged that “no company can solve this alone,” calling for coordinated action across the industry, cloud providers, and policymakers.

The disclosure is likely to reverberate through multiple ongoing policy debates. In Congress, the bipartisan No DeepSeek on Government Devices Act has already been introduced. Federal agencies including NASA have banned DeepSeek from employee devices. And the broader question of chip export controls — which the Trump administration has been weighing amid competing pressures from Nvidia and national security hawks — now has a new and vivid data point.

For the AI industry’s technical decision-makers, the implications are immediate and practical. If Anthropic’s account is accurate, the proxy infrastructure enabling these attacks is vast, sophisticated, and adaptable — and it is not limited to targeting a single company. Every frontier AI lab with an API is a potential target. The era of treating model access as a simple commercial transaction may be coming to an end, replaced by one in which API security is as strategically important as the model weights themselves.

Anthropic has now put names, numbers, and forensic detail behind accusations that the industry had only whispered about for months. Whether that evidence galvanizes the coordinated response the company is calling for — or simply accelerates an arms race between distillers and defenders — may depend on a question no classifier can answer: whether Washington sees this as an act of espionage or just the cost of doing business in an era when intelligence itself has become a commodity.


US Farmers Are Rejecting Multimillion-Dollar Datacenter Bids For Their Land


An anonymous reader quotes a report from the Guardian: When two men knocked on Ida Huddleston's door last May, they carried a contract worth more than $33m in exchange for the Kentucky farm that had fed her family for centuries. According to Huddleston, the men's client, an unnamed "Fortune 100 company," sought her 650 acres (260 hectares) in Mason county for an unspecified industrial development. Finding out any more would require signing a non-disclosure agreement. More than a dozen of her neighbors received the same knock. Searching public records for answers, they discovered that a new customer had applied for a 2.2-gigawatt connection to the local power plant, nearly double its generation capacity. The unknown company was building a datacenter. "You don't have enough to buy me out. I'm not for sale. Leave me alone, I'm satisfied," Huddleston, 82, later told the men.

As tech companies race to build the massive datacenters needed to power artificial intelligence across the US and the world, bids like the one for Huddleston's land are appearing on rural doorsteps nationwide. Globally, 40,000 acres of powered land (real estate prepped for datacenter development) are projected to be needed for new projects over the next five years, double the amount currently in use. Yet despite sums that often dwarf the land's recent value, farmers are increasingly shutting the door. At least five of Huddleston's neighbors gave similar categorical rejections, including one who was told he could name any price.

In Pennsylvania, a farmer rejected $15m in January for land he’d worked for 50 years. A Wisconsin farmer turned down $80m the same month. Other landowners have declined offers exceeding $120,000 per acre — prices unimaginable just a few years ago. The rebuffs are a jarring reminder of AI’s physical bounds, and limits of the dollars behind the technology. […] As AI promises to transcend corporeal fallibility, these standoffs reveal its very physical constraints — and Wall Street’s miscalculation of what some people value most. In the rolling hills of Mason county and farmland across America, that gap is measured not in dollars but in something harder to price: identity.


OpenClaw should terrify anyone who thinks AI agents are ready for real responsibility


A Meta executive wanted help cleaning up her inbox and thought the new OpenClaw automated AI agent would be just the trick. For safety's sake, she made sure to tell it to "confirm before acting" before it began the cleanup. That linguistic childproof lock failed.

Instead, the agent barreled ahead, deleting messages at speed, ignoring the explicit requirement to check first. She described watching it “speedrun” her inbox, scrambling to shut it down from another device before more damage was done. Hundreds of emails vanished. The agent later apologized.
