Anthropic says DeepSeek, Moonshot, and MiniMax used 24,000 fake accounts to rip off Claude

Anthropic dropped a bombshell on the artificial intelligence industry Monday, publicly accusing three prominent Chinese AI laboratories — DeepSeek, Moonshot AI, and MiniMax — of orchestrating coordinated, industrial-scale campaigns to siphon capabilities from its Claude models using tens of thousands of fraudulent accounts.

The San Francisco-based company said the three labs collectively generated more than 16 million exchanges with Claude through approximately 24,000 fake accounts, all in violation of Anthropic’s terms of service and regional access restrictions. The campaigns, Anthropic said, are the most concrete and detailed public evidence to date of a practice that has haunted Silicon Valley for months: foreign competitors systematically using a technique called distillation to leapfrog years of research and billions of dollars in investment.

“These campaigns are growing in intensity and sophistication,” Anthropic wrote in a technical blog post published Monday. “The window to act is narrow, and the threat extends beyond any single company or region. Addressing it will require rapid, coordinated action among industry players, policymakers, and the global AI community.”

The disclosure marks a dramatic escalation in the simmering tensions between American and Chinese AI developers — and it arrives at a moment when Washington is actively debating whether to tighten or loosen export controls on the advanced chips that power AI training. Anthropic, led by CEO Dario Amodei, has been among the most vocal advocates for restricting chip sales to China, and the company explicitly connected Monday’s revelations to that policy fight.

How AI distillation went from obscure research technique to geopolitical flashpoint

To understand what Anthropic alleges, it helps to understand what distillation actually is — and how it evolved from an academic curiosity into the most contentious issue in the global AI race.

At its core, distillation is a process of extracting knowledge from a larger, more powerful AI model — the “teacher” — to create a smaller, more efficient one — the “student.” The student model learns not from raw data, but from the teacher’s outputs: its answers, reasoning patterns, and behaviors. Done correctly, the student can achieve performance remarkably close to the teacher’s while requiring a fraction of the compute to train.
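
The teacher-student mechanics can be sketched in a few lines. The toy example below shows the classic soft-label objective from the distillation literature: the student is trained to minimize the KL divergence between its output distribution and the teacher's temperature-softened one. (API-based distillation of the kind alleged here works from sampled text rather than raw logits, which providers don't expose; the logits and temperature value below are purely illustrative.)

```python
import math

def softmax(logits, T=1.0):
    # Temperature-scaled softmax: higher T softens the distribution,
    # exposing more of the teacher's relative preferences.
    scaled = [z / T for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # KL(teacher || student): the student is penalized for diverging
    # from the teacher's softened "soft labels".
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return sum(pi * (math.log(pi) - math.log(qi)) for pi, qi in zip(p, q))

teacher = [4.0, 1.0, 0.2]
close = distillation_loss([3.9, 1.1, 0.1], teacher)  # student that imitates well
far = distillation_loss([0.1, 3.0, 2.0], teacher)    # untrained student
```

Training on the teacher's full distribution rather than hard labels is what lets a small student recover so much of a large teacher's behavior from relatively little data.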

As Anthropic itself acknowledged, distillation is “a widely used and legitimate training method.” Frontier AI labs, including Anthropic, routinely distill their own models to create smaller, cheaper versions for customers. But the same technique can be weaponized. A competitor can pose as a legitimate customer, bombard a frontier model with carefully crafted prompts, collect the outputs, and use those outputs to train a rival system — capturing capabilities that took years and hundreds of millions of dollars to develop.

The technique burst into public consciousness in January 2025 when DeepSeek released its R1 reasoning model, which appeared to match or approach the performance of leading American models at dramatically lower cost. Databricks CEO Ali Ghodsi captured the industry’s anxiety at the time, telling CNBC: “This distillation technique is just so extremely powerful and so extremely cheap, and it’s just available to anyone.” He predicted the technique would usher in an era of intense competition for large language models.

That prediction proved prescient. In the weeks following DeepSeek’s release, researchers at UC Berkeley said they recreated OpenAI’s reasoning model for just $450 in 19 hours. Researchers at Stanford and the University of Washington followed with their own version built in 26 minutes for under $50 in compute credits. The startup Hugging Face replicated OpenAI’s Deep Research feature as a 24-hour coding challenge. DeepSeek itself openly released a family of distilled models on Hugging Face — including versions built on top of Qwen and Llama architectures — under the permissive MIT license, with the model card explicitly stating that the DeepSeek-R1 series supports commercial use and allows for any modifications and derivative works, “including, but not limited to, distillation for training other LLMs.”

But what Anthropic described Monday goes far beyond academic replication or open-source experimentation. The company detailed what it characterized as deliberate, covert, and large-scale intellectual property extraction by well-resourced commercial laboratories operating under the jurisdiction of the Chinese government.

Anthropic traces 16 million fraudulent exchanges to researchers at DeepSeek, Moonshot, and MiniMax

Anthropic attributed each campaign “with high confidence” through IP address correlation, request metadata, infrastructure indicators, and corroboration from unnamed industry partners who observed the same actors on their own platforms. Each campaign specifically targeted what Anthropic described as Claude’s most differentiated capabilities: agentic reasoning, tool use, and coding.

DeepSeek, the company that ignited the distillation debate, conducted what Anthropic described as the most technically sophisticated of the three operations, generating over 150,000 exchanges with Claude. Anthropic said DeepSeek’s prompts targeted reasoning capabilities, rubric-based grading tasks designed to make Claude function as a reward model for reinforcement learning, and — in a detail likely to draw particular political attention — the creation of “censorship-safe alternatives to policy sensitive queries.”

Anthropic alleged that DeepSeek “generated synchronized traffic across accounts” with “identical patterns, shared payment methods, and coordinated timing” that suggested load balancing to maximize throughput while evading detection. In one particularly notable technique, Anthropic said DeepSeek’s prompts “asked Claude to imagine and articulate the internal reasoning behind a completed response and write it out step by step — effectively generating chain-of-thought training data at scale.” The company also alleged it observed tasks in which Claude was used to generate alternatives to politically sensitive queries about “dissidents, party leaders, or authoritarianism,” likely to train DeepSeek’s own models to steer conversations away from censored topics. Anthropic said it was able to trace these accounts to specific researchers at the lab.
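
The elicitation pattern described above can be illustrated with a simple prompt template. This is a hypothetical reconstruction, not the actual prompts Anthropic observed: the point is that pairing an existing answer with a request to articulate the reasoning behind it converts ordinary (question, answer) pairs into (question, reasoning, answer) triples suitable for training a reasoning model.

```python
# Hypothetical reconstruction of the prompt pattern; the template
# wording is illustrative, not taken from the campaigns themselves.
ELICIT_TEMPLATE = (
    "Question: {question}\n"
    "Completed response: {response}\n\n"
    "Imagine the internal reasoning that produced this response and "
    "write it out step by step."
)

def build_cot_request(question: str, response: str) -> str:
    """Turn an existing question/answer pair into a request for a
    reasoning trace, yielding chain-of-thought training data at scale."""
    return ELICIT_TEMPLATE.format(question=question, response=response)

req = build_cot_request("What is 17 * 24?", "408")
```

Run across millions of stored answers, a template like this manufactures exactly the reasoning-trace corpus that training a rival reasoning model requires.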

Moonshot AI, the Beijing-based creator of the Kimi models, ran the second-largest operation by volume at over 3.4 million exchanges. Anthropic said Moonshot targeted agentic reasoning and tool use, coding and data analysis, computer-use agent development, and computer vision. The company employed “hundreds of fraudulent accounts spanning multiple access pathways,” making the campaign harder to detect as a coordinated operation. Anthropic attributed the campaign through request metadata that “matched the public profiles of senior Moonshot staff.” In a later phase, Anthropic said, Moonshot adopted a more targeted approach, “attempting to extract and reconstruct Claude’s reasoning traces.”

MiniMax, the least publicly known of the three but the most prolific by volume, generated over 13 million exchanges — more than three-quarters of the total. Anthropic said MiniMax’s campaign focused on agentic coding, tool use, and orchestration. The company said it detected MiniMax’s campaign while it was still active, “before MiniMax released the model it was training,” giving Anthropic “unprecedented visibility into the life cycle of distillation attacks, from data generation through to model launch.” In a detail that underscores the urgency and opportunism Anthropic alleges, the company said that when it released a new model during MiniMax’s active campaign, MiniMax “pivoted within 24 hours, redirecting nearly half their traffic to capture capabilities from our latest system.”

How proxy networks and ‘hydra cluster’ architectures helped Chinese labs bypass Anthropic’s China ban

Anthropic does not currently offer commercial access to Claude in China, a policy it maintains for national security reasons. So how did these labs access the models at all?

The answer, Anthropic said, lies in commercial proxy services that resell access to Claude and other frontier AI models at scale. Anthropic described these services as running what it calls “hydra cluster” architectures — sprawling networks of fraudulent accounts that distribute traffic across Anthropic’s API and third-party cloud platforms. “The breadth of these networks means that there are no single points of failure,” Anthropic wrote. “When one account is banned, a new one takes its place.” In one case, Anthropic said, a single proxy network managed more than 20,000 fraudulent accounts simultaneously, mixing distillation traffic with unrelated customer requests to make detection harder.
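
From the defender's side, networks like this are typically surfaced by linking accounts through shared infrastructure signals. The sketch below is a simplified assumption about how such correlation might work, not Anthropic's actual system: a union-find pass groups accounts that share any identifier (payment method, IP address), so individually unremarkable accounts collapse into one large cluster.

```python
from collections import defaultdict

def cluster_accounts(accounts):
    """Group accounts that share any identifier into connected
    components via union-find. A hydra-style network shows up as one
    large component even though no single account stands out."""
    parent = {a: a for a in accounts}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    by_ident = defaultdict(list)
    for acct, idents in accounts.items():
        for ident in idents:
            by_ident[ident].append(acct)
    for accts in by_ident.values():
        for other in accts[1:]:
            union(accts[0], other)

    groups = defaultdict(set)
    for a in accounts:
        groups[find(a)].add(a)
    return sorted(groups.values(), key=len, reverse=True)

accounts = {
    "acct1": {"card:11", "ip:1.2.3.4"},
    "acct2": {"card:11"},        # shares a payment method with acct1
    "acct3": {"ip:1.2.3.4"},     # shares an IP with acct1
    "acct4": {"card:99"},        # unrelated customer
}
clusters = cluster_accounts(accounts)
```

The "no single point of failure" property cuts both ways: the shared identifiers that let operators replace banned accounts are the same signals that tie the network together for a defender.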

The description suggests a mature and well-resourced infrastructure ecosystem dedicated to circumventing access controls — one that may serve many more clients than just the three labs Anthropic named.

Why Anthropic framed distillation as a national security crisis, not just an IP dispute

Anthropic did not treat this as a mere terms-of-service violation. The company embedded its technical disclosure within an explicit national security argument, warning that “illicitly distilled models lack necessary safeguards, creating significant national security risks.”

The company argued that models built through illicit distillation are “unlikely to retain” the safety guardrails that American companies build into their systems — protections designed to prevent AI from being used to develop bioweapons, carry out cyberattacks, or enable mass surveillance. “Foreign labs that distill American models can then feed these unprotected capabilities into military, intelligence, and surveillance systems,” Anthropic wrote, “enabling authoritarian governments to deploy frontier AI for offensive cyber operations, disinformation campaigns, and mass surveillance.”

This framing directly connects to the chip export control debate that Amodei has made a centerpiece of his public advocacy. In a detailed essay published in January 2025, Amodei argued that export controls are “the most important determinant of whether we end up in a unipolar or bipolar world” — a world where either only the U.S. and its allies possess the most powerful AI, or one where China achieves parity. He specifically noted at the time that he was “not taking any position on reports of distillation from Western models” and would “just take DeepSeek at their word that they trained it the way they said in the paper.”

Monday’s disclosure is a sharp departure from that earlier restraint. Anthropic now argues that distillation attacks “undermine” export controls “by allowing foreign labs, including those subject to the control of the Chinese Communist Party, to close the competitive advantage that export controls are designed to preserve through other means.” The company went further, asserting that “without visibility into these attacks, the apparently rapid advancements made by these labs are incorrectly taken as evidence that export controls are ineffective.” In other words, Anthropic is arguing that what some observers interpreted as proof that Chinese labs can innovate around chip restrictions was actually, in significant part, the result of stealing American capabilities.

The murky legal landscape around AI distillation may explain Anthropic’s political strategy

Anthropic’s decision to frame this as a national security issue rather than a legal dispute may reflect the difficult reality that intellectual property law offers limited recourse against distillation.

As a March 2025 analysis by the law firm Winston & Strawn noted, “the legal landscape surrounding AI distillation is unclear and evolving.” The firm’s attorneys observed that proving a copyright claim in this context would be challenging, since it remains unclear whether the outputs of AI models qualify as copyrightable creative expression. The U.S. Copyright Office affirmed in January 2025 that copyright protection requires human authorship, and that “mere provision of prompts does not render the outputs copyrightable.”

The legal picture is further complicated by the way frontier labs structure output ownership. OpenAI’s terms of use, for instance, assign ownership of model outputs to the user — meaning that even if a company can prove extraction occurred, it may not hold copyrights over the extracted data. Winston & Strawn noted that this dynamic means “even if OpenAI can present enough evidence to show that DeepSeek extracted data from its models, OpenAI likely does not have copyrights over the data.” The same logic would almost certainly apply to Anthropic’s outputs.

Contract law may offer a more promising avenue. Anthropic’s terms of service prohibit the kind of systematic extraction the company describes, and violation of those terms is a more straightforward legal claim than copyright infringement. But enforcing contractual terms against entities operating through proxy services and fraudulent accounts in a foreign jurisdiction presents its own formidable challenges.

This may explain why Anthropic chose the national security frame over a purely legal one. By positioning distillation attacks as threats to export control regimes and democratic security rather than as intellectual property disputes, Anthropic appeals to policymakers and regulators who have tools — sanctions, entity list designations, enhanced export restrictions — that go far beyond what civil litigation could achieve.

What Anthropic’s distillation crackdown means for every company running a frontier AI model

Anthropic outlined a multipronged defensive response. The company said it has built classifiers and behavioral fingerprinting systems designed to identify distillation attack patterns in API traffic, including detection of chain-of-thought elicitation used to construct reasoning training data. It is sharing technical indicators with other AI labs, cloud providers, and relevant authorities to build what it described as a more holistic picture of the distillation landscape. The company has also strengthened verification for educational accounts, security research programs, and startup organizations — the pathways most commonly exploited for setting up fraudulent accounts — and is developing model-level safeguards designed to reduce the usefulness of outputs for illicit distillation without degrading the experience for legitimate customers.
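
The behavioral-fingerprinting idea can be made concrete with a toy example. One plausible signal (among many that Anthropic does not enumerate) is coordinated request timing: accounts load-balancing a single workload produce near-identical activity histograms, while independent users do not. The sketch below compares coarse timing histograms with cosine similarity; it is an assumed illustration, not the company's classifier.

```python
from collections import Counter

def timing_fingerprint(timestamps, bucket=60):
    """Coarse histogram of request times (seconds -> minute buckets),
    normalized so only the shape of the activity pattern matters."""
    counts = Counter(int(t // bucket) for t in timestamps)
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

def similarity(f1, f2):
    # Cosine similarity of two sparse histograms: 1.0 = identical pattern.
    keys = set(f1) | set(f2)
    dot = sum(f1.get(k, 0.0) * f2.get(k, 0.0) for k in keys)
    n1 = sum(v * v for v in f1.values()) ** 0.5
    n2 = sum(v * v for v in f2.values()) ** 0.5
    return dot / (n1 * n2)

a = timing_fingerprint([10, 12, 70, 75, 130])   # one coordinated account
b = timing_fingerprint([11, 13, 72, 74, 131])   # its synchronized sibling
c = timing_fingerprint([400, 900, 1500])        # an independent customer
```

A real system would combine many such features (prompt-template similarity, payment metadata, infrastructure overlap), but the principle is the same: coordination is hard to hide in aggregate even when each account looks legitimate alone.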

But the company acknowledged that “no company can solve this alone,” calling for coordinated action across the industry, cloud providers, and policymakers.

The disclosure is likely to reverberate through multiple ongoing policy debates. In Congress, the bipartisan No DeepSeek on Government Devices Act has already been introduced. Federal agencies including NASA have banned DeepSeek from employee devices. And the broader question of chip export controls — which the Trump administration has been weighing amid competing pressures from Nvidia and national security hawks — now has a new and vivid data point.

For the AI industry’s technical decision-makers, the implications are immediate and practical. If Anthropic’s account is accurate, the proxy infrastructure enabling these attacks is vast, sophisticated, and adaptable — and it is not limited to targeting a single company. Every frontier AI lab with an API is a potential target. The era of treating model access as a simple commercial transaction may be coming to an end, replaced by one in which API security is as strategically important as the model weights themselves.

Anthropic has now put names, numbers, and forensic detail behind accusations that the industry had only whispered about for months. Whether that evidence galvanizes the coordinated response the company is calling for — or simply accelerates an arms race between distillers and defenders — may depend on a question no classifier can answer: whether Washington sees this as an act of espionage or just the cost of doing business in an era when intelligence itself has become a commodity.

Nikon’s Nikkor Z 70-200mm f/2.8 VR S II is official, and improves on first-gen version in several key areas

  • New Nikkor Z 70-200mm f/2.8 VR S II weighs just 998g
  • It promises quieter, faster autofocus and six stops of stabilization
  • Available from March, costing £2,999 / $2,999 / AU$5,399

Nikon has announced the Nikkor Z 70-200mm f/2.8 VR S II, a second-gen overhaul of its telephoto zoom promising class-leading weight savings, a faster autofocus system and a redesigned optical formula – all while retaining the f/2.8 maximum aperture that makes this type of lens so useful in low light. The new lens will be available from March 2026, priced at $2,999 / £2,999 / AU$5,399.

The headline figure is its weight. At just 998g (with the tripod collar removed), the new lens is 362g lighter than the original Nikkor Z 70-200mm f/2.8 VR S and, according to Nikon, the lightest lens among 70-200mm f/2.8 options for full-frame mirrorless cameras.

Today’s NYT Mini Crossword Answers for Feb. 24

Looking for the most recent Mini Crossword answer? Click here for today’s Mini Crossword hints, as well as our daily answers and hints for The New York Times Wordle, Strands, Connections and Connections: Sports Edition puzzles.


Need some help with today’s Mini Crossword? I thought 5-Down was very tricky, and not really representative of the clue, either. Read on for all the answers. And if you could use some hints and guidance for daily solving, check out our Mini Crossword tips.

If you’re looking for today’s Wordle, Connections, Connections: Sports Edition and Strands answers, you can visit CNET’s NYT puzzle hints page.

Read more: Tips and Tricks for Solving The New York Times Mini Crossword

Let’s get to those Mini Crossword clues and answers.

The completed NYT Mini Crossword puzzle for Feb. 24, 2026.

NYT/Screenshot by CNET

Mini across clues and answers

1A clue: Goosebumps-inducing
Answer: SCARY

6A clue: Buddy, informally
Answer: HOMIE

7A clue: Rub off, as pencil markings
Answer: ERASE

8A clue: Enjoys a quiet weekend morning, perhaps
Answer: LAZES

9A clue: David Szalay novel that won the 2025 Booker Prize
Answer: FLESH

Mini down clues and answers

1D clue: Section of a bookcase
Answer: SHELF

2D clue: Color similar to salmon that’s also named for a sea creature
Answer: CORAL

3D clue: Leave speechless
Answer: AMAZE

4D clue: Gets out of bed
Answer: RISES

5D clue: “Uff-da!”
Answer: YEESH

How Copyright Litigation Over Anne Frank’s Diary Could Impact The Fate Of VPNs In The EU

from the copyright-gone-mad dept

“The Diary of a Young Girl” is a Dutch language diary written by the young Jewish writer Anne Frank while she was in hiding for two years with her family during the Nazi occupation of the Netherlands. Although the diary and Anne Frank’s death in the Bergen-Belsen concentration camp are well known, few are aware that the text has a complicated copyright history – one that could have important implications for the legal status and use of Virtual Private Networks (VPNs) in the EU. TorrentFreak explains the copyright background:

These copyrights are controlled by the Swiss-based Anne Frank Fonds, which was the sole heir of Anne’s father, Otto Frank. The Fonds states that many print versions of the diary remain protected for decades, and even the manuscripts are not freely available everywhere.

In the Netherlands, for example, certain sections of the manuscripts remain protected by copyright until 2037, even though they have entered the public domain in neighboring countries like Belgium.

A separate foundation, the Netherlands-based Anne Frank Stichting, wanted to publish a scholarly edition of Anne Frank’s writing, at least in those parts of the world where her diary was in the public domain:

To navigate these conflicting laws, the Dutch Anne Frank Stichting published a scholarly edition online using “state-of-the-art” geo-blocking to prevent Dutch residents from accessing the site. Visitors from the Netherlands and other countries where the work is protected are met with a clear message, informing them about these access restrictions.
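
Geo-blocking of this kind is straightforward at the HTTP layer. The sketch below is a generic illustration (not the Stichting's actual implementation): resolve the visitor's IP to a country via a geolocation lookup and refuse access where the work remains under copyright, using HTTP 451 ("Unavailable For Legal Reasons"). A VPN defeats exactly this check by presenting an IP from another country.

```python
# Countries where the work is assumed still under copyright
# (e.g., parts of the manuscripts in the Netherlands until 2037).
BLOCKED_COUNTRIES = {"NL"}

def handle_request(ip, geolocate):
    """Serve or refuse a request based on the visitor's country.
    `geolocate` maps an IP to an ISO country code; real deployments
    use a commercial IP-geolocation database here."""
    country = geolocate(ip)
    if country in BLOCKED_COUNTRIES:
        # HTTP 451: Unavailable For Legal Reasons (RFC 7725)
        return 451, "This work is still under copyright in your country."
    return 200, "scholarly edition content"

# Stand-in geolocation table for illustration; the IPs are made up.
fake_db = {"145.90.0.1": "NL", "8.8.8.8": "US"}
status, body = handle_request("145.90.0.1", fake_db.get)
```

The Fonds's argument, in effect, is that because this country lookup can be fooled, the publisher should be liable; the Advocate General's opinion discussed below rejects that reading.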

However, the Anne Frank Fonds was unhappy with this approach, and took legal action. Its argument was that such geo-blocking could be circumvented with VPNs, and so its copyrights in the Netherlands could be infringed upon by those using VPNs. The lower courts in the Netherlands dismissed this argument, and the case is now before the Dutch Supreme Court. Beyond the specifics of the Anne Frank scholarly edition, there are important issues regarding the use of VPNs to get around geo-blocking. Because of the potential knock-on effect the ruling in this case will have on EU law, the Dutch Supreme Court has asked for guidance from the EU’s top court, the Court of Justice of the European Union (CJEU).

The CJEU has yet to rule on the issues raised. But one of the court’s advisors, Advocate General Rantos, has published a preliminary opinion, as is normal in such cases. Although that advice is not binding on the CJEU, it often provides some indication as to how the court may eventually decide. On the main issue of whether the ability of people to circumvent geo-blocking is a problem, Rantos writes:

the fact that users manage to circumvent a geo-blocking measure put in place to restrict access to a protected work does not, in itself, mean that the entity that put the geo-blocking in place communicates that work to the public in a territory where access to it is supposed to be blocked. Such an interpretation would make it impossible to manage copyright on the internet on a territorial basis and would mean that any communication to the public on the internet would be global.

Moreover:

As the [European] Commission pointed out in its written observations, the holder of an exclusive right in a work does not have the right to authorise or prohibit, on the basis of the right granted to it in one Member State, communication to the public in another Member State in which that right has ceased to have effect.

Or, more succinctly: “service providers in the public domain country cannot be subject to unreasonable requirements”. That’s a good, common-sense view. But perhaps just as important is the following comment by Rantos regarding the use of VPNs to circumvent geo-blocking:

as the Commission points out in its observations, VPN services are legally accessible technical services which users may, however, use for unlawful purposes. The mere fact that those or similar services may be used for such purposes is not sufficient to establish that the service providers themselves communicate the protected work to the public. It would be different if those service providers actively encouraged the unlawful use of their services.

That’s an important point at a time when VPNs are under attack from some governments because of concerns about possible copyright infringement by those using them.

The hope has to be that the CJEU will agree with its Advocate General’s sensible and fair analysis, and will rule accordingly. But there is another important aspect to this story. The basic issue is that the Anne Frank Stichting wants to make its scholarly edition of Anne Frank’s diary available as widely as possible. That seems a laudable aim, since it will increase understanding and appreciation of the young woman’s remarkable diary by publishing an academically rigorous version. And yet the Anne Frank Fonds has taken legal action to stop that move, on the grounds that it would represent an infringement of its intellectual monopoly in some parts of Frank’s work, in some parts of the world. The current dispute is another clear example of how copyright has become for some an end in itself, more important than the things that it is supposed to promote.

Follow me @glynmoody on Mastodon and on Bluesky. Republished from Walled Culture.

Filed Under: anne frank, anne frank’s diary, cjeu, copyright, diary of anne frank, geoblocking, netherlands, public domain, vpns

Companies: anne frank fonds, anne frank stichting

NASA’s moon rocket is about to leave the launchpad, but it ain’t going skyward

The four astronauts preparing to end a five-decade gap in crewed lunar flights will have to wait until at least April before they can begin the Artemis II mission.

During the SLS rocket’s second wet dress rehearsal last weekend, NASA discovered an issue with the flow of helium to the rocket’s upper stage.

Engineers decided that to fix the problem, the massive rocket, which is currently on the launchpad at the Kennedy Space Center in Florida, will have to be transported back to the Vehicle Assembly Building (VAB). That four-mile rollback to the VAB is expected to take place on Tuesday, February 24.

On Monday, NASA confirmed that as a result of the latest issue, the rocket will no longer be launching on the recently announced March 6 target date, adding that the Artemis II mission will now lift off “no earlier than April 2026.”

NASA added: “The quick work to begin preparations for rolling the rocket and spacecraft back to the VAB potentially preserves the April launch window, pending the outcome of data findings, repair efforts, and how the schedule comes to fruition in the coming days and weeks.”

The Artemis II crew members — NASA’s Victor Glover, Reid Wiseman, and Christina Koch, along with the Canadian Space Agency’s Jeremy Hansen — left quarantine on Saturday evening and remain at NASA’s facility in Houston, Texas.

NASA originally targeted February 8 for the launch, but another issue in the first wet dress rehearsal prompted a delay, with NASA then announcing March 6 as a possible launch date. But that, too, has now been disregarded, with the team currently looking to launch in April.

The much-anticipated mission will involve the crew performing detailed tests on the Orion spacecraft’s systems while flying around the moon, with a smooth journey paving the way for a crewed lunar landing in the Artemis III mission, which could take place before the end of this decade.

Interested in following the 10-day mission when it finally gets underway? NASA recently shared a fascinating video revealing exactly how the flight is expected to unfold.

Summer Game Fest runs from June 5-8

It’s getting to be that time of year again. Summer Game Fest kicks off on June 5 and will go until June 8. The Live Kickoff show will once again be hosted by Geoff Keighley and takes place on June 5 at 5PM ET. This is where we’ll see all of those juicy reveals and trailers.

The opening event will be streamed globally on just about every digital platform, including YouTube, Twitch, X and even Steam. Those in the Los Angeles area will be able to pick up tickets for the live show sometime in the Spring.

The kickoff event is just the beginning. There’s something called Play Days, which is an expo in downtown LA produced by iam8bit. This invite-only event promises “immersive exhibits and hands-on experiences from the industry’s leading publishers and developers.” Coverage of this will be shared across digital and social platforms.

There is, of course, another livestream scheduled for immediately after the kickoff. Day of the Devs: SGF Edition should provide us with even more trailers and reveals, this time for indie games.

Finally, there’s a “thought leadership event” on June 8 that’s primarily for developers and publishers. Game Business Live “brings together top industry voices on one stage for insightful discussions on key changes, challenges and opportunities shaping the global video game industry.”

We’ll be covering the event live and will have all of those trailers ready to go. After all, that’s pretty much the main reason people watch these things.

Water, power, and transparency: Amazon’s $12B data center deal signals a new era of accountability

Inside an Amazon data center. (Amazon Photo / Noah Berger)

Amazon on Monday announced a $12 billion data center project in Louisiana in which the company vowed to pay its own way for energy and other infrastructure.

The deal highlights the unwritten expectations now placed on tech giants to cover upfront power costs and other impacts. Such pledges have become commonplace as leaders at the state and national levels move to codify these commitments with new laws.

Amazon’s Louisiana project includes a deal with Southwestern Electric Power Company (SWEPCO) to pay for “energy infrastructure and upgrades required to serve the data centers, which also strengthens overall grid reliability for all SWEPCO customers. In addition, Amazon has invested in solar energy projects in Louisiana, bringing up to 200 [megawatts] of new carbon-free energy onto the grid,” the company said.

Amazon is also pledging to use “only verified surplus water” — which refers to water that is otherwise deemed unneeded by the community where the data centers are based.

Water is used by data centers to cool the electronics that produce heat while computing. Amazon expects to mostly use air to fan the machines, tapping into water cooling for less than 13% of the year in the peak of summer heat.

The company will also spend up to $400 million to improve water infrastructure, plus an additional $250,000 earmarked for the Amazon Northwest Louisiana Community Fund. The philanthropic dollars will help pay for STEM education, sustainability efforts, health and other local needs.

“Amazon is making a long-term commitment to Louisiana because our state delivers — prime sites, strong infrastructure and a skilled, hard-working workforce ready to support the next generation of technological innovation,” Louisiana Gov. Jeff Landry said in a statement.

New rules of engagement

Amazon’s deal in Louisiana comes amid mounting pushback to data centers from local communities and lawmakers.

On Monday, Sen. Bernie Sanders called again for a moratorium on data center deployments, citing Denver’s move to temporarily ban new facilities. The Vermont senator called out data centers’ environmental impacts, as well as AI’s threat to jobs and overall risks to humanity.

Washington state, where Amazon is based, is among the areas pursuing legislation to control the impact of data centers on local communities, including their use of energy and water to run the computer hubs that underpin the internet and support the growing use of artificial intelligence.

The measure, House Bill 2515, passed the House last week and is now being considered by the Senate. The legislation includes public reporting requirements about sustainability impacts and projected energy use, bringing heightened transparency to a sector that has often expanded and operated in secrecy.

Meanwhile, tech companies like Amazon and fellow Seattle-area hyperscaler Microsoft are adjusting their approach as they spend heavily to build out the infrastructure needed to power their AI ambitions.

Microsoft last month made a good neighbor pledge for all of its new data centers, vowing to pay the company’s full power costs, reject local property tax breaks, replenish more water than it uses, train local workers, and invest in AI education and community programs.

“This sector worked one way in the past, and needs to work in some different ways going forward,” Microsoft President Brad Smith told GeekWire.

Pursuit of clean power

Amazon has committed to spending $200 billion this year on capital expenditures worldwide, predominantly for its Amazon Web Services cloud business. Microsoft expects to shell out up to $140 billion in capital expenses this fiscal year.

Both companies are racing to secure clean energy for their expansions. Beyond wind, solar and batteries, their strategies include new and existing nuclear facilities — and Microsoft is even eyeing fusion energy, an unproven but potentially transformative technology.

Data centers are expected to drive roughly half of the growth in energy demand in the U.S. by 2030, according to new data from the International Energy Agency. Solar will provide much of the supply, but so will natural gas, which contributes to the continued warming of the planet.

A new report from BloombergNEF found that Amazon, Microsoft, Meta and Google made nearly half of the world’s new clean energy deals last year. Amazon alone — which tied with Meta for the most power purchase agreements — contracted for nearly 10 gigawatts of capacity globally. That’s roughly one-third of California’s average power demand.
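The "one-third of California" comparison can be sanity-checked with simple arithmetic. A quick sketch, assuming California consumes roughly 280 TWh of electricity per year (that figure is an assumption for illustration, not from the article):

```python
# Back-of-the-envelope check of the California comparison.
contracted_gw = 10                                   # capacity from Amazon's PPAs
hours_per_year = 8760
annual_twh = contracted_gw * hours_per_year / 1000   # GWh -> TWh
ca_annual_twh = 280                                  # assumed CA annual consumption
share = annual_twh / ca_annual_twh
print(f"~{annual_twh:.0f} TWh/yr, about {share:.0%} of California's demand")
```

Ten gigawatts running around the clock works out to roughly 88 TWh per year, which is indeed about a third of California's annual consumption under this assumption.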

Overall, the volume of power purchase agreements declined for the first time in a decade as corporations in other sectors stepped back from the deals.

Since 2023, Amazon has annually bought enough clean energy to match its electricity use worldwide.

Last week, Microsoft announced that it, too, hit that benchmark in 2025. That doesn’t mean the companies literally run only on climate-friendly power; depending on when and where they operate, their data centers and operations will still draw on fossil fuels while supporting clean energy use globally.

Tech

US Farmers Are Rejecting Multimillion-Dollar Datacenter Bids For Their Land

An anonymous reader quotes a report from the Guardian: When two men knocked on Ida Huddleston’s door last May, they carried a contract worth more than $33m in exchange for the Kentucky farm that had fed her family for centuries. According to Huddleston, the men’s client, an unnamed “Fortune 100 company,” sought her 650 acres (260 hectares) in Mason county for an unspecified industrial development. Finding out any more would require signing a non-disclosure agreement. More than a dozen of her neighbors received the same knock. Searching public records for answers, they discovered that a new customer had applied for a 2.2-gigawatt project from the local power plant, nearly double its annual generation capacity. The unknown company was building a datacenter. “You don’t have enough to buy me out. I’m not for sale. Leave me alone, I’m satisfied,” Huddleston, 82, later told the men.

As tech companies race to build the massive datacenters needed to power artificial intelligence across the US and the world, bids like the one for Huddleston’s land are appearing on rural doorsteps nationwide. Globally, 40,000 acres of powered land (real estate prepped for datacenter development) are projected to be needed for new projects over the next five years, double the amount currently in use. Yet despite sums that often dwarf the land’s recent value, farmers are increasingly shutting the door. At least five of Huddleston’s neighbors gave similar categorical rejections, including one who was told he could name any price.

In Pennsylvania, a farmer rejected $15m in January for land he’d worked for 50 years. A Wisconsin farmer turned down $80m the same month. Other landowners have declined offers exceeding $120,000 per acre — prices unimaginable just a few years ago. The rebuffs are a jarring reminder of AI’s physical bounds, and the limits of the dollars behind the technology. […] As AI promises to transcend corporeal fallibility, these standoffs reveal its very physical constraints — and Wall Street’s miscalculation of what some people value most. In the rolling hills of Mason county and farmland across America, that gap is measured not in dollars but in something harder to price: identity.

Tech

OpenClaw should terrify anyone who thinks AI agents are ready for real responsibility

A Meta executive wanted help cleaning up her inbox and thought the new OpenClaw automated AI agent would be just the trick. For safety’s sake, she made sure to tell it to “confirm before acting” before doing the cleanup. That linguistic childproof lock failed.

Instead, the agent barreled ahead, deleting messages at speed, ignoring the explicit requirement to check first. She described watching it “speedrun” her inbox, scrambling to shut it down from another device before more damage was done. Hundreds of emails vanished. The agent later apologized.
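OpenClaw's internals aren't public, so purely as an illustration of the design point: an instruction in the prompt is a request the model may ignore, while a confirmation gate enforced in code is not. A minimal sketch of the latter (all function and action names are hypothetical):

```python
# A confirmation gate enforced in code, not in the prompt: the agent's
# request is just data, and the gate runs no matter what the model was
# told or chose to ignore.
DESTRUCTIVE = {"delete_email", "archive_all"}

def execute(action: str, target: str, confirm) -> str:
    """Run an agent-requested action; destructive ones need explicit approval."""
    if action in DESTRUCTIVE and not confirm(action, target):
        return f"blocked: {action} on {target} (no confirmation)"
    return f"done: {action} on {target}"

# With no human approval, the deletion is blocked rather than merely discouraged.
result = execute("delete_email", "inbox/42", confirm=lambda a, t: False)
print(result)  # blocked: delete_email on inbox/42 (no confirmation)
```

The distinction matters here: the executive's "confirm before acting" lived in the prompt layer, which the agent could, and did, barrel past.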

Tech

This AI Tool Doesn’t Help With Homework. It Does It for You

A new AI tool called Einstein is pushing the boundaries of what automation in education looks like. Created by the startup Companion, Einstein does more than generate answers to homework questions. It logs directly into a student’s Canvas account and completes coursework on the student’s behalf.

According to its creators, Einstein operates through its own virtual computer. It can open a browser, navigate class pages, watch lecture videos, read PDFs and essays, write papers, complete quizzes and post replies in discussion boards. Once connected to a student’s account, the system can monitor deadlines and automatically submit assignments.

Unlike chatbots that respond when prompted, Einstein functions more like a digital stand-in for a human student. After setup, it can run in the background with little ongoing input.

“Students are already using AI. We’re just giving them a better version of it,” Companion CEO Advait Paliwal said in a statement. 

Read more: ‘Machines Can’t Think for You.’ How Learning Is Changing in the Age of AI

How Einstein works

Einstein connects to Canvas, a widely used learning-management system in colleges and high schools. From there, it reviews course materials and identifies assigned tasks. The AI can analyze lecture recordings, summarize readings and generate written work that matches the assignment requirements.

The company says the system produces original essays with citations and context-aware discussion posts. It can also track new announcements and upcoming deadlines. In practice, this means a student could enroll in an online course and let Einstein handle much — if not all — of the required work.
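Canvas exposes a REST API, which is presumably how a tool like this enumerates a student's work. A minimal sketch of the deadline-tracking step, assuming the real Canvas assignments endpoint (`GET /api/v1/courses/:id/assignments`) but with hypothetical helper names, and not Companion's actual code:

```python
import json
import urllib.request

def fetch_assignments(base_url: str, course_id: int, token: str) -> list:
    """Pull a course's assignment records from the Canvas REST API."""
    req = urllib.request.Request(
        f"{base_url}/api/v1/courses/{course_id}/assignments",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def upcoming(assignments: list, now_iso: str) -> list:
    """Keep only assignments with a future due date, soonest first."""
    due = [a for a in assignments if a.get("due_at") and a["due_at"] > now_iso]
    return sorted(due, key=lambda a: a["due_at"])
```

Canvas returns `due_at` as ISO-8601 timestamps, which sort lexicographically, so plain string comparison is enough to order deadlines here.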

The technology builds on advances in generative AI, browser automation and so-called autonomous agents that can take multistep actions on behalf of their human counterpart. While many students already use AI tools to brainstorm ideas or check grammar, Einstein moves beyond assistance into complete automation.

“Our companions aren’t simple chatbots,” Paliwal said. “Each one has access to an entire virtual computer with a persistent file system and internet access, so they can actually do things on your behalf. This makes ChatGPT look like a toy.”

A crossroads for academic integrity?

The release of Einstein comes at a time when schools are still adapting to widespread AI use. Since the arrival of powerful language models, educators have debated how to distinguish legitimate support from academic dishonesty. Most policies focus on whether students are using AI to help draft or edit their work, or to do it entirely for them.

Einstein complicates that conversation. 

If an AI logs in as a student and completes assignments independently, the question shifts from assistance to substitution. Is the tool essentially taking the student’s place? 

Not all in education are sounding the alarm, though. 

“I think the Canvas method of teaching already has a proclivity for cheating. This change, I think, will ultimately be good because it will force educators to redesign classes to not rely on virtual assignments,” said Nicholas DiMaggio, a PhD student at the University of Chicago Booth School of Business and teaching assistant for a course in consumer behavior this quarter.

DiMaggio said that this may prompt institutions to emphasize in-person work, oral exams or project-based learning instead. Beyond this one tool, schools will have to decide whether to ban such tools outright, integrate them under strict guidelines or rethink how learning is measured in the age of AI.

Read more: How to Use AI to Get Better Grades — Without Cheating

Tech

One engineer made a production SaaS product in an hour: here’s the governance system that made it possible

Every engineering leader watching the agentic coding wave is eventually going to face the same question: if AI can generate production-quality code faster than any team, what does governance look like when the human isn’t writing the code anymore?

Most teams don’t have a good answer yet. Treasure Data, a SoftBank-backed customer data platform serving more than 450 global brands, now has one, though they learned parts of it the hard way.

The company today officially announced Treasure Code, a new AI-native command-line interface that lets data engineers and platform teams operate its full CDP through natural language, with Claude Code handling creation and iteration underneath. It was built by a single engineer.

The company says the coding itself took roughly 60 minutes. But that number is almost beside the point. The more important story is what had to be true before those 60 minutes were possible, and what broke after.

“From a planning standpoint, we still have to plan to derisk the business, and that did take a couple of weeks,” Rafa Flores, Chief Product Officer at Treasure Data, told VentureBeat. “From an ideation and execution standpoint, that’s where you kind of just blend the two and you just go, go, go. And it’s not just prototyping, it’s rolling things out in production in a safe way.”

Build the governance layer first

Before even a single line of code was written, Treasure Data had to answer a harder question: what does the system need to be prohibited from doing, and how do you enforce that at the platform level rather than hoping the code respects it?

The guardrails Treasure Data built live upstream of the code itself. When any user connects to the CDP through Treasure Code, access control and permission management are inherited directly from the platform. Users can only reach resources they already have permission for. PII cannot be exposed. API keys cannot be surfaced. The system cannot speak disparagingly about a brand or competitor.
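Treasure Data hasn't published the implementation, but the inheritance idea can be sketched: the natural-language layer holds no credentials of its own, and every action is checked against the grants the user already has in the platform. All names below are hypothetical, illustrative only:

```python
# Permission inheritance sketch: the check lives upstream of any generated
# code, so the agent layer cannot grant itself access.
PLATFORM_GRANTS = {
    "analyst@example.com": {"segments:read", "reports:read"},
    "admin@example.com": {"segments:read", "segments:write", "pii:read"},
}

def run_as(user: str, permission: str, action):
    """Execute an agent-requested action only if the user already holds the grant."""
    if permission not in PLATFORM_GRANTS.get(user, set()):
        raise PermissionError(f"{user} lacks {permission}")
    return action()

# An admin who already holds pii:read can proceed...
run_as("admin@example.com", "pii:read", lambda: "ok")
# ...while the same request from an analyst raises PermissionError,
# regardless of how the natural-language query was phrased.
```

The point of the pattern is that phrasing a request cleverly changes nothing: what a user can do through natural language is bounded by what they could already do directly.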

“We had to get CISOs involved. I was involved. Our CTO, heads of engineering, just to make sure that this thing didn’t just go rogue,” Flores said.

This foundation made the next step possible: letting AI generate 100% of the codebase, with a three-tier quality pipeline enforcing production standards throughout.

The three-tier pipeline for AI code generation 

The first tier is an AI-based code reviewer, itself built with Claude Code.

The code reviewer sits at the pull request stage and runs a structured review checklist against every proposed merge, checking for architectural alignment, security compliance, proper error handling, test coverage and documentation quality. When all criteria are satisfied it can merge automatically. When they aren’t, it flags for human intervention.
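The merge decision described above is essentially an all-or-nothing gate over the checklist. The criteria below come from the article; the decision logic itself is an assumption sketched for illustration, not Treasure Data's actual reviewer:

```python
# Illustrative PR merge gate: auto-merge only when every checklist item
# passes; anything failing is routed to a human reviewer.
CHECKLIST = ["architecture", "security", "error_handling", "test_coverage", "docs"]

def review(pr_results: dict) -> str:
    """Return the reviewer's verdict for one pull request."""
    failing = [c for c in CHECKLIST if not pr_results.get(c, False)]
    if not failing:
        return "auto-merge"
    return "needs human review: " + ", ".join(failing)

print(review({c: True for c in CHECKLIST}))              # auto-merge
print(review({"architecture": True, "security": True}))  # flags the rest
```

Missing criteria default to failing, so the gate errs toward human review rather than toward merging.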

The fact that Treasure Data built the code reviewer in Claude Code is not incidental. It means the tool validating AI-generated code was itself AI-generated, a proof point that the workflow is self-reinforcing rather than dependent on a separate human-written quality layer.

The second tier is a standard CI/CD pipeline running automated unit, integration and end-to-end tests, static analysis, linting and security checks against every change. The third is human review, required wherever automated systems flag risk or enterprise policy demands sign-off.

The internal principle Treasure Data operates under: AI writes code, but AI does not ship code.

Why this isn’t just Cursor pointed at a database

The obvious question for any engineering team is why not just point an existing tool like Cursor at your data platform, or expose it as an MCP server and let Claude Code query it directly.

Flores argued the difference is governance depth. A generic connection gives you natural language access to data but inherits none of the platform’s existing permission structures, meaning every query runs with whatever access the API key allows. 

Treasure Code inherits Treasure Data’s full access control and permissioning layer, so what a user can do through natural language is bounded by what they’re already authorized to do in the platform. 

The second distinction is orchestration. Because Treasure Code connects directly to Treasure Data’s AI Agent Foundry, it can coordinate sub-agents and skills across the platform rather than executing single tasks in isolation: the difference between telling an AI to run an analysis and having it orchestrate that analysis across omni-channel activation, segmentation and reporting simultaneously.

What broke anyway

Even with the governance architecture in place, the launch didn’t go cleanly, and Flores was candid about it.

Treasure Data initially made Treasure Code available to customers without a go-to-market plan. The assumption was that it would stay quiet while the team figured out next steps. Customers found it anyway. More than 100 customers and close to 1,000 users adopted it within two weeks, entirely through organic discovery.

“We didn’t put any go-to-market motions behind it. We didn’t think people were going to find it. Well, they did,” Flores said. “We were left scrambling with, how do we actually do the go-to-market motions? Do we even do a beta, since technically it’s live?”

The unplanned adoption also created a compliance gap. Treasure Data is still in the process of formally certifying Treasure Code under its Trust AI compliance program, a certification it had not completed before the product reached customers.

A second problem emerged when Treasure Data opened skill development to non-engineering teams. CSMs and account directors began building and submitting skills without understanding what would get approved and merged, creating significant wasted effort and a backlog of submissions that couldn’t clear the repository’s access policies.

Enterprise validation and what’s still missing

Thomson Reuters is among the early adopters. Flores said that the company had been attempting to build an in-house AI agent platform and struggling to move fast enough. It connected with Treasure Data’s AI Agent Foundry to accelerate audience segmentation work, then extended into Treasure Code to customize and iterate more rapidly.

The feedback, Flores said, has centered on extensibility and flexibility, and the fact that procurement was already done, removing a significant enterprise barrier to adoption.

The gap Thomson Reuters has flagged, and that Flores acknowledges the product doesn’t yet address, is guidance on AI maturity. Treasure Code doesn’t tell users who should use it, what to tackle first, or how to structure access across different skill levels within an organization.

“AI that allows you to be leveraged, but also tells you how to leverage it, I think that’s very differentiated,” Flores said. He sees it as the next meaningful layer to build.

What engineering leaders should take from this

Flores has had time to reflect on what the experience actually taught him, and he was direct about what he’d change. Next time, he said, the release would stay internal first.

“We will release it internally only. I will not release it to anyone outside of the organization,” he said. “It will be more of a controlled release so we can actually learn what we’re actually being exposed to at lower risk.”

On skill development, the lesson was to establish clear criteria for what gets approved and merged before opening the process to teams outside engineering, not after.

The common thread in both lessons is the same one that shaped the governance architecture and the three-tier pipeline: speed is only an advantage if the structure around it holds. For engineering leaders evaluating whether agentic coding is ready for production, the Treasure Data experience translates into three practical conclusions.

  1. Governance infrastructure has to precede the code, not follow it. The platform-level access controls and permission inheritance were what made it safe to let AI generate freely. Without that foundation, the speed advantage disappears because every output requires exhaustive manual review.

  2. A quality gate that doesn’t depend entirely on humans is not optional at scale. AI can review every pull request consistently, without fatigue, and check policy compliance systematically across the entire codebase. Human review remains essential, but as a final check rather than the primary quality mechanism.

  3. Plan for organic adoption. If the product works, people will find it before you’re ready. The compliance and go-to-market gaps Treasure Data is still closing are a direct result of underestimating that.

“Yes, vibe coding can work if done in a safe way and proper guardrails are in place,” Flores said. “Embrace it in a way to find means of not replacing the good work you do, but the tedious work that you can probably automate.”


Copyright © 2025