Is Age Verification a Trap?

Social media is going the way of alcohol, gambling, and other social sins: Societies are deciding it’s no longer kid stuff. Lawmakers point to compulsive use, exposure to harmful content, and mounting concerns about adolescent mental health. In response, many propose setting a minimum age, usually 13 or 16.

When regulators demand real enforcement rather than symbolic compliance, platforms run into a basic technical problem. The only way to prove that someone is old enough to use a site is to collect personal data about who they are. And the only way to prove that you checked is to keep that data indefinitely. Age-restriction laws push platforms toward intrusive verification systems that often directly conflict with modern data-privacy law.

This is the age-verification trap. Strong enforcement of age rules undermines data privacy.

How Does Age Enforcement Actually Work?

Most age-restriction laws follow a familiar pattern. They set a minimum age and require platforms to take “reasonable steps” or “effective measures” to prevent underage access. What these laws rarely spell out is how platforms are supposed to tell who is actually over the line. At the technical level, companies have only two tools.

The first is identity-based verification. Companies ask users to upload a government ID, link a digital identity, or provide documents that prove their age. Yet in many jurisdictions, 16-year-olds do not have IDs. In others, IDs exist but are not digital, not widely held, or not trustworthy. Storing copies of identity documents also creates security and misuse risks.

The second option is inference. Platforms try to guess age based on behavior, device signals, or biometric analysis, most commonly facial age estimation from selfies or videos. This avoids formal ID collection, but it replaces certainty with probability and error.

In practice, companies combine both. Self-declared ages are backed by inference systems. When confidence drops, or regulators ask for proof of effort, inference escalates to ID checks. What starts as a light-touch checkpoint turns into layered verification that follows users over time.
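The layered escalation described above can be sketched as a small decision function. Everything here is an illustrative assumption — the thresholds, flag names, and step labels are invented for this sketch and are not any real platform’s policy.

```python
from dataclasses import dataclass, field

# All thresholds, flag names, and step labels are illustrative
# assumptions, not any real platform's actual policy.
ESTIMATE_CONFIDENCE_FLOOR = 0.90          # below this, escalate the check
ID_CHECK_TRIGGERS = {"regulator_audit", "repeated_appeal", "payment_mismatch"}

@dataclass
class AgeSignal:
    self_declared_age: int
    estimated_age: float        # e.g., from facial age estimation
    estimate_confidence: float  # the inference model's confidence
    risk_flags: set = field(default_factory=set)

def verification_step(signal: AgeSignal, minimum_age: int = 16) -> str:
    """Return the next enforcement step for one user, in escalating order."""
    if signal.risk_flags & ID_CHECK_TRIGGERS:
        return "request_government_id"   # hardest proof, most data retained
    if signal.estimate_confidence < ESTIMATE_CONFIDENCE_FLOOR:
        return "request_selfie_check"    # escalate from passive inference
    if min(signal.estimated_age, signal.self_declared_age) < minimum_age:
        return "restrict_account"        # either signal says underage
    return "allow"                       # declaration and inference agree
```

Note that every path to stronger proof runs through broader data collection — each escalation step retains more about the user than the one before it.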

What Are Platforms Doing Now?

This pattern is already visible on major platforms.

Meta has deployed facial age estimation on Instagram in multiple markets, using video-selfie checks through third-party partners. When the system flags users as possibly underage, it prompts them to record a short selfie video. An AI system estimates their age and, if it decides they are under the threshold, restricts or locks the account. Appeals often trigger additional checks, and misclassifications are common.

TikTok has confirmed that it also scans public videos to infer users’ ages. Google and YouTube rely heavily on behavioral signals tied to viewing history and account activity to infer age, then ask for a government ID or a credit card when the system is unsure. A credit card functions as a proxy for adulthood, even though it says nothing about who is actually using the account. Roblox, the gaming platform that recently launched a new age-estimation system, is already seeing users sell child-aged accounts to adult predators seeking entry to age-restricted areas, Wired reports.

For a typical user, age is no longer a one-time declaration. It becomes a recurring test. A new phone, a change in behavior, or a false signal can trigger another check. Passing once does not end the process.

How Do Age-Verification Systems Fail?

These systems fail in predictable ways.

False positives are common. Platforms misclassify adults as minors: adults with youthful faces, adults sharing family devices, or users with otherwise unusual activity. They lock accounts, sometimes for days. False negatives also persist. Teenagers quickly learn to evade checks by borrowing IDs, cycling through accounts, or using VPNs.

The appeal process itself creates new privacy risks. Platforms must store biometric data, ID images, and verification logs long enough to defend their decisions to regulators. So if an adult who is tired of submitting selfies to verify their age finally uploads an ID, the system must now secure that stored ID. Each retained record becomes a potential breach target.

Scale that experience across millions of users, and you bake the privacy risk into how platforms work.

Is Age Verification Compatible With Privacy Law?

This is where emerging age-restriction policy collides with existing privacy law.

Modern data-protection regimes all rest on similar ideas: Collect only what you need, use it only for a defined purpose, and keep it only as long as necessary.

Age enforcement undermines all three.

To prove they are following age-verification rules, platforms must log verification attempts, retain evidence, and monitor users over time. When regulators or courts ask whether a platform took reasonable steps, “We collected less data” is rarely persuasive. For companies, defending against accusations of lax age verification takes precedence over defending against accusations of excessive data collection.

This prioritization is not an explicit choice by voters or policymakers, but a reaction to enforcement pressure and to how companies perceive their litigation risk.

Less Developed Countries, Deeper Surveillance

Outside wealthy democracies, the trade-off is even starker.

Brazil’s Statute of the Child and Adolescent (ECA, from its Portuguese initials) imposes strong child-protection duties online, while its data-protection law restricts data collection and processing. Providers operating in Brazil must now adopt effective age-verification mechanisms and can no longer rely on self-declaration alone for high-risk services. Yet they also face uneven identity infrastructure and widespread device sharing. To compensate, they rely more heavily on facial estimation and third-party verification vendors.

In Nigeria many users lack formal IDs. Digital service providers fill the gap with behavioral analysis, biometric inference, and offshore verification services, often with limited oversight. Audit logs grow, data flows expand, and the practical ability of users to understand or contest how companies infer their age shrinks accordingly. Where identity systems are weak, companies do not protect privacy. They bypass it.

The paradox is clear. In countries with less administrative capacity, age enforcement often produces more surveillance, not less, because inference fills the void of missing documents.

How Do Enforcement Priorities Change Expectations?

Some policymakers assume that vague standards preserve flexibility. In the U.K., then–Digital Secretary Michelle Donelan argued in 2023 that requiring certain online safety outcomes without specifying the means would avoid mandating particular technologies. Experience suggests the opposite.

When disputes reach regulators or courts, the question is simple: Can minors still access the platform easily? If the answer is yes, authorities tell companies to do more. Over time, “reasonable steps” become more invasive.

Repeated facial scans, escalating ID checks, and long-term logging become the norm. Platforms that collect less data start to look reckless by comparison. Privacy-preserving designs lose out to defensible ones.

This pattern is familiar from other domains, including online sales-tax enforcement. After courts settled that large platforms had an obligation to collect and remit sales taxes, companies began continuously tracking and storing transaction destinations and customer location signals. That tracking is not abusive, but once enforcement requires proof over time, companies build systems to log, retain, and correlate more data. Age verification is moving the same way. What begins as a one-time check becomes an ongoing evidentiary system, with pressure to monitor, retain, and justify user-level data.

The Choice We Are Avoiding

None of this is an argument against protecting children online. It is an argument against pretending there is no trade-off.

Some observers present privacy-preserving age proofs involving a third party, such as the government, as a solution, but they inherit the same structural flaw: Many users who are legally old enough to use a platform do not have government ID. In countries where the minimum age for social media is lower than the age at which ID is issued, platforms face a choice between excluding lawful users and monitoring everyone. Right now, companies are making that choice quietly, building systems and normalizing behavior that protect them from the greater legal risks. Age-restriction laws are not just about kids and screens. They are reshaping how identity, privacy, and access work on the Internet for everyone.

The age-verification trap is not a glitch. It is what you get when regulators treat age enforcement as mandatory and privacy as optional.

Summer Game Fest runs from June 5-8

It’s getting to be that time of year again. Summer Game Fest kicks off on June 5 and will run until June 8. The Live Kickoff show will once again be hosted by Geoff Keighley and takes place on June 5 at 5PM ET. This is where we’ll see all of those juicy reveals and trailers.

The opening event will be streamed globally on just about every digital platform, including YouTube, Twitch, X and even Steam. Those in the Los Angeles area will be able to pick up tickets for the live show sometime in the spring.

The kickoff event is just the beginning. There’s something called Play Days, which is an expo in downtown LA produced by iam8bit. This invite-only event promises “immersive exhibits and hands-on experiences from the industry’s leading publishers and developers.” Coverage of this will be shared across digital and social platforms.

There is, of course, another livestream scheduled for immediately after the kickoff. Day of the Devs: SGF Edition should provide us with even more trailers and reveals, this time for indie games.

Finally, there’s a “thought leadership event” on June 8 that’s primarily for developers and publishers. Game Business Live “brings together top industry voices on one stage for insightful discussions on key changes, challenges and opportunities shaping the global video game industry.”

We’ll be covering the event live and will have all of those trailers ready to go. After all, that’s pretty much the main reason people watch these things.

Water, power, and transparency: Amazon’s $12B data center deal signals a new era of accountability

Inside an Amazon data center. (Amazon Photo / Noah Berger)

Amazon on Monday announced a $12 billion data center project in Louisiana in which the company vowed to pay its own way for energy and other infrastructure.

The deal highlights the unwritten expectations now placed on tech giants to cover upfront power costs and other impacts. Such pledges have become commonplace as leaders at the state and national levels move to codify these commitments with new laws.

Amazon’s Louisiana project includes a deal with Southwestern Electric Power Company (SWEPCO) to pay for “energy infrastructure and upgrades required to serve the data centers, which also strengthens overall grid reliability for all SWEPCO customers. In addition, Amazon has invested in solar energy projects in Louisiana, bringing up to 200 [megawatts] of new carbon-free energy onto the grid,” the company said.

Amazon is also pledging to use “only verified surplus water” — which refers to water that is otherwise deemed unneeded by the community where the data centers are based.

Water is used by data centers to cool the electronics that produce heat while computing. Amazon expects to mostly use air to fan the machines, tapping into water cooling for less than 13% of the year in the peak of summer heat.

The company will also spend up to $400 million to improve water infrastructure, plus an additional $250,000 earmarked for the Amazon Northwest Louisiana Community Fund. The philanthropic dollars will help pay for STEM education, sustainability efforts, health and other local needs.

“Amazon is making a long-term commitment to Louisiana because our state delivers — prime sites, strong infrastructure and a skilled, hard-working workforce ready to support the next generation of technological innovation,” Louisiana Gov. Jeff Landry said in a statement.

New rules of engagement

Amazon’s deal in Louisiana comes amid mounting pushback to data centers from local communities and lawmakers.

On Monday, Sen. Bernie Sanders called again for a moratorium on data center deployments, citing Denver’s move to temporarily ban new facilities. The Vermont senator called out data centers’ environmental impacts, as well as AI’s threat to jobs and overall risks to humanity.

Washington state, where Amazon is based, is among the areas pursuing legislation to control the impact of data centers on local communities, including their use of energy and water to run the computer hubs that underpin the internet and support the growing use of artificial intelligence.

The measure, House Bill 2515, passed the House last week and is now being considered by the Senate. The legislation includes public reporting requirements about sustainability impacts and projected energy use, bringing heightened transparency to a sector that has often expanded and operated in secrecy.

Meanwhile, tech companies like Amazon and fellow Seattle-area hyperscaler Microsoft are adjusting their approach as they spend heavily to build out the infrastructure needed to power their AI ambitions.

Microsoft last month made a good neighbor pledge for all of its new data centers, vowing to pay the company’s full power costs, reject local property tax breaks, replenish more water than it uses, train local workers, and invest in AI education and community programs.

“This sector worked one way in the past, and needs to work in some different ways going forward,” Microsoft President Brad Smith told GeekWire.

Pursuit of clean power

Amazon has committed to spending $200 billion this year on capital expenditures worldwide, predominantly for its Amazon Web Services cloud business. Microsoft expects to shell out up to $140 billion in capital expenses this fiscal year.

Both companies are racing to secure clean energy for their expansions. Beyond wind, solar and batteries, their strategies include new and existing nuclear facilities — and Microsoft is even eyeing fusion energy, an unproven but potentially transformative technology.

Data centers are expected to drive roughly half of the growth in U.S. energy demand by 2030, according to new data from the International Energy Agency. Solar will provide much of the supply, but so will natural gas, which contributes to the continued warming of the planet.

A new report from BloombergNEF found that Amazon, Microsoft, Meta and Google made nearly half of the world’s new clean energy deals last year. Amazon alone — which tied with Meta in making the most power purchase agreements — paid for nearly 10 gigawatts of energy globally. That’s about one-third of the power demand annually in California.

Overall, the volume of power purchase agreements declined for the first time in a decade as corporations in other sectors stepped back from the deals.

Since 2023, Amazon has annually bought enough clean energy to match its electricity use worldwide.

Last week, Microsoft announced that it, too, hit that benchmark in 2025. That doesn’t mean the companies are literally using only climate-friendly power — depending on when and where they operate, their data centers and operations will require fossil fuels while still supporting clean energy use globally.

Anthropic says DeepSeek, Moonshot, and MiniMax used 24,000 fake accounts to rip off Claude

Anthropic dropped a bombshell on the artificial intelligence industry Monday, publicly accusing three prominent Chinese AI laboratories — DeepSeek, Moonshot AI, and MiniMax — of orchestrating coordinated, industrial-scale campaigns to siphon capabilities from its Claude models using tens of thousands of fraudulent accounts.

The San Francisco-based company said the three labs collectively generated more than 16 million exchanges with Claude through approximately 24,000 fake accounts, all in violation of Anthropic’s terms of service and regional access restrictions. The campaigns, Anthropic said, are the most concrete and detailed public evidence to date of a practice that has haunted Silicon Valley for months: foreign competitors systematically using a technique called distillation to leapfrog years of research and billions of dollars in investment.

“These campaigns are growing in intensity and sophistication,” Anthropic wrote in a technical blog post published Monday. “The window to act is narrow, and the threat extends beyond any single company or region. Addressing it will require rapid, coordinated action among industry players, policymakers, and the global AI community.”

The disclosure marks a dramatic escalation in the simmering tensions between American and Chinese AI developers — and it arrives at a moment when Washington is actively debating whether to tighten or loosen export controls on the advanced chips that power AI training. Anthropic, led by CEO Dario Amodei, has been among the most vocal advocates for restricting chip sales to China, and the company explicitly connected Monday’s revelations to that policy fight.

How AI distillation went from obscure research technique to geopolitical flashpoint

To understand what Anthropic alleges, it helps to understand what distillation actually is — and how it evolved from an academic curiosity into the most contentious issue in the global AI race.

At its core, distillation is a process of extracting knowledge from a larger, more powerful AI model — the “teacher” — to create a smaller, more efficient one — the “student.” The student model learns not from raw data, but from the teacher’s outputs: its answers, reasoning patterns, and behaviors. Done correctly, the student can achieve performance remarkably close to the teacher’s while requiring a fraction of the compute to train.

As Anthropic itself acknowledged, distillation is “a widely used and legitimate training method.” Frontier AI labs, including Anthropic, routinely distill their own models to create smaller, cheaper versions for customers. But the same technique can be weaponized. A competitor can pose as a legitimate customer, bombard a frontier model with carefully crafted prompts, collect the outputs, and use those outputs to train a rival system — capturing capabilities that took years and hundreds of millions of dollars to develop.
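The teacher–student mechanic can be illustrated with a toy sketch in which a small linear scorer stands in for a frontier model (everything here is an illustrative assumption, not how any lab actually trains). The key point it demonstrates: the student never sees the teacher’s weights, only its outputs, yet recovers its decision behavior.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# "Teacher": a fixed linear scorer standing in for a large proprietary model.
# The student never sees these weights, only the teacher's soft outputs.
W_TEACHER = rng.normal(size=8)

def teacher_probs(X):
    return sigmoid(X @ W_TEACHER)

def train_student(X, soft_targets, lr=0.5, steps=2000):
    """Fit a student by gradient descent on cross-entropy against the
    teacher's soft outputs. No ground-truth labels are involved."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = sigmoid(X @ w)
        w -= lr * (X.T @ (p - soft_targets)) / len(X)
    return w

# "Extraction": query the teacher on crafted inputs, train on its answers.
X_train = rng.normal(size=(1000, 8))
w_student = train_student(X_train, teacher_probs(X_train))

# Held-out agreement between student and teacher decisions.
X_eval = rng.normal(size=(500, 8))
agree = np.mean(
    (sigmoid(X_eval @ w_student) > 0.5) == (teacher_probs(X_eval) > 0.5)
)
print(f"student/teacher agreement: {agree:.2f}")
```

The campaigns Anthropic describes follow the same shape in spirit, just at vastly greater scale: millions of exchanges playing the role of this sketch’s training queries and soft targets.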

The technique burst into public consciousness in January 2025 when DeepSeek released its R1 reasoning model, which appeared to match or approach the performance of leading American models at dramatically lower cost. Databricks CEO Ali Ghodsi captured the industry’s anxiety at the time, telling CNBC: “This distillation technique is just so extremely powerful and so extremely cheap, and it’s just available to anyone.” He predicted the technique would usher in an era of intense competition for large language models.

That prediction proved prescient. In the weeks following DeepSeek’s release, researchers at UC Berkeley said they recreated OpenAI’s reasoning model for just $450 in 19 hours. Researchers at Stanford and the University of Washington followed with their own version built in 26 minutes for under $50 in compute credits. The startup Hugging Face replicated OpenAI’s Deep Research feature as a 24-hour coding challenge. DeepSeek itself openly released a family of distilled models on Hugging Face — including versions built on top of Qwen and Llama architectures — under the permissive MIT license, with the model card explicitly stating that the DeepSeek-R1 series supports commercial use and allows for any modifications and derivative works, “including, but not limited to, distillation for training other LLMs.”

But what Anthropic described Monday goes far beyond academic replication or open-source experimentation. The company detailed what it characterized as deliberate, covert, and large-scale intellectual property extraction by well-resourced commercial laboratories operating under the jurisdiction of the Chinese government.

Anthropic traces 16 million fraudulent exchanges to researchers at DeepSeek, Moonshot, and MiniMax

Anthropic attributed each campaign “with high confidence” through IP address correlation, request metadata, infrastructure indicators, and corroboration from unnamed industry partners who observed the same actors on their own platforms. Each campaign specifically targeted what Anthropic described as Claude’s most differentiated capabilities: agentic reasoning, tool use, and coding.

DeepSeek, the company that ignited the distillation debate, conducted what Anthropic described as the most technically sophisticated of the three operations, generating over 150,000 exchanges with Claude. Anthropic said DeepSeek’s prompts targeted reasoning capabilities, rubric-based grading tasks designed to make Claude function as a reward model for reinforcement learning, and — in a detail likely to draw particular political attention — the creation of “censorship-safe alternatives to policy sensitive queries.”

Anthropic alleged that DeepSeek “generated synchronized traffic across accounts” with “identical patterns, shared payment methods, and coordinated timing” that suggested load balancing to maximize throughput while evading detection. In one particularly notable technique, Anthropic said DeepSeek’s prompts “asked Claude to imagine and articulate the internal reasoning behind a completed response and write it out step by step — effectively generating chain-of-thought training data at scale.” The company also alleged it observed tasks in which Claude was used to generate alternatives to politically sensitive queries about “dissidents, party leaders, or authoritarianism,” likely to train DeepSeek’s own models to steer conversations away from censored topics. Anthropic said it was able to trace these accounts to specific researchers at the lab.

Moonshot AI, the Beijing-based creator of the Kimi models, ran the second-largest operation by volume at over 3.4 million exchanges. Anthropic said Moonshot targeted agentic reasoning and tool use, coding and data analysis, computer-use agent development, and computer vision. The company employed “hundreds of fraudulent accounts spanning multiple access pathways,” making the campaign harder to detect as a coordinated operation. Anthropic attributed the campaign through request metadata that “matched the public profiles of senior Moonshot staff.” In a later phase, Anthropic said, Moonshot adopted a more targeted approach, “attempting to extract and reconstruct Claude’s reasoning traces.”

MiniMax, the least publicly known of the three but the most prolific by volume, generated over 13 million exchanges — more than three-quarters of the total. Anthropic said MiniMax’s campaign focused on agentic coding, tool use, and orchestration. The company said it detected MiniMax’s campaign while it was still active, “before MiniMax released the model it was training,” giving Anthropic “unprecedented visibility into the life cycle of distillation attacks, from data generation through to model launch.” In a detail that underscores the urgency and opportunism Anthropic alleges, the company said that when it released a new model during MiniMax’s active campaign, MiniMax “pivoted within 24 hours, redirecting nearly half their traffic to capture capabilities from our latest system.”

How proxy networks and ‘hydra cluster’ architectures helped Chinese labs bypass Anthropic’s China ban

Anthropic does not currently offer commercial access to Claude in China, a policy it maintains for national security reasons. So how did these labs access the models at all?

The answer, Anthropic said, lies in commercial proxy services that resell access to Claude and other frontier AI models at scale. Anthropic described these services as running what it calls “hydra cluster” architectures — sprawling networks of fraudulent accounts that distribute traffic across Anthropic’s API and third-party cloud platforms. “The breadth of these networks means that there are no single points of failure,” Anthropic wrote. “When one account is banned, a new one takes its place.” In one case, Anthropic said, a single proxy network managed more than 20,000 fraudulent accounts simultaneously, mixing distillation traffic with unrelated customer requests to make detection harder.

The description suggests a mature and well-resourced infrastructure ecosystem dedicated to circumventing access controls — one that may serve many more clients than just the three labs Anthropic named.

Why Anthropic framed distillation as a national security crisis, not just an IP dispute

Anthropic did not treat this as a mere terms-of-service violation. The company embedded its technical disclosure within an explicit national security argument, warning that “illicitly distilled models lack necessary safeguards, creating significant national security risks.”

The company argued that models built through illicit distillation are “unlikely to retain” the safety guardrails that American companies build into their systems — protections designed to prevent AI from being used to develop bioweapons, carry out cyberattacks, or enable mass surveillance. “Foreign labs that distill American models can then feed these unprotected capabilities into military, intelligence, and surveillance systems,” Anthropic wrote, “enabling authoritarian governments to deploy frontier AI for offensive cyber operations, disinformation campaigns, and mass surveillance.”

This framing directly connects to the chip export control debate that Amodei has made a centerpiece of his public advocacy. In a detailed essay published in January 2025, Amodei argued that export controls are “the most important determinant of whether we end up in a unipolar or bipolar world” — a world where either only the U.S. and its allies possess the most powerful AI, or one where China achieves parity. He specifically noted at the time that he was “not taking any position on reports of distillation from Western models” and would “just take DeepSeek at their word that they trained it the way they said in the paper.”

Monday’s disclosure is a sharp departure from that earlier restraint. Anthropic now argues that distillation attacks “undermine” export controls “by allowing foreign labs, including those subject to the control of the Chinese Communist Party, to close the competitive advantage that export controls are designed to preserve through other means.” The company went further, asserting that “without visibility into these attacks, the apparently rapid advancements made by these labs are incorrectly taken as evidence that export controls are ineffective.” In other words, Anthropic is arguing that what some observers interpreted as proof that Chinese labs can innovate around chip restrictions was actually, in significant part, the result of stealing American capabilities.

The murky legal landscape around AI distillation may explain Anthropic’s political strategy

Anthropic’s decision to frame this as a national security issue rather than a legal dispute may reflect the difficult reality that intellectual property law offers limited recourse against distillation.

As a March 2025 analysis by the law firm Winston & Strawn noted, “the legal landscape surrounding AI distillation is unclear and evolving.” The firm’s attorneys observed that proving a copyright claim in this context would be challenging, since it remains unclear whether the outputs of AI models qualify as copyrightable creative expression. The U.S. Copyright Office affirmed in January 2025 that copyright protection requires human authorship, and that “mere provision of prompts does not render the outputs copyrightable.”

The legal picture is further complicated by the way frontier labs structure output ownership. OpenAI’s terms of use, for instance, assign ownership of model outputs to the user — meaning that even if a company can prove extraction occurred, it may not hold copyrights over the extracted data. Winston & Strawn noted that this dynamic means “even if OpenAI can present enough evidence to show that DeepSeek extracted data from its models, OpenAI likely does not have copyrights over the data.” The same logic would almost certainly apply to Anthropic’s outputs.

Contract law may offer a more promising avenue. Anthropic’s terms of service prohibit the kind of systematic extraction the company describes, and violation of those terms is a more straightforward legal claim than copyright infringement. But enforcing contractual terms against entities operating through proxy services and fraudulent accounts in a foreign jurisdiction presents its own formidable challenges.

This may explain why Anthropic chose the national security frame over a purely legal one. By positioning distillation attacks as threats to export control regimes and democratic security rather than as intellectual property disputes, Anthropic appeals to policymakers and regulators who have tools — sanctions, entity list designations, enhanced export restrictions — that go far beyond what civil litigation could achieve.

What Anthropic’s distillation crackdown means for every company running a frontier AI model

Anthropic outlined a multipronged defensive response. The company said it has built classifiers and behavioral fingerprinting systems designed to identify distillation attack patterns in API traffic, including detection of chain-of-thought elicitation used to construct reasoning training data. It is sharing technical indicators with other AI labs, cloud providers, and relevant authorities to build what it described as a more holistic picture of the distillation landscape. The company has also strengthened verification for educational accounts, security research programs, and startup organizations — the pathways most commonly exploited for setting up fraudulent accounts — and is developing model-level safeguards designed to reduce the usefulness of outputs for illicit distillation without degrading the experience for legitimate customers.
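One ingredient of behavioral fingerprinting can be sketched as a simple heuristic over two of the coordination signals the reporting mentions, shared payment methods and synchronized timing. The field names, thresholds, and logic below are assumptions for illustration, not Anthropic’s actual detection system.

```python
from collections import defaultdict

def flag_coordinated_accounts(requests, min_cluster=3, max_spread_s=5.0):
    """Flag clusters of accounts that share a payment fingerprint and
    issue requests in near-lockstep. All parameters are illustrative."""
    by_payment = defaultdict(list)
    for r in requests:
        by_payment[r["payment_fingerprint"]].append(r)

    flagged = set()
    for fingerprint, group in by_payment.items():
        accounts = {r["account_id"] for r in group}
        if len(accounts) < min_cluster:
            continue  # too few accounts to look coordinated
        times = sorted(r["timestamp"] for r in group)
        # Synchronized timing: many distinct accounts firing within
        # a narrow window suggests load balancing across a cluster.
        if times[-1] - times[0] <= max_spread_s:
            flagged |= accounts
    return flagged
```

A real system would correlate many more signals (IP ranges, request metadata, prompt patterns) and score them probabilistically, but the basic move is the same: look for structure that no population of independent users would produce.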

But the company acknowledged that “no company can solve this alone,” calling for coordinated action across the industry, cloud providers, and policymakers.

The disclosure is likely to reverberate through multiple ongoing policy debates. In Congress, the bipartisan No DeepSeek on Government Devices Act has already been introduced. Federal agencies including NASA have banned DeepSeek from employee devices. And the broader question of chip export controls — which the Trump administration has been weighing amid competing pressures from Nvidia and national security hawks — now has a new and vivid data point.

For the AI industry’s technical decision-makers, the implications are immediate and practical. If Anthropic’s account is accurate, the proxy infrastructure enabling these attacks is vast, sophisticated, and adaptable — and it is not limited to targeting a single company. Every frontier AI lab with an API is a potential target. The era of treating model access as a simple commercial transaction may be coming to an end, replaced by one in which API security is as strategically important as the model weights themselves.

Anthropic has now put names, numbers, and forensic detail behind accusations that the industry had only whispered about for months. Whether that evidence galvanizes the coordinated response the company is calling for — or simply accelerates an arms race between distillers and defenders — may depend on a question no classifier can answer: whether Washington sees this as an act of espionage or just the cost of doing business in an era when intelligence itself has become a commodity.


US Farmers Are Rejecting Multimillion-Dollar Datacenter Bids For Their Land

An anonymous reader quotes a report from the Guardian: When two men knocked on Ida Huddleston’s door last May, they carried a contract worth more than $33m in exchange for the Kentucky farm that had fed her family for centuries. According to Huddleston, the men’s client, an unnamed “Fortune 100 company,” sought her 650 acres (260 hectares) in Mason county for an unspecified industrial development. Finding out any more would require signing a non-disclosure agreement. More than a dozen of her neighbors received the same knock. Searching public records for answers, they discovered that a new customer had applied for a 2.2-gigawatt project from the local power plant, nearly double its annual generation capacity. The unknown company was building a datacenter. “You don’t have enough to buy me out. I’m not for sale. Leave me alone, I’m satisfied,” Huddleston, 82, later told the men.

As tech companies race to build the massive datacenters needed to power artificial intelligence across the US and the world, bids like the one for Huddleston’s land are appearing on rural doorsteps nationwide. Globally, 40,000 acres of powered land (real estate prepped for datacenter development) are projected to be needed for new projects over the next five years, double the amount currently in use. Yet despite sums that often dwarf the land’s recent value, farmers are increasingly shutting the door. At least five of Huddleston’s neighbors gave similar categorical rejections, including one who was told he could name any price.

In Pennsylvania, a farmer rejected $15m in January for land he’d worked for 50 years. A Wisconsin farmer turned down $80m the same month. Other landowners have declined offers exceeding $120,000 per acre — prices unimaginable just a few years ago. The rebuffs are a jarring reminder of AI’s physical bounds, and the limits of the dollars behind the technology. […] As AI promises to transcend corporeal fallibility, these standoffs reveal its very physical constraints — and Wall Street’s miscalculation of what some people value most. In the rolling hills of Mason county and farmland across America, that gap is measured not in dollars but in something harder to price: identity.


OpenClaw should terrify anyone who thinks AI agents are ready for real responsibility

A Meta executive wanted help cleaning up her inbox and thought the new OpenClaw automated AI agent would be just the trick. For safety’s sake, she made sure to tell it to “confirm before acting” before doing any cleanup. That linguistic childproof lock failed.

Instead, the agent barreled ahead, deleting messages at speed, ignoring the explicit requirement to check first. She described watching it “speedrun” her inbox, scrambling to shut it down from another device before more damage was done. Hundreds of emails vanished. The agent later apologized.
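The failure mode here is instructive: a natural-language instruction is not an enforcement mechanism. A safer pattern, sketched below with hypothetical names, is to hard-block destructive actions in the agent harness itself until a human approves.

```python
# Sketch of a programmatic confirmation gate: rather than asking the model to
# "confirm before acting", the harness refuses to run destructive actions
# without explicit approval. Action names and structure are illustrative.

class ConfirmationRequired(Exception):
    """Raised when a destructive action is attempted without approval."""

DESTRUCTIVE = {"delete", "archive", "send"}

def execute(action: str, target: str, approved: bool = False) -> str:
    """Run an agent action, hard-blocking destructive ones without sign-off."""
    if action in DESTRUCTIVE and not approved:
        raise ConfirmationRequired(f"'{action}' on {target} needs human sign-off")
    return f"{action}:{target}"
```

The gate lives outside the model's control, so no amount of prompt misinterpretation can bypass it.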


This AI Tool Doesn’t Help With Homework. It Does It for You

A new AI tool called Einstein is pushing the boundaries of what automation in education looks like. Created by the startup Companion, Einstein does more than generate answers to homework questions. It logs directly into a student’s Canvas account and completes coursework on the student’s behalf.

According to its creators, Einstein operates through its own virtual computer. It can open a browser, navigate class pages, watch lecture videos, read PDFs and essays, write papers, complete quizzes and post replies in discussion boards. Once connected to a student’s account, the system can monitor deadlines and automatically submit assignments.


Unlike chatbots that respond when prompted, Einstein functions more like a digital stand-in for a human student. After setup, it can run in the background with little ongoing input.

“Students are already using AI. We’re just giving them a better version of it,” Companion CEO Advait Paliwal said in a statement. 

Read more: ‘Machines Can’t Think for You.’ How Learning Is Changing in the Age of AI

How Einstein works

Einstein connects to Canvas, a widely used learning-management system in colleges and high schools. From there, it reviews course materials and identifies assigned tasks. The AI can analyze lecture recordings, summarize readings and generate written work that matches the assignment requirements.

The company says the system produces original essays with citations and context-aware discussion posts. It can also track new announcements and upcoming deadlines. In practice, this means a student could enroll in an online course and let Einstein handle much — if not all — of the required work.

The technology builds on advances in generative AI, browser automation and so-called autonomous agents that can take multistep actions on behalf of their human counterpart. While many students already use AI tools to brainstorm ideas or check grammar, Einstein moves beyond assistance into complete automation.

“Our companions aren’t simple chatbots,” Paliwal said. “Each one has access to an entire virtual computer with a persistent file system and internet access, so they can actually do things on your behalf. This makes ChatGPT look like a toy.”

A crossroads for academic integrity?

The release of Einstein comes at a time when schools are still adapting to widespread AI use. Since the arrival of powerful language models, educators have debated how to distinguish legitimate support from academic dishonesty. Most policies focus on whether students are using AI to help draft or edit their work, or do it entirely for them. 

Einstein complicates that conversation. 

If an AI logs in as a student and completes assignments independently, the question shifts from assistance to substitution. Is the tool essentially taking the student’s place? 

Not all in education are sounding the alarm, though. 

“I think the Canvas method of teaching already has a proclivity for cheating. This change, I think, will ultimately be good because it will force educators to redesign classes to not rely on virtual assignments,” said Nicholas DiMaggio, a PhD student at The University of Chicago Booth School of Business and teaching assistant for a course in consumer behavior this quarter. 

DiMaggio said that this may prompt institutions to emphasize in-person work, oral exams or project-based learning instead. Beyond this one tool, schools will have to decide whether to ban such tools outright, integrate them under strict guidelines or rethink how learning is measured in the age of AI.

Read more: How to Use AI to Get Better Grades — Without Cheating


One engineer made a production SaaS product in an hour: here’s the governance system that made it possible

Every engineering leader watching the agentic coding wave is eventually going to face the same question: if AI can generate production-quality code faster than any team, what does governance look like when the human isn’t writing the code anymore?

Most teams don’t have a good answer yet. Treasure Data, a SoftBank-backed customer data platform serving more than 450 global brands, now has one, though it learned parts of it the hard way.

The company today officially announced Treasure Code, a new AI-native command-line interface that lets data engineers and platform teams operate its full CDP through natural language, with Claude Code handling creation and iteration underneath. It was built by a single engineer.

The company says the coding itself took roughly 60 minutes. But that number is almost beside the point. The more important story is what had to be true before those 60 minutes were possible, and what broke after.

“From a planning standpoint, we still have to plan to derisk the business, and that did take a couple of weeks,” Rafa Flores, Chief Product Officer at Treasure Data, told VentureBeat. “From an ideation and execution standpoint, that’s where you kind of just blend the two and you just go, go, go. And it’s not just prototyping, it’s rolling things out in production in a safe way.”

Build the governance layer first

Before even a single line of code was written, Treasure Data had to answer a harder question: what does the system need to be prohibited from doing, and how do you enforce that at the platform level rather than hoping the code respects it?

The guardrails Treasure Data built live upstream of the code itself. When any user connects to the CDP through Treasure Code, access control and permission management are inherited directly from the platform. Users can only reach resources they already have permission for. PII cannot be exposed. API keys cannot be surfaced. The system cannot speak disparagingly about a brand or competitor.
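Treasure Data hasn't published the guardrail code, but the shape of an upstream enforcement layer — permissions inherited from the platform, secrets scrubbed from anything returned — can be sketched. The roles, redaction pattern, and function names below are illustrative assumptions.

```python
# Hypothetical sketch of guardrails enforced upstream of generated code:
# role-based access checks plus redaction of API keys from results.
# Roles and patterns are illustrative, not Treasure Data's actual policy.

import re

PERMISSIONS = {
    "analyst": {"read_segments"},
    "admin": {"read_segments", "read_pii"},
}
SECRET_RE = re.compile(r"(api[_-]?key\s*[:=]\s*)\S+", re.IGNORECASE)

def guarded_query(role: str, resource: str, raw_result: str) -> str:
    """Deny resources the role lacks; redact secrets from what comes back."""
    if resource not in PERMISSIONS.get(role, set()):
        raise PermissionError(f"{role} may not access {resource}")
    return SECRET_RE.sub(r"\1[REDACTED]", raw_result)
```

Because the check runs in the platform layer, generated code never gets the chance to ignore it.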

“We had to get CISOs involved. I was involved. Our CTO, heads of engineering, just to make sure that this thing didn’t just go rogue,” Flores said.

This foundation made the next step possible: letting AI generate 100% of the codebase, with a three-tier quality pipeline enforcing production standards throughout.

The three-tier pipeline for AI code generation 

The first tier is an AI-based code reviewer also using Claude Code.

The code reviewer sits at the pull request stage and runs a structured review checklist against every proposed merge, checking for architectural alignment, security compliance, proper error handling, test coverage and documentation quality. When all criteria are satisfied it can merge automatically. When they aren’t, it flags for human intervention.
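The gate's decision logic amounts to an all-or-nothing checklist. The criteria names below come from the article; the pass/fail mechanics are an illustrative assumption about how such a gate could be wired.

```python
# Sketch of the tier-one review gate: a per-pull-request checklist that
# auto-merges only when every criterion passes. An illustrative assumption,
# not Treasure Data's actual reviewer.

CHECKLIST = ("architecture", "security", "error_handling", "tests", "docs")

def review_gate(results: dict[str, bool]) -> str:
    """Return 'auto-merge' if all checklist items pass, else 'human-review'."""
    if all(results.get(item, False) for item in CHECKLIST):
        return "auto-merge"
    return "human-review"
```

In practice each checklist entry would be the verdict of an AI review pass over the diff, with the gate treating any missing verdict as a failure.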

The fact that Treasure Data built the code reviewer in Claude Code is not incidental. It means the tool validating AI-generated code was itself AI-generated, a proof point that the workflow is self-reinforcing rather than dependent on a separate human-written quality layer.

The second tier is a standard CI/CD pipeline running automated unit, integration and end-to-end tests, static analysis, linting and security checks against every change. The third is human review, required wherever automated systems flag risk or enterprise policy demands sign-off.

The internal principle Treasure Data operates under: AI writes code, but AI does not ship code.

Why this isn’t just Cursor pointed at a database

The obvious question for any engineering team is why not just point an existing tool like Cursor at your data platform, or expose it as an MCP server and let Claude Code query it directly.

Flores argued the difference is governance depth. A generic connection gives you natural language access to data but inherits none of the platform’s existing permission structures, meaning every query runs with whatever access the API key allows. 

Treasure Code inherits Treasure Data’s full access control and permissioning layer, so what a user can do through natural language is bounded by what they’re already authorized to do in the platform. 

The second distinction is orchestration. Because Treasure Code connects directly to Treasure Data’s AI Agent Foundry, it can coordinate sub-agents and skills across the platform rather than executing single tasks in isolation: the difference between telling an AI to run an analysis and having it orchestrate that analysis across omni-channel activation, segmentation and reporting simultaneously.

What broke anyway

Even with the governance architecture in place, the launch didn’t go cleanly, and Flores was candid about it.

Treasure Data initially made Treasure Code available to customers without a go-to-market plan. The assumption was that it would stay quiet while the team figured out next steps. Customers found it anyway. More than 100 customers and close to 1,000 users adopted it within two weeks, entirely through organic discovery.

“We didn’t put any go-to-market motions behind it. We didn’t think people were going to find it. Well, they did,” Flores said. “We were left scrambling with, how do we actually do the go-to-market motions? Do we even do a beta, since technically it’s live?”

The unplanned adoption also created a compliance gap. Treasure Data is still in the process of formally certifying Treasure Code under its Trust AI compliance program, a certification it had not completed before the product reached customers.

A second problem emerged when Treasure Data opened skill development to non-engineering teams. CSMs and account directors began building and submitting skills without understanding what would get approved and merged, creating significant wasted effort and a backlog of submissions that couldn’t clear the repository’s access policies.

Enterprise validation and what’s still missing

Thomson Reuters is among the early adopters. Flores said that the company had been attempting to build an in-house AI agent platform and struggling to move fast enough. It connected with Treasure Data’s AI Agent Foundry to accelerate audience segmentation work, then extended into Treasure Code to customize and iterate more rapidly.

The feedback, Flores said, has centered on extensibility and flexibility, and the fact that procurement was already done, removing a significant enterprise barrier to adoption.

The gap Thomson Reuters has flagged, and that Flores acknowledges the product doesn’t yet address, is guidance on AI maturity. Treasure Code doesn’t tell users who should use it, what to tackle first, or how to structure access across different skill levels within an organization.

“AI that allows you to be leveraged, but also tells you how to leverage it, I think that’s very differentiated,” Flores said. He sees it as the next meaningful layer to build.

What engineering leaders should take from this

Flores has had time to reflect on what the experience actually taught him, and he was direct about what he’d change. Next time, he said, the release would stay internal first.

“We will release it internally only. I will not release it to anyone outside of the organization,” he said. “It will be more of a controlled release so we can actually learn what we’re actually being exposed to at lower risk.”

On skill development, the lesson was to establish clear criteria for what gets approved and merged before opening the process to teams outside engineering, not after.

The common thread in both lessons is the same one that shaped the governance architecture and the three-tier pipeline: speed is only an advantage if the structure around it holds. For engineering leaders evaluating whether agentic coding is ready for production, the Treasure Data experience translates into three practical conclusions.

  1. Governance infrastructure has to precede the code, not follow it. The platform-level access controls and permission inheritance were what made it safe to let AI generate freely. Without that foundation, the speed advantage disappears because every output requires exhaustive manual review.

  2. A quality gate that doesn’t depend entirely on humans is not optional at scale. AI can review every pull request consistently, without fatigue, and check policy compliance systematically across the entire codebase. Human review remains essential, but as a final check rather than the primary quality mechanism.

  3. Plan for organic adoption. If the product works, people will find it before you’re ready. The compliance and go-to-market gaps Treasure Data is still closing are a direct result of underestimating that.

“Yes, vibe coding can work if done in a safe way and proper guardrails are in place,” Flores said. “Embrace it in a way to find means of not replacing the good work you do, but the tedious work that you can probably automate.”


X-Ray A PCB Virtually | Hackaday

If you want to reverse engineer a PC board, you could do worse than X-ray it. But thanks to [Philip Giacalone], you could just take a photo, load it into PCB Tracer, and annotate the images. You can see a few videos from a series about the system below.

The tracer runs in your browser. It can let you mark traces, vias, components, and pads. You can annotate everything as you document it, and it can even call an AI model to help generate a schematic from the net list.
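Turning annotated traces into a net list is conceptually just grouping connected pads. The sketch below is an illustrative assumption about that step — the merging logic and data shapes are ours, since PCB Tracer's internal format isn't public.

```python
# Minimal sketch: merge pad-to-pad trace annotations into connected nets,
# so pads joined by any chain of traces land in the same net. Data shapes
# are illustrative assumptions.

def build_netlist(traces: list[tuple[str, str]]) -> list[set[str]]:
    """Group annotated traces into nets of electrically connected pads."""
    nets: list[set[str]] = []
    for a, b in traces:
        touching = [n for n in nets if a in n or b in n]
        merged = set().union(*touching) if touching else set()
        merged.update({a, b})
        nets = [n for n in nets if n not in touching]
        nets.append(merged)
    return nets
```

A net list in this form is what an AI model would consume to propose component connections for a schematic.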

This is one of those things that you could do without. Any photo editor could do the same thing. But having the tool aware of what the photo is showing makes life easier. The built-in features are free, but if you use the AI tool, he says it will cost you about a half-dollar per schematic (paid to the AI company).

Even if you don’t think you need to reverse-engineer anything, you may still find this useful if you are trying to understand a board for repair. We’ve had a good Supercon/Remoticon talk about PCB reverse engineering you can watch. If you want to see what a real X-ray of a board looks like, here you go.


Maynooth launches semiconductor master’s programme

The postgrad course in circuit design is the first of its kind in Europe, according to Maynooth.

A new master’s degree in circuit design at Maynooth University aims to deliver skilled workers in the semiconductor sector in alignment with the Irish Government’s ‘Silicon Island’ strategy.

The degree programme – designed in collaboration with MIDAS Ireland, an Irish innovation cluster – is the first dedicated course of its kind anywhere in Europe, according to the university and the Government.

The 15-month programme mixes nine months of classroom learning with a full-time, paid placement in industry for students to gain real-world experience.

Prof Eeva Leinonen, president of Maynooth University, said: “This innovative, new master’s programme reflects Maynooth University’s ongoing commitment to partnering with government and industry to deliver academic programmes that respond directly to Ireland’s strategic skills needs.

“Our graduates will be equipped to contribute immediately to Ireland’s and Europe’s semiconductor ambitions, from advanced chip design to innovation in emerging applications.”

Silicon Island is the Government’s national plan for the Irish semiconductor industry, and is geared towards generating skilled workers, design expertise and co-operation between third-level institutions and companies in line with the European Chips Act – the EU initiative for the bloc’s future around semiconductor sovereignty and independence.

Minister for Enterprise, Tourism and Employment Peter Burke, TD said the new master’s programme would “help Irish-based companies recruit faster and grow smarter, while providing a top quality education and in-demand skills for our next generation of engineers”.

He added: “It strengthens Ireland’s hand as a place where both Irish and international companies can grow, innovate and hire the talent they need, cementing our reputation as a hub for semiconductor activity and innovation.”

Ireland is home to around 130 companies employing 20,000 people in the semiconductor sector. Last week, I-C3, Ireland’s National Competence Centre in Semiconductors, was unveiled as one of 30 such centres across 27 EU countries.

Minister for Further and Higher Education, Research, Innovation and Science James Lawless, TD said that the Chips Act aims to double semiconductor production in Europe by 2030 and to encourage upskilling across the industry, and that the Maynooth master’s course would “help ensure a supply of talented, highly skilled graduates who will strengthen Ireland’s competitiveness in the global semiconductor sector”.


Start Your Surround Sound Journey With $50 off This Klipsch Soundbar

If you’re tired of listening to the crackle from the speakers on the back of your TV but aren’t ready for the full subwoofer-boosted suite, I’ve got a good deal for you. The Klipsch Flexus Core 200 is currently marked down by $50 at Amazon, and it’s a great place to start if you’re looking for a soundbar that will give you options down the road.

It has fewer channels built into the sound bar than some of our other favorite picks, notably lacking the side-firing drivers that help with surround effects. That doesn’t keep it from sounding excellent, thanks to its 44-inch wide footprint and 2.25-inch drivers that reach all the way to either end. Our reviewer Ryan Waniata was impressed by the Core 200’s clarity and detail, and in particular called out the very punchy bass response.

While the bar has built-in controls for simple tasks like changing the volume and inputs, you can also use the mobile app to fine-tune your audio experience. In addition to the stuff you’d expect, there’s also a three-band equalizer for those who like to fiddle and advanced settings for any extra speakers you add to the setup. With eARC to communicate with your TV, you shouldn’t need to touch the remote or app often anyway.

That’s right, one of the biggest selling points for the Klipsch Flexus Core 200 is the ability to add additional speakers to your setup. Both the Klipsch Flexus Surr 100 bookshelf speakers and Klipsch Flexus Sub 100 connect wirelessly to the Core 200 with a custom dongle, giving you a ton of freedom to stash the extra speakers wherever they’d sound best. If you have your own subwoofer that you like, there’s also an RCA jack on the bar to hook it up. That’s a lot of flexibility for any soundbar, let alone one at this price point.

If you’re ready to get the ball rolling on a proper sound system for your next movie night, you can save $50 on the Flexus Core 200, or meander over to our roundup of the best soundbars we’ve tested to find the best option for you.
