
Google and AWS split the AI agent stack between control and execution


The era of enterprises stitching together prompt chains and shadow agents is nearing its end as more options for orchestrating complex multi-agent systems emerge. As organizations move AI agents into production, the question remains: “How will we manage them?”

Google and Amazon Web Services offer fundamentally different answers, illustrating a split in the AI stack. Google runs agentic management at the system layer, while AWS’s harness approach operates at the execution layer.

The debate on how to manage and control agents gained new energy this past month as competing companies released or updated their agent builder platforms—Anthropic with the new Claude Managed Agents and OpenAI with enhancements to the Agents SDK—giving developer teams more options for managing agents.

AWS, with new capabilities added to Bedrock AgentCore, is optimizing for velocity—relying on harnesses to bring agents to production faster—while still offering identity and tool management.


Meanwhile, Google’s Gemini Enterprise adopts a governance-focused approach using a Kubernetes-style control plane. Each method offers a glimpse into how agents move from short-burst task helpers to longer-running entities within a workflow. 

Upgrades and umbrellas

To understand where each company stands, here’s what’s actually new. 

Google released a new version of Gemini Enterprise, bringing its enterprise AI agent offerings—Gemini Enterprise Platform and Gemini Enterprise Application—under one umbrella. 

The company has rebranded Vertex AI as Gemini Enterprise Platform, though it insists that, aside from the name change and new features, it’s still fundamentally the same interface. 


“We want to provide a platform and a front door for companies to have access to all the AI systems and tools that Google provides,” Maryam Gholami, senior director, product management for Gemini Enterprise, told VentureBeat in an interview. “The way you can think about it is that the Gemini Enterprise Application is built on top of the Gemini Enterprise Agent Platform, and the security and governance tools are all provided for free as part of Gemini Enterprise Application subscription.”

On the other hand, AWS added a new managed agent harness to Bedrock AgentCore. The company said in a press release shared with VentureBeat that the harness “replaces upfront build with a config-based starting point powered by Strands Agents, AWS’s open source agent framework.”

Users define what the agent does, the model it uses and the tools it calls, and AgentCore does the work to stitch all of that together to run the agent. 
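The press release doesn’t show what such a configuration actually looks like. As a purely hypothetical illustration of the config-first pattern it describes, an agent definition might be sketched like this in Python (none of these field names or functions are real AgentCore or Strands Agents identifiers):

```python
# Hypothetical config-first agent definition in the spirit of a managed
# harness: the developer declares what the agent does, which model it uses,
# and which tools it may call; the platform stitches the rest together.
# All names here are invented for illustration.

agent_config = {
    "name": "invoice-triage-agent",
    "instructions": "Classify incoming invoices and route disputes to a human.",
    "model": "example-model-v1",                   # declarative model choice
    "tools": ["lookup_vendor", "create_ticket"],   # tools the agent may call
    "limits": {"max_steps": 20, "timeout_s": 300}, # guardrails on execution
}

def validate_config(cfg: dict) -> bool:
    """Minimal sanity check a harness might run before launching the agent."""
    required = {"name", "instructions", "model", "tools"}
    return required.issubset(cfg) and bool(cfg["tools"])

assert validate_config(agent_config)
```

The point of the pattern is that everything above is declarative: the harness, not the developer, owns the runtime loop that wires model, tools and limits together.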

Agents are now becoming systems

The shift toward stateful, long-running autonomous agents has forced a rethink of how AI systems behave. As agents move from short-lived tasks to long-running workflows, a new class of failure is emerging: state drift.


As agents continue operating, they accumulate state: memory, tool responses and evolving context. Over time, that state becomes outdated as data sources change or tools return conflicting responses. As a result, the agent becomes more vulnerable to inconsistencies and less truthful.

Agent reliability becomes a systems problem, and managing that drift may need more than faster execution; it may require visibility and control. 

It’s this failure point that platforms like Gemini Enterprise and AgentCore try to prevent. 

Though this shift is already happening, Gholami admitted that customers will dictate how they want to run and control any long-running agent. 


“We are going to learn a lot from customers where they would be using long-running agents, where they just assign a task to these autonomous agents to just go ahead and do,” Gholami said. “Of course, there are tricks and balances to get right and the agent may come back and ask for more input.”

The new AI stack

What’s becoming increasingly clear is that the AI stack is separating into distinct layers that solve different problems.

AWS and, to a certain extent, Anthropic and OpenAI, optimize for faster deployment. Claude Managed Agents abstracts much of the backend work for standing up an agent, while the Agents SDK now includes support for sandboxes and a ready-made harness. These approaches aim to lower the barrier to getting agents up and running.

Google offers a centralized control panel to manage identity, enforce policies and monitor long-running behaviors. 


Enterprises likely need both. 

As some practitioners see it, businesses have to have a serious conversation about how much risk they are willing to take.

“The main takeaway for enterprise technology leaders considering these technologies at the moment may be formulated this way: while the agent harness vs. runtime question is often perceived as build vs. buy, this is primarily a matter of risk management. If you can afford to run your agents through a third-party runtime because they do not affect your revenue streams, that is okay. On the contrary, in the context of more critical processes, the latter option will be the only one to consider from a business perspective,” Rafael Sarim Oezdemir, head of growth at EZContacts, told VentureBeat in an email.

Iterating quickly lets teams experiment and discover what agents can do, while centralized control adds a layer of trust. What enterprises need is to ensure they are not locked into systems designed purely for a single way of executing agents. 



Aurzen Zip Cyber Foldable Portable Projector: Not Bright, Poor Connectivity, but a Lot of Fun


Pros

  • Preem cyberpunk aesthetic
  • Folds up tiny
  • Dongle add-ons greatly increase usability

Cons

  • Dim, though similar to other small portables
  • No HDMI input
  • Connectivity challenges

I’m trying to think of what the twisty Aurzen Zip Cyber most looks like. Perhaps a new ultra-foldable phone. Or a little robot snake. Maybe the Zat gun from Stargate. What it doesn’t look like is a projector. Well, except for the light which blasts from the front of it. With its cyberpunk-inspired decorations, the Aurzen looks quite futuristic.

With its 720p resolution and a claimed 100-lumen brightness, its performance matches its diminutive size. Then again, it’s one of the only projectors I’ve seen that can literally fit in your pocket. With a 5,000-mAh battery, it can give you a TV-sized screen just about anywhere. Anywhere that’s fairly dark.

The main issue with the Zip is its lack of an HDMI input. Some devices can connect to the Zip wirelessly, but are limited to non-copyrighted content (so no Netflix, etc.). To watch those services, you’ll need either the CastPlay Pro or CastPlay Wireless HDMI dongle. For a pocket-sized PJ, though, the Aurzen Zip Cyber is still pretty neat.


Specs and such

  • Resolution: 720p 
  • Lumens spec: 100 (claimed)
  • Zoom: No
  • Lens shift: No (though you can tilt the sections)
  • Battery: 5,000 mAh, 1.5h claimed playtime
  • Light source type and life: Not listed, likely LED

Cyberpunk is one of my favorite genres of sci-fi, and having recently re-read Gibson’s Sprawl trilogy for the 4th or 5th time, played about 250 hours of Cyberpunk 2077, plus enjoying countless other media, I am certainly, let’s say “predisposed,” to like the aesthetic. The Zip Cyber’s looks are preem, choom, though it’s basically cosmetic with a really good sticker and a different colored power button compared to Aurzen’s regular Zip. The suggested retail price is $30 more for the Cyber, or 7.5%. Personally, I’d pay the extra for the look, but as mentioned, I’m into it.

The Aurzen Zip Cyber projector on a black background.

Geoffrey Morrison/CNET

Stickers aside, it’s the form of the Zip that is unique. This is a squat little box that expands via two hinges that can rotate roughly 90 degrees each. Fully upright, the projector forms a right-angled “Z” or “S” shape depending on your perspective. Adjusting the two non-base segments is how you angle the projector, and automatic keystone correction tries to maintain a rectangular image. This feature can be disabled in the menu.

There are control buttons up top, which can be duplicated in the Aurzen app (which annoyingly requires you to create an account). Next to the power button on one side are volume controls, and on the other is a toggle for the high-brightness mode. The latter kicks the fans into overdrive, making them quite noticeable, but results in roughly a 40% increase in brightness. That sounds like a lot, but subjectively it’s just a bit brighter.

The Aurzen Zip Cyber projector folded closed on a black background.

Geoffrey Morrison/CNET

As you’d probably expect, given the size and price, that brightness isn’t going to set any records. I measured approximately 88 lumens, which, given the differences in measurement techniques, is pretty close to their claims. Also, I wasn’t able to do my usual measurement suite because of the Zip’s main drawback, which is…

Connections

  • HDMI inputs: 0
  • USB port: 1 USB-C
  • Audio output: 2 speakers, 1-watt total
  • Internet: None
  • Streaming interface: None
  • Remote: N/A

There’s no HDMI input, just a single USB-C connection, which is also how you charge the battery. You can cast wirelessly to the Zip, or at least some devices can. Certain devices just can’t. For those devices, Aurzen also sells the CastPlay Pro, a USB-C dongle that connects to a source like your phone or tablet and streams its screen to the Zip. This is also the only way to send DRM-enabled (copy-protected) content like Netflix, Disney+, HBO and so on. Most USB-C iPhones and iPads should work; some Switch tablets work, as do many laptops. If you know your device supports video output from the USB-C connection, it should work. My Pixel 9 Pro, for example, wouldn’t cast to the Zip directly, but worked just fine with the dongle. My TCL tablet wouldn’t work with the dongle, but did cast directly, though not with DRM content.

The Aurzen Zip Cyber projector opened on a black background.

Geoffrey Morrison/CNET

Aurzen also has a CastPlay Wireless HDMI Dongle, which connects to an HDMI source to broadcast to the Zip, but this wasn’t available during my review, and as of this writing is sold out in most regions.

So the Zip is a bit odd to review because, depending on your devices and what accessories you add, you’ll have a radically different experience. I made a chart:

Aurzen Zip Compatibility

System | Compatibility | Result
Zip | Most devices that can cast/mirror their display (but not Google Cast-enabled devices) | No DRM-enabled content (Netflix, Disney Plus, etc.)
Zip + CastPlay Pro USB-C | Most USB-C devices with video out (DisplayPort Alt Mode) | Any content
Zip + CastPlay Wireless HDMI | Any device with HDMI | Any content

Basically, most modern iOS and non-Google devices should work with the Zip by itself, though you can’t watch DRM-enabled, copyrighted content (like what you get from the major streaming services). YouTube, Instagram, TikTok and similar will work fine, however. For other phones and devices, as long as they can output video via USB-C, the CastPlay Pro dongle will let you watch Netflix and other DRM-enabled streaming services (basically anything that’s not user-created). If you want to connect a gaming console like a PlayStation or a streaming device like a Roku, you’ll want the CastPlay HDMI. I think a lot of this confusion could have been avoided with the addition of a Micro HDMI input somewhere, but I’m sure that would have added cost.


Picture quality

Due to the compatibility challenges mentioned above, I wasn’t able to do my full measurements with the Zip. I’m confident these results are close, though, especially since they’re pretty similar to those of other inexpensive portable DLP projectors I’ve measured, like Anker’s Nebula Capsule Air.

The Aurzen Zip Cyber projector and CastPlay USB-C on a black background.

Geoffrey Morrison/CNET

While light output, in the high-brightness mode, was around 88 lumens, it was around 63 in the much quieter, lower-brightness mode. This is within a few lumens of the Capsule and Capsule Air, close enough that you’d be unlikely to notice any difference in light output. These are all small, dim projectors, among the dimmest I’ve tested. That’s fine, as it’s an understandable consequence of the size and price. As long as you keep the projected image to around TV-sized, it’s bright enough to enjoy in a dark room. 

Contrast is also fairly low, but within the same range as the competition. I measured an average contrast of approximately 401:1, which is about the same as the Capsules, as well as some larger, more expensive portables like the Mars 3 Air (405:1). This is only slightly less than standouts like the TCL PlayCube (492:1) and even full-size projectors like the Epson Flex Plus (468:1). So while the image doesn’t pop as much as it does on higher-end and much larger, more expensive projectors, it’s still contrasty enough that it doesn’t look overly washed out. Again, size and price are the main attributes of the Zip, so it’s great to see that it also looks decent, graded on a curve against other small portables.


The TCL PlayCube, Aurzen Zip Cyber, and Anker Nebula Capsule Air.

Geoffrey Morrison/CNET

Color is a bit of a mixed bag, however. The overall color temperature is a little on the cool/blue side, but not enough that it’s distracting. Some colors, like blue and cyan, look fine. Greens are quite accurate too, which is a surprise. Most projector companies sacrifice a realistic green for more light output. Anything involving red is a bit off, however, with red itself being quite undersaturated, magentas are somewhat blue, and yellows are rather green. The most noticeable result is that many skin tones look a little pasty, and anything that should have a solid red looks more pastel.

Perhaps the most useful feature in the Zip speaks to how Aurzen expects people to use it. If you lay the Zip on its side, it will rotate the image 90 degrees. This means if you’re primarily watching 9:16 content like TikTok, it will fill the DLP chip, and you can take advantage of the entire 720p resolution. This makes watching vertical content much more satisfying compared with a heavily letterboxed image that only takes up the center portion of the projected frame. Flipping it sideways does make it harder to position correctly, since the hinges don’t rotate in that direction, but oh well. Easy enough to just prop the front up with whatever’s handy.

The unit’s two tiny speakers don’t play particularly loudly, nor do they have any bass (no surprise there), but as long as you’re sitting close, they get the job done.


Blade running

The Aurzen Zip Cyber projector on a black background.

Geoffrey Morrison/CNET

For the most part, I really like the Aurzen Zip Cyber. It’s a clever design that looks futuristic even without the cyberpunk clothing. It’s one of the smallest projectors I’ve ever tested, and it performs similarly to its slightly larger portable competitors. The colors it produces aren’t great, but they’re better than many small, inexpensive projectors I’ve tested, like the various AAXA models.

My hesitation is with the connectivity. I think I have a worse perspective on this than most people since I have a Pixel phone and a tablet without DisplayPort Alt Mode, so neither works entirely with the Zip. Depending on your gear, you’ll have different luck. The lack of an HDMI input also means that to watch content from the main non-YouTube streaming providers, you have to get one of the dongles, adding $100 to the total price.  

That said, if you’re expecting to watch an endless scroll of TikTok or YouTube videos, and you have a device that will cast without the dongle, the Zip is a cool-looking gadget that can fit in a pocket and give you a TV-size image in rooms, vans or anywhere it’s fairly dark.


Are you paying an AI ‘swarm tax’? Why single agents often beat complex systems


Enterprise teams building multi-agent AI systems may be paying a compute premium for gains that don’t hold up under equal-budget conditions. New Stanford University research finds that single-agent systems match or outperform multi-agent architectures on complex reasoning tasks when both are given the same thinking token budget.

However, multi-agent systems come with the added baggage of computational overhead. Because they typically use longer reasoning traces and multiple interactions, it is often unclear whether their reported gains stem from architectural advantages or simply from consuming more resources.

To isolate the true driver of performance, researchers at Stanford University compared single-agent systems against multi-agent architectures on complex multi-hop reasoning tasks under equal “thinking token” budgets.

Their experiments show that in most cases, single-agent systems match or outperform multi-agent systems when compute is equal. Multi-agent systems gain a competitive edge when a single agent’s context becomes too long or corrupted.


In practice, this means that a single-agent model with an adequate thinking budget can deliver more efficient, reliable, and cost-effective multi-hop reasoning. Engineering teams should reserve multi-agent systems for scenarios where single agents hit a performance ceiling.

Understanding the single versus multi-agent divide

Multi-agent frameworks, such as planner agents, role-playing systems, or debate swarms, break down a problem by having multiple models operate on partial contexts. These components communicate with each other by passing their answers around.

While multi-agent solutions show strong empirical performance, comparing them to single-agent baselines is often an imprecise measurement. Comparisons are heavily confounded by differences in test-time computation. Multi-agent setups require multiple agent interactions and generate longer reasoning traces, meaning they consume significantly more tokens.


Single-agent systems (SAS) vs multi-agent systems (MAS)


Consequently, when a multi-agent system reports higher accuracy, it is difficult to determine if the gains stem from better architecture design or from spending extra compute.

Recent studies show that when the compute budget is fixed, elaborate multi-agent strategies frequently underperform compared to strong single-agent baselines. However, these are mostly broad comparisons that don’t account for nuances such as different multi-agent architectures or the distinction between prompt and reasoning tokens.

“A central point of our paper is that many comparisons between single-agent systems (SAS) and multi-agent systems (MAS) are not apples-to-apples,” paper authors Dat Tran and Douwe Kiela told VentureBeat. “MAS often get more effective test-time computation through extra calls, longer traces, or more coordination steps.”

Revisiting the multi-agent challenge under strict budgets

To create a fair comparison, the Stanford researchers set a strict “thinking token” budget. This metric controls the total number of tokens used exclusively for intermediate reasoning, excluding the initial prompt and the final output.
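The paper’s exact accounting isn’t reproduced here, but the idea of a thinking-token budget, counting only intermediate reasoning while excluding the prompt and the final answer, can be sketched like this (the trace structure and token counts are illustrative, not the paper’s data format):

```python
# Sketch of "thinking token" accounting: only intermediate reasoning tokens
# count against the budget; prompt and final-output tokens are excluded.
# The trace format and numbers here are invented for illustration.

def thinking_tokens_used(trace: list[dict]) -> int:
    """Sum tokens from reasoning steps only."""
    return sum(step["tokens"] for step in trace if step["kind"] == "reasoning")

def within_budget(trace: list[dict], budget: int) -> bool:
    return thinking_tokens_used(trace) <= budget

trace = [
    {"kind": "prompt", "tokens": 350},        # excluded from the budget
    {"kind": "reasoning", "tokens": 1200},    # counted
    {"kind": "reasoning", "tokens": 800},     # counted
    {"kind": "final_output", "tokens": 150},  # excluded from the budget
]

assert thinking_tokens_used(trace) == 2000
assert within_budget(trace, budget=2048)
```

Under this kind of accounting, a multi-agent run’s budget would sum the reasoning steps across all of its agents, which is what makes the comparison apples-to-apples.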


The study evaluated single- and multi-agent systems on multi-hop reasoning tasks, meaning questions that require connecting multiple pieces of disparate information to reach an answer.

During their experiments, the researchers noticed that single-agent setups sometimes stop their internal reasoning prematurely, leaving available compute budget unspent. To counter this, they introduced a technique called SAS-L (single-agent system with longer thinking).

Rather than jumping to multi-agent orchestration when a model gives up early, the researchers suggest a simple prompt-and-budgeting change.

“The engineering idea is simple,” Tran and Kiela said. “First, restructure the single-agent prompt so the model is explicitly encouraged to spend its available reasoning budget on pre-answer analysis.”


By instructing the model to explicitly identify ambiguities, list candidate interpretations, and test alternatives before committing to a final answer, developers can recover the benefits of collaboration inside a single-agent setup. 
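As one possible reading of that advice, a SAS-L-style prompt restructuring might look like the template below. The wording is an assumption for illustration, not the paper’s actual prompt:

```python
# Hypothetical SAS-L-style prompt: nudge a single agent to spend its
# reasoning budget on pre-answer analysis instead of stopping early.
# This wording is illustrative; the paper's actual prompt may differ.

SAS_L_TEMPLATE = """\
Before giving a final answer:
1. Identify any ambiguities in the question.
2. List the candidate interpretations and intermediate facts you need.
3. Test alternative answers against each interpretation.
Only then commit to a single final answer.

Question: {question}
"""

def build_prompt(question: str) -> str:
    return SAS_L_TEMPLATE.format(question=question)

prompt = build_prompt("Which city hosted the Olympics the year the author was born?")
assert "candidate interpretations" in prompt
```

The design choice is that the "collaboration" normally distributed across several agents is folded into one agent’s own pre-answer checklist, keeping everything in a single continuous context.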

The results of their experiments confirm that a single agent is the strongest default architecture for multi-hop reasoning tasks. It produces the highest accuracy answers while consuming fewer reasoning tokens. When paired with specific models like Google’s Gemini 2.5, the longer-thinking variant produces even better aggregate performance.

The researchers rely on a concept called “Data Processing Inequality” to explain why a single agent outperforms a swarm. Multi-agent frameworks introduce inherent communication bottlenecks. Every time information is summarized and handed off between different agents, there is a risk of data loss.

In contrast, a single agent reasoning within one continuous context avoids this fragmentation. It retains access to the richest available representation of the task and is thus more information-efficient under a fixed budget.


The authors also note that enterprises often overlook the secondary costs of multi-agent systems.

“What enterprises often underestimate is that orchestration is not free,” they said. “Every additional agent introduces communication overhead, more intermediate text, more opportunities for lossy summarization, and more places for errors to compound.”

On the other hand, they discovered that multi-agent orchestration is superior when a single agent’s environment gets messy. If an enterprise application must handle highly degraded contexts, such as noisy data, long inputs filled with distractors, or corrupted information, a single agent struggles. In these scenarios, the structured filtering, decomposition, and verification of a multi-agent system can recover relevant information more reliably.

The study also warns about hidden evaluation traps that falsely inflate multi-agent performance. Relying purely on API-reported token counts heavily distorts how much computation an architecture is actually spending. The researchers found these accounting artifacts when testing models like Gemini 2.5, proving this is an active issue for enterprise applications today.


“For API models, the situation is trickier because budget accounting can be opaque,” the authors said. To evaluate architectures reliably, they advise developers to “log everything, measure the visible reasoning traces where available, use provider-reported reasoning-token counts when exposed, and treat those numbers cautiously.”

What it means for developers

If a single-agent system matches the performance of multiple agents under equal reasoning budgets, it wins on total cost of ownership by offering fewer model calls, lower latency, and simpler debugging. Tran and Kiela warn that without this baseline, “some enterprises may be paying a large ‘swarm tax’ for architectures whose apparent advantage is really coming from spending more computation rather than reasoning more effectively.”
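The “swarm tax” is easy to make concrete with back-of-the-envelope numbers. All figures below are invented for illustration, not taken from the paper:

```python
# Back-of-the-envelope "swarm tax": at equal accuracy, the architecture
# with fewer calls and fewer total tokens wins on cost. All numbers are
# made up for illustration.

PRICE_PER_1K_TOKENS = 0.01  # hypothetical flat rate

def run_cost(calls: int, tokens_per_call: int) -> float:
    """Cost of one task run, given call count and tokens consumed per call."""
    return calls * tokens_per_call / 1000 * PRICE_PER_1K_TOKENS

sas_cost = run_cost(calls=1, tokens_per_call=4000)  # one agent, one context
mas_cost = run_cost(calls=5, tokens_per_call=2500)  # orchestrator + workers

assert sas_cost < mas_cost  # same task, roughly 3x the spend in this toy case
```

Latency and debugging effort scale with call count in a similar way, which is why the total-cost-of-ownership argument extends beyond the token bill.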

Another way to look at the decision boundary is not how complex the overall task is, but rather where the exact bottleneck lies.

“If it is mainly reasoning depth, SAS is often enough. If it is context fragmentation or degradation, MAS becomes more defensible,” Tran said.


Engineering teams should stay with a single agent when a task can be handled within one coherent context window. Multi-agent systems become necessary when an application handles highly degraded contexts. 

Looking ahead, multi-agent frameworks will not disappear, but their role will evolve as frontier models improve their internal reasoning capabilities.

“The main takeaway from our paper is that multi-agent structure should be treated as a targeted engineering choice for specific bottlenecks, not as a default assumption that more agents automatically means better intelligence,” Tran said.


US Going Deeper Into The Red Now That The IRS Is Sharing Tax Data With ICE


from the making-America-late-on-interest-payments-again dept

The government needs more funding than ever, which is kind of hilarious when you realize the Tea Party of the Obama era was the predecessor of this Big Government version of the GOP.

The DHS can’t even get itself a budget at the moment. Sure, it will get some money thrown to it sooner or later and the administration won’t let the lack of tax revenue offsets stop it from feeding billions more into its Bigotry Machine.

But that’s not all. Behold our all-but-officially-declared war in Iran, the Department of War’s latest little excursion, which is adding billions of dollars weekly to the national deficit. After all, as right-leaning libertarians like to point out, the government doesn’t actually “make” anything. The private sector builds the bombs and missiles. And unlike TSA agents, they expect to be paid.

You know who could help this country offset some of its insane expenditures? It’s the same people we’re spending billions to remove from the country:


Immigrants accounted for more US income and generated more revenue for the government because they were, on average, over 12 percentage points more likely to be employed than the US-born population. This means that even if immigrants earn lower hourly wages, they can still account for more total income per capita than the US-born population by working cumulatively more hours. This higher employment rate was driven by the fact that immigrants were, on average, 20 percentage points more likely to be of working age. Immigrants usually arrive in the US as young adults and often leave before retirement.

More succinctly, immigrants out-punch their weight class when it comes to erasing budget deficits:

Accounting for savings on interest payments on the national debt, immigrants saved $14.5 trillion in debt over this 30-year period.

[…]

Without the contributions of immigrants, public debt at all levels would already be above 200 percent of US GDP—nearly twice the 2023 level and a threshold some analysts believe would trigger a debt crisis.

But that help is apparently no longer welcome. The Trump administration has succeeded in eliminating the firewall between the IRS and ICE, allowing ICE agents to use this data to hunt down taxpayers who work harder and pay more taxes than the white, natural-born citizens that this administration pretends make America great.


That’s going to cause even more problems for an administration that is spending far more liberally than any “liberal” it blames its current budget problems on. Here’s how that looks on the ground as Tax Day has come and gone in the United States:

By the time Tax Day rolls around every April 15, accountant María José Solís usually has more to do. More clients. More paperwork. More phones ringing, more emails and WhatsApp messages pinging.

But this year, she said, more than 550 of her regular clients have disappeared. That’s about 15 percent of her customer base at Toro Taxes, the bilingual firm in Wheaton, Maryland, that Solís runs.

There’s your anecdote, albeit one that’s being repeated around the nation. Here’s the data:

The Yale Budget Lab estimates that the IRS stands to lose between $147 billion and $479 billion over the next decade as migration to the U.S. declines, deportations increase and immigrants of various statuses disengage from the formal economy for what some experts say may be an extended period.

That estimate will likely be too low if the Trump administration continues to purge migrants at the rate it has since Trump returned to office. It will definitely be too low if another similarly bigoted GOP lawmaker succeeds him as president.


And it’s not just the losses up front. There’s money leaking out the back as well. It’s a double-dip, because migrants with ITINs (individual tax identification numbers) pay taxes for services they can’t actually access, like Social Security and Medicare. They’re actually subsidizing citizens who pay fewer taxes, work fewer hours, and commit more crimes than they do.

This nation continues to become poorer, not just in terms of financial viability, but in heart and spirit. Migrants made this nation great. Now, a bunch of ungrateful people who hate people who aren’t white are not only driving us deeper into debt, but they’re eliminating a source of income that never asked for anything more than a chance to survive.

Filed Under: bigotry, cbp, dhs, ice, immigration, irs, mass deportation, trump administration


Daily Deal: The 2026 Complete Godot Stack Development Bundle


from the good-deals-on-cool-stuff dept

Dive into Godot – a rising star in the game engine world – with the 2026 Complete Godot Stack Development Bundle. You’ll learn to create platformers, RPGs, strategy games, FPS games, and more as you master this free and open-source engine with easily expandable systems. Plus, you’ll also explore techniques for game design and game asset creation – giving you the ultimate techniques to customize your projects. It’s on sale for $25.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

Filed Under: daily deal


France Keeps Breaking the Internet to Stop Piracy, Even Though It’s Not Working


from the but-maybe-this-time-it-will! dept

Back in 2011 and 2012, one of the central technical objections that helped kill SOPA and PIPA was about DNS blocking. Engineers, internet architects, and cybersecurity experts all lined up to explain, in painstaking detail, why blocking at the DNS layer was a terrible idea. It would break the fundamental architecture of how the internet works. It would have massive collateral damage. It would undermine security protocols designed to protect users from exactly the kind of DNS manipulation that the bill proposed. And it wouldn’t even stop piracy, because anyone who actually wanted to get around DNS blocking could do so easily.

Congress, to its rare credit, actually listened to the technical experts (and widespread protests) and shelved the legislation. But the entertainment industry never gave up on the idea. They just went jurisdiction-shopping. And France, which has never met a maximalist copyright enforcement scheme it didn’t love, has been more than happy to oblige.

As recently reported by TorrentFreak, a Paris Court of Appeal validated DNS blocking orders requiring Google, Cloudflare, and Cisco to block access to pirate sites through their own DNS resolvers. This goes beyond the traditional ISP resolvers France has been ordering to block sites for years — it targets third-party resolvers, the ones that millions of people specifically choose to use because they offer better privacy, better security, and better reliability than their ISP’s default DNS.

But, of course, in France (and to the usual crew of Hollywood lobbyists), “better privacy, security, and reliability” can only mean one thing: used for piracy.


The court rejected all five appeals, and in doing so, articulated a legal principle so sweeping that it has no natural stopping point.

In this case, French pay-TV provider Canal+ went to court under Article L. 333-10 of the “French Sport Code,” which lets rightsholders request “all proportionate measures” against “any online entity in a position to help” block access to pirate sites. Canal+ argued that because users were simply switching to third-party DNS resolvers to circumvent ISP-level blocking, those resolvers should be conscripted into the blocking regime too.

Cloudflare and Cisco pushed back, arguing that their DNS resolvers serve a “neutral and passive function” — they translate domain names into IP addresses and that’s it. They compared their role to a phone book. The court’s response boiled down to: we don’t care.

The DNS resolution service allows its users, via the translation of a domain name into an IP address, to access websites on which sports competitions are broadcast in violation of rights-holders’ rights, and in particular to circumvent the blocking of those sites by ISPs.

The court found that the “neutral and passive” nature of DNS resolvers is “simply irrelevant to Article L. 333-10.” The law isn’t about liability at all — it only cares whether a service can help block access to pirate sites, which DNS resolvers clearly can. If you are technically capable of blocking access, you must.

Google, meanwhile, tried a different argument: that DNS blocking through third-party resolvers isn’t effective because users can just switch to a VPN or yet another resolver. The court wasn’t moved by that either:

Any filtering measure can be circumvented, and this possibility does not render the measures in question ineffective.

As long as DNS blocking stops some subset of users from reaching pirate sites, the court ruled, it’s “proportionate.” Under that line of thinking, any measure that inconveniences even a fraction of would-be pirates is legally justified, no matter how much collateral damage it causes for everyone else.

And if you think that principle has any limit, Canal+ has made it quite clear that they don’t think it does:

Canal+ said in a statement that the rulings are “more than a victory,” forming part of “a global approach that will be reinforced by the progressive deployment of complementary measures, including IP blocking.”

Canal+ has already been getting courts to order VPN providers to block as well. So now we have ISP DNS blocking mandated, third-party DNS resolver blocking mandated, VPN blocking mandated — and, per the TorrentFreak article, direct automated IP address blocking is coming too. They will not stop until the entire internet is broken.

Each step reaches further down the internet stack, breaks more of the internet for more people, and stops fewer actual pirates, because the people who are determined to pirate content are always one technical maneuver ahead. The people who get caught in the collateral damage are ordinary users who happen to use Cloudflare’s 1.1.1.1 or Google’s 8.8.8.8 for perfectly legitimate reasons like speed, reliability, and privacy.

Cisco, rather than comply with the original order, simply pulled its OpenDNS service out of France entirely. That’s the kind of collateral damage we’re talking about. French users who relied on OpenDNS for entirely lawful purposes completely lost access to the service. Because a copyright holder decided that the DNS layer was the right place to play whack-a-mole with pirate sites.

When Cisco argued on appeal that implementing geo-targeted DNS blocking would require 64 person-weeks of engineering work, the court waved it off, saying the estimate was “not supported by any objective evidence” and pointing out that Cisco already offers DNS filtering to enterprise customers. The fact that enterprise DNS filtering for corporate networks is a fundamentally different thing than mass geo-targeted blocking of domains at the resolver level for an entire country’s users apparently did not register as a meaningful distinction.

The court’s core reasoning — that any entity technically capable of blocking must do so, that circumvention doesn’t make blocking disproportionate, and that the “neutral and passive” function of an intermediary is irrelevant — creates a legal framework that can reach basically anything. If a DNS resolver can be conscripted because it’s “in a position to help,” what about browsers? What about operating systems? What about CDNs, or cloud hosting providers, or certificate authorities? The logic has no brake pedal. Every layer of the internet stack is, in some sense, “in a position to help” block access to content. The question the court’s reasoning cannot answer is: where does it end?

Under this reasoning, what’s to stop a rightsholder from arguing that browsers should block pirate URLs directly? Or that operating systems should refuse to resolve them at all?

That seems bad!

Of course, this kind of maximalist copyright enforcement is something of a French specialty. This is the same country that brought us HADOPI, the graduated response agency that cost French taxpayers €82 million over a decade while imposing a grand total of roughly €87,000 in fines. A staggering return on investment — if the goal was to light money on fire while accomplishing nothing. France has also been at the forefront of copyright exceptionalism that risks undermining the EU legal system more broadly, pushing interpretations of copyright law so aggressive that they threaten to distort the legal frameworks of neighboring countries.

France keeps doing the same thing over and over again: spend enormous sums, conscript more and more intermediaries, break more and more of the internet’s infrastructure, accomplish almost nothing in terms of actually reducing piracy, and then conclude that what’s really needed is… more of the same, but harder. The entertainment industry’s refusal to learn from twenty years of evidence that enforcement-maximalism doesn’t work is genuinely remarkable. Every study and every natural experiment shows the same thing: the most effective anti-piracy tool ever invented is convenient, reasonably priced legal access to content. But that requires adapting your business model, and it’s apparently much more satisfying to get courts to break the internet for you instead.

The ruling’s real danger is the template it sets. Other countries with similar legal frameworks will look at this appeals court validation and think: we can do that too. The “any entity in a position to help” standard, combined with the “doesn’t have to be perfectly effective” standard, combined with the “we don’t care about your neutral role in the architecture” standard, adds up to a legal toolkit for conscripting nearly any internet infrastructure provider into a copyright enforcement apparatus. And the costs get externalized onto those providers (and their users), while the rightsholders collect the benefits.

The engineers who fought SOPA warned about exactly this: DNS blocking breaks things, creates collateral damage, pushes enforcement into layers of the stack never designed for it — and doesn’t actually stop piracy, because the actual pirates just route around it while everyone else suffers. France apparently decided all of those concerns are, to quote the court, “simply irrelevant.” And now they’re moving on to IP blocking.

At some point, you run out of layers of the internet to break. But apparently we’re going to have to find out where that point is the hard way.

Filed Under: copyright, dns blocking, france, sopa

Companies: canal plus, cisco, cloudflare, google


Tech

New Gas-Powered Data Centers Could Emit More Greenhouse Gases Than Entire Nations

An anonymous reader quotes a report from Wired: New gas projects linked to just 11 data center campuses around the US have the potential to create more greenhouse gases than the country of Morocco emitted in 2024. Emissions estimates from air permit documents examined by WIRED show that these natural gas projects — which are being built to power data centers to serve some of the US’s most powerful AI companies, including OpenAI, Meta, Microsoft, and xAI — have the potential to emit more than 129 million tons of greenhouse gases per year. As tech companies race to secure massive power deals to build out hundreds of data centers across the country, these projects represent just the tip of the iceberg when it comes to the potential climate cost of the AI boom.

The infrastructure on this list of large natural gas projects reviewed by WIRED is being developed to largely bypass the grid and provide power solely for data centers, a trend known as behind-the-meter power. As data center developers face long waits for connections to traditional utilities, and amid mounting public resistance to the possibility of higher energy bills, making their own power is becoming an increasingly popular option. These projects have either been announced or are under construction, with companies already submitting air permit application materials to state agencies. […] The emissions projections for the xAI and Microsoft projects, and all the others on WIRED’s list, were pulled directly from publicly available air permit documents in state databases as well as public air permit materials collected by both Cleanview and Oil and Gas Watch, a database maintained by the Environmental Integrity Project, an environmental enforcement nonprofit. Actual greenhouse gas emissions from power plants are usually lower than what’s on their air permits. Air permit modeling is based on the scenario of a power plant constantly running at full capacity. That’s rarely the reality for grid-connected power plants, as turbines go offline for maintenance or adjust to the ebbs and flows of customer demand.

“Permitted emission numbers represent a theoretical, conservative scenario, not the actual projected emissions,” Alex Schott, the director of communications at Williams Companies, an oil and gas company that is building out three behind-the-meter power plants in Ohio for Meta, told WIRED in an email. Internal modeling done by the company, Schott added, shows that actual emissions could be “potentially two-thirds less than what’s on paper.” The projections involved, however, are still substantial. Even if the actual emissions from these power plants end up being half of the emissions numbers on the permits, they still could create more greenhouse gas emissions than the country of Norway emitted in 2024. This number is, according to the EPA, equivalent to the emissions from more than 153 average-sized natural gas plants. (WIRED’s analysis does not include emissions from backup generators and turbines on the data center campuses themselves, which create smaller amounts of emissions.)
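The figures above are easy to sanity-check with back-of-the-envelope arithmetic. The permitted total and the "two-thirds less" estimate come from the article; the per-plant figure is simply derived from the article's own "more than 153 plants" comparison:

```python
# Back-of-the-envelope check on the article's emissions figures.
permitted_tons = 129e6                # permitted CO2e per year across the 11 campuses
williams_estimate = permitted_tons * (1 - 2/3)  # "potentially two-thirds less than what's on paper"
half_scenario = permitted_tons / 2    # WIRED's conservative halving, still above Norway's 2024 total

# Implied size of one "average-sized natural gas plant" in the EPA comparison:
per_plant = half_scenario / 153       # roughly 0.42 million tons per plant per year

print(f"Williams estimate: {williams_estimate/1e6:.0f}M tons")
print(f"Half scenario:     {half_scenario/1e6:.1f}M tons")
```

Even the most optimistic scenario in the piece, roughly 43 million tons per year, is a country-scale quantity of CO2e.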
Energy researcher Jon Koomey says the data center boom has created a shortage of the most efficient gas turbines, pushing some developers toward less efficient models that would need to run longer and produce more emissions. “[Data center operators’] belief is that the value being delivered by the servers is much, much more than the cost of running these inefficient power plants all the time,” he said.

Michael Thomas, the founder of clean energy research firm Cleanview, has been tracking gas permits for data centers across the country. He calls behind-the-meter power “a crazy acceleration of emissions.” He added: “It’s almost like we thought we were on the downside of the Industrial Revolution, retiring coal and gas, and now we have a new hump where we’re going to rise. That terrifies me in a lot of ways.”


Tech

FCC’s Foreign-Made Router Ban Expands To Portable Wi-Fi Hotspot Devices

The FCC has expanded its foreign-made router ban to also cover consumer Wi-Fi hotspots and LTE/5G home-internet devices, though existing products and phones with hotspot features are not affected. PCMag reports: On Wednesday, the FCC updated its FAQ on the ban, clarifying which consumer-grade routers are subject to the restrictions. Portable Wi-Fi hotspots are usually considered a separate category from Wi-Fi home routers. Both offer internet access, but portable Wi-Fi hotspots use a SIM card to connect to a cellular network rather than an Ethernet cable inside a residence. However, the FCC’s FAQ now specifies that “consumer-grade portable or mobile MiFi Wi-Fi or hotspot devices for residential use” are covered under the ban.

The ban also affects “LTE/5G CPE devices for residential use,” which are installed for fixed wireless access and use a carrier’s cellular network to deliver home internet. The FCC didn’t immediately respond to a request for comment about the changes. In the meantime, the FAQ reiterates that the foreign-made router ban only applies to consumer-grade devices, not enterprise products. The document also notes that mobile phones with hotspot features remain outside the restrictions. In addition, the ban only affects new router models that vendors plan to sell, not existing models, as T-Mobile emphasized to PCMag.
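Condensed, the FAQ's coverage rules amount to a few boolean checks. A sketch (hypothetical helper and category names, summarizing the article; the FCC's actual guidance is the FAQ itself):

```python
def covered_by_ban(category, consumer_grade, is_new_model):
    """Illustrative summary of the ban's scope as described above:
    consumer-grade routers, portable hotspots, and LTE/5G CPE for
    residential use, applying only to new models. Not FCC guidance."""
    covered_categories = {"wifi_router", "portable_hotspot", "lte_5g_cpe"}
    return (consumer_grade
            and is_new_model
            and category in covered_categories)

# Portable hotspots are now explicitly in scope...
assert covered_by_ban("portable_hotspot", consumer_grade=True, is_new_model=True)
# ...while phones with hotspot features, enterprise gear, and existing
# models remain outside the restrictions.
assert not covered_by_ban("phone_with_hotspot", consumer_grade=True, is_new_model=True)
assert not covered_by_ban("wifi_router", consumer_grade=False, is_new_model=True)
assert not covered_by_ban("wifi_router", consumer_grade=True, is_new_model=False)
```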


Tech

Assassin’s Creed Black Flag Resynced ‘delivers a no-compromise experience with advanced ray tracing performance’ on PS5 Pro, along with the latest PSSR 2 tech


  • Assassin’s Creed Black Flag Resynced will feature PSSR 2 on PS5 Pro
  • Ubisoft confirms the game will have several modes on PS5 and PS5 Pro
  • The game will also use the latest version of the studio’s Anvil engine

Ubisoft has confirmed Assassin’s Creed Black Flag Resynced will feature Sony’s upgraded PSSR (PlayStation Spectral Super Resolution) AI upscaling technology.

The remake of the 2013 Assassin’s Creed pirate adventure has finally been revealed, and a new PlayStation Blog post following the world premiere trailer has outlined what players can expect from the PS5 and PS5 Pro versions.


Tech

Microsoft will offer voluntary retirement to thousands of employees in a first for tech giant


Time to hang it up? Microsoft will be giving some employees that chance. (GeekWire Photo / Todd Bishop)

Microsoft is offering a one-time voluntary retirement program for the first time in its 51-year history, giving thousands of long-serving U.S. employees a chance to leave with a financial payout and extended healthcare as it works to control costs amid a massive buildup in AI infrastructure.

An estimated 7% of Microsoft’s 125,000-person U.S. workforce, or about 8,750 employees, would be eligible based on a formula that takes into account their years at the company and their age.

It’s a highly unusual move in the tech world. Voluntary retirement programs are common in older industries, such as telecom and manufacturing, but the largest tech companies have instead turned to layoffs, stricter performance reviews, and return-to-office policies to thin their ranks. 

Microsoft itself laid off more than 15,000 employees last year and began requiring workers in the Seattle region to return to the office three days a week in February.

The retirement program was outlined Thursday in a memo to employees from Chief People Officer Amy Coleman, who described it as a one-time offering for long-serving workers.

The program is open to U.S. employees at Level 67 (the equivalent of senior director) or below whose years of service plus age total 70 or more, excluding those on sales incentive plans. Eligible employees will be notified May 7 and will have 30 days to decide.
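The eligibility formula reduces to a simple rule of 70. A minimal sketch of the criteria as described in the memo (illustrative only; the actual program terms have not been published in full):

```python
def eligible_for_retirement(level, age, years_of_service, on_sales_incentive_plan):
    """Sketch of the rule described in Coleman's memo: U.S. employees
    at Level 67 (senior director) or below, not on a sales incentive
    plan, whose age plus years of service total at least 70."""
    return (level <= 67
            and not on_sales_incentive_plan
            and age + years_of_service >= 70)

# A 55-year-old senior engineer with 20 years at the company qualifies...
assert eligible_for_retirement(level=65, age=55, years_of_service=20,
                               on_sales_incentive_plan=False)
# ...but the same tenure on a sales incentive plan does not.
assert not eligible_for_retirement(level=65, age=55, years_of_service=20,
                                   on_sales_incentive_plan=True)
```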

“Many of these employees have spent years, and in some cases, decades, shaping Microsoft into what it is today,” Coleman wrote. “Our hope is that this program gives those eligible the choice to take that next step on their own terms, with generous company support.”

In the same memo, Coleman outlined changes to Microsoft’s compensation system, reducing the number of pay levels from nine to five. It’s also decoupling stock awards from bonuses, giving managers flexibility to use stock to reward long-term contributors regardless of their latest performance rating.

Microsoft isn’t providing specific details of the retirement package yet, saying eligible employees and their managers will receive more information on May 7. Details of the healthcare component will be significant for employees who are not yet old enough to qualify for Medicare at age 65.

There are not expected to be any restrictions on future employment for those who take the deal. 

The program would take effect in Microsoft’s fiscal fourth quarter, and CFO Amy Hood is expected to discuss it on the company’s earnings call next week.


Tech

Fascinating Look at Tandy’s Gobble Man, a Handheld Maze Game That Lit Up Pockets in 1983


Tandy Gobble Man Handheld Game 1983
Back in the early 1980s, Radio Shack sold a variety of gadgets that caught the eye of both curious children and tech enthusiasts. Among them was Gobble Man, a small handheld that put a maze chase right in your palm years before dedicated portable consoles arrived. Bandai first released it in Japan in 1981 as Packri Monster. Tandy then scooped it up, licensing the design for its US stores and selling it as Gobble Man in 1983. To add to the confusion, Tandy sold the exact same unit under the titles Hungry Monster and Ogre Eater.



The unit ran on four AA batteries. A vacuum fluorescent display, essentially a tiny glowing screen, kept the game bright; even in a dark room or on a car trip, you could see every wall and dot in the maze clearly. Inside was a Hitachi HD38800, a microcontroller programmed to run this game and nothing else.



The unit itself was a rectangular plastic case about the size of a large paperback book. A four-way joystick on the front let you direct the yellow monster through the winding courses; the stick used a dual-pivot mechanism that required a firm push to register each turn. The monster kept moving in the same direction until the player changed course or it collided with a wall. The controls also included a start button and a power switch, along with a small orange indicator that lit up when the gadget was on.

The sound came from a tiny built-in speaker that played beeps and rudimentary melodies at one fixed volume, alerting you when something happened in the game. There was no volume control, so the noise was part of the experience whether you liked it or not. The gameplay took place on a single fixed maze dotted with green food pellets. The goal was simple: steer the monster to eat every pellet before its pursuer caught up; if it did, game over. Red power food would sometimes materialize in specific spots. Eating one turned the tables: the pursuer reversed direction and flashed briefly, and for a short window you could chase it down for bonus points.

The difficulty rose with each round: more enemies joined the chase and speeds increased, making the maze feel like it was closing in when the pressure was on. Your score would skyrocket if you timed your power snacks perfectly and cleared the board without a single mistake. Round after round, it just kept going until the batteries died or your thumbs gave out. Anyone who picked up Gobble Man in 1983 would immediately feel as if they understood the game because the rules were so familiar.

Gobble Man was never really trying to compete with the big boys: Pac-Man in the arcade, home consoles, and coin-op machines. It was simply a nice little distraction to keep you engaged on the go, wherever you were. Six years later, Nintendo released the Game Boy, which went on to define portable gaming. But before that, Tandy’s Gobble Man proved the concept worked and left plenty of players wanting more.

