High energy costs are forcing AI workloads out of the UK
Cheaper electricity is becoming the deciding factor for AI deployment
The US infrastructure advantage is accelerating the shift of AI workloads
British businesses are paying more than four times as much for electricity as their American counterparts, and the AI industry is taking notice.
According to CUDO Compute, 20% of UK firms have already moved AI workloads out of the country due to high power costs.
The gap between where businesses want to run AI and where they can actually run it is widening rapidly.
Why British AI firms are looking overseas for cheaper power
“What we are seeing is a growing tension between where businesses want to run AI and where they actually can,” said Matt Hawkins, CEO of CUDO Compute.
“If it is cheaper or easier to run workloads elsewhere, they will move, regardless of sovereignty ambitions.”
A third of UK organisations say energy costs are limiting their ability to scale AI operations, according to the survey of over 700 senior AI decision-makers.
When asked which markets look most attractive for new AI cluster capacity, 72% of UK respondents pointed to the United States.
India followed at 62%, Eastern Europe at 58%, and China at 55%. Western Europe and the Nordics scored lower at 45% and 44%, respectively.
The message is clear: cost and performance still outweigh sovereignty for 43% of organisations when deciding where to deploy their AI tools.
While 46% of UK organisations say geopolitical instability is pushing them to keep workloads within home markets, the economic pressure to relocate is intense.
Nearly one in three UK firms say they are actively considering moving workloads overseas due to geopolitical pressures.
Almost half say data sovereignty, regulatory compliance, or national security concerns are shaping their AI deployment strategy.
Yet 32% of AI-first businesses say they would consider moving workloads overseas due to power costs, compared to 18% of traditional enterprises.
The businesses running the most compute-intensive workloads are also the most likely to look beyond the UK as economic conditions tighten.
The UK has ambitions and policies for AI sovereignty, but there is a clear disconnect between those goals and what is actually available.
Organisations want to build in the UK, but they need the infrastructure to do so. The countries that solve this first will shape the future of AI, and the UK still has a window to lead, but it needs to act quickly.
The research exposes a hard truth for British policymakers. Talk of AI sovereignty means nothing without the power infrastructure to support it.
The United States, with its lower energy prices and aggressive buildout of AI-ready data centres, is already reaping the benefits of the UK’s inaction.
Every week that passes without meaningful progress on energy costs and grid capacity, more British AI workloads will migrate overseas.
Enterprise teams building multi-agent AI systems may be paying a compute premium for gains that don’t hold up under equal-budget conditions. New Stanford University research finds that single-agent systems match or outperform multi-agent architectures on complex reasoning tasks when both are given the same thinking token budget.
However, multi-agent systems come with the added baggage of computational overhead. Because they typically use longer reasoning traces and multiple interactions, it is often unclear whether their reported gains stem from architectural advantages or simply from consuming more resources.
Their experiments show that in most cases, single-agent systems match or outperform multi-agent systems when compute is equal. Multi-agent systems gain a competitive edge when a single agent’s context becomes too long or corrupted.
In practice, this means that a single-agent model with an adequate thinking budget can deliver more efficient, reliable, and cost-effective multi-hop reasoning. Engineering teams should reserve multi-agent systems for scenarios where single agents hit a performance ceiling.
Understanding the single versus multi-agent divide
Multi-agent frameworks, such as planner agents, role-playing systems, or debate swarms, break down a problem by having multiple models operate on partial contexts. These components communicate with each other by passing their answers around.
While multi-agent solutions show strong empirical performance, comparing them to single-agent baselines is often an imprecise measurement. Comparisons are heavily confounded by differences in test-time computation. Multi-agent setups require multiple agent interactions and generate longer reasoning traces, meaning they consume significantly more tokens.
Single-agent systems (SAS) vs multi-agent systems (MAS)
Consequently, when a multi-agent system reports higher accuracy, it is difficult to determine if the gains stem from better architecture design or from spending extra compute.
Recent studies show that when the compute budget is fixed, elaborate multi-agent strategies frequently underperform strong single-agent baselines. However, those studies are mostly broad comparisons that don’t account for nuances such as different multi-agent architectures or the distinction between prompt and reasoning tokens.
“A central point of our paper is that many comparisons between single-agent systems (SAS) and multi-agent systems (MAS) are not apples-to-apples,” paper authors Dat Tran and Douwe Kiela told VentureBeat. “MAS often get more effective test-time computation through extra calls, longer traces, or more coordination steps.”
Revisiting the multi-agent challenge under strict budgets
To create a fair comparison, the Stanford researchers set a strict “thinking token” budget. This metric controls the total number of tokens used exclusively for intermediate reasoning, excluding the initial prompt and the final output.
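To make the metric concrete, here is a minimal sketch, with made-up numbers, of how a thinking-token budget can be tallied across the calls an architecture makes; the field names and figures are illustrative, not the paper’s.

```python
# Illustrative only: tally thinking tokens across the calls an architecture
# makes. Numbers and field names are made up, not taken from the paper.

def thinking_tokens_used(calls):
    """Sum intermediate-reasoning tokens over every model call; prompt and
    final-output tokens are deliberately excluded from the budget."""
    return sum(call["reasoning_tokens"] for call in calls)

# A single-agent run is one call; a multi-agent run is several.
single_agent_run = [
    {"prompt_tokens": 900, "reasoning_tokens": 4000, "output_tokens": 150},
]
multi_agent_run = [
    {"prompt_tokens": 400, "reasoning_tokens": 1500, "output_tokens": 200},  # planner
    {"prompt_tokens": 600, "reasoning_tokens": 1600, "output_tokens": 200},  # worker
    {"prompt_tokens": 700, "reasoning_tokens": 1500, "output_tokens": 150},  # verifier
]

BUDGET = 4500  # total thinking tokens allowed per question
for name, run in [("single agent", single_agent_run), ("multi agent", multi_agent_run)]:
    used = thinking_tokens_used(run)
    status = "within" if used <= BUDGET else "over"
    print(f"{name}: {used} thinking tokens ({status} the {BUDGET}-token budget)")
```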
The study evaluated single- and multi-agent systems on multi-hop reasoning tasks, meaning questions that require connecting multiple pieces of disparate information to reach an answer.
During their experiments, the researchers noticed that single-agent setups sometimes stop their internal reasoning prematurely, leaving available compute budget unspent. To counter this, they introduced a technique called SAS-L (single-agent system with longer thinking).
Rather than jumping to multi-agent orchestration when a model gives up early, the researchers suggest a simple prompt-and-budgeting change.
“The engineering idea is simple,” Tran and Kiela said. “First, restructure the single-agent prompt so the model is explicitly encouraged to spend its available reasoning budget on pre-answer analysis.”
By instructing the model to explicitly identify ambiguities, list candidate interpretations, and test alternatives before committing to a final answer, developers can recover the benefits of collaboration inside a single-agent setup.
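As a rough illustration of that kind of restructuring, a SAS-L-style prompt might look something like the sketch below. This is not the authors’ actual prompt; the wording, budget figure, and example question are all placeholders.

```python
# A rough sketch of the prompt restructuring described above, in the spirit
# of the SAS-L idea. Not the authors' actual prompt; everything here is
# illustrative.

SINGLE_AGENT_LONG_THINKING_PROMPT = """\
You have a reasoning budget of roughly {budget} thinking tokens. Use it.
Before committing to a final answer:
1. Identify any ambiguities in the question and list candidate interpretations.
2. For each interpretation, work out the intermediate facts needed to answer it.
3. Test alternative answers against those facts and note which ones fail and why.
Only then give your final answer on one line, prefixed with "Answer:".

Question: {question}
"""

def build_prompt(question: str, budget: int = 4000) -> str:
    """Assemble the long-thinking prompt for a single-agent run."""
    return SINGLE_AGENT_LONG_THINKING_PROMPT.format(budget=budget, question=question)

if __name__ == "__main__":
    # Hypothetical multi-hop question, just to show the template filled in.
    print(build_prompt("Which 1998 co-author later founded the lab that built system X?"))
```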
The results of their experiments confirm that a single agent is the strongest default architecture for multi-hop reasoning tasks. It produces the highest accuracy answers while consuming fewer reasoning tokens. When paired with specific models like Google’s Gemini 2.5, the longer-thinking variant produces even better aggregate performance.
The researchers rely on a concept called “Data Processing Inequality” to explain why a single agent outperforms a swarm. Multi-agent frameworks introduce inherent communication bottlenecks. Every time information is summarized and handed off between different agents, there is a risk of data loss.
In contrast, a single agent reasoning within one continuous context avoids this fragmentation. It retains access to the richest available representation of the task and is thus more information-efficient under a fixed budget.
The authors also note that enterprises often overlook the secondary costs of multi-agent systems.
“What enterprises often underestimate is that orchestration is not free,” they said. “Every additional agent introduces communication overhead, more intermediate text, more opportunities for lossy summarization, and more places for errors to compound.”
On the other hand, they discovered that multi-agent orchestration is superior when a single agent’s environment gets messy. If an enterprise application must handle highly degraded contexts, such as noisy data, long inputs filled with distractors, or corrupted information, a single agent struggles. In these scenarios, the structured filtering, decomposition, and verification of a multi-agent system can recover relevant information more reliably.
The study also warns about hidden evaluation traps that falsely inflate multi-agent performance. Relying purely on API-reported token counts heavily distorts how much computation an architecture is actually spending. The researchers found these accounting artifacts when testing models like Gemini 2.5, proving this is an active issue for enterprise applications today.
“For API models, the situation is trickier because budget accounting can be opaque,” the authors said. To evaluate architectures reliably, they advise developers to “log everything, measure the visible reasoning traces where available, use provider-reported reasoning-token counts when exposed, and treat those numbers cautiously.”
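In that spirit, a cautious per-call accounting routine might look like the following sketch. The response fields are hypothetical stand-ins, since real providers expose different, and sometimes incomplete, usage metadata.

```python
# A cautious per-call accounting sketch. The response fields below are
# hypothetical stand-ins; real providers expose different (and sometimes
# incomplete) usage metadata, so treat reported reasoning counts as approximate.

import json
import time

def log_call(logfile, agent_name, response):
    usage = response.get("usage", {})
    record = {
        "timestamp": time.time(),
        "agent": agent_name,
        "prompt_tokens": usage.get("prompt_tokens"),
        "output_tokens": usage.get("output_tokens"),
        # May be missing, or folded into output tokens, depending on provider.
        "reported_reasoning_tokens": usage.get("reasoning_tokens"),
        # Fallback proxy: size of the visible reasoning trace, when exposed.
        "visible_trace_chars": len(response.get("reasoning_trace") or ""),
    }
    logfile.write(json.dumps(record) + "\n")

if __name__ == "__main__":
    fake_response = {
        "usage": {"prompt_tokens": 512, "output_tokens": 180, "reasoning_tokens": 2048},
        "reasoning_trace": "step 1 ... step 2 ...",
    }
    with open("agent_calls.jsonl", "a") as f:
        log_call(f, "planner", fake_response)
```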
What it means for developers
If a single-agent system matches the performance of multiple agents under equal reasoning budgets, it wins on total cost of ownership by offering fewer model calls, lower latency, and simpler debugging. Tran and Kiela warn that without this baseline, “some enterprises may be paying a large ‘swarm tax’ for architectures whose apparent advantage is really coming from spending more computation rather than reasoning more effectively.”
Another way to frame the decision boundary is to ask not how complex the overall task is, but where the exact bottleneck lies.
“If it is mainly reasoning depth, SAS is often enough. If it is context fragmentation or degradation, MAS becomes more defensible,” Tran said.
Engineering teams should stay with a single agent when a task can be handled within one coherent context window. Multi-agent systems become necessary when an application handles highly degraded contexts.
Looking ahead, multi-agent frameworks will not disappear, but their role will evolve as frontier models improve their internal reasoning capabilities.
“The main takeaway from our paper is that multi-agent structure should be treated as a targeted engineering choice for specific bottlenecks, not as a default assumption that more agents automatically means better intelligence,” Tran said.
from the making-America-late-on-interest-payments-again dept
The government needs more funding than ever, which is kind of hilarious when you realize the Tea Party of the Obama era was the predecessor of this Big Government version of the GOP.
The DHS can’t even get itself a budget at the moment. Sure, it will get some money thrown to it sooner or later and the administration won’t let the lack of tax revenue offsets stop it from feeding billions more into its Bigotry Machine.
But that’s not all. Behold our all-but-officially-declared war in Iran, currently headed by the Department of Defense/War/Little Excursion, which is adding billions of dollars weekly to the national deficit. After all, as right-leaning libertarians like to point out, the government doesn’t actually “make” anything. The private sector builds the bombs and missiles. And unlike TSA agents, they expect to be paid.
Immigrants accounted for more US income and generated more revenue for the government because they were, on average, over 12 percentage points more likely to be employed than the US-born population. This means that even if immigrants earn lower hourly wages, they can still account for more total income per capita than the US-born population by working cumulatively more hours. This higher employment rate was driven by the fact that immigrants were, on average, 20 percentage points more likely to be of working age. Immigrants usually arrive in the US as young adults and often leave before retirement.
More succinctly, immigrants out-punch their weight class when it comes to erasing budget deficits:
Accounting for savings on interest payments on the national debt, immigrants saved $14.5 trillion in debt over this 30-year period.
[…]
Without the contributions of immigrants, public debt at all levels would already be above 200 percent of US GDP—nearly twice the 2023 level and a threshold some analysts believe would trigger a debt crisis.
But that help is apparently no longer welcome. The Trump administration has succeeded in eliminating the firewall between the IRS and ICE, allowing ICE agents to use this data to hunt down taxpayers who work harder and pay more taxes than the white, natural-born citizens that this administration pretends make America great.
That’s going to cause even more problems for an administration that is spending far more liberally than any “liberal” it blames its current budget problems on. Here’s how that looks on the ground as Tax Day has come and gone in the United States:
By the time Tax Day rolls around every April 15, accountant María José Solís usually has more to do. More clients. More paperwork. More phones ringing, more emails and WhatsApp messages pinging.
But this year, she said, more than 550 of her regular clients have disappeared. That’s about 15 percent of her customer base at Toro Taxes, the bilingual firm in Wheaton, Maryland, that Solís runs.
There’s your anecdote, albeit one that’s being repeated around the nation. Here’s the data:
The Yale Budget Lab estimates that the IRS stands to lose between $147 billion and $479 billion over the next decade as migration to the U.S. declines, deportations increase and immigrants of various statuses disengage from the formal economy for what some experts say may be an extended period.
That estimate will likely prove too low if the Trump administration continues to purge migrants at the rate it has since Trump returned to office. It will definitely be too low if another similarly bigoted GOP lawmaker succeeds him as president.
And it’s not just the losses up front. There’s money leaking out the back as well. It’s a double-dip, because migrants with ITINs (individual taxpayer identification numbers) pay taxes for services they can’t actually access, like Social Security and Medicare. They’re actually subsidizing citizens who pay fewer taxes, work fewer hours, and commit more crimes than they do.
This nation continues to become poorer, not just in terms of financial viability, but in heart and spirit. Migrants made this nation great. Now, a bunch of ungrateful people who hate people who aren’t white are not only driving us deeper into debt, but they’re eliminating a source of income that never asked for anything more than a chance to survive.
Dive into Godot – a rising star in the game engine world – with the 2026 Complete Godot Stack Development Bundle. You’ll learn to create platformers, RPGs, strategy games, FPS games, and more as you master this free and open-source engine with easily expandable systems. Plus, you’ll explore techniques for game design and game asset creation, giving you the tools to customize your projects. It’s on sale for $25.
Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Back in 2011 and 2012, one of the central technical objections that helped kill SOPA and PIPA was about DNS blocking. Engineers, internet architects, and cybersecurity experts all lined up to explain, in painstaking detail, why blocking at the DNS layer was a terrible idea. It would break the fundamental architecture of how the internet works. It would have massive collateral damage. It would undermine security protocols designed to protect users from exactly the kind of DNS manipulation that the bill proposed. And it wouldn’t even stop piracy, because anyone who actually wanted to get around DNS blocking could do so easily.
Congress, to its rare credit, actually listened to the technical experts (and widespread protests) and shelved the legislation. But the entertainment industry never gave up on the idea. They just went jurisdiction-shopping. And France, which has never met a maximalist copyright enforcement scheme it didn’t love, has been more than happy to oblige.
As recently reported by TorrentFreak, a Paris Court of Appeal validated DNS blocking orders requiring Google, Cloudflare, and Cisco to block access to pirate sites through their own DNS resolvers. This goes beyond the ISP resolvers France has been ordering to block sites for years; it targets third-party resolvers, the ones millions of people specifically choose because they offer better privacy, security, and reliability than their ISP’s default DNS.
But, of course, in France (and to the usual crew of Hollywood lobbyists), “better privacy, security, and reliability” can only mean one thing: used for piracy.
The court rejected all five appeals, and in doing so, articulated a legal principle so sweeping that it has no natural stopping point.
In this case, French pay-TV provider Canal+ went to court under Article L. 333-10 of the “French Sport Code,” which lets rightsholders request “all proportionate measures” against “any online entity in a position to help” block access to pirate sites. Canal+ argued that because users were simply switching to third-party DNS resolvers to circumvent ISP-level blocking, those resolvers should be conscripted into the blocking regime too.
Cloudflare and Cisco pushed back, arguing that their DNS resolvers serve a “neutral and passive function” — they translate domain names into IP addresses and that’s it. They compared their role to a phone book. The court’s response boiled down to: we don’t care.
The DNS resolution service allows its users, via the translation of a domain name into an IP address, to access websites on which sports competitions are broadcast in violation of rights-holders’ rights, and in particular to circumvent the blocking of those sites by ISPs.
The court found that the “neutral and passive” nature of DNS resolvers is “simply irrelevant to Article L. 333-10.” The law isn’t about liability at all — it only cares whether a service can help block access to pirate sites, which DNS resolvers clearly can. If you are technically capable of blocking access, you must.
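For a sense of how thin that “phone book” function actually is, here is a minimal sketch using the third-party dnspython package, querying a public resolver such as Cloudflare’s 1.1.1.1 directly instead of the ISP’s default: a name goes in, an IP address comes out.

```python
# Minimal illustration of what a DNS resolver does: translate a domain name
# into an IP address. Requires the dnspython package (pip install dnspython).

import dns.resolver

resolver = dns.resolver.Resolver()
resolver.nameservers = ["1.1.1.1"]  # query Cloudflare's public resolver directly

answer = resolver.resolve("example.com", "A")
for record in answer:
    print(record.address)  # the IP address the name maps to
```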
Google, meanwhile, tried a different argument: that DNS blocking through third-party resolvers isn’t effective because users can just switch to a VPN or yet another resolver. The court wasn’t moved by that either:
Any filtering measure can be circumvented, and this possibility does not render the measures in question ineffective.
As long as DNS blocking stops some subset of users from reaching pirate sites, the court ruled, it’s “proportionate.” Under that line of thinking, any measure that inconveniences even a fraction of would-be pirates is legally justified, no matter how much collateral damage it causes for everyone else.
And if you think that principle has any limit, Canal+ has made it quite clear that they don’t think it does:
Canal+ said in a statement that the rulings are “more than a victory,” forming part of “a global approach that will be reinforced by the progressive deployment of complementary measures, including IP blocking.”
Canal+ has already been getting courts to order VPN providers to block as well. So now we have ISP DNS blocking mandated, third-party DNS resolver blocking mandated, VPN blocking mandated — and, per the TorrentFreak article, direct automated IP address blocking is coming too. They will not stop until the entire internet is broken.
Each step reaches further down the internet stack, breaks more of the internet for more people, and stops fewer actual pirates, because the people who are determined to pirate content are always one technical maneuver ahead. The people who get caught in the collateral damage are ordinary users who happen to use Cloudflare’s 1.1.1.1 or Google’s 8.8.8.8 for perfectly legitimate reasons like speed, reliability, and privacy.
Cisco, rather than comply with the original order, simply pulled its OpenDNS service out of France entirely. That’s the kind of collateral damage we’re talking about. French users who relied on OpenDNS for entirely lawful purposes completely lost access to the service. Because a copyright holder decided that the DNS layer was the right place to play whack-a-mole with pirate sites.
When Cisco argued on appeal that implementing geo-targeted DNS blocking would require 64 person-weeks of engineering work, the court waved it off, saying the estimate was “not supported by any objective evidence” and pointing out that Cisco already offers DNS filtering to enterprise customers. The fact that enterprise DNS filtering for corporate networks is a fundamentally different thing than mass geo-targeted blocking of domains at the resolver level for an entire country’s users apparently did not register as a meaningful distinction.
The court’s core reasoning — that any entity technically capable of blocking must do so, that circumvention doesn’t make blocking disproportionate, and that the “neutral and passive” function of an intermediary is irrelevant — creates a legal framework that can reach basically anything. If a DNS resolver can be conscripted because it’s “in a position to help,” what about browsers? What about operating systems? What about CDNs, or cloud hosting providers, or certificate authorities? The logic has no brake pedal. Every layer of the internet stack is, in some sense, “in a position to help” block access to content. The question the court’s reasoning cannot answer is: where does it end?
Under this reasoning, what’s to stop a rightsholder from arguing that browsers should block pirate URLs directly? Or that operating systems should refuse to resolve them at all?
That seems bad!
Of course, this kind of maximalist copyright enforcement is something of a French specialty. This is the same country that brought us HADOPI, the graduated response agency that cost French taxpayers €82 million over a decade while imposing a grand total of roughly €87,000 in fines. A staggering return on investment — if the goal was to light money on fire while accomplishing nothing. France has also been at the forefront of copyright exceptionalism that risks undermining the EU legal system more broadly, pushing interpretations of copyright law so aggressive that they threaten to distort the legal frameworks of neighboring countries.
France keeps doing the same thing over and over again: spend enormous sums, conscript more and more intermediaries, break more and more of the internet’s infrastructure, accomplish almost nothing in terms of actually reducing piracy, and then conclude that what’s really needed is… more of the same, but harder. The entertainment industry’s refusal to learn from twenty years of evidence that enforcement-maximalism doesn’t work is genuinely remarkable. Every study and every natural experiment shows the same thing: the most effective anti-piracy tool ever invented is convenient, reasonably priced legal access to content. But that requires adapting your business model, and it’s apparently much more satisfying to get courts to break the internet for you instead.
The ruling’s real danger is the template it sets. Other countries with similar legal frameworks will look at this appeals court validation and think: we can do that too. The “any entity in a position to help” standard, combined with the “doesn’t have to be perfectly effective” standard, combined with the “we don’t care about your neutral role in the architecture” standard, adds up to a legal toolkit for conscripting nearly any internet infrastructure provider into a copyright enforcement apparatus. And the costs get externalized onto those providers (and their users), while the rightsholders collect the benefits.
The engineers who fought SOPA warned about exactly this: DNS blocking breaks things, creates collateral damage, pushes enforcement into layers of the stack never designed for it — and doesn’t actually stop piracy, because the actual pirates just route around it while everyone else suffers. France apparently decided all of those concerns are, to quote the court, “simply irrelevant.” And now they’re moving on to IP blocking.
At some point, you run out of layers of the internet to break. But apparently we’re going to have to find out where that point is the hard way.
An anonymous reader quotes a report from Wired: New gas projects linked to just 11 data center campuses around the US have the potential to create more greenhouse gases than the country of Morocco emitted in 2024. Emissions estimates from air permit documents examined by WIRED show that these natural gas projects — which are being built to power data centers to serve some of the US’s most powerful AI companies, including OpenAI, Meta, Microsoft, and xAI — have the potential to emit more than 129 million tons of greenhouse gases per year. As tech companies race to secure massive power deals to build out hundreds of data centers across the country, these projects represent just the tip of the iceberg when it comes to the potential climate cost of the AI boom.
The infrastructure on this list of large natural gas projects reviewed by WIRED is being developed to largely bypass the grid and provide power solely for data centers, a trend known as behind-the-meter power. As data center developers face long waits for connections to traditional utilities, and amid mounting public resistance to the possibility of higher energy bills, making their own power is becoming an increasingly popular option. These projects have either been announced or are under construction, with companies already submitting air permit application materials with state agencies. […] The emissions projections for the xAI and Microsoft projects, and all the others on WIRED’s list, were pulled directly from publicly-available air permit documents in state databases as well as public air permit materials collected by both Cleanview and Oil and Gas Watch, a database maintained by the Environmental Integrity Project, an environmental enforcement nonprofit. Actual greenhouse gas emissions from power plants are usually lower than what’s on their air permits. Air permit modeling is based on the scenario of a power plant constantly running at full capacity. That’s rarely the reality for grid-connected power plants, as turbines go offline for maintenance or adjust to the ebbs and flows of customer demand.
“Permitted emission numbers represent a theoretical, conservative scenario, not the actual projected emissions,” Alex Schott, the director of communications at Williams Companies, an oil and gas company that is building out three behind-the-meter power plants in Ohio for Meta, told WIRED in an email. Internal modeling done by the company, Schott added, shows that actual emissions could be “potentially two-thirds less than what’s on paper.” The projections involved, however, are still substantial. Even if the actual emissions from these power plants end up being half of the emissions numbers on the permits, they still could create more greenhouse gas emissions than the country of Norway emitted in 2024. This number is, according to the EPA, equivalent to the emissions from more than 153 average-sized natural gas plants. (WIRED’s analysis does not include emissions from backup generators and turbines on the data center campuses themselves, which create smaller amounts of emissions.)
Energy researcher Jon Koomey says the data center boom has created a shortage of the most efficient gas turbines, pushing some developers toward less efficient models that would need to run longer and produce more emissions. “[Data center operators’] belief is that the value being delivered by the servers is much, much more than the cost of running these inefficient power plants all the time,” he said.
Michael Thomas, the founder of clean energy research firm Cleanview, has been tracking gas permits for data centers across the country. He calls behind-the-meter power “a crazy acceleration of emissions.” He added: “It’s almost like we thought we were on the downside of the Industrial Revolution, retiring coal and gas, and now we have a new hump where we’re going to rise. That terrifies me in a lot of ways.”
The FCC has expanded its foreign-made router ban to also cover consumer Wi-Fi hotspots and LTE/5G home-internet devices, though existing products and phones with hotspot features are not affected. PCMag reports: On Wednesday, the FCC updated its FAQ on the ban, clarifying which consumer-grade routers are subject to the restrictions. Portable Wi-Fi hotspots are usually considered a separate category from Wi-Fi home routers. Both offer internet access, but portable Wi-Fi hotspots use a SIM card to connect to a cellular network rather than an Ethernet cable inside a residence. However, the FCC’s FAQ now specifies that “consumer-grade portable or mobile MiFi Wi-Fi or hotspot devices for residential use” are covered under the ban.
The ban also affects “LTE/5G CPE devices for residential use,” which are installed for fixed wireless access and use a carrier’s cellular network to deliver home internet. The FCC didn’t immediately respond to a request for comment about the changes. In the meantime, the FAQ reiterates that the foreign-made router ban only applies to consumer-grade devices, not enterprise products. The document also notes that mobile phones with hotspot features remain outside the restrictions. In addition, the ban only affects new router models that vendors plan to sell, not existing models, as T-Mobile emphasized to PCMag.
Assassin’s Creed Black Flag Resynced ‘delivers a no-compromise experience with advanced ray tracing performance’ on PS5 Pro, along with the latest PSSR 2 tech
The remake of the 2013 Assassin’s Creed pirate adventure has finally been revealed, and a new PlayStation Blog post following the world premiere trailer has outlined what players can expect from the PS5 and PS5 Pro versions.
Ubisoft said it’s “taking advantage of PS5 Pro to deliver an immersive pirating adventure,” with improved visuals and optimized performance.
“We were extremely impressed with the enhanced PSSR. It really redefines the graphics experience in console games,” said Jussi Markkanen, technical director, Ubisoft Singapore. “It allowed us to render our dynamic tropical world full of swaying palm trees, violent storms and rogue waves without visible upscaling artifacts, delivering sharp pixel quality and great image stability.”
Nicolas Lopez, technical architect at Ubisoft Montréal, said that Black Flag Resynced “pushes ray tracing further across all modes” on PS5 and PS5 Pro, and confirmed the PS5 Pro version will offer the latest PSSR 2 tech.
This suggests that, like most PS5 games, it will launch with several graphics modes, such as Performance, Balanced, and Quality.
“PlayStation 5 brings more consistent lighting, while PlayStation 5 Pro delivers a no-compromise experience with advanced ray tracing performance and enhanced PSSR,” Lopez said.
We also now know that the game will use the latest version of Ubisoft’s Anvil engine, which offers more detailed character faces, environments, models, and animations, as well as denser crowds.
The official Black Flag Resynced reveal has been a long time coming. The game was rumored back in 2023, and over the past few months, more and more leaks have slipped through the cracks. It wasn’t until last month that Ubisoft finally confirmed the existence of the game.
Assassin’s Creed Black Flag Resynced is set to launch on July 9, 2026, for PS5, Xbox Series X|S, and PC.
Time to hang it up? Microsoft will be giving some employees that chance. (GeekWire Photo / Todd Bishop)
Microsoft is offering a one-time voluntary retirement program for the first time in its 51-year history, giving thousands of long-serving U.S. employees a chance to leave with a financial payout and extended healthcare as it works to control costs amid a massive buildup in AI infrastructure.
An estimated 7% of Microsoft’s 125,000-person U.S. workforce, or about 8,750 employees, would be eligible based on a formula that takes into account their years at the company and their age.
It’s a highly unusual move in the tech world. Voluntary retirement programs are common in older industries, such as telecom and manufacturing, but the largest tech companies have instead turned to layoffs, stricter performance reviews, and return-to-office policies to thin their ranks.
The retirement program was outlined Thursday in a memo to employees from Chief People Officer Amy Coleman, who described it as a one-time offering for long-serving workers.
The program is open to U.S. employees at Level 67 — the equivalent of senior director — and below whose years of service plus age total 70 or more, excluding those on sales incentive plans. Eligible employees will be notified May 7 and will have 30 days to decide.
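Spelled out, the reported criteria amount to a simple rule-of-70 check, roughly like this toy sketch; the thresholds mirror the memo as described, and the function itself is purely illustrative.

```python
# Toy check of the reported eligibility rule: age plus years of service must
# total 70 or more, at Level 67 or below, and not on a sales incentive plan.
# Illustrative only; Microsoft's actual criteria may include more conditions.

def eligible(age: int, years_of_service: int, level: int, on_sales_incentive_plan: bool) -> bool:
    return (
        age + years_of_service >= 70
        and level <= 67
        and not on_sales_incentive_plan
    )

print(eligible(age=55, years_of_service=18, level=65, on_sales_incentive_plan=False))  # True  (55 + 18 = 73)
print(eligible(age=48, years_of_service=15, level=65, on_sales_incentive_plan=False))  # False (48 + 15 = 63)
```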
“Many of these employees have spent years, and in some cases, decades, shaping Microsoft into what it is today,” Coleman wrote. “Our hope is that this program gives those eligible the choice to take that next step on their own terms, with generous company support.”
In the same memo, Coleman outlined changes to Microsoft’s compensation system, reducing the number of pay levels from nine to five. It’s also decoupling stock awards from bonuses, giving managers flexibility to use stock to reward long-term contributors regardless of their latest performance rating.
Microsoft isn’t providing specific details of the retirement package yet, saying eligible employees and their managers will receive more information on May 7. Details of the healthcare component will be significant for employees who are not yet old enough to qualify for Medicare at age 65.
There are not expected to be any restrictions on future employment for those who take the deal.
The program would take effect in Microsoft’s fiscal fourth quarter, and CFO Amy Hood is expected to discuss it on the company’s earnings call next week.
Back in the early 1980s, Radio Shack sold a variety of gadgets that caught the eye of both curious children and tech enthusiasts. Among those goods was Gobble Man, a small handheld game that put a maze chase directly into your palms years before larger portable consoles arrived. Bandai first released it in Japan in 1981 as Packri Monster. Tandy then scooped it up, licensing the design for its US stores and selling it as Gobble Man in 1983. To add to the confusion, Tandy sold the exact same units under the titles Hungry Monster and Ogre Eater.
Four AA batteries powered the whole unit. A vacuum fluorescent display, essentially a tiny but very bright screen, meant you could see every wall and dot in the maze clearly, even in a dark room or on a car journey. Inside the handheld was a Hitachi HD38800 chip, a specialized microcontroller programmed to run this game and nothing else.
Players held a rectangular plastic case about the size of a large paperback book. A four-way joystick on the front let you direct the yellow monster through the winding courses; the stick itself used a dual pivot mechanism that required a firm push to register each turn. The monster kept moving in the same direction until the player changed course or it hit a wall. The controls also included a start button and a power switch, plus a small orange indicator that lit up when the gadget was turned on.
Sound came from a very small built-in speaker that played its beeps and rudimentary melodies at one constant volume and alerted you when something happened in the game. There was no volume control, so the noise was part of the experience, whether you liked it or not. Gameplay took place on a single fixed maze dotted with green food pellets. The goal was simple: steer the monster to eat every pellet before the pursuer caught up, and if it did, game over. Red power food would sometimes materialize in specific locations; eating one made the pursuers turn cowardly, reversing direction and flashing briefly, and for a short time you could chase them down for bonus points.
The difficulty increased with each subsequent round. More enemies joined the chase and speeds rose, making the maze feel like it was closing in around you. Your score would skyrocket if you timed your power snacks perfectly and wiped the board clean without making a single error. Round after round, the whole thing just kept going until the batteries died or your thumbs gave out in frustration. Anyone who picked up Gobble Man in 1983 would immediately feel as if they understood the game because the rules were so familiar.
Gobble Man was never really trying to compete with the big boys, like Pac-Man coin-op machines and home consoles. It was simply a nice little distraction to keep you engaged on the go, wherever you were. Six years later, Nintendo released the Game Boy, which went on to define what portable gaming was all about. But first, Tandy’s Gobble Man proved that the concept worked and left many people wanting more.
More than half of the outlets listed on its website are reportedly closed
Homegrown supermarket and minimart chain Hao Mart has closed a large share of its outlets as losses deepen, The Straits Times reported.
As of publication, Hao Mart’s website lists 20 outlets across Singapore. However, a check by The Straits Times found that only seven remain in operation after visiting all listed locations over a two-week period in the second half of Mar.
The remaining seven include six regular Hao Mart stores, located in Bedok South Avenue 3, Canberra Link, Potong Pasir Ave 1, Petir Road, Whampoa Drive and Pasir Ris Street 21; and one premium Eccellente outlet in the Marina Square shopping mall.
According to Accounting and Corporate Regulatory Authority (ACRA) records obtained by the publication, the company has been in the red since 2023, and its losses have worsened over the years.
For FY2025, Hao Mart reported a S$49.6 million loss. The amount is up from S$32.8 million in FY2024 and S$23.2 million in FY2023, following two years of profitability. The company also posted a S$2.2 million loss for FY2019, its first year of filings.
FY2025’s figures were filed with ACRA belatedly in Jan. ACRA told The Straits Times in Mar that it has taken enforcement action against Hao Mart for failing to file annual returns within six months of its financial year-end.
While the regulator did not disclose the specific penalty imposed, its website states that late filings can incur fines of S$300 or S$600, depending on the length of the delay.
Separately, Hao Mart is also facing four High Court lawsuits, including a dispute with landlord OG over the termination of its lease for its flagship five-storey Taste Orchard mall on Orchard Road.
Vulcan Post has reached out to Hao Mart about the scaling down of its operations and financial situation.
Some closed stores taken over by rivals
Hao Mart’s first minimart in Whampoa. / Image Credit: T T Teo, Saad Chinoy via Google Reviews
Hao Mart was founded by Dr Tan Kim Yong in 2016, with its first store at Block 74 Whampoa Drive. The chain expanded rapidly over the years, peaking at 51 stores in Dec 2021.
But by Dec 2024, Hao Mart stores had dwindled to just 20, according to its website. Among the outlets that The Straits Times found to be closed, four had been replaced by Hao Mart’s rivals.
Sheng Siong now occupies Hao Mart’s former Punggol East and KINEX Mall spaces, while ValueMart runs the Punggol Walk site near Waterway Point.
Newcomer ACE Signature took one of three Geylang shophouses. Of the remaining two Geylang units, one became a coffeeshop, and the other was demolished in Sep 2024 for a five-storey rebuild. The Redhill Road minimart is now a tuition centre.
Elsewhere, Indian supermarket chain Sri Murugan Supermarket now runs Hao Mart’s former Bayshore Park condo minimart. Its Parksuites condo-facing outlet has been converted into an after-school care centre. The Far East Plaza outlet is now a food court, the Esplanade Xchange space has become a travel agency, and the East Village supermarket has been replaced by a gym.