The Seattle Seahawks perfume is L’essence de victorie. (Image via Seahawks.com)
The release of 2026 NFL team schedules on Thursday night marked the return of the “social media Super Bowl” — and the actual Super Bowl champs were right in the mix.
The Seattle Seahawks used a fun play on perfume ads to spritz their opponents in the face. Tight end AJ Barner — sporting the fur coat and cowboy hat he wore during the team’s championship parade — said Parfum De Seahawks smells of “emotion, power, and Lombardi.”
The ad features cameos by actors/Seahawk fans Josh Lucas, Joel McHale and Pierson Fode. Play-by-play announcer Steve Raible and Seahawks legend Marshawn Lynch also appear, as well as current players Julian Love and Nick Emmanwori. Taima the Seahawk also flies in.
The fake ad takes shots at all of the teams the Seahawks will be facing this season, with assorted insight on what perfumes for those teams might look and smell like.
The New England Patriots, who the Hawks beat in Super Bowl LX, have a fragrance that smells like autumn leaves, Boston baked beans, clam chowda and tears.
The Los Angeles Rams fragrance is called Conversion No. 2 — a nod to the controversial 2-point conversion Seattle registered in a comeback win against the Rams last season.
“It smells of melancholy and what the hell just happened,” McHale says. “When you put this cologne on, you don’t even know you’ve already lost.”
Amazon also announced the schedule of games for “Thursday Night Football” on Prime Video. The company said last year ranked as the most-watched season ever across the 20-year history of “TNF,” averaging 15.33 million viewers throughout the 15-game campaign.
BYO power for AI bit barns may be the best way to ease the problem, says energy watchdog
Prices in the United States’ largest wholesale power market have nearly doubled in the past year thanks to demand from datacenters. And an independent watchdog predicts things will only get worse without some serious changes.
The PJM Interconnection serves all or parts of 13 states and the District of Columbia in the eastern US, including Northern Virginia, home to the densest cluster of datacenters in the world. The surge in wholesale power costs across PJM was outlined on Thursday by Monitoring Analytics, the firm that serves as the official market monitor for the Interconnection, in its Q1 2026 state of the market report.
According to the report, the total cost per megawatt-hour (MWh) of wholesale power rose from $77.78 in the first three months of 2025 to $136.53 in the same period this year, an increase of 75.5 percent year over year. Monitoring Analytics didn’t mince words in its report, identifying datacenter load growth as the main driver of recent capacity market conditions and rising prices in PJM.
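For readers who want to sanity-check that figure, the percentage follows directly from the two quarterly prices reported above; a quick sketch:

```python
# Wholesale power cost per MWh in PJM, per the Monitoring Analytics report
q1_2025 = 77.78   # Q1 2025, dollars per MWh
q1_2026 = 136.53  # Q1 2026, dollars per MWh

# Year-over-year percentage increase
increase_pct = (q1_2026 - q1_2025) / q1_2025 * 100
print(f"{increase_pct:.1f}%")  # prints 75.5%
```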
“Data center load growth is the primary reason for recent and expected capacity market conditions, including total forecast load growth, the tight supply and demand balance, and high prices,” the report reads. “But for data center growth, both actual and forecast, the capacity market would not have seen the same tight supply demand conditions.”
As for what might come next, the report doesn’t ignore the likely outcome of the current situation, either.
“The price impacts on customers have been very large and are not reversible,” the report states, but the bad news doesn’t stop there. “The price impacts will be even larger in the near term unless the issues associated with data center load are addressed in a timely manner.”
Based on the rest of the report, a timely resolution to the datacenter load issue shouldn’t be expected, at least not in a way that’ll benefit locals.
For starters, Monitoring Analytics found that – like pretty much everywhere right now – power grids aren’t ready for the datacenter boom. PJM has taken steps to upgrade its power commitment and dispatch software to better operate its grid, but planned upgrades have been delayed multiple times with no planned implementation date on the calendar, per the report.
“The current supply of capacity in PJM is not adequate to meet the demand from large data center loads and will not be adequate in the foreseeable future,” Monitoring Analytics asserted.
Current plan: Shift the risk to everyone else
PJM has been planning a one-time backstop auction to procure new power generation for datacenter projects in the region at the request of the Trump administration and the governors of the states it serves, but Monitoring Analytics isn’t convinced the Interconnection is going about the process in the right way.
The currently proposed auction structure, says the watchdog, would “generally shift significant risk to other PJM customers,” which is a temptation the group says “should be resisted.”
“Other PJM customers, whether residential, commercial or industrial, should not be treated as a free source of insurance, or collateral, or financing for data centers,” the report continued. “Yet that is what most of the proposals related to a backstop auction actually do.”
As for what PJM ought to be doing, you probably won’t need to rack your brain to figure that out: Monitoring Analytics says datacenters ought to be required to bring their own power. Such a rule, says the group, should include fast-track options for interconnection for BYOP datacenters, and otherwise a queue that would only connect datacenters when there is adequate capacity to serve them.
“This broad bring-your-own new generation solution to the issues created by the addition of unprecedented amounts of large data center load does not require a continued massive wealth transfer through ongoing shortage pricing,” the analysts argue.
When asked for its response to the problems raised by the Monitoring Analytics report, PJM told us that it was fully aware of the impact of electricity cost increases on its customers.
“PJM is working with states and member companies to address these consumer impacts on multiple fronts, including extending market caps put in place since the 2025/2026 auction, authorizing multiple transmission expansion projects that are now in development, and reforming wholesale electricity market rules,” the Interconnection told us. Monitoring Analytics didn’t respond to questions.
Americans have become increasingly hostile to new datacenter projects driven by the AI boom, with 71 percent of respondents to a Gallup survey saying they opposed DC projects in their neighborhoods. Projects in multiple states have been abandoned recently due to pushback from locals, many of whom are concerned not only with electricity price increases, noise, and eyesores, but environmental harm as well. ®
Starbucks announced Friday that it is laying off 300 additional corporate employees and closing several regional offices, after providing details earlier this week on the elimination of 61 tech roles in Seattle.
The cuts aim to “further sharpen focus, prioritize work, reduce complexity, and lower costs,” a spokesperson said by email. The company axed nearly 2,000 corporate roles last year, according to past reports.
Starbucks did not announce any new store closures, but will shutter offices in Atlanta, Burbank, Chicago and Dallas while maintaining its Seattle headquarters and offices in New York, Toronto and Coral Gables, Fla. The company is also opening a new office in Nashville.
The moves are part of the company’s “Back to Starbucks” strategy, launched by CEO Brian Niccol to bolster performance and refocus attention on its coffeehouses and customer service.
On a quarterly earnings call last month, Niccol highlighted several tech innovations aimed at improving coffeehouse efficiency and productivity:
Plans to install automated Mastrena machines that can pull four espresso shots in less than 30 seconds.
Improved use of its Smart Queue system, which uses algorithms to manage the flow of cafe, drive-thru, and mobile orders.
A digital system called the GROW Report that provides insights into coffeehouse performance.
Starbucks, which has 41,129 coffee shops worldwide, previously reported revenue growth of 8% compared to the same period last year.
Anthropic has launched a small business offering, signalling a new front in the AI platform rivalries.
Anthropic, the AI company best known for its Claude models, has unveiled a dedicated product for small businesses – a move that seems to mark a deliberate pivot beyond the large enterprise customers that have driven its success to date.
Claude for Small Business is a toggle-on feature within Claude Cowork, Anthropic’s task-automation platform. Once activated, it gives paying users access to 15 pre-built agentic workflows across finance, operations, sales, marketing, HR and customer service, and can be connected to software that many small businesses already use. Partner integrations include QuickBooks, PayPal, HubSpot, Canva, DocuSign, Google Workspace and Microsoft 365.
It is a strategy that makes sense, particularly in the massive US market. Small businesses account for 44pc of US GDP and employ nearly half the private-sector workforce, according to Anthropic, which added that their AI adoption has lagged behind larger enterprises.
“Small businesses make up nearly half the American economy, but they’ve never had the resources of bigger companies,” said Daniela Amodei, co-founder and president of Anthropic.
“AI is the first technology that can finally close that gap, which is why we’re launching Claude for Small Business, alongside training and partnerships to make sure AI shows up for the entrepreneurs and communities who need it most.”
The launch may signal that the battleground for AI user acquisition is shifting. While Anthropic has had significant success taking on its major rival OpenAI in the enterprise market, the latter is well ahead when it comes to small business, having released an Enterprise ChatGPT tier that included a small-team option back in 2023.
As part of the launch, PayPal and Anthropic have co-created ‘AI Fluency for Small Business’, a free online course teaching owners how to integrate AI into their operations.
“Together, we are equipping business owners and entrepreneurs with the tools, expertise, and trusted infrastructure they need to compete and thrive,” said Amy Bonitatibus, chief corporate affairs officer at PayPal. The course is available on demand; on completion, learners receive a shareable certificate.
In a bid to drive adoption of Claude for Small Business in the huge US market, Anthropic is taking the offering on the road with a 10-city US tour offering free, half-day AI training workshops for small businesses, kicking off in Chicago yesterday (14 May).
The new direction comes at a time when Anthropic is in discussions with investors to raise between $30bn and $50bn in new funding at a valuation of up to $950bn, a deal that would see the Claude maker surpass rival OpenAI as the world’s most valuable artificial intelligence start-up.
On Thursday, Microsoft shared mitigations for a high-severity Exchange Server vulnerability exploited in attacks that allow threat actors to execute arbitrary code via cross-site scripting (XSS) while targeting Outlook on the web users.
Microsoft describes this security flaw (CVE-2026-42897) as a spoofing vulnerability affecting up-to-date Exchange Server 2016, Exchange Server 2019, and Exchange Server Subscription Edition (SE) software.
While patches aren’t yet available to permanently fix the vulnerability, the company added that the Exchange Emergency Mitigation Service (EEMS) will provide automatic mitigation for Exchange Server 2016, 2019, and SE on-premises servers.
“An attacker could exploit this issue by sending a specially crafted email to a user. If the user opens the email in Outlook Web Access and certain interaction conditions are met, arbitrary JavaScript can be executed in the browser context,” the Exchange Team said.
“Using EM Service is the best way for your organization to mitigate this vulnerability right away. If you have EM Service currently disabled, we recommend you enable it right away. Please note that EM Service will not be able to check for new mitigations if your server is running Exchange Server version older than March 2023.”
EEMS was introduced in September 2021 to provide automated protection for on-premises Exchange servers, securing them against ongoing attacks by applying interim mitigations for high-risk (and likely actively exploited) vulnerabilities.
EEMS runs as a Windows service on Exchange Mailbox servers and is automatically enabled on servers with the Mailbox role. The security feature was added after many hacking groups exploited ProxyLogon and ProxyShell zero-days (which lacked patches or mitigation information) to breach Internet-exposed Exchange servers.
Admins with servers in air-gapped environments can also mitigate the flaw by downloading the latest Exchange on-premises Mitigation Tool (EOMT) version and applying the mitigation by running the script from an elevated Exchange Management Shell (EMS).
However, it’s important to note that applying the mitigation measures on vulnerable servers will cause issues, including:
OWA Print Calendar functionality might not work. As a workaround, Microsoft suggested copying the data, taking a screenshot of the calendar you want to print, or using the Outlook Desktop client.
Inline images might not display correctly in the recipients’ OWA reading pane. As a workaround, users are advised to send images as email attachments or use the Outlook Desktop client.
OWA light (OWA URL ending in /?layout=light) does not work properly (this feature was deprecated several years ago and is not intended for regular production use).
Microsoft plans to release patches for Exchange SE RTM, Exchange 2016 CU23, and Exchange Server 2019 CU14 and CU15, but says that updates for Exchange 2016 and 2019 will only be available to customers enrolled in the Period 2 Exchange Server ESU program.
BleepingComputer also reached out to Microsoft with questions about the attacks, but a response was not immediately available.
In October, weeks after Exchange 2016 and 2019 reached the end of support, the Cybersecurity and Infrastructure Security Agency (CISA) and the National Security Agency (NSA) released guidance to help IT admins harden Microsoft Exchange servers against attacks.
One of the cornerstones of the automotive enthusiast community today is a true manual transmission. Several types exist, ranging from traditional manuals to computerized DCTs that function as de facto automatics. What we’re discussing here is the former, either a classic manual transmission with a third pedal or an automated manual transmission with a computerized clutch. These transmissions are well-regarded for their simplicity and ruggedness, finding use in everything from sports cars to big rigs. But let’s suppose you’ve bought a used, high-mileage example; how much life is left in the transmission?
Generally, more than you’d expect. The average life expectancy for your typical passenger car manual transmission (not counting wear items like the clutch or the transmission fluid) is commonly held to be around 150,000 to 200,000 miles. That translates to a real-world lifespan of around a decade of daily driving before anything major might happen, assuming it’s maintained well.
Proper maintenance applies to all transmissions, of course; it goes without saying that following the recommended service interval of any transmission will extend its lifespan. However, the driver’s involvement in gear shifts on a manual gearbox means that there’s more variability than simply keeping on top of fluid changes. Driving habits also play a crucial role in extending transmission life. Let’s dive in and discuss how to maintain and treat these transmissions, and what symptoms might present if one is about to fail.
How to prolong the life of your manual transmission
Manual transmissions have a reliability advantage over automated transmissions, and the math behind it is simple: there are no computers, solenoids, or hydraulic systems required to change gears. The car doesn’t have to “think” about what gear to be in; that’s your job as the driver. Their inherent simplicity can work to your benefit, provided you don’t regularly strain the transmission components. Therein lies the problem, though, as driver error can shorten a transmission’s lifespan.
To illustrate driver error more clearly, let’s discuss how a transmission synchronizer works. A manual transmission contains two shafts: an input and an output shaft. Those shafts will spin at different speeds depending on what gear you select, and the synchronizers will synchronize one shaft with the other. In older vehicles, these synchronizers can be made of soft metals like brass. If you shift harshly and put a lot of stress on delicate components like these, your transmission could eventually stop shifting properly and need a rebuild.
The occasional stall or mis-shift, however, generally won’t damage your transmission unless you do it constantly. Beyond that, smooth shifting and not abusing your components is key. Basic practices like not riding the clutch pedal when driving, not forcing your car into gear, and shifting into neutral during extended stops help extend your transmission’s life. Moreover, all transmissions have a specific torque rating, and staying away from that limit helps keep the strain to a minimum.
Signs that your transmission is reaching the end of its service life
Suppose you just bought a high-mileage used manual car and have no idea how it felt when it was new. Do you have to worry about the transmission failing? Generally, no, and if it were failing, it would usually be pretty obvious. Transmissions are mechanical devices with several gears constantly spinning and meshing, which can be both a good and a bad thing in this case. The good news is that problems very rarely appear suddenly, and your transmission will tell you if something’s going wrong.
There are several common manual transmission problems, each with its own implications. A slipping or rough-feeling clutch might only require a replacement clutch and not indicate a problem with the transmission itself. Strange noises or difficulty shifting may be due to old transmission fluid. Your car may get stuck in gear or suddenly jump out of a gear, the latter of which is usually a sign of worn-out synchros. These issues don’t necessarily require a rebuild, and you may only need new fluid. And yes, manual transmission fluid does, in fact, have regular service intervals.
Like anything with frequent metal-on-metal contact, however, symptoms such as heat buildup, spontaneous malfunctions, and metal flakes in the fluid indicate serious problems. If these do show up, though, then it’s time to visit the shop and take a closer look.
If you are a certain age, you doubtlessly remember Heathkit. They produced a wide array of electronic kits that were models of completeness and clear instructions. They started with surplus war parts in 1947 and wound up a major player in ham radio and early personal computers. But they made so many other things like TVs, radio control planes, and test equipment. All of it was made for you to build yourself. [Unseen History] released a video with the story of Heathkit from the start to the finish.
The company started out building kit airplanes, but after the war, it built a kit for an oscilloscope using military surplus. The sub-$40 scope was still pricey in 1947, when a pound of bacon sold for 64 cents, but a “real” oscilloscope at the time would cost at least $400. The rest is history.
The Heathkit manuals were made simple enough that anyone could build a kit. But they also contained enough detail that you could truly understand what you built. Heathkit gear is still prized today.
Heathkit lost the kit business after Zenith bought the company, partly due to inattention and partly because fewer people cared about electronic kits, a decline hastened by the availability of inexpensive electronics that you didn’t have to build yourself. The company limped along with educational materials and home automation. By 2012, it was done. At its peak, the company employed over 1,800 people; by the end, only six were left to lose their jobs.
The recent “internet addiction” verdicts against Apple, Meta, and YouTube drew applause from those eager to see big tech take a hit. But look behind the headlines and the result is something else entirely. These cases won’t help children. They will fuel a litigation plague that raises costs, chills innovation and hits smaller companies the hardest.
The legal theory behind these cases tries to work around Section 230 by shifting the focus from user content to product design. Plaintiffs argue that features like infinite scroll or “like” buttons create harm independent of users’ personal content. It is a creative argument. It is also a slippery slope with no clear limiting principle.
Once product design becomes the hook for liability, any widely used product becomes a target. Newspapers, magazines and even packaged goods design headlines with catchy taglines to capture attention. Platforms do the same with feeds, to deliver value to their users. Labeling these as “addictive” design shouldn’t be seen as a viable path to sidestepping Section 230.
This shift also has broader economic consequences.
Trial lawyer lawsuits do not stay in the courtroom; they are priced into everything. Companies pay more for insurance, more for compliance, and more for legal defense. Those costs flow through to consumers in the form of higher prices and fewer options. At a moment when affordability dominates national conversations, this is a factor we cannot ignore.
These cases are shaped by a litigation system that rewards scale and escalation. They are enormously expensive and often backed by third-party funders, which drives plaintiffs’ lawyers to seek the highest possible damages. In last month’s Los Angeles trial, plaintiffs asked for billions but secured just $6 million, about 0.5% of what was requested. Even that figure is diminished when measured against the cost of bringing the case. And when outcomes fall short, the incentive is to pursue more cases or larger awards to justify the investment.
This burden is uniquely American. U.S. companies face a level of litigation exposure that most global competitors simply do not. That gap acts as an innovation tax on American firms, particularly small and early-stage companies that drive job creation and new ideas. We should be asking how to reduce that burden, not expand it.
Roughly 80% of CTA’s members are small or early-stage companies. They do not have the budgets or legal teams to absorb years of litigation risk. For them, the threat of open-ended lawsuits is not theoretical. It shapes what they build, how they build it, and whether they can exist at all.
This is how an innovation economy slows without a single vote in Congress. Startups pull back, new features go unbuilt, and investment shifts away from risk. Over time, momentum shifts from startups to incumbents.
None of this means concerns about children’s online experiences should be dismissed. They should be taken seriously. But lawsuits are blunt instruments that do little to address the underlying issues.
There are better and more effective paths.
Platforms have already invested heavily in tools that give parents real control over how their children use technology. Supervised accounts, screen time limits, content filters, and transparency into usage patterns are improving quickly and becoming easier to use. Industry efforts like NetChoice’s Digital Safety Shield build on that progress by putting parents in charge rather than outsourcing decisions to courts.
Congress also has a clear role. A national privacy law that protects personal data, including children’s information, would provide real safeguards while giving companies a consistent set of rules. What Congress should avoid is layering on vague obligations that invite more litigation. Congress has delayed action for years; it should not delay further.
And parents remain central. Technology has changed, but the need for engagement has not. Knowing what children are doing online, setting boundaries and staying involved matters more than any verdict.
Social media is a powerful tool with real benefits and real risks. The right response is to manage those tradeoffs in a practical way that protects children without undermining innovation.
Recent verdicts move us in the opposite direction. They reward litigation, raise costs and make it harder for the next generation of companies to succeed.
We should focus on solutions that help children, not expand a system that is already very good at benefiting trial lawyers.
Michael Petricone is the Senior VP of Government Affairs at the Consumer Technology Association.
One of the key challenges of current multi-agent AI systems is that they communicate by generating and sharing text sequences, which introduces latency, drives up token costs, and makes it difficult to train the entire system as a cohesive unit.
To overcome this challenge, researchers at the University of Illinois Urbana-Champaign and Stanford University developed RecursiveMAS, a framework that enables agents to collaborate and transmit information through embedding space instead of text. The change results in both efficiency and performance gains.
Experiments show that RecursiveMAS achieves accuracy improvement across complex domains like code generation, medical reasoning, and search, while also increasing inference speed and slashing token usage.
RecursiveMAS is significantly cheaper to train than standard full fine-tuning or LoRA methods, making it a scalable and cost-effective blueprint for custom multi-agent systems.
The challenges of improving multi-agent systems
Multi-agent systems can help tackle complex tasks that single-agent systems struggle to handle. When scaling multi-agent systems for real-world applications, a big challenge is enabling the system to evolve, improve, and adapt to different scenarios over time.
Prompt-based adaptation improves agent interactions by iteratively refining the shared context provided to the agents. By updating the prompts, the system acts as a director, guiding the agents to generate responses that are more aligned with the overarching goal. The fundamental limitation is that the capabilities of the models underlying each agent remain static.
A more sophisticated approach is to train the agents by updating the weights of the underlying models. Training an entire system of agents is difficult because updating all the parameters across multiple models is computationally non-trivial.
Even if an engineering team commits to training their models, the standard method of agents communicating via text-based interactions creates major bottlenecks. Because agents rely on sequential text generation, latency piles up: each model must wait for the previous one to finish generating its text before it can begin its own processing.
Forcing models to spell out their intermediate reasoning token-by-token just so the next model can read it is highly inefficient. It severely inflates token usage, drives up compute costs, and makes iterative learning across the whole system painfully slow to scale.
How RecursiveMAS works
Instead of trying to improve each agent as an isolated, standalone component, RecursiveMAS is designed to co-evolve and scale the entire multi-agent system as a single integrated whole.
The framework is inspired by recursive language models (RLMs). In a standard language model, data flows linearly through a stack of distinct layers. In contrast, a recursive language model reuses a set of shared layers that processes the data and feeds it back to itself. By looping the computation, the model can deepen its reasoning without adding parameters.
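As a rough illustration of that looping idea, here is a minimal PyTorch sketch of the general concept rather than the paper’s implementation; the layer type and loop count are assumptions:

```python
import torch
import torch.nn as nn

class RecursiveBlock(nn.Module):
    """Toy example: one shared block applied repeatedly to deepen computation."""
    def __init__(self, dim: int, num_loops: int = 3):
        super().__init__()
        self.shared_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.num_loops = num_loops

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # The same parameters are reused on every pass, so reasoning depth
        # grows without adding new weights.
        for _ in range(self.num_loops):
            hidden_states = self.shared_layer(hidden_states)
        return hidden_states
```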
RecursiveMAS architecture (source: arXiv)
RecursiveMAS extends this scaling principle from a single model to a multi-agent architecture that acts as a unified recursive system. In this setup, each agent functions like a layer in a recursive language model. Rather than generating text, the agents iteratively pass their continuous latent representations to the next agent in the sequence, creating a looped hidden stream of information flowing through the system.
This latent hand-off continues down the line through all the agents. When the final agent finishes its processing, its latent outputs are fed directly back to the very first agent, kicking off a new recursion round.
This structure allows the entire multi-agent system to interact, reflect, and refine its collective reasoning over multiple rounds entirely in the latent space, with only the very last agent producing a textual output in the final round. It is like the agents are communicating telepathically as a unified whole and the last agent provides the final response as text.
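In pseudocode, that round-by-round hand-off might look like the sketch below; the agent and link objects and their method names are hypothetical, intended only to show the control flow:

```python
def recursive_mas_forward(agents, outer_links, prompt_embeddings, num_rounds=3):
    """Sketch: agents exchange latent states over several rounds; only the
    last agent in the final round decodes anything into text."""
    latents = prompt_embeddings
    for _ in range(num_rounds):
        for i, agent in enumerate(agents):
            # Each agent refines the incoming latent stream without emitting tokens.
            latents = agent.encode_latents(latents)
            # The outer link maps those latents into the next agent's embedding space;
            # the last agent's output loops back to the first, starting a new round.
            latents = outer_links[i](latents)
    # Only the final agent decodes the accumulated latent stream into text.
    return agents[-1].decode_to_text(latents)
```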
The architecture of latent collaboration
To make continuous latent space collaboration possible, the authors introduce a specialized architectural component called the RecursiveLink. This is a lightweight, two-layer module designed to transmit and refine a model’s latent states rather than forcing it to decode text.
A language model’s last-layer hidden states contain the rich, semantic representation of its reasoning process. The RecursiveLink is designed to preserve and transmit this high-dimensional information from one embedding space to another.
To avoid the cost of updating every parameter across multiple large language models, the framework keeps the models’ parameters frozen. Instead, it optimizes the system by only training the parameters of the RecursiveLink modules.
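The paper describes the RecursiveLink at this level of detail; a lightweight two-layer adapter along those lines might look roughly like the following PyTorch sketch, where the exact layer sizes and activation are assumptions:

```python
import torch
import torch.nn as nn

class RecursiveLink(nn.Module):
    """Sketch: a small two-layer adapter that maps a frozen model's last-layer
    hidden states into a target embedding space, so only these weights train."""
    def __init__(self, hidden_dim: int, out_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, out_dim),
        )

    def forward(self, last_hidden_states: torch.Tensor) -> torch.Tensor:
        return self.net(last_hidden_states)
```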
Recursive learning process (source: arXiv)
To handle both internal reasoning and external communication, the system uses two variations of the module. The inner RecursiveLink operates inside an agent during its reasoning phase. It takes the model’s newly generated embeddings and maps them directly back into its own input embedding space. This allows the agent to continuously generate a stream of latent thoughts without generating discrete text tokens.
The outer RecursiveLink serves as the bridge between agents. Because agents in a real-world system might use different model architectures and sizes, their internal embedding spaces have entirely different dimensions. The outer RecursiveLink includes an additional layer designed to match the embeddings from one agent’s hidden dimension with the next agent’s embedding space.
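Under the same assumptions as the sketch above, the outer variant would add that dimension-matching layer, for example:

```python
import torch
import torch.nn as nn

class OuterRecursiveLink(nn.Module):
    """Sketch: bridges two agents whose backbones use different hidden sizes."""
    def __init__(self, src_dim: int, dst_dim: int):
        super().__init__()
        self.refine = nn.Sequential(nn.Linear(src_dim, src_dim), nn.GELU())
        # Extra projection so the output lands in the next agent's embedding space.
        self.project = nn.Linear(src_dim, dst_dim)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        return self.project(self.refine(hidden_states))
```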
During training, first, the inner links are trained independently to warm up each agent’s ability to think in continuous latent embeddings. Then, the system enters outer-loop training, where the diverse, frozen models are chained together in a loop, and the system is evaluated based on the final textual output of the last agent.
The only thing that gets updated in the training process is the RecursiveLink parameters; the original model weights remain unchanged, similar to low-rank adaptation (LoRA). Another advantage of this system comes into effect when you have multiple agents on top of the same backbone model.
If you have a multi-agent system where two agents are built on the exact same foundation model acting in different roles, you do not need to load two copies of the model into your GPU memory, nor do you train them separately. The agents will share the same backbone as the brain and use the RecursiveLink as the connective tissue.
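A training setup along those lines, sketched here under the assumptions above rather than from the authors’ released code, would freeze the shared backbone and hand only the link parameters to the optimizer:

```python
import torch

def build_link_optimizer(backbone, links, lr=1e-4):
    """Freeze the shared backbone; optimize only the RecursiveLink modules."""
    for param in backbone.parameters():
        param.requires_grad = False  # backbone weights stay untouched, as with LoRA
    trainable = [p for link in links for p in link.parameters()]
    return torch.optim.AdamW(trainable, lr=lr)
```

Agents built on the same backbone would then differ only in which RecursiveLink modules wrap that single frozen copy of the weights.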
RecursiveMAS in action
The researchers evaluated RecursiveMAS across nine benchmarks spanning mathematics, science and medicine, code generation, and search-based question answering. They created a multi-agent system using open-weights models including Qwen, Llama-3, Gemma3, and Mistral. These models were assigned roles to form different agent collaboration patterns such as sequential reasoning and mixture-of-experts collaboration.
RecursiveMAS improves inference speed by 1.2-2.2X (source: GitHub)
RecursiveMAS was compared to baselines under identical training budgets, including standalone models enhanced with LoRA or full supervised fine-tuning, alternative multi-agent frameworks like Mixture-of-Agents and TextGrad, and recursive baselines like LoopLM. It was also compared to Recursive-TextMAS, which uses the same recursive loop structure as RecursiveMAS but forces the agents to explicitly communicate via text.
RecursiveMAS achieved an average accuracy improvement of 8.3% compared to the strongest baselines across the benchmarks. It excelled particularly on reasoning-heavy tasks, outperforming text-based optimization methods like TextGrad by 18.1% on AIME2025 and 13% on AIME2026.
RecursiveMAS reduces token consumption by up to 75% (source: GitHub)
Because it avoids generating text at every step, RecursiveMAS achieved a 1.2x to 2.4x end-to-end inference speedup. It is also much more token efficient than the text-based alternative: compared to Recursive-TextMAS, it reduces token usage by 34.6% in the first round of the recursion, and by round three it achieves a 75.6% token reduction. RecursiveMAS also proved remarkably cheap to train. Because it only updates the lightweight RecursiveLink modules, roughly 13 million parameters or about 0.31% of the frozen models’ total parameter count, it requires the lowest peak GPU memory and cuts training costs by more than half compared to full fine-tuning.
Enterprise adoption
The efficiency gains — lower token consumption, reduced GPU memory requirements, and faster inference — are intended to make complex multi-step agent workflows viable in production environments without the compute overhead that limits enterprise agentic deployments. The researchers have released the code and trained model weights under the Apache 2.0 license.
Looking for the most recent Mini Crossword answer? Click here for today’s Mini Crossword hints, as well as our daily answers and hints for The New York Times Wordle, Strands, Connections and Connections: Sports Edition puzzles.
Need some help with today’s Mini Crossword? I’ll tell you, 6-Across was a completely new fact to me, and I even had those particular pets when I was a kid. Read on for all the answers. And if you could use some hints and guidance for daily solving, check out our Mini Crossword tips.
If you’re looking for today’s Wordle, Connections, Connections: Sports Edition and Strands answers, you can visit CNET’s NYT puzzle hints page.
An anonymous reader quotes a report from The Guardian: Wood heating is reintroducing lead into the air of local communities and homes, a systematic investigation by academics has found. Overwhelming evidence of lead’s neurotoxicity meant the metal was banned as an additive in petrol more than 25 years ago. The research by academics from the University of Massachusetts Amherst began by analysing samples of particle pollution from five suburban and rural towns in the north-east US. They looked for tiny particles of potassium that are given off when wood is burned and also particles containing lead. Samples from seven winters revealed associations between potassium and lead. When there were more wood burning particles in a daily sample, there was more lead in the air, with clear straight-line relationships in four of the five towns.
The project was extended to 22 other towns across the US. The relationships between lead and potassium varied from place to place, being strongest in the Rocky Mountains. By factoring in the effects of temperature, moderate to strong associations in their analysis strengthened the conclusion that the extra lead came from wood burning. The lead concentrations were less than the US legal limits, but any exposure to the metal is harmful. […] Although less than legal limits, lead particles are routinely measured in UK cities in winter when people are also burning wood. This is normally attributed to waste wood covered with old lead paint, but the Umass Amherst study suggests the metal is coming from the wood itself. This means that any wood burning could increase exposure in neighborhoods and at home. Tricia Henegan, a PhD student at Umass Amherst and the first author on the research, said: “The most logical answer [to the question of how lead ends up in wood] is that it comes from uptake in the soil, probably riding along with the nutrients and water that trees need. Once in the tree, it deposits in the tree’s tissues and remains until that tree is burned.” Other research has found that it can then become part of the smoke.
“The use of wood as an energy source is a relic of the past, one that should not be relived if given a choice. Although wood fuel use can feel nostalgic, it does have negative consequences on air quality, and therefore public health.”