Anthropic on Tuesday released Claude Sonnet 4.6, a model that amounts to a seismic repricing event for the AI industry. It delivers near-flagship intelligence at mid-tier cost, and it lands squarely in the middle of an unprecedented corporate rush to deploy AI agents and automated coding tools.
The model is a full upgrade across coding, computer use, long-context reasoning, agent planning, knowledge work, and design. It features a 1M token context window in beta. It is now the default model in claude.ai and Claude Cowork, and pricing holds steady at $3/$15 per million tokens — the same as its predecessor, Sonnet 4.5.
That pricing detail is the headline that matters most. Anthropic’s flagship Opus models cost $15/$75 per million tokens — five times the Sonnet price. Yet performance that would have previously required reaching for an Opus-class model — including on real-world, economically valuable office tasks — is now available with Sonnet 4.6. For the thousands of enterprises now deploying AI agents that make millions of API calls per day, that math changes everything.
Anthropic’s computer use scores have nearly quintupled in 16 months. The company’s latest model, Sonnet 4.6, scored 72.5 percent on the OSWorld-Verified benchmark, up from 14.9 percent when the capability first launched in October 2024. (Source: Anthropic)
Why the cost of running AI agents at scale just dropped dramatically
To understand the significance of this release, you need to understand the moment it arrives in. The past year has been dominated by the twin phenomena of “vibe coding” and agentic AI. Claude Code — Anthropic’s developer-facing terminal tool — has become a cultural force in Silicon Valley, with engineers building entire applications through natural-language conversation. The New York Times profiled its meteoric rise in January. The Verge recently declared that Claude Code is having a genuine “moment.” OpenAI, meanwhile, has been waging its own offensive with Codex desktop applications and faster inference chips.
The result is an industry where AI models are no longer evaluated in isolation. They are evaluated as the engines inside autonomous agents — systems that run for hours, make thousands of tool calls, write and execute code, navigate browsers, and interact with enterprise software. Every dollar spent per million tokens gets multiplied across those thousands of calls. At scale, the difference between $15 and $3 per million input tokens is not incremental. It is transformational.
The benchmark table Anthropic released paints a striking picture. On SWE-bench Verified, the industry-standard test for real-world software coding, Sonnet 4.6 scored 79.6% — nearly matching Opus 4.6’s 80.8%. On agentic computer use (OSWorld-Verified), Sonnet 4.6 scored 72.5%, essentially tied with Opus 4.6’s 72.7%. On office tasks (GDPval-AA Elo), Sonnet 4.6 actually scored 1633, surpassing Opus 4.6’s 1606. On agentic financial analysis, Sonnet 4.6 hit 63.3%, beating every model in the comparison, including Opus 4.6 at 60.1%.
These are not marginal differences. In many of the categories enterprises care about most, Sonnet 4.6 matches or beats models that cost five times as much to run. An enterprise running an AI agent that processes 10 million tokens per day was previously forced to choose between inferior results at lower cost or superior results at rapidly scaling expense. Sonnet 4.6 largely eliminates that trade-off.
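The scale of that trade-off is easy to check with back-of-envelope arithmetic using the per-million-token prices quoted above. The 10M-input/2M-output daily workload below is an illustrative assumption, not a figure from Anthropic:

```python
# Rough daily-cost comparison at the article's quoted prices.
# Workload split (10M input / 2M output tokens per day) is a hypothetical example.

def daily_cost(input_tokens, output_tokens, in_price, out_price):
    """Cost in dollars, with prices quoted per million tokens."""
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

sonnet = daily_cost(10_000_000, 2_000_000, 3, 15)    # $3/$15 per million tokens
opus = daily_cost(10_000_000, 2_000_000, 15, 75)     # $15/$75 per million tokens

print(f"Sonnet 4.6: ${sonnet:,.0f}/day   Opus: ${opus:,.0f}/day   ratio: {opus/sonnet:.0f}x")
```

Whatever the exact input/output mix, the ratio stays fixed at 5x, because both prices scale by the same factor between tiers.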
In Claude Code, early testing found that users preferred Sonnet 4.6 over Sonnet 4.5 roughly 70% of the time. Users even preferred Sonnet 4.6 to Opus 4.5, Anthropic’s frontier model from November, 59% of the time. They rated Sonnet 4.6 as significantly less prone to over-engineering and “laziness,” and meaningfully better at instruction following. They reported fewer false claims of success, fewer hallucinations, and more consistent follow-through on multi-step tasks.
Anthropic’s Sonnet 4.6, a mid-tier model, matches or approaches the performance of the company’s flagship Opus line across most benchmark categories — and frequently outperforms rival models from Google and OpenAI. (Source: Anthropic)
How Claude’s computer use abilities went from ‘experimental’ to near-human in 16 months
One of the most dramatic storylines in the release is Anthropic’s progress on computer use — the ability of an AI to operate a computer the way a human does, clicking a mouse, typing on a keyboard, and navigating software that lacks modern APIs.
When Anthropic first introduced this capability in October 2024, the company acknowledged it was “still experimental — at times cumbersome and error-prone.” The numbers since then tell a remarkable story: on OSWorld, Claude Sonnet 3.5 scored 14.9% in October 2024. Sonnet 3.7 reached 28.0% in February 2025. Sonnet 4 hit 42.2% by June. Sonnet 4.5 climbed to 61.4% in October. Now Sonnet 4.6 has reached 72.5% — nearly a fivefold improvement in 16 months.
This matters because computer use is the capability that unlocks the broadest set of enterprise applications for AI agents. Almost every organization has legacy software — insurance portals, government databases, ERP systems, hospital scheduling tools — that was built before APIs existed. A model that can simply look at a screen and interact with it opens all of these to automation without building bespoke connectors.
Jamie Cuffe, CEO of Pace, said Sonnet 4.6 hit 94% on their complex insurance computer use benchmark, the highest of any Claude model tested. “It reasons through failures and self-corrects in ways we haven’t seen before,” Cuffe said in a statement sent to VentureBeat. Will Harvey, co-founder of Convey, called it “a clear improvement over anything else we’ve tested in our evals.”
The safety dimension of computer use also got attention. Anthropic noted that computer use poses prompt injection risks — malicious actors hiding instructions on websites to hijack the model — and said its evaluations show Sonnet 4.6 is a major improvement over Sonnet 4.5 in resisting such attacks. For enterprises deploying agents that browse the web and interact with external systems, that hardening is not optional.
Enterprise customers say the model closes the gap between Sonnet and Opus pricing tiers
The customer reaction has been unusually specific about cost-performance dynamics. Multiple early testers explicitly described Sonnet 4.6 as eliminating the need to reach for the more expensive Opus tier.
Caitlin Colgrove, CTO of Hex Technologies, said the company is moving the majority of its traffic to Sonnet 4.6, noting that with adaptive thinking and high effort, “we see Opus-level performance on all but our hardest analytical tasks with a more efficient and flexible profile. At Sonnet pricing, it’s an easy call for our workloads.”
Ben Kus, CTO of Box, said the model outperformed Sonnet 4.5 in heavy reasoning Q&A by 15 percentage points across real enterprise documents. Michele Catasta, President of Replit, called the performance-to-cost ratio “extraordinary.” Ryan Wiggins of Mercury Banking put it more bluntly: “Claude Sonnet 4.6 is faster, cheaper, and more likely to nail things on the first try. That was a surprising combination of improvements, and we didn’t expect to see it at this price point.”
The coding improvements resonate particularly given Claude Code’s dominance in the developer tools market. David Loker, VP of AI at CodeRabbit, said the model “punches way above its weight class for the vast majority of real-world PRs.” Leo Tchourakov of Factory AI said the team is “transitioning our Sonnet traffic over to this model.” GitHub’s VP of Product, Joe Binder, confirmed the model is “already excelling at complex code fixes, especially when searching across large codebases is essential.”
Brendan Falk, Founder and CEO of Hercules, went further: “Claude Sonnet 4.6 is the best model we have seen to date. It has Opus 4.6 level accuracy, instruction following, and UI, all for a meaningfully lower cost.”
In a simulated business environment, Sonnet 4.6 nearly tripled the earnings of its predecessor over the course of a year, suggesting sharply improved decision-making in complex, long-horizon tasks. (Source: Anthropic, Vending-Bench Arena)
A simulated business competition reveals how AI agents plan over months, not minutes
Buried in the technical details is a capability that hints at where autonomous AI agents are heading. Sonnet 4.6’s 1M token context window can hold entire codebases, lengthy contracts, or dozens of research papers in a single request. Anthropic says the model reasons effectively across all that context — a claim the company demonstrated through an unusual evaluation.
The Vending-Bench Arena tests how well a model can run a simulated business over time, with different AI models competing against each other for the biggest profits. Without human prompting, Sonnet 4.6 developed a novel strategy: it invested heavily in capacity for the first ten simulated months, spending significantly more than its competitors, and then pivoted sharply to focus on profitability in the final stretch. The model ended its 365-day simulation at approximately $5,700 in balance, compared to Sonnet 4.5’s roughly $2,100.
This kind of multi-month strategic planning, executed autonomously, represents a qualitatively different capability than answering questions or generating code snippets. It is the type of long-horizon reasoning that makes AI agents viable for real business operations — and it helps explain why Anthropic is positioning Sonnet 4.6 not just as a chatbot upgrade, but as the engine for a new generation of autonomous systems.
Anthropic’s Sonnet 4.6 arrives as the company expands into enterprise markets and defense
This release does not arrive in a vacuum. Anthropic is in the middle of the most consequential stretch in its history, and the competitive landscape is intensifying on every front.
On the same day as this launch, TechCrunch reported that Indian IT giant Infosys announced a partnership with Anthropic to build enterprise-grade AI agents, integrating Claude models into Infosys’s Topaz AI platform for banking, telecoms, and manufacturing. Anthropic CEO Dario Amodei told TechCrunch there is “a big gap between an AI model that works in a demo and one that works in a regulated industry,” and that Infosys helps bridge it. TechCrunch also reported that Anthropic opened its first India office in Bengaluru, and that India now accounts for about 6% of global Claude usage, second only to the U.S. The company, which CNBC reported is valued at $183 billion, has been expanding its enterprise footprint rapidly.
Meanwhile, Anthropic president Daniela Amodei told ABC News last week that AI would make humanities majors “more important than ever,” arguing that critical thinking skills would become more valuable as large language models master technical work. It is the kind of statement a company makes when it believes its technology is about to reshape entire categories of white-collar employment.
The competitive picture for Sonnet 4.6 is also notable. The model outperforms Google’s Gemini 3 Pro and OpenAI’s GPT-5.2 on multiple benchmarks. GPT-5.2 trails on agentic computer use (38.2% vs. 72.5%), agentic search (77.9% vs. 74.7% for Sonnet 4.6’s non-Pro score), and agentic financial analysis (59.0% vs. 63.3%). Gemini 3 Pro shows competitive performance on visual reasoning and multilingual benchmarks, but falls behind on the agentic categories where enterprise investment is surging.
The broader takeaway may not be about any single model. It is about what happens when Opus-class intelligence becomes available for a few dollars per million tokens rather than a few tens of dollars. Companies that were cautiously piloting AI agents with small deployments now face a fundamentally different cost calculus. The agents that were too expensive to run continuously in January are suddenly affordable in February.
Claude Sonnet 4.6 is available now on all Claude plans, Claude Cowork, Claude Code, the API, and all major cloud platforms. Anthropic has also upgraded its free tier to Sonnet 4.6 by default. Developers can access it immediately using claude-sonnet-4-6 via the Claude API.
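For developers trying the new model, a request follows the shape of Anthropic’s public Messages API. The model ID below comes from the announcement; the request-body fields and headers reflect the API as publicly documented, but check Anthropic’s current docs before relying on this sketch:

```python
# Minimal sketch of a Messages API request body for the new model.
# Sent as JSON via POST to https://api.anthropic.com/v1/messages, with your
# API key in the `x-api-key` header and an `anthropic-version` header.

import json

payload = {
    "model": "claude-sonnet-4-6",          # model ID from the announcement
    "max_tokens": 1024,                    # required cap on the response length
    "messages": [
        {"role": "user", "content": "Summarize this pull request in two sentences."}
    ],
}

print(json.dumps(payload, indent=2))
```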
SpaceX has reportedly filed confidential paperwork for an initial public offering in which the company would raise $75 billion at a $1.75 trillion valuation. And according to CEO Elon Musk, orbital data centers will be a big part of SpaceX’s future.
On the latest episode of TechCrunch’s Equity podcast, Kirsten Korosec, Sean O’Kane, and I discussed Musk’s vision, as well as other companies that are pursuing similar goals.
It will take significant tech development and massive capital spending to make orbital data centers a reality, but as Sean noted, with “opposition happening around the country to data centers in general,” executives like Musk and Jeff Bezos may be thinking, “The engineering challenge may be less than the social challenge back here” on Earth.
Read a preview of our conversation, edited for length and clarity, below.
Sean: This has been a trend — I would say a rapidly forming trend — over the last half year to a year, and we have different examples of it. We have SpaceX; I feel like in some ways, Elon Musk was late on this trend. And for the moment, let’s set aside the actual mechanics and the viability of data centers in space. We could talk about that in a second if we want, but —
Kirsten: We have a really good story we’ll link to in the show notes, by the way. One of our most recent hires, Tim Fernholz, is amazing. He writes all about the physics and the constraints of that.
Sean: Yeah, I think it’s a really interesting engineering challenge. It’s a really interesting physics challenge. It’s a really interesting orbital mechanics challenge. But it’s something that clearly a bunch of companies and people are going to try and chase. [There’s] going to be SpaceX doing it, with a kind of variant of what they’re already working on with their Starlink network.
There’s a startup that came out of Y Combinator, originally called Starcloud, that was really one of the first ones out there trying to build a huge business around this. It just raised $170 million this week, and the valuation on that round tipped them over into unicorn status.
Jeff Bezos is trying to go after this as well. This is a next generation version of the competition that we’ve seen happening between Starlink and Amazon’s Leo satellite network, and Blue Origin has its own satellite network coming online as well in the next couple of years.
So there’s going to be a whole bunch of this happening, and it feels like it wasn’t happening a year ago. I know the way that Elon Musk pitches it is — we know he’s allergic to red tape, he’s built a data center in Memphis, too. Maybe now he knows the challenges and the risks you have to take to sidestep that red tape.
There’s a lot of opposition happening around the country to data centers in general. And these people say, “We have access to space, so let’s just try and do it up there.” The engineering challenge may be less than the social challenge back here on our [planet].
Kirsten: And it also creates excitement, right? If a company is about to go [public] and they’re working on data centers in space, this is something that people can have expectations about in a positive way and ignore the constraints. It feels like a company that is working on something that’s not old and outdated, but signals the future. And it’s really a great strategy when you think about it.
Anthony: Not that Elon Musk is the only one who does this, but it seems like he’s incredibly successful at being like, “Don’t judge my companies based on how much money they’re making now, judge them based on these grand visions that I can spin out about what will happen in the future.”
And going back to a point that Sean was making, I think that part of what’s interesting is to [ask]: How does this fit in with the broader data center rollout? How does it fit in with opposition and the idea that maybe people are not going to be able to build as many data centers as they want to?
I don’t think any of us are engineers who can really assess the viability of these plans. It does certainly have a tinge of fantasy to it, but even when they do lay out these plans, it feels like just a drop in the bucket in terms of compute capabilities compared to what they want to build out on Earth. So it feels like there’s not a scenario where this replaces a whole bunch of new data centers on Earth. It’s just sort of a […] supplement to it.
Sean: The last two things I’ll point out that are really front and center for me is, one, we’ve seen a backing off in some ways [from] data centers — not just because of opposition, but because maybe we don’t need as much, right? We see a bunch of jockeying from some of the AI labs about, “Well, maybe we don’t need to lease this much from this company,” or whatever. And if that becomes a thing that is more true than it was five months ago, do you all of a sudden lose all that momentum to do something as crazy as putting the data centers in space? Providing that it works, even.
The other thing is that the idea of building these massive data centers in space, with all these satellites that make up the quote unquote “data center,” is business for SpaceX. And I think this is unique to them compared to these other companies: They are a launch company primarily, even though they generate a bunch of revenue from Starlink. They are the vehicle that gets the data centers to space. They get to book that as revenue for SpaceX.
And so it becomes this thing where, of course [Musk] wants — whether or not it works, he would eventually have to prove it — but of course he wants to send more and more satellites into space because it’s more revenue for SpaceX. And that makes SpaceX look better as a public company. And then you just kind of tumble down the path until he finds something else to pitch the investors on.
Prof Neil Rowan sits down with SiliconRepublic.com to chat life, work and advice for students.
As he describes it, Prof Neil Rowan was thrust into the world rather prematurely. Weighing less than a kilogram at birth, Rowan tells me he spent months in an incubator, not expected to make it.
Later, he wonders if that’s what gave him the drive – a sort of “accelerator button” on his life, firmly pressed, “always”.
From breaking regional sprinting records at the local athletic club as a teenager, to being awarded a higher doctorate of science some four decades later, Prof Rowan has achieved more than many – especially for a boy from a middle-class family from Coosan in Athlone.
Among his long list of accomplishments, Rowan is an expert in medtech, food security, environmental sustainability and the bioeconomy, and the inaugural director of the Bioscience Research Institute at Technological University of the Shannon (TUS). He is ranked number one in the world for decontamination research.
He is also on a United Nations panel on the effects of nuclear war, as well as on the new National Science Advisory panel and a new scientific committee for the Food Safety Authority of Ireland. He also works closely with the European Commission and Research Ireland on various innovation programmes. I will refrain from adding to this list, lest I overwhelm the reader.
“The funny thing is, I’m colourblind,” Rowan tells me. “I only ever saw green … so I was constantly ‘going’.”
One of five children, Rowan was the first from any generation in his family to have attended university. His parents never finished school, he tells me.
“My dad was 52 years working with the one company [fixing weighing scales],” he says. “Going to college [at the time] would have been very expensive.”
A football scholarship led Rowan to the University of Galway in the 1980s, after which the young academic made his way to the University of Strathclyde in Glasgow on a different scholarship, taking the institution’s first-ever course in biotechnology.
10 papers and a PhD later, Rowan was appointed as a lecturer – and then a senior lecturer at the University. He was 29 years old.
Rewarded for decades of research
Rowan made headlines earlier this year for being recognised with a higher doctorate of science by the University of Strathclyde – a first for any academic working in TUS.
A higher doctorate of science sits above a PhD. It is the highest academic degree in the Irish and UK university systems, awarded to scholars who demonstrate a significant contribution to their field over several decades. Fewer than 10 higher doctorates are awarded per year in Ireland. Rowan describes the degree as a ‘black swan’: a decidedly rare occurrence.
“Every day is a school day”, according to Rowan, who has published a minimum of six to seven research papers every year for the past 30 years. His higher doctorate thesis comprises 150 peer-reviewed journal papers presented in two volumes, totalling approximately 1,600 pages.
The submission covers his research from 1995 to the present day, delving into his work in advancing the fields of disease prevention and control that cross-cut medtech, food safety and food security globally.
Rowan’s still surprised at his achievements. He tells me that he is “constantly surprised, pleasantly surprised” at any recognition, even after all these years.
Throughout our chat, the professor made several mentions of ‘firsts’ in his various fields of research.
According to Rowan, he created the first-ever toxigenic-mould growth prediction model for the built environment in the early 1990s. It was the first to use computer simulations and algorithms for elucidating biological solutions to inform improvements in sustainable building design that subsequently became a European reference model.
More recently, Rowan has led the first-ever at-scale bioeconomy demonstrator facility using freshwater fish in peatlands, now in use in Ireland and set to be replicated across Europe.
Prof Rowan introduced the first PhDs in biomedical sciences, health and sterilisation science at TUS, and also reported the first use of several disinfection technologies, such as pulsed light, pulsed electric fields and pulsed plasma, for disease prevention and food security, including from an underpinning mechanistic perspective.
An educator and an enabler
When asked how he describes his work, Rowan says he’s both an “educator and an enabler”.
“I think I enable people to help themselves,” he explains.
He has supervised around 120 undergraduate projects, as well as around 40 PhDs with industrial applications – such as a new vaporised hydrogen peroxide terminal sterilisation method, or a new classification system for medical device features and cleaning for improved patient safety.
Rowan says he loves to teach and that he resonates with his students. He says his work has a “lasting legacy”, given his students’ creative footprints on society.
Through our chat, the professor highlighted the importance of being an “agile listener”.
“I was always an active listener and prepared to spend considerable time studying to understand and get to the root of things.”
However, also important is having ambition, he says. “I was always brave and ambitious. I was never afraid of taking on very grand challenges. I was always trying. I was never afraid to do things.”
But once you’ve set up your device, Heatbit will track your mining revenue and report it to your phone, even if you don’t yet have a bitcoin wallet set up. After you reach the transfer minimum of 100,000 satoshis, or one-thousandth of a bitcoin ($66 at April 2026 prices), you can transfer this to your wallet and, presumably, spend it. Heatbit’s app is compatible with the Lightning Network and most major exchanges (Coinbase, Binance, OKX, BitFinex).
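The transfer-minimum arithmetic is easy to sanity-check. The ~$66,000-per-bitcoin price below is implied by the article’s April 2026 figure, not an independently verified quote:

```python
# Converting the 100,000-satoshi transfer minimum to dollars.
SATS_PER_BTC = 100_000_000          # 1 bitcoin = 100 million satoshis

def sats_to_usd(sats, btc_price_usd):
    """Dollar value of a satoshi amount at a given bitcoin price."""
    return sats / SATS_PER_BTC * btc_price_usd

# 100,000 sats is one-thousandth of a bitcoin; at an assumed ~$66,000/BTC
# that works out to about $66, matching the review's figure.
minimum = sats_to_usd(100_000, 66_000)
print(f"${minimum:.0f}")
```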
Unlike many air purifiers that activate only when there are air quality issues, the Heatbit continually pushes air through its HEPA filter while the miner and heater are active. While Heatbit recommends filter replacement once every six months, in practice, the app showed that the filter was being used up by about 1 percent a day. For whatever reason, my Heatbit app refused to believe that I was not in Seattle, and so my exterior air quality readings were all tied to King County, Washington.
Early quirks aside, the ease of onramp is admirable for a device not aimed at crypto-loving engineers. At current prices, if I run my heater/miner nonstop, this would net me about a $70 rebate on my heating bills once every two months. Pretty cool, right?
Why the Math on Heatbit Doesn’t Pencil
Heatbit via Matthew Korfhage
But here’s the problem with that math. I’d also need to pay at least $1,500 upfront (the current discounted price of the Maxi Pro) before I get access to these savings. This is about $1,350 more than the best space heaters I’ve tested. It’s also around $900 or $1,000 more than a combination purifier–heater from Dyson. So your money-saving math needs to take this upfront cost into account.
At this rate, assuming my energy costs and bitcoin prices stayed constant, it would take me between five and eight years to “make my money back” in bitcoin if I ran this thing 24/7 for four months a year. That’s on a device with a one-year warranty. (Heatbit’s founders say there has been a failure rate only in the “low single digits” after three years for the first-generation Heatbit Trio.)
These numbers assume I would otherwise run a space heater nonstop at full blast for months on end as a primary heat source—which is not how most people use space heaters. I tend to turn on a space heater only when I’m in a room, and direct it toward myself. For heating a whole house, natural gas or a heat pump are both far more cost-effective options, if available.
But let’s say you have only electricity for heat. And you would always be running a space heater. And let’s assume the Heatbit keeps running at the same efficiency for at least five years. Is the Heatbit now the best choice, economically? Well, still maybe not.
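The payback arithmetic above can be sketched in a few lines. The inputs come from the review (roughly $35 per month in mined bitcoin when running nonstop, and a ~$1,350 premium over a comparable space heater); the exact payback period swings widely with electricity prices, bitcoin prices, and months of use, which is why the review quotes a range rather than a single number:

```python
# Simple payback-period model for the heater/miner's upfront premium.
# Figures are from the review; results are rough by construction.

def payback_years(upfront_premium, monthly_rebate, months_per_year):
    """Years until mining rebates cover the extra upfront cost."""
    annual_rebate = monthly_rebate * months_per_year
    return upfront_premium / annual_rebate

# Two usage scenarios: a four-month winter and a longer six-month heating season.
for months in (4, 6):
    years = payback_years(1_350, 35, months)
    print(f"{months} months/year of nonstop use: {years:.1f} years to break even")
```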
Every Crypto Miner Is a Heater
Every crypto mining device will heat your house, whether or not its makers advertise it as a space heater. Each miner releases heat with 100 percent efficiency relative to the power it draws. That’s because, one way or another, all the electricity the device consumes eventually gets converted to heat.
The most efficient combination space heater and bitcoin miner will always be the one that mines bitcoin most efficiently. At that point, you could just pick up a Canaan Avalon Q ($1,900), get a 50 percent better hash rate, and produce about the same amount of heat. Newfangled ASIC (application-specific integrated circuit) miners might net you even better efficiency. Pretty much anything you use, given the same amount of power, will release the same amount of heat.
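The point can be made concrete with a toy comparison: two miners drawing the same power dump identical heat into the room and differ only in hashes (and thus revenue) per joule. The hash rates below are illustrative assumptions, not vendor specifications:

```python
# Two hypothetical miners at equal power draw: same heat, different earning power.

def heat_watts(power_draw_watts):
    # Essentially all electrical power a miner draws ends up as room heat.
    return power_draw_watts

miners = {
    "heater-miner":   {"power_w": 1500, "hashrate_ths": 10},  # hypothetical spec
    "dedicated-asic": {"power_w": 1500, "hashrate_ths": 15},  # ~50% better, as the
}                                                             # article suggests

for name, spec in miners.items():
    eff = spec["hashrate_ths"] / (spec["power_w"] / 1000)  # TH/s per kW
    print(f"{name}: {heat_watts(spec['power_w'])} W of heat, {eff:.1f} TH/s per kW")
```

Same wattage in, same warmth out; only the mining efficiency, and therefore the rebate, differs.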
The use of drones and autonomous vehicles continues to rise. In a military capacity, they let service members access areas that would otherwise be incredibly difficult and dangerous to reach. While the public may be more familiar with how the U.S. military is developing new ways to deal with drones on the battlefield, the same idea extends to the depths of the ocean. This is why the U.S. Navy is so interested in the capabilities of the Dive-XL, an autonomous submarine developed by Anduril.
In April 2025, the U.S. Navy announced the beginning of its Combat Autonomous Maritime Platform program. Its aim is to find partners that can deliver an unmanned, autonomous vessel capable of diving more than 650 feet below the surface. The vessel should be sizeable enough to dispatch payloads at these considerable depths and boast a range of around 1,000 nautical miles. It was specified, too, that this autonomous submarine should be capable of handling equipment of varying sizes. This is particularly important because the possible scope of such a vessel’s mission varies notably, from reconnaissance to something considerably more offensive.
Anduril’s Dive-XL has been selected to fulfill this role. Stretching 27 feet long with a 7-foot beam, it’s far from the tiny, stealthy drone some militaries are so used to wielding in the air. Let’s see exactly why this sub might be so important to the future of naval warfare.
Why the Dive-XL sub could be right for the U.S. Navy
With a range of about 2,000 nautical miles and the capacity to reach depths of about 20,000 feet, the Dive-XL easily meets the requirements established by the Combat Autonomous Maritime Platform program. It’s also designed to support single, double, triple, and extended payload configurations with a modular body. Powered by an all-electric powertrain that allows for long missions without surfacing, the Dive-XL can serve a variety of missions. An overly specialized machine can only suit a niche role, while something like the Dive-XL is more versatile.
It’s also designed to accommodate and deploy tech such as Anduril’s Seabed Sentry, a set of individual, Lattice AI-enabled nodes that form an underwater communication and surveillance network. It also works with the Copperhead drone, an autonomous weapon that can be equipped with an explosive and comes in different sizes. The latter was specifically built to be deployed by a system like the Dive-XL.
Anduril boasts that the capacity to stay underwater for long periods using pure electric power dramatically boosts the model’s ability to “operate undetected, extend its range, and deliver payloads in contested maritime environments.” This, according to the manufacturer, will be key to performing its role in an environment, and in a future, where it is unlikely to be the only autonomous vessel of its type.
The practicality of deploying the Anduril Dive-XL
The U.S. Navy has wielded many deadly attack submarines, and this Anduril model will be a formidable, though far from conventional, addition to the ranks. The variable hull design will make it less costly as well as more versatile, all of which will help to accomplish the main goal of quickly accumulating an autonomous force that can dispense numerous drones. This can in turn ease the pressure on already-strained sailors of crewed vessels.
Anduril also claims that the system “enables warfighters to launch, employ, and recover the system flexibly at sea or ashore with minimal infrastructure and heavy equipment.” For that to work, the vessel first has to be easy to get to where it’s needed. That is why Anduril designed it to be launched and recovered by ship or pier and transported via aircraft or truck. It can also be carried across water in a shipping crate.
There’s no doubt that drones and autonomous vehicles will be an increasingly prominent part of warfare and defense in the future. The U.S. Navy clearly considers Anduril’s Dive-XL to be a significant part of that equation, but how it will continue to evolve and which different functions it will be able to fulfill beneath the surface remain to be seen.
It’s a yearly delight to feel the weather warm up as spring approaches, but this season of renewal comes with some downsides. One of the most annoying and dangerous is the road pothole, which can be a small dent, a massive hole, or anything in between. Potholes become especially frequent in spring, thanks largely to the transition out of winter: temperatures swing from freezing to warm, and melted snow and ice seep into the roadway, refreeze, and melt again, making potholes a common issue.
The formation of a pothole begins with the accumulation and subsequent melting of snow and ice during winter. This water makes its way into the dirt below the pavement via small cracks and holes. Freezing temps then turn that water into ice, which expands to lift and move the soil around it. As a result, the pavement above moves around, too, and when that ice melts in the warming spring, it leaves weak spots in those areas. Combine this weakened state with frequent driving, and it’s only a matter of time before the pavement breaks apart into a pothole.
While the squiggly road lines known as tar snakes prevent some potholes from forming, plenty manage to take shape all the same. Potholes can mean serious trouble on the road. That’s why it’s crucial to practice safe driving habits and even take action should you encounter them.
How to take action when potholes form
When potholes have formed on the road, it’s key to drive safely around them. It can be difficult to tell just how big and deep they are from the driver’s seat, and hitting one can mean a trip to the mechanic, be it for new tires or suspension parts, so exercise caution. Don’t drive right over them; skirt around them when you can, and if they’re bad enough, safely change lanes to avoid them if possible. If you have little choice but to drive your vehicle over one, be sure to do so at a low speed to prevent unnecessary wear.
Once you’re off the road, you can still take action against the potholes in your area. While more often than not, towns and cities will eventually get around to filling potholes, especially those in traffic-heavy areas, sometimes those on side streets will be overlooked. Oftentimes, you can go online and bring awareness to them by filling out a pothole repair request form or using other methods to get in touch with those responsible for repairing them. Doing so will benefit your vehicle’s health in the long run and the wider community as well.
There is no shortage of dangers and obstacles on the road, but few are more jarring than the pothole. That’s why, as spring approaches, it’s in every driver’s best interest to be extra careful while driving and, if they feel strongly enough, speak up to get something done about them.
He’s Apple’s Chief Operating Officer who became the CEO — but he’s not Tim Cook. Instead, this was how Michael Spindler replaced John Sculley, and made himself ill trying to save the company in the 1990s.
Apple CEO Michael Spindler — image credit: Apple
Michael Scott was the first Apple CEO, brought in by Mike Markkula, who became the second CEO when Scott was shown the door. Markkula, along with Steve Jobs, was then responsible for recruiting John Sculley, who was eventually shown the exit sign too. But while it was Sculley who made Spindler Chief Operating Officer, and the board that made him CEO, Markkula was again behind all of this. It was Markkula who recruited Spindler to join Apple in September 1980.
When you step into the 2026 Mercedes EQS, you feel as if you’ve entered another dimension. There is no longer a mechanical link between the steering wheel and the road wheels. Instead, you have a steer-by-wire system: all of your steering wheel movements are detected by sensors and relayed to control units, which then instruct the actuators on how to make the wheels respond to your commands.
It operates like a sports car with a variable gear ratio that changes instantly based on the speed at which you travel. At low speeds, the ratio quickens for a faster reaction, which is helpful when you need to maneuver in and out of crowded places such as parking lots. At high speeds, it slows for a smooth, stable ride on the highway. The whole system works automatically, with the software deciding what is necessary without requiring any intervention on your part, and the steering effort stays minimal, with even slight adjustments requiring little force.
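The speed-dependent ratio can be sketched as a simple mapping from vehicle speed to steering ratio. The numbers and the linear blend below are hypothetical illustrations, not Mercedes’ actual calibration:

```typescript
// Hypothetical speed-dependent steering ratio: quicker (lower ratio) at
// parking speeds, slower (higher ratio) at highway speeds.
function steeringRatio(speedKmh: number): number {
  const minRatio = 10; // quick response for parking maneuvers
  const maxRatio = 18; // stable, relaxed response at highway speed
  // Blend linearly up to 120 km/h, then hold steady (clamp t to 0..1).
  const t = Math.min(Math.max(speedKmh / 120, 0), 1);
  return minRatio + t * (maxRatio - minRatio);
}

// Road-wheel angle commanded for a given hand-wheel angle at a speed.
function roadWheelAngle(handWheelDeg: number, speedKmh: number): number {
  return handWheelDeg / steeringRatio(speedKmh);
}
```

A production system would blend nonlinearly and account for lateral dynamics, but the principle is the same: the same hand-wheel input turns the road wheels further at parking speeds than on the highway.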
It’s also fascinating how the wheel’s design is rather flat compared to what you’d assume would be on a futuristic EV; it’s a yoke-style wheel with plenty of legroom and a clear view of the screens on the dashboard. You’ll find it easier to get into and out of the vehicle now too, while engineers have even had to develop their own airbags for it, resulting in further safety safeguards.
Before deciding to put this technology into production, the development team tested it for approximately a million kilometers on real roads, proving grounds, and simulators. They’ve also added rear-axle steering, which works in unison with the front system to make the car turn more tightly while remaining silky smooth at high speeds. Everything adds up to a combination that simply makes driving feel more pleasant and stable.
‘Dead game’ is a term thrown around loosely now. You’ll often hear players say it whenever a game drops a few spots in the Steam concurrent players chart, gets a bad balance update, or makes a change that angers the community. But that’s not what actually makes a game dead.
Dead games usually disappear twice. First when the players leave, and then again when people stop talking about them. The games on this list never really managed the second part.
Not all of these games are “dead” in the exact same way. Some are officially gone. Some are technically still playable but functionally abandoned. Some survive through tiny, stubborn communities that refuse to let go. But with the momentum gone and their future in question, all you’re left with is a strong sense of what could have been. And yet, I still miss them all.
Anthem
What was it about?
Anthem had one of the coolest core fantasies I have ever seen wasted. Flying around in a Javelin felt incredible. The movement had speed, weight, and that rare kind of freedom that instantly made you think, “Okay, this is the fantasy.”
Even now, when people talk about Anthem, that is usually the first thing they bring up. Not the loot. Not the missions. The flying.
Why did it fail?
Because everything around the power fantasy could not support it. Anthem’s trailers had many wondering if it was a narrative-driven story game, but it was released as a live-service game that never really understood what kind of game it wanted to be. The content loop was weak, the gameplay got repetitive fast, and the game never found the long-term support it needed to build on its best idea. Anthem is easy to remember because the foundation is so cool, and it is a painful reminder that a concept alone is never enough.
Deceive Inc.
What was it about?
In a sea full of multiplayer shooters, Deceive Inc. felt genuinely fresh in a market that rarely rewards experimentation. The whole spy-social-stealth concept was clever, stylish, and different in a way that made it stand out immediately. It was a game with an actual personality instead of the usual formula that revolved around battle royales and hero shooters.
Why did it fail?
Being clever isn’t always enough to survive. Deceive Inc. never found the player base it deserved. Multiplayer games thrive on momentum and a dedicated community, and once you lose both, recovery gets brutally hard. It also lived in that awkward space where the people who played it seemed to love the idea, but not enough people showed up to keep that idea alive. “How did it never catch on?” is the question we’ve been left with.
Gigantic
What was it about?
Gigantic was one of the best hero shooters out there. It had style and substance, and it looked alive in a way a lot of team-based multiplayer games never do. The art direction, character design, and scale of the matches were all expressive and full of energy. After my uncontested favorite in the genre, this was a close second. Even the remaster reminds people how distinct the game’s identity really was.
A hero and guardian in Gigantic: Rampage Edition. (Gearbox Entertainment)
Why did it fail?
Timing, support, and bad luck all seemed to work against it. Gigantic always came across as the game people admired from a distance. That is the cruel thing about games like this: a game can be original, stylish, and easy to root for, and the market can still shrug it off. Gigantic: Rampage Edition was a relaunch that aimed to bring back interest, but people had already moved on, and as my friend once put it, “the spark is just not there anymore.”
Titanfall 2
What was it about?
Titanfall 2 is a game that still feels better than half the shooters that came after it. Even as gamers complained about Call of Duty shifting into a movement shooter, Titanfall 2 took that same formula and did it right: in-depth movement mechanics with real style. The movement was fast and fluid, the Titans added real spectacle, and the campaign had some of the best level design of its era. To this day, people bring it up with a mix of admiration and frustration because it got so much right.
Why did it fail?
While its story is broadly similar to the rest of the games on this list, the issues were more nuanced here. Respawn Entertainment released the game between two colossal franchise launches, which overshadowed it at launch. Its demanding mechanics drove many casual players to quit in favor of simpler titles. What made matters worse was that the game was held hostage for years by hackers, with no support from the studio, which had shifted most of its focus to its real money-maker, Apex Legends.
Paladins: Champions of the Realm
Paladins is different from the other games on this list because I did not just admire it from a distance. I lived in it. I put nearly 3,000 hours into that game, hit the top ranks, and spent enough time with it to see both its brilliance and its mess up close. What made Paladins special was that it always felt more flexible, more chaotic, and honestly, more creative than people gave it credit for.
The champions had personality, the card and loadout system let you shape your playstyle in ways other hero shooters did not. The whole thing had scrappy energy that made it feel alive even when it was barely being held together. This game is also the reason I decided to make this list of all the great games we’ve lost.
Why did it fail?
Paladins was never allowed to be as great as it could have been. It was plagued by bugs, weird balancing, uneven support, and the constant uphill battle of living in the shadow of Overwatch. But what hurts the most is that Paladins did not die because nobody cared; it faded while people still cared. The small but strong community held out as Hi-Rez suffered from severe mismanagement. Over time, the controversial changes, lack of support, and bugs forced many players to quit.
The game still gets around 2000 players on a good day, with the community supporting it and carrying it longer than most dead games ever get carried. All of these games stay with me for different reasons. Some were wasted potential. Some were mistimed. Some just never found enough people.
A dead game does not stay in your head this long, unless it got something very right.
A new report dubbed “BrowserGate” warns that Microsoft’s LinkedIn is using hidden JavaScript on its website to scan visitors’ browsers for installed extensions and to collect device data.
According to a report by Fairlinked e.V., which claims to be an association of commercial LinkedIn users, Microsoft’s platform injects JavaScript into user sessions that checks for thousands of browser extensions and links the results to identifiable user profiles.
The author claims that this behavior is used to collect sensitive personal and corporate information, as LinkedIn accounts are tied to real identities, employers, and job roles.
“LinkedIn scans for over 200 products that directly compete with its own sales tools, including Apollo, Lusha, and ZoomInfo. Because LinkedIn knows each user’s employer, it can map which companies use which competitor products. It is extracting the customer lists of thousands of software companies from their users’ browsers without anyone’s knowledge,” the report says.
“Then it uses what it finds. LinkedIn has already sent enforcement threats to users of third-party tools, using data obtained through this covert scanning to identify its targets.”
BleepingComputer has independently confirmed part of these claims through our own testing, during which we observed a JavaScript file with a randomized filename being loaded by LinkedIn’s website.
This script checked for 6,236 browser extensions by attempting to access file resources associated with a specific extension ID, a known technique for detecting whether extensions are installed.
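The probing technique works because an extension that declares files under web_accessible_resources exposes them at a predictable chrome-extension:// URL that any page can request. A minimal sketch follows; the extension IDs and resource paths are hypothetical, and the URL check is injected as a callback so the logic can be shown outside a browser (in a real page it would be something like a fetch that succeeds only if the resource exists):

```typescript
// One (id, resource) pair per extension to probe for; real fingerprinting
// scripts ship hard-coded lists of thousands of these pairs.
type Probe = { id: string; resource: string };

// An extension exposing a web-accessible resource serves it at this URL.
function probeUrl(p: Probe): string {
  return `chrome-extension://${p.id}/${p.resource}`;
}

// checkUrl is injected so the detection loop can run anywhere; in a page
// it might be: (url) => fetch(url).then(() => true, () => false)
async function detectExtensions(
  probes: Probe[],
  checkUrl: (url: string) => Promise<boolean>,
): Promise<string[]> {
  const found: string[] = [];
  for (const p of probes) {
    if (await checkUrl(probeUrl(p))) found.push(p.id);
  }
  return found;
}
```

The result is a list of installed-extension IDs that, as the report notes, can then be linked to the logged-in profile.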
This fingerprinting script was previously reported in 2025, but it was only detecting approximately 2,000 extensions at that time. A different GitHub repository from two months ago shows 3,000 extensions being detected, demonstrating that the number of detected extensions continues to grow.
Snippet of the list of extensions scanned for by LinkedIn’s script (Source: BleepingComputer)
While many of the extensions scanned for are related to LinkedIn, the script also, strangely, checks for language and grammar extensions, tools for tax professionals, and other seemingly unrelated software.
The script also collects a wide range of browser and device data, including CPU core count, available memory, screen resolution, timezone, language settings, battery status, audio information, and storage features.
Gathering information about visitors’ devices (Source: BleepingComputer)
BleepingComputer could not verify the claims in the BrowserGate report about the use of the data or whether it is shared with third-party companies.
However, similar fingerprinting techniques have been used in the past to build unique browser profiles, which can enable tracking users across websites.
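As an illustration of how such a profile is built, the sketch below combines a few of the attributes mentioned above into a canonical string and hashes it. The attribute set is abbreviated and the hash choice (32-bit FNV-1a) is purely illustrative, not what LinkedIn’s script necessarily does:

```typescript
// A subset of the device attributes the article lists, combined into
// a single stable identifier. Real trackers use many more inputs.
interface DeviceInfo {
  cpuCores: number;   // e.g. navigator.hardwareConcurrency
  memoryGB: number;   // e.g. navigator.deviceMemory
  screen: string;     // e.g. "2560x1440"
  timezone: string;   // e.g. "Europe/Berlin"
  language: string;   // e.g. "en-US"
}

function fingerprint(info: DeviceInfo): string {
  // Canonicalize so the same device always produces the same string.
  const canonical = [
    info.cpuCores, info.memoryGB, info.screen, info.timezone, info.language,
  ].join("|");
  // 32-bit FNV-1a hash over the canonical string.
  let h = 0x811c9dc5;
  for (let i = 0; i < canonical.length; i++) {
    h ^= canonical.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0;
  }
  return h.toString(16).padStart(8, "0");
}
```

Because the same device yields the same value everywhere the script runs, such a hash can serve as a cross-site identifier even without cookies.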
LinkedIn denies data use allegations
LinkedIn does not dispute that it detects specific browser extensions, telling BleepingComputer that the info is used to protect the platform and its users.
However, the company claims the report is from someone whose account was banned for scraping LinkedIn content and violating the site’s terms of use.
“The claims made on the website linked here are plain wrong. The person behind them is subject to an account restriction for scraping and other violations of LinkedIn’s Terms of Service.
To protect the privacy of our members, their data, and to ensure site stability, we do look for extensions that scrape data without members’ consent or otherwise violate LinkedIn’s Terms of Service.
Here’s why: some extensions have static resources (images, javascript) available to inject into our webpages. We can detect the presence of these extensions by checking if that static resource URL exists. This detection is visible inside the Chrome developer console. We use this data to determine which extensions violate our terms, to inform and improve our technical defenses, and to understand why a member account might be fetching an inordinate amount of other members’ data, which at scale, impacts site stability. We do not use this data to infer sensitive information about members.
For additional context, in retaliation for this website owner’s account restriction, they attempted to obtain an injunction in Germany, alleging LinkedIn had violated various laws. The court ruled against them and found their claims against LinkedIn had no merit, and in fact, this individual’s own data practices ran afoul of the law.
Unfortunately, this is a case of an individual who lost in the court of law, but is seeking to re-litigate in the court of public opinion without regard for accuracy.”
❖ LinkedIn
LinkedIn claims the BrowserGate report stems from a dispute involving the developer of a LinkedIn-related browser extension called “Teamfluence,” which LinkedIn says it restricted for violating the platform’s terms.
In documents shared with BleepingComputer, a German court denied the developer’s request for a preliminary injunction, finding that LinkedIn’s actions did not constitute unlawful obstruction or discrimination.
The court also found that automated data collection alone could infringe upon LinkedIn’s terms of use and that it was entitled to block the accounts to protect its platform.
LinkedIn argues the BrowserGate report is an attempt to re-litigate that dispute publicly.
Regardless of the reasons for the report, one point is undisputed.
LinkedIn’s site uses a fingerprinting script that detects over 6,000 extensions running in a Chromium browser, along with other data about a visitor’s system.
This is not the first time that companies have used aggressive fingerprinting scripts to detect programs running on a visitor’s device; eBay drew similar scrutiny years ago. While eBay never confirmed why it was using those scripts, it was widely believed they were used to block fraud from compromised devices.
It was later discovered that numerous other companies were using the same fingerprinting script, including Citibank, TD Bank, Ameriprise, Chick-fil-A, Lendup, BeachBody, Equifax IQ connect, TIAA-CREF, Sky, GumTree, and WePay.
Satya Nadella in November 2016, in his honeymoon period as Microsoft CEO. (GeekWire File Photo)
[Editor’s Note: We’re excited to welcome Mary Jo Foley as a GeekWire contributor. Mary Jo has been one of the sharpest watchers of Microsoft for many years, currently as Editor in Chief at Directions on Microsoft, an IT planning and advisory service. She’ll be offering her take for GeekWire periodically on the latest developments in Redmond, starting with this piece.]
Reorgs are a way of life at Microsoft. But the pace of them over the last couple of months has led many to wonder what the heck is happening in Redmond — especially when coupled with the company’s stock price having its worst quarter in years.
During the past couple of months, Microsoft has made a noticeable number of organizational changes.
Is this just the usual Microsoft fiscal-year-end housekeeping, or is something different? A blip that will pass, or a new AI-centric reality for the Satya Nadella era?
It’s a mix of both, I’d argue.
The current wave of churn, at least in part, can be attributed to Microsoft’s corporate calendar. Its fourth quarter ends June 30, and its new fiscal year kicks off on July 1. Microsoft often reorgs and does layoffs in the months leading up to this point as a way to reset for the coming year.
The company also is taking actions to reduce hierarchy and make the corporate structure flatter, as are a number of tech companies, in the hopes of becoming nimbler.
A year ago, Chief Financial Officer Amy Hood proclaimed that Microsoft was “increasing our agility by reducing layers with fewer managers.” With moves like replacing 35-year veteran Executive Vice President Jha with a new gang of four, rather than just another single uber-boss, Microsoft is following through on those promises.
It’s not all mundane matters at play, however.
Thanks to AI, the way companies are prioritizing and following through on their strategies is different. Microsoft isn’t immune to the market’s jitters around capex overspending on AI when ROI still remains questionable. Its no-longer-exclusive partnership with OpenAI has people inside and outside the company worried, too, as does the fact that a whopping 45 percent of its unfulfilled Azure backlog last quarter was attributable to OpenAI.
Investor pressure on the company to keep its Azure business growing during a time of admitted capacity challenges also can’t be dismissed as contributing to the current churn. As a result, Microsoft travel budgets, new-hire spending, and investments in unproven areas are all on the chopping block.
Almost nothing (except towels, maybe) is immune from scrutiny with the goal of freeing up more dollars to pay for AI and cloud build-out.
But those reasons alone may not be enough to explain why Microsoft is looking like the least magnificent of the so-called Magnificent Seven tech leaders right now.
Microsoft continues to struggle in the consumer space, and not just with Xbox. Most of the company’s revenues have been and continue to be from sales to commercial customers. That consumer weakness is especially apparent when it comes to AI.
Microsoft recently disclosed that only 3 percent of its Microsoft 365 customers are paying for Microsoft 365 Copilot. But the adoption rate for its consumer Copilot is even worse, and far lower than the rates for OpenAI’s ChatGPT and Google Gemini.
Suleyman’s reassignment came later than some expected (and hoped), given the starts and stops with Microsoft’s consumer AI efforts. Mico, a ghost-like Clippy wannabe, seems to be in limbo. Microsoft’s push to make voice one of the main ways users interact with AI on their PCs, when people don’t talk to PCs like they do phones, seems to be falling flat.
Meanwhile, the Windows organization is trying to right the ship by backing out of some of its over-zealous AI plans. Rather than trying to force AI into Notepad and Photos, execs said they will focus on some top consumer requests, ranging from taskbar customization to adding the ability to pause updates at will.
Microsoft shows no signs of giving up on the consumer space. Maybe new blood will find new ways to harness the company’s enterprise tactics to boost its consumer share? If not, there’s always the next reorg. …