Cloudflare’s new Dynamic Workers ditch containers to run AI agent code 100x faster

Web infrastructure giant Cloudflare is seeking to transform the way enterprises deploy AI agents with the open beta release of Dynamic Workers, a new lightweight, isolate-based sandboxing system that it says starts in milliseconds, uses only a few megabytes of memory, and can run on the same machine — even the same thread — as the request that created it.

Compared with traditional Linux containers, the company says Dynamic Workers are roughly 100x faster to start and between 10x and 100x more memory efficient.

Cloudflare has spent months pushing what it calls “Code Mode,” the idea that large language models often perform better when they are given an API and asked to write code against it, rather than being forced into one tool call after another.

The company says converting an MCP server into a TypeScript API can cut token usage by 81%, and it is now positioning Dynamic Workers as the secure execution layer that makes that approach practical at scale.

For enterprise technical decision makers, that is the bigger story. Cloudflare is trying to turn sandboxing itself into a strategic layer in the AI stack. If agents increasingly generate small pieces of code on the fly to retrieve data, transform files, call services or automate workflows, then the economics and safety of the runtime matter almost as much as the capabilities of the model. Cloudflare’s pitch is that containers and microVMs remain useful, but they are too heavy for a future where millions of users may each have one or more agents writing and executing code constantly.

The history of modern isolated runtime environments

To understand why Cloudflare is doing this, it helps to look at the longer arc of secure code execution. Modern sandboxing has evolved through three main models, each trying to build a better digital box: smaller, faster and more specialized than the one before it.

The first model is the isolate. Google introduced the v8::Isolate API in 2011 so the V8 JavaScript engine could run many separate execution contexts efficiently inside the same process. In effect, a single running program could spin up many small, tightly separated compartments, each with its own code and variables.

In 2017, Cloudflare adapted that browser-born idea for the cloud with Workers, betting that the traditional cloud stack was too slow for instant, globally distributed web tasks. The result was a runtime that could start code in milliseconds and pack many environments onto a single machine. The trade-off is that isolates are not full computers. They are strongest with JavaScript, TypeScript and WebAssembly, and less natural for workloads that expect a traditional machine environment.

The second model is the container. Containers had been technically possible for years through Linux kernel features, but Docker, which popularized them in 2013, turned them into the default software packaging model.

Containers solved a huge portability problem by letting developers package code, libraries and settings into a predictable unit that could run consistently across systems. That made them foundational to modern cloud infrastructure. But they are relatively heavy for the sort of short-lived tasks Cloudflare is talking about here. The company says containers generally take hundreds of milliseconds to boot and hundreds of megabytes of memory to run, which becomes costly and slow when an AI-generated task only needs to execute for a moment.

The third model is the microVM. Popularized by AWS Firecracker in 2018, microVMs were designed to offer stronger machine-like isolation than containers without the full bulk of a traditional virtual machine. They are attractive for running untrusted code, which is why they have started to show up in newer AI-agent systems such as Docker Sandboxes. But they still sit between the other two models: stronger isolation and more flexibility than an isolate, but slower and heavier as well.

That is the backdrop for Cloudflare’s pitch. The company is not claiming containers disappear, or that microVMs stop mattering. It is claiming that for a growing class of web-scale, short-lived AI-agent workloads, the default box has been too heavy, and the isolate may now be the better fit.

Cloudflare’s case against the container bottleneck

Cloudflare’s argument is blunt: for “consumer-scale” agents, containers are too slow and too expensive. In the company’s framing, a container is fine when a workload persists, but it is a bad fit when an agent needs to run one small computation, return a result and disappear. Developers either keep containers warm, which costs money, or tolerate cold-start delay, which hurts responsiveness. They may also be tempted to reuse a live sandbox across multiple tasks, which weakens isolation.

Dynamic Worker Loader is Cloudflare’s answer. The API allows one Worker to instantiate another Worker at runtime with code provided on the fly, usually by a language model. Because these dynamic Workers are built on isolates, Cloudflare says they can be created on demand, run one snippet of code, and then be thrown away immediately afterward. In many cases, they run on the same machine and even the same thread as the Worker that created them, which removes the need to hunt for a warm sandbox somewhere else on the network.
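The article does not reproduce the API itself, but the lifecycle it describes can be sketched roughly as follows. Everything here — `LoaderStub`, `executeGenerated`, and the callback shape — is an illustrative stand-in, not Cloudflare's actual Worker Loader interface; in the real runtime the code would become a fresh V8 isolate rather than an evaluated function.

```typescript
type WorkerCode = { mainModule: string; modules: Record<string, string> };

// Stand-in for a loader binding: it "instantiates" the provided code by
// evaluating the main module's source and returning its exported run().
class LoaderStub {
  async get(id: string, getCode: () => Promise<WorkerCode>) {
    const { mainModule, modules } = await getCode();
    const src = modules[mainModule];
    // In the real runtime this would spin up an isolate; here we just eval.
    const run = new Function(`${src}; return run;`)();
    return { run };
  }
}

// One fresh, disposable sandbox per snippet: create, run once, discard.
async function executeGenerated(snippet: string): Promise<unknown> {
  const loader = new LoaderStub();
  const worker = await loader.get(`snippet-${Date.now()}`, async () => ({
    mainModule: "main.js",
    modules: { "main.js": snippet },
  }));
  return worker.run();
}
```

The point of the sketch is the shape of the lifecycle: the sandbox is created on demand from code supplied at runtime, executes a single unit of work, and is thrown away, with nothing kept warm between requests.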

The company is also pushing hard on scale. It says many container-based sandbox providers limit concurrent sandboxes or the rate at which they can be created, while Dynamic Workers inherit the same platform characteristics that already let Workers scale to millions of requests per second. In Cloudflare’s telling, that makes it possible to imagine a world where every user-facing AI request gets its own fresh, isolated execution environment without collapsing under startup overhead.

Security remains the hardest part

Cloudflare does not pretend this is easy to secure. In fact, the company explicitly says hardening an isolate-based sandbox is trickier than relying on hardware virtual machines, and notes that security bugs in V8 are more common than those in typical hypervisors. That is an important admission, because the entire thesis depends on convincing developers that an ultra-fast software sandbox can also be safe enough for AI-generated code.

Cloudflare’s response is that it has nearly a decade of experience doing exactly that. The company points to automatic rollout of V8 security patches within hours, a custom second-layer sandbox, dynamic cordoning of tenants based on risk, extensions to the V8 sandbox using hardware features like MPK, and research into defenses against Spectre-style side-channel attacks. It also says it scans code for malicious patterns and can block or further sandbox suspicious workloads automatically. Dynamic Workers inherit that broader Workers security model.

That matters because without the security story, the speed story sounds risky. With it, Cloudflare is effectively arguing that it has already spent years making isolate-based multi-tenancy safe enough for the public web, and can now reuse that work for the age of AI agents.

Code Mode: from tool orchestration to generated logic

The release makes the most sense in the context of Cloudflare’s larger Code Mode strategy. The idea is simple: instead of giving an agent a long list of tools and asking it to call them one by one, give it a programming surface and let it write a short TypeScript function that performs the logic itself. That means the model can chain calls together, filter data, manipulate files and return only the final result, rather than filling the context window with every intermediate step. Cloudflare says that cuts both latency and token usage, and improves outcomes especially when the tool surface is large.
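To make that concrete, here is a hedged sketch of what such a generated function might look like. The `CRM` interface and `totalEuRevenue` are hypothetical names invented for illustration; the point is that the intermediate customer list stays inside the sandbox, and only the final number re-enters the model's context.

```typescript
// Hypothetical typed API surface handed to the model.
interface CRM {
  listCustomers(): Promise<{ id: string; region: string }[]>;
  getRevenue(id: string): Promise<number>;
}

// Model-generated logic: chain calls, filter, aggregate, return one value,
// instead of one tool-call round trip (and context-window entry) per step.
async function totalEuRevenue(crm: CRM): Promise<number> {
  const customers = await crm.listCustomers();
  const eu = customers.filter((c) => c.region === "EU");
  const revenues = await Promise.all(eu.map((c) => crm.getRevenue(c.id)));
  return revenues.reduce((sum, r) => sum + r, 0);
}
```

In the tool-calling style, the full customer list and every per-customer revenue lookup would flow through the model as tokens; here they are ordinary in-sandbox values.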

The company points to its own Cloudflare MCP server as proof of concept. Rather than exposing the full Cloudflare API as hundreds of individual tools, it says the server exposes the entire API through two tools — search and execute — in under 1,000 tokens because the model writes code against a typed API instead of navigating a long tool catalog.

That is a meaningful architectural shift. It moves the center of gravity from tool orchestration toward code execution. And it makes the execution layer itself far more important.

Why Cloudflare thinks TypeScript beats HTTP for agents

One of the more interesting parts of the launch is that Cloudflare is also arguing for a different interface layer. MCP, the company says, defines schemas for flat tool calls but not for programming APIs. OpenAPI can describe REST APIs, but it is verbose both in schema and in usage. TypeScript, by contrast, is concise, widely represented in model training data, and can communicate an API’s shape in far fewer tokens.
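As a rough illustration of that token argument (all names invented): a TypeScript signature conveys an operation's parameters and return shape in a couple of lines, where an OpenAPI description of the same endpoint would need a paths entry, parameter schemas, and response schemas, typically dozens of lines of YAML.

```typescript
// The entire shape of a hypothetical DNS-records operation, in two lines:
// path-like intent, parameters, optionality, and the response type.
interface DnsApi {
  listRecords(
    zoneId: string,
    type?: "A" | "CNAME",
  ): Promise<{ name: string; content: string }[]>;
}

// A toy in-memory implementation, just to show the signature is enough
// to program against.
const fakeDns: DnsApi = {
  async listRecords(zoneId, type) {
    const all = [
      { name: "www", content: "1.2.3.4", type: "A" },
      { name: "blog", content: "example.com", type: "CNAME" },
    ];
    return all
      .filter((r) => !type || r.type === type)
      .map(({ name, content }) => ({ name, content }));
  },
};
```

A model that has seen large amounts of TypeScript can read the interface directly, which is the crux of Cloudflare's claim that typed signatures beat verbose schemas as a machine-facing API description.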

Cloudflare says the Workers runtime can automatically establish a Cap’n Web RPC bridge between the sandbox and the harness code, so a dynamic Worker can call those typed interfaces across the security boundary as if it were using a local library. That lets developers expose only the exact capabilities they want an agent to have, without forcing the model to reason through a sprawling HTTP interface.

The company is not banning HTTP. In fact, it says Dynamic Workers fully support HTTP APIs. But it clearly sees TypeScript RPC as the cleaner long-term interface for machine-generated code, both because it is cheaper in tokens and because it gives developers a narrower, more intentional security surface.

Credential injection and tighter control over outbound access

One of the more practical enterprise features in the release is globalOutbound, which lets developers intercept every outbound HTTP request from a Dynamic Worker. They can inspect it, rewrite it, inject credentials, respond to it directly, or block it entirely. That makes it possible to let an agent reach outside services while never exposing raw secrets to the generated code itself.
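The pattern can be sketched as below. The handler shape, names (`handleOutbound`, `forward`), and the plain-object request type are assumptions made for a self-contained example; the real globalOutbound hook operates on actual fetch requests inside the Workers runtime.

```typescript
// Plain-object stand-ins for request/response, so the pattern is visible
// without the Workers runtime.
interface OutboundRequest { url: string; headers: Record<string, string> }
interface OutboundResponse { status: number; body?: string }

const STRIPE_KEY = "sk_test_example"; // held by the host, never by the agent

// Every request leaving the sandbox passes through this handler.
function handleOutbound(
  req: OutboundRequest,
  forward: (req: OutboundRequest) => OutboundResponse,
): OutboundResponse {
  const host = new URL(req.url).hostname;

  // Block anything outside the allow-list entirely.
  if (host !== "api.stripe.com") {
    return { status: 403, body: "forbidden" };
  }

  // Inject the credential on the way out; the generated code only ever
  // issued a bare, unauthenticated request and never saw the key.
  return forward({
    ...req,
    headers: { ...req.headers, Authorization: `Bearer ${STRIPE_KEY}` },
  });
}
```

The design choice worth noting is that the secret lives entirely on the host side of the boundary: even a fully compromised snippet can only ask for an outbound request, never read the credential that gets attached to it.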

Cloudflare positions that as a safer way to connect agents to third-party services requiring authentication. Instead of trusting the model not to mishandle credentials, the developer can add them on the way out and keep them outside the agent’s visible environment. In enterprise settings, that kind of blast-radius control may matter as much as the performance gains.

More than a runtime: the helper libraries matter too

Another reason the announcement lands as more than a low-level runtime primitive is that Cloudflare is shipping a toolkit around it. The @cloudflare/codemode package is designed to simplify running model-generated code against AI tools using Dynamic Workers. At its core is DynamicWorkerExecutor(), which sets up a purpose-built sandbox with code normalization and direct control over outbound fetch behavior. The package also includes utility functions to wrap an MCP server into a single code() tool or generate MCP tooling from an OpenAPI spec.

The @cloudflare/worker-bundler package handles the fact that Dynamic Workers expect pre-bundled modules. It can resolve npm dependencies, bundle them with esbuild, and return the module map the Worker Loader expects. The @cloudflare/shell package adds a virtual filesystem backed by a durable Workspace using SQLite and R2, with higher-level operations like read, write, search, replace, diff and JSON update, plus transactional batch writes.

Taken together, those packages make the launch feel much more complete. Cloudflare is not just exposing a fast sandbox API. It is building the surrounding path from model-generated logic to packaged execution to persistent file manipulation.

Isolates versus microVMs: two different homes for agents

Cloudflare’s launch also highlights a growing split in the AI-agent market. One side emphasizes fast, disposable, web-scale execution. The other emphasizes deeper, more persistent environments with stronger machine-like boundaries.

Docker Sandboxes is a useful contrast. Rather than using standard containers alone, it uses lightweight microVMs to give each agent its own private Docker daemon, allowing the agent to install packages, run commands and modify files without directly exposing the host system. That is a better fit for persistent, local or developer-style environments. Cloudflare is optimizing for something different: short-lived, high-volume execution on the global web.

So the trade-off is not simply security versus speed. It is depth versus velocity. MicroVMs offer a sturdier private fortress and broader flexibility. Isolates offer startup speed, density and lower cost at internet scale. That distinction may become one of the main dividing lines in agent infrastructure over the next year.

Community reaction: hype, rivalry and the JavaScript catch

The release also drew immediate attention from developers on X, with reactions that captured both excitement and skepticism.

Brandon Strittmatter, a Cloudflare product lead and founder of Outerbase, called the move “classic Cloudflare,” praising the company for “changing the current paradigm on containers/sandboxes by reinventing them to be lightweight, less expensive, and ridiculously fast.”

Zephyr Cloud CEO Zack Chapple called the release “worth shouting from the mountain tops.”

But the strongest caveat surfaced quickly too: this system works best when the agent writes JavaScript. Cloudflare says Workers can technically run Python and WebAssembly, but that for small, on-demand snippets, “JavaScript will load and run much faster.”

That prompted criticism from YouTuber and ThursdAI podcast host Alex Volkov, who wrote that he “got excited… until I got here,” reacting to the language constraint.

Cloudflare’s defense is pragmatic and a little provocative. Humans have language loyalties, the company argues, but agents do not. In Cloudflare’s words, “AI will write any language you want it to,” and JavaScript is simply well suited to sandboxed execution on the web. That may be true in the narrow sense the company intends, but it also means the platform is most naturally aligned with teams already comfortable in the JavaScript and TypeScript ecosystem.

The announcement also triggered immediate competitive positioning. Nathan Flurry of Rivet used the moment to pitch his Secure Exec product as an open-source alternative that supports a broader range of platforms, including Vercel, Railway and Kubernetes, rather than being tied closely to Cloudflare’s own stack.

That reaction is worth noting because it shows how quickly the sandboxing market around agents is already splitting between vertically integrated platforms and more portable approaches.

Early use cases: AI apps, automations and generated platforms

Cloudflare is pitching Dynamic Workers for much more than quick code snippets. The company highlights Code Mode, AI-generated applications, fast development previews, custom automations and user platforms where customers upload or generate code that must run in a secure sandbox.

One example it spotlights is Zite, which Cloudflare says is building an app platform where users interact through chat while the model writes TypeScript behind the scenes to build CRUD apps, connect to services like Stripe, Airtable and Google Calendar, and run backend logic. Cloudflare quotes Zite CTO and co-founder Antony Toron saying Dynamic Workers “hit the mark” on speed, isolation and security, and that the company now handles “millions of execution requests daily” using the system.

Even allowing for vendor framing, that example gets at the company’s ambition. Cloudflare is not just trying to make agents a bit more efficient. It is trying to make AI-generated execution environments cheap and fast enough to sit underneath full products.

Pricing and availability

Dynamic Worker Loader is now in open beta and available to all users on the Workers Paid plan. Cloudflare says dynamically loaded Workers are priced at $0.002 per unique Worker loaded per day, in addition to standard CPU and invocation charges, though that per-Worker fee is waived during the beta period. For one-off code generation use cases, the company says that cost is typically negligible compared with the inference cost of generating the code itself.

That pricing model reinforces the larger thesis behind the product: that execution should become a small, routine part of the agent loop rather than a costly special case.

The bigger picture

Cloudflare’s launch lands at a moment when AI infrastructure is becoming more opinionated. Some vendors are leaning toward long-lived agent environments, persistent memory and machine-like execution. Cloudflare is taking the opposite angle. For many workloads, it argues, the right agent runtime is not a persistent container or a tiny VM, but a fast, disposable isolate that appears instantly, executes one generated program, and vanishes.

That does not mean containers or microVMs go away. It means the market is starting to split by workload. Some enterprises will want deeper, more persistent environments. Others — especially those building high-volume, web-facing AI systems — may want an execution layer that is as ephemeral as the requests it serves.

Cloudflare is betting that this second category gets very large, very quickly. And if that happens, Dynamic Workers may prove to be more than just another Workers feature. They may be Cloudflare’s attempt to define what the default execution layer for internet-scale AI agents looks like.

Videos Catch Amazon Delivery Drones Dropping Packages From 10 Feet in the Air

There have been a few complaints about Amazon’s drone delivery service. “The automated mailmen are dropping off packages from 10 feet in the air,” reports the New York Post, “rendering the contents of each box susceptible to crashing and smashing.”

One example? Tamara Hancock filmed a drone delivering a bottle of Torani flavoring syrup to her home in Arizona (as a test of how Amazon handled fragile items). The syrup came in a plastic bottle, not glass, but the drone dropped the package from so high up that the impact cracked the bottle’s cap. (In the video Hancock opens her delivery to find leaked flavoring syrup “everywhere.”)

The delivery was hard to film, Hancock says, because “If the drone sees me in the back yard, it will not drop, because it is worried about hurting humans or animals.” The Post notes that Amazon’s “AI-charged fleet” of drones is outfitted with industry-leading “sense and avoid” technology and equipped to drop off eligible items, weighing a maximum of five pounds, at designated areas in 60 minutes or less.
The high-tech, however, apparently does not ensure gentle landings. Collisions, including a recent crash-and-burn into a Texas building, as well as several mid-flight malfunctions in rainy weather, have abounded since the drones’ inaugural launch….

Tasha, a separate Amazon user, spotted a drone dropping a package near the paved driveway of a neighbor’s yard. Unfortunately, its propellers caused other, previously delivered parcels to blow away, sending one into the street… In a statement to The Post, Amazon apologized for one of the “rare instances when products don’t arrive as expected.”
Amazon’s drone fleet has been running since late 2024, the Post adds, and is now offering “ultra-fast” shipping in U.S. states including Arizona, Florida, Michigan, Kansas and Texas.

The machines do seem massive. I’m surprised neighbors aren’t complaining about the noise.

Palantir Posts Bond Villain Manifesto On X

DeanonymizedCoward writes: Engadget reports that Palantir has posted to X a summary of CEO Alex Karp and Nicholas W. Zamiska’s 2025 book, The Technological Republic, which reads like a utopian idealist doodled on a Bond villain’s whiteboard. While the post makes some decent points, it also highlights the Big-AI attitude that the AI surveillance state is in fact a good thing, and strongly implies that the Good Guys need to do war crimes before the Bad Guys get around to it. “The ability of free and democratic societies to prevail requires something more than moral appeal,” one of the 22 points states. “It requires hard power, and hard power in this century will be built on software.”

The book is billed as “a passionate call for the West to wake up to our new reality,” and other excerpts in the social media post include assertions such as: “Free email is not enough. The decadence of a culture or civilization, and indeed its ruling class, will be forgiven only if that culture is capable of delivering economic growth and security for the public”; “National service should be a universal duty”; “The postwar neutering of Germany and Japan must be undone”; and “Some cultures have produced vital advances; others remain dysfunctional and regressive.”

The statement criticizes the West’s resistance to “defining national cultures in the name of inclusivity,” as well as the treatment of billionaires and the “ruthless exposure of the private lives of public figures.”

Podcast: QUAD ESL 2912X Electrostatic Speakers at AXPONA 2026

Recorded from the show floor at AXPONA 2026, this episode brings together Cornelius and Jamie O’Callaghan of the IAG Hi-Fi Division for a deep dive into the legacy and future of QUAD’s electrostatic loudspeakers, including the ESL 2912X. We break down what makes electrostatic panel speakers fundamentally different from traditional designs, why QUAD has remained committed to the technology for decades, and how the latest generation improves on transparency, dispersion, and real world usability. The conversation also explores how these iconic speakers fit into a modern hi-fi landscape increasingly dominated by compact and wireless solutions, and why QUAD continues to attract listeners who care more about realism than convenience.

This episode was recorded on April 10, 2026 (the first day of AXPONA 2026).

Seattle-area billboard takes a page from Bay Area playbook: ‘Startup energy should be more visible’

A billboard for Bellevue, Wash., startup Summation, visible from SR 520 in Bellevue. (Photo courtesy of Summation)

A Bellevue, Wash.-based startup that came out of stealth last fall is really trying to get noticed now, taking a page out of a playbook that’s more prevalent in Silicon Valley.

Summation is an AI platform that helps enterprise leaders draw insights from large volumes of internal data. A bright orange billboard visible from SR 520 doesn’t say that, but it does put the company’s name in sight of drivers — many of whom potentially work in tech — heading east along the highway.

“We’re building Summation here in Bellevue, and wanted to do something a little bold and a little playful — for recruiting, for awareness, and because startup energy should be more visible around here,” CEO Ian Wong told GeekWire.

Wong is the former CTO of real estate giant Opendoor and Square’s first data scientist. He co-founded Summation in 2024 with Ramachandran “RC” Ramarathinam, who led Opendoor’s core transaction platform.

Summation raised $35 million in funding from Benchmark and Kleiner Perkins in October.

Tech company billboards are a big part of the landscape in the San Francisco Bay Area. Signs advertise a whole new era of AI-focused startup names and products. Last summer, The New York Times published a fun quiz challenging readers to decode what some of the billboards were even selling around Silicon Valley.

Wong said capturing a slice of that energy was part of the point with his company’s billboard in Bellevue, which went up about two weeks ago near the Burgermaster restaurant along Northup Way.

“In SF, startup ambition is just visible — on 101, on the sides of buildings, in every coffee shop,” he said. “The Seattle/Bellevue area has world-class technical talent, but the scene here has always been understated. We wanted to put up a small signal that ambitious things are being built on this side of the lake, too — and if you want to work on one of them, come find us.”

Bellevue-based startup Stasig used a reverse tactic back in 2024 when it launched an aggressive campaign to spread its name across the Bay Area with more than 200 billboards and posters at transit shelters and stations.

Summation employs about 35 people right now and is hiring across engineering, product, and go-to-market.

Summation’s platform sits on top of data systems and runs massive calculations automatically, testing different scenarios and using AI agents to explore different questions in parallel. The software also automates financial reconciliations, variance analysis, and management reporting.

The advertising lines up with what Wong called “a big product release” coming next week.

“Always be hiring,” he said. “And selling.”

When it comes to leadership, do companies know what they are doing?

Robert Walters research suggests that many Irish organisations are lacking a clear leadership succession plan.

Leadership often defines an organisation and Robert Walters has published data indicating that a number of companies are not as prepared for upcoming changes as they should be. 

The report found that, of those who contributed their data, just 16pc of organisations have a leadership succession plan in place. More than 40pc of Irish companies have no plan in place whatsoever and 7pc are unsure whether one currently exists or not. At the same time, 72pc of Irish leaders said they have a shortage of senior talent, with half describing the shortage as significant.

“There is a clear gap between how concerned organisations are about senior talent shortages and how prepared they are for leadership change,” said Suzanne Feeney, the country manager at Robert Walters Ireland.

She added: “In many organisations, succession planning has historically been handled informally. But they are now operating in a far more complex environment than they were even a few years ago. 

“Advances in artificial intelligence, geopolitical uncertainty and economic pressures are all contributing to more frequent leadership transitions. With only one in five businesses having an established succession plan, many are leaving themselves exposed to significant operational risk.”

Pipeline pressures

Securing and retaining skilled professionals is a key issue for employers in 2026. The recent Data Salaries & Job Sentiment Analysis 2026 report, published by Analytics Institute and SAS, highlighted the growing challenges being experienced by organisations looking to expand their data capabilities. 

The report found that 64pc of organisations have future plans to increase the size of their data teams, whereas 70pc of professionals explained that they are unlikely to change employers this year. 

Commenting on the Robert Walters report, Adam Gordon, the global head of talent development at Robert Walters, said: “Leadership continuity can be a challenge for organisations of every size, from SMEs to the world’s most recognised brands.

“Senior talent is one of the hardest resources to replace and finding the right long-term successor can take time. Interim leaders can play a valuable role here by maintaining stability and ensuring critical decisions continue to move forward while organisations assess their long-term options.”

Robert Walters’ research also points to challenges in the development of future leaders, with the report suggesting that nearly two-fifths (38pc) of participants are struggling to identify and develop strong successors within their business. 

Feeney said: “Many organisations have talented people internally, but identifying future leaders early and giving them the right development opportunities takes deliberate effort.

“At its core, succession planning is about future-proofing the organisation, building a strong leadership pipeline comprising internal progression and external hiring to ensure organisations have the resilience they need for the long term.”

Undoubtedly, the working landscape for modern-day employees is evolving quickly in 2026. An earlier report from Robert Walters, at the start of the year, found that changes in remote and in-person arrangements could compel skilled employees to increase their engagement in the workplace. 

More than half (59pc) of contributing Irish employees said that they want their place of employment to adopt a microshifting schedule, with Feeney noting that microshifting has the potential to increase engagement, accountability and even time spent in the office.

North Korea hackers blamed for $290M crypto theft

Published

on

Over the weekend, hackers stole more than $290 million in cryptocurrency from Kelp DAO, a protocol that allows users to earn yields on idle crypto investments. 

By Monday, LayerZero, one of the projects affected by the hack, accused North Korea of carrying out the heist. The hack is now the largest crypto theft of the year so far, surpassing an earlier hack at crypto exchange Drift in April that netted hackers around $285 million.

Per its post on X, LayerZero said the hackers exploited Kelp DAO via its LayerZero bridge, which allows different blockchains to send instructions to each other. The hackers then took advantage of Kelp’s own security configuration, which did not require multiple verifications before approving transactions. That allowed the hackers to siphon off the funds with fraudulent transactions.

The company cited “preliminary indicators” that point to North Korea as the culprit, in particular its crypto-focused hacking group known as TraderTraitor.

Kelp DAO responded by blaming LayerZero for the theft instead.

In the last few years, North Korean hackers working for Kim Jong Un’s regime have become highly successful at stealing crypto. Last year, North Korean hackers stole more than $2 billion in crypto. Overall, since 2017, the total amount of stolen crypto by North Korea is said to be around $6 billion.

Allbirds’ Move To AI Has Echoes of the Dot-Com Frenzy

Published

on

An anonymous reader quotes a report from Bloomberg, written by writer Austin Carr: Allbirds is pivoting to artificial intelligence. The San Francisco brand, whose wool running shoes were once the sneaker du jour among the tech crowd, announced last week that it was expanding into AI computing infrastructure. The bizarre strategic shift was immediately greeted with a surprising frenzy on Wall Street, where shares of Allbirds soared 582% last Wednesday before dropping the next day. […] Of course, the absurdity of Allbirds’ situation echoed familiar Silicon Valley tropes — from the endless startup pivots of the 2010s to the more recent boom-and-bust cycles of arbitrarily valued crypto coins. But it immediately reminded me of the marketing ploys of the dot-com crash. After all, some of the more iconic fails ended up being retailers such as Pets.com, Webvan, etc., riding the web wave with little to show for it beyond terrible margins.

One particular comparison from that period stands out as relevant to Allbirds: Zap.com. The holding company behind it, Zapata Corp., had a long and convoluted history, but was essentially selling fish-oil products by the time it decided to reinvent itself as an internet portal. It amassed a variety of web properties — in media, e-commerce, gaming and so on — and even once tried to acquire the search engine Excite. Spoiler alert: Zap flopped. Jen Heck, then a young employee at one of Zap’s up-and-coming portfolio entities, remembers how quickly the hype of that web 1.0 era turned to hell. As absurd as Zapata’s pivot sounds today, it seemed feasible during the excitement of the internet revolution. “We went from like, ‘Wow, this life thing is just so easy,’ to it all ending so suddenly,” Heck recalls. The ones who survived that tech bubble, she says, actually had differentiated products and the right creative thinkers building them — and weren’t just cynically jumping on the latest hot trend. “‘Internet’ was the magic word then, and ‘AI’ is the magic word now,” Heck says.

SaaS is not dead. You are just being sold the funeral

The “AI has killed software” narrative has a handful of very loud beneficiaries and a lot of quiet evidence against it. The companies that will survive the next five years are the ones that refuse to treat the hyperscalers as the new gods.

Whenever I make a claim, I like to do my research first, so as not to sound like a LinkedIn post. I wish more people in this industry did the same, as there is a prevailing mood in which big numbers are treated as the whole story.


When the Black Death came among us, people probably thought it was the end. When wars came to our societies, people thought it was the end. Yet, in a strange way, we have a natural power to overcome obstacles and turn change to our advantage.

When AI started to infiltrate our work, and later our personal lives, a large group of people declared that “AI will replace people,” that this technology, not even particularly new, would conquer our brains, hearts, and work, and lead us where it wanted.

Yet we are still working; people are still writing, thinking, creating, building.

In the last two years, more and more people have been saying that “SaaS is dead.” Of course, this phrase came from someone’s mouth, someone with enough influence to shape general opinion, and everybody was already in black, ready for the funeral.

In August 2024, Klarna’s chief executive, Sebastian Siemiatkowski, sat on an earnings call and mentioned, almost in passing, that the Swedish fintech had “shut down Salesforce.” Workday was next.

Klarna would build its own AI-driven replacements, a lightweight stack unshackled from the bloat of traditional enterprise software. The quote moved markets. Articles followed with headlines about the death of SaaS. Salesforce’s Marc Benioff, on stage at Dreamforce, was asked to respond to a customer who had apparently decided the future was AI and the past was his product. He looked, by his own admission, embarrassed.

Six months later, Siemiatkowski quietly clarified what had actually happened. Klarna had not replaced Salesforce with AI. It had replaced Salesforce with other SaaS: Deel for HR, third-party tools for CRM, the Swedish graph database Neo4j for data consolidation.

Klarna still uses Slack, which is still a Salesforce product. Siemiatkowski himself admitted on X that he was “tremendously embarrassed” by how the story had spiralled.

“No,” he wrote, “we did not replace SaaS with an LLM.”

This is the single most instructive story in enterprise software of the past two years. The distance between what was said and what was done reveals the mechanics of the entire “SaaS is dead” narrative. The headline travelled. The correction did not.

An industry of analysts, venture capitalists, and foundation model CEOs built a year of marketing on the louder half.

Start by asking who gains from the story that software-as-a-service is being replaced by artificial intelligence, because the answer is surprisingly narrow. The hyperscalers do, because AI workloads justify the $660 to $690 billion in capital expenditure the five largest US cloud and technology companies have committed for 2026, according to Futurum Group analysis, nearly double the previous year.

The foundation model labs benefit, because every dollar of enterprise software spend redirected to their APIs validates valuations that are otherwise difficult to defend. OpenAI ended 2025 at around $20 billion in annual recurring revenue. Anthropic crossed $9 billion in January 2026. These are genuinely large numbers. They are also, respectively, about three per cent and a little over one per cent of the hyperscaler capex being spent to serve them.

The venture capitalists benefit because their portfolio repricing depends on the narrative that AI-native companies will outrun the incumbents they once funded. And Nvidia, supplier and financier of the boom, benefits until it no longer does.

In March 2026, CEO Jensen Huang confirmed that his recent investments in OpenAI and Anthropic would likely be the last. The circular financing (Nvidia invests in OpenAI, OpenAI buys Nvidia chips) had reached the point where even the chipmaker was ready to stop calling it a virtuous cycle.

MIT’s Michael Cusumano, quoted by Bloomberg, put the arithmetic bluntly: “Nvidia is investing $100 billion in OpenAI stock, and OpenAI is saying they are going to buy $100 billion or more of Nvidia chips.”

You could call that demand. You could also call it bookkeeping.

The 95% number that should have ended the hype

The harder question is whether any of this is producing business results. Here the data is less generous than the pitch decks.

In July 2025, MIT’s Project NANDA published “The GenAI Divide: State of AI in Business 2025”, based on 150 executive interviews, 350 survey responses, and analysis of 300 public AI deployments. Its headline finding: despite roughly $30 to $40 billion in enterprise generative AI spending, 95% of pilots delivered no measurable impact on profit and loss. Only 5% reached production.

The response from the industry was not to recalibrate. It was to argue that the wrong metric was being used. UC Berkeley published a rebuttal suggesting ROI was an “industrial-era” measurement unsuited to a “cognitive-era transformation.”

This is what every hype cycle says in its late phase: that profit is a distraction, that what is being built is too large for ordinary standards. The same argument was made about WeWork, the metaverse, and blockchain.

Each time, the underlying assumption was that the people with capital and megaphones understood the future better than the people actually trying to run a business.

The 5% of AI projects that did succeed, MIT found, shared specific traits. They were built by specialised vendors, not attempted internally. They focused on back-office automation rather than sales theatre. They integrated deeply with existing workflows. Over half of enterprise AI budgets, meanwhile, were going to sales and marketing tools where ROI was lowest.

This is not a revolution sweeping through the enterprise. It is a lot of companies buying demo-friendly products that do not produce returns, while a minority does the unglamorous integration work that quietly extracts value.

The collapse that did not collapse

Still, I have to admit that there are genuine signs of stress in the SaaS market. In February 2026, roughly $285 billion in market value evaporated from software stocks in a single trading session, in what Wall Street christened the “SaaSpocalypse.”

ServiceNow fell 7%. Intuit dropped 11%. LegalZoom lost nearly 20%. Salesforce is down approximately 30% year-to-date. The business rationale, that per-seat pricing starts to collapse when one employee with AI tools can do the work of five, is not wrong.

But Bain & Company, looking at the broader record, has offered a useful correction: technological transitions rarely produce extinction.

They produce heterogeneity. Desktop survived mobile. Cloud did not kill on-premise so much as push it into specialised niches. The history of software is a history of layers accumulating, not replacing.

SaaS vendors are becoming agent-orchestration platforms. Salesforce has Agentforce. HubSpot has AI tools. Snowflake partners with Anthropic. The incumbents are being forced to adapt, but adaptation is not death.

IDC’s European practice framed it precisely in February: “SaaS is not dead, but it is metamorphosing.”

Pricing shifts towards outcomes. Interfaces become more agent-driven. But the real business logic (auditing, versioning, compliance, and data gravity) remains where it was. The transformation is real. The extinction event is marketing.

The new gods are not new

Every major technology wave produces a brief period in which the companies at its centre are treated as reinventors of reality. For the cloud, it was AWS. For mobile, Apple. Before that, Microsoft.

The rhetoric around big tech companies like Nvidia, OpenAI, Anthropic, Meta, and xAI has the same cadence: they are building the new infrastructure of civilisation, rewriting how humans work, inevitable. There is a grain of truth in it. AI, and agentic AI in particular, is a real technological step.

The companies most likely to thrive are the ones already disciplined enough to recognise the pattern. Every enterprise that survived the dot-com crash, the mobile transition, and the cloud migration did so by adopting what was useful and ignoring what was hyped, by measuring outcomes against costs, by refusing to treat platform vendors as infallible.

The companies that went under bought the whole story: that their customers would wait while they rebuilt, that the new paradigm would reward early and total commitment.

We reported in February on a pattern now visible across dozens of SaaS companies between $20 million and $80 million in ARR: shipping AI features while net revenue retention quietly collapses.

Eighteen months after going “AI-first,” one company watched its NRR drop from 108% to 94% and lost $2.8 million in renewals, not because the product got worse, but because everyone was building the future and nobody was watching the present. The AI features were legitimately good. The existing customers churned anyway.
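For readers unfamiliar with the metric: net revenue retention (NRR) compares a customer cohort's recurring revenue at the end of a period with the same cohort's revenue at the start, counting expansion, contraction, and churn but excluding new customers. The figures below are illustrative only, not the company's actual numbers; they simply show the kind of shift a 108% to 94% drop represents.

```python
def net_revenue_retention(start_arr: float, expansion: float,
                          contraction: float, churn: float) -> float:
    """NRR = (starting ARR + expansion - contraction - churn) / starting ARR."""
    return (start_arr + expansion - contraction - churn) / start_arr


# Illustrative cohort with $10M starting ARR (hypothetical numbers):
healthy = net_revenue_retention(10_000_000, 1_500_000, 400_000, 300_000)
shrinking = net_revenue_retention(10_000_000, 600_000, 400_000, 800_000)

print(round(100 * healthy))    # 108 -- existing customers grow the base on their own
print(round(100 * shrinking))  # 94  -- the installed base shrinks even before new sales
```

Anything under 100% means the existing customer base is shrinking, so every new sale first has to refill a leaking bucket.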

None of this is an argument against AI. Previous AI cycles ended with research freezes, shuttered startups, and survivors who had been quietly doing useful work while everyone else claimed the moon. This cycle will likely end similarly.

Some hype will turn out to be real. Most revenue projections will not. A handful of current “AI-native” startups will become durable businesses. Many will be absorbed or exposed as wrappers.

The companies that come through refuse both extremes. They do not miss the trend, because dismissing AI in 2026 is as serious a strategic error as dismissing mobile was in 2010. And they do not drown in it. They do not empty their engineering teams into AI-first rebrands while their existing revenue base walks out the door. They do not treat the big tech companies as gods, but as what they are: very large commercial entities with very specific interests in what you believe about the future.

Klarna, for the record, is still paying for SaaS. It is also still paying OpenAI. This is probably the honest shape of the future: not the death of anything, but a quieter rearrangement in which the winners are the operators who kept their feet on the ground while everyone else was watching the sky.

The funeral for SaaS has been extremely well-attended. The corpse, on closer inspection, is still breathing.

NSA Using Anthropic’s Mythos Despite Blacklist

Axios reports that the NSA is using Anthropic’s restricted Mythos Preview model despite the Pentagon insisting the company poses a “supply chain risk.” Axios reports: The government’s cybersecurity needs appear to be outweighing the Pentagon’s feud with Anthropic. The department moved in February to cut off Anthropic and force its vendors to follow suit. That case is ongoing. The military is now broadening its use of Anthropic’s tools while simultaneously arguing in court that using those tools threatens U.S. national security.

Two sources said the NSA was using Mythos, while one said the model was also being used more widely within the department. It’s unclear how the NSA is currently using Mythos, but other organizations with access to the model are using it predominantly to scan their own environments for exploitable security vulnerabilities.

Anthropic restricted access to Mythos to around 40 organizations, contending that its offensive cyber capabilities were too dangerous to allow for a wider release. Anthropic only announced 12 of those organizations. One source said the NSA was among the unnamed agencies with access. The NSA’s counterparts in the U.K. have said they have access to the model through the country’s AI Security Institute. Anthropic’s CEO met with top U.S. officials on Friday to discuss “opportunities for collaboration,” according to a White House spokesperson, “as well as shared approaches and protocols to address the challenges associated with scaling this technology.”

Typing with your brain might soon be as simple as wearing a beanie

Silicon Valley startup Sabi is the latest entrant to suggest using the brain as an interface device. The company is developing a noninvasive device that translates internal speech into text. Rather than relying on implanted hardware, Sabi is building a wearable device – initially in the form of a beanie,…