New VB Pulse data shows Microsoft and OpenAI leading enterprise agent orchestration, but Anthropic’s first measurable foothold points to a larger fight over who controls the infrastructure where AI agents run.
For the last two years, the enterprise AI race has mostly been framed as a model war: OpenAI’s GPT series versus Anthropic’s Claude versus Google’s Gemini, with smaller and open-source alternatives also coming in from the U.S. and China.
But the next strategic fight may not be over which model answers a prompt best. It may be over who controls the layer where agents plan, call tools, access data, run workflows and prove to security teams that they did not do anything they were not supposed to do.
New VB Pulse survey data suggests the category is already taking shape. Our independent Enterprise Agentic Orchestration tracker, a survey that records the preferences of qualified, verified technical decision-maker respondents at enterprises at regular intervals, found that Microsoft Copilot Studio and Azure AI Studio led with 38.6% primary-platform adoption in February, up from 35.7% in January.
OpenAI’s Assistants and Responses API held second place, rising from 23.2% to 25.7%.
Anthropic remained far smaller, but it made its first appearance in the tracker: moving from 0% in January to 5.7% in February for Anthropic tool use and workflows.
VB Pulse Enterprise Agentic Orchestration change in respondents’ primary agent orchestration platform from Jan-Feb 2026. Credit: VentureBeat
The underlying move is small — four respondents out of a total 70 in this cohort, with more to come — but strategically interesting because it marks the first sign in this tracker of Claude usage moving from the model layer into native orchestration.
That distinction matters. Enterprises are not merely choosing chatbots. They are deciding where the live operational machinery of AI work will sit: inside Microsoft’s stack, inside OpenAI’s API layer, inside Anthropic’s managed runtime, inside an open framework, or across a hybrid mix of all of them.
“This is the convergence moment for enterprise AI,” said Tom Findling, CEO and cofounder of AI cybersecurity startup Conifers, in a statement to VentureBeat. “Models and agent frameworks have matured enough together that enterprises are now shifting focus beyond model quality to the control plane around it. In security operations, we’re seeing the competitive advantage move toward platforms that can orchestrate agents, leverage enterprise context, and provide governance and auditability across customer environments.”
Anthropic’s number is still small — but the increase is not
The Anthropic number, by itself, should not be overread. A move from zero to 5.7% is not a juggernaut. It is not proof that Anthropic has captured enterprise orchestration.
It is not even enough to say Anthropic has a durable lead in any part of this market. Microsoft owns the early enterprise distribution advantage, and OpenAI has a much larger installed base in orchestration than Anthropic.
But small numbers can matter when they appear at the start of a new market structure. Anthropic’s emergence in orchestration comes as the broader VB Pulse data shows Claude also gaining massive enterprise adoption at the model layer.
In our VB Pulse Q1 Foundation Models and Intelligence Platforms tracker, Anthropic rose from 23.9% in January to 28.6% in February and then even more dramatically to 56.2% in March among qualified enterprise respondents, with the March reading flagged as directional only, because the sample was only 16 respondents.
VB Pulse Foundation Models and Intelligence Platforms comparison chart Jan-March 2026. Credit: VentureBeat
The story, then, is not that Anthropic is winning orchestration today. It is that Anthropic’s model momentum may be starting to spill into the orchestration layer.
That is where the strategic stakes get higher.
A model is easier to swap than an agent runtime
A model is relatively easy to swap, at least in theory. A company can route one workload to Claude, another to GPT, another to Gemini and another to a smaller open model.
In fact, the VB Pulse Foundation Models tracker over the same Q1 period shows that multi-model strategy is the enterprise consensus: respondents increasingly report adopting multiple models and building orchestration layers that route across them by task, cost and risk profile.
An agent runtime is different. Once a company’s workflows, tool permissions, credentials, audit logs, memory, sandboxed execution and operational monitoring live inside one provider’s environment, switching providers becomes less like changing models and more like changing infrastructure.
That is the real reason Anthropic’s 5.7% foothold is worth watching
Anthropic has already made clear that it wants to provide more than the model. Its Claude Managed Agents documentation describes a public beta for a managed agent harness with secure sandboxing, built-in tools and API-run sessions, while Anthropic’s engineering post frames the architecture around decoupling the model from the surrounding agent machinery: the session, the harness and the sandbox.
In plain English, Anthropic is trying to host the environment where Claude agents remember context, use tools, run code, operate inside sandboxes and persist across long-running workflows. That is no longer just inference. That is operational infrastructure.
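That separation of concerns — a swappable model inside a persistent session, harness and sandbox — can be sketched in a few lines. This is an illustrative toy, not Anthropic's API; every class and function name below is hypothetical.

```python
# Illustrative only: a toy harness showing how a model call can be
# decoupled from the session, the tool layer and the execution sandbox.
# All names here are hypothetical, not any vendor's API.
from dataclasses import dataclass, field

@dataclass
class Session:
    """Persists memory and an audit trail across a long-running workflow."""
    memory: list = field(default_factory=list)
    audit_log: list = field(default_factory=list)

class Harness:
    """Wraps any model callable with tools, permissions and logging."""
    def __init__(self, model, tools, allowed_tools):
        self.model = model             # swappable: Claude, GPT, Gemini, ...
        self.tools = tools             # name -> callable
        self.allowed = set(allowed_tools)

    def step(self, session, user_input):
        session.memory.append(("user", user_input))
        tool, arg = self.model(session.memory)  # model decides what to do
        if tool not in self.allowed:            # permission boundary
            session.audit_log.append(("denied", tool))
            return None
        result = self.tools[tool](arg)          # sandboxed execution point
        session.audit_log.append(("ran", tool, arg))
        session.memory.append(("tool", result))
        return result

# A stub "model" that always asks for the calculator tool.
def stub_model(memory):
    return ("calc", "2+3")

harness = Harness(
    model=stub_model,
    tools={"calc": lambda expr: eval(expr, {"__builtins__": {}})},
    allowed_tools=["calc"],
)
session = Session()
print(harness.step(session, "add 2 and 3"))  # 5
```

The point of the sketch is that the model is just one injected dependency: the session state, permission checks and audit trail — the parts that create switching costs — live in the harness, not the model.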
The pitch is obvious: most enterprises do not want to stitch together their own agent stack from scratch. They want agents that can act, but they also want permission boundaries, audit trails, workflow reliability and ways to stop the system when something goes wrong.
Security is becoming the buying criterion
The VB Pulse orchestration tracker shows that buyers are prioritizing exactly those concerns. Security and permissions ranked as the top orchestration platform selection criterion in both January and February, at 39.3% and 37.1%.
VB Pulse Enterprise Agentic Orchestration, Q1 2026 chart of top selection criteria for agent orchestration solutions. Credit: VentureBeat
Control over agent execution rose from 17.9% to 22.9%, while flexibility across models and tools fell from 35.7% to 25.7%. The market appears to be shifting from optionality toward governance.
That shift is not surprising. A chatbot can be wrong and still remain mostly contained. An agent that can send emails, modify documents, query databases, call APIs or execute workflows has a much larger blast radius. The enterprise question is not only whether the agent is smart enough.
It is who gave it permission, what it touched, what it changed, whether those actions were logged, and whether the company can unwind the damage if something goes wrong.
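Those questions map naturally onto a structured audit record. The schema below is a hypothetical illustration of the kind of entry such a system might emit, not any vendor's format.

```python
# Illustrative only: a structured audit record answering who authorized
# an agent action, what it touched and how it ended. Schema is hypothetical.
import datetime
import json

def audit_record(agent_id, granted_by, action, resource, outcome):
    return {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "agent_id": agent_id,      # which agent acted
        "granted_by": granted_by,  # who gave it permission
        "action": action,          # what it did
        "resource": resource,      # what it touched
        "outcome": outcome,        # success, denied, rolled back
    }

rec = audit_record("claims-agent-7", "role:ops-admin",
                   "update", "db://claims/case-1182", "success")
print(json.dumps(rec, indent=2))
```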
Ev Kontsevoy, cofounder and CEO of Teleport, an identity and digital infrastructure solutions company, argues that the industry is still putting too much emphasis on orchestration itself and not enough on identity: “The race to own the agent orchestration layer is real,” Kontsevoy said. “It’s also solving the wrong problem first. Orchestration without identity only multiplies chaos. Without identity, you don’t know what an agent can access, what it actually did, or how to revoke its access when it operates outside policy. A unified identity layer is a prerequisite to deploying agents — one or many — in infrastructure.”
Syam Nair, Chief Product Officer at the intelligent data infrastructure company NetApp, believes data management is key in all cases to secure AI agent orchestration across the enterprise. As he said in a statement to VentureBeat: “Effective agent management requires built-in intelligence and a continuously updated understanding of both data and, critically, its metadata. This visibility allows organizations to define and enforce clear policies so data is used only by the right agents, for the right purposes. Making this work at scale is a cross-functional effort. Security, storage, and data science teams must work together to implement policies that safeguard company data, while creating a strong data foundation for AI.”
He continued: “The CIOs and technology leaders that are successful are the ones who take the input, policies, and vision from all these teams into account as they build a data infrastructure that minimizes risk and drives business value.”
Microsoft has the distribution edge
That is why Microsoft’s early lead makes sense. Copilot Studio and Azure AI Studio sit inside an enterprise stack many companies already use: Microsoft 365, Teams, Entra ID, Azure and existing procurement relationships.
The VB Pulse Orchestration Tracker for Q1 2026 describes Microsoft as the enterprise default, with no other platform within 13 percentage points in February.
David Weston, CVP, AI Security, Microsoft, provided some insight on why, writing in a statement to VentureBeat: “Without a unified control layer, you start to see fragmentation – agents operating in silos, inconsistent governance, and gaps in security. What customers are asking for is a way to bring order to that complexity. With Agent 365, we’re providing a single control plane to observe, govern, and secure agents across Microsoft, partner, and third-party ecosystems, all grounded in enterprise data and identity.”
OpenAI’s second-place position is also unsurprising. Its Assistants and Responses API gave developers an early way to build agent-like systems using OpenAI’s models and tooling. In the orchestration tracker, OpenAI is not surging, but it is still ticking up steadily: 23.2% in January to 25.7% in February.
Anthropic is the newcomer at the orchestration layer. But its timing may be favorable. The VB Pulse Foundation Models tracker for Q1 2026 suggests enterprises increasingly see Claude as a fit for higher-stakes workloads where safety, instruction following, long context and governance matter.
The orchestration tracker suggests those same buyers are now moving from agent experiments toward production workflows, where security, permissions and task reliability become the gating issues.
That creates a possible path for Anthropic: not to beat Microsoft as the default enterprise platform, at least not immediately, but to become the agent runtime for companies that already trust Claude for sensitive or complex workloads.
The orchestration tracker found that a hybrid control plane — combining provider-native orchestration with external orchestration — was the leading expected architecture, holding around 35% to 36% across the two substantive waves.
Provider-managed-only approaches grew modestly but remained a minority. The report’s conclusion is blunt: enterprises are not willing to give full orchestration control to any single provider.
That reluctance makes sense as enterprises seek to leverage “best-of-breed” models, harnesses, and tools from multiple vendors, especially as their needs differ widely across sector, business, and size.
“Most enterprises will operate in a multi-model, multi-agent environment, which makes an independent control plane essential,” agreed Felix Van de Maele, CEO of Collibra, a company specializing in unified governance for data and AI, in a statement to VentureBeat. “That is why we built AI Command Center: to give organizations the visibility, governance, and real-time oversight needed to manage AI systems and agents across the full lifecycle.”
That caution shows up in the risk data. When asked about risks if agent control lives inside a model provider platform, respondents cited security and permissioning limitations as the top concern. Vendor lock-in was the second-largest concern and the only one that increased from January to February, rising from 23.2% to 25.7%.
VB Pulse Enterprise Agentic Orchestration Q1 2026 chart of top concerns over the period. Credit: VentureBeat
This is the tension at the heart of the agent market. Enterprises want managed infrastructure because building reliable agents is hard. But the more a provider manages, the more it may own.
Dr. Rania Khalaf, chief AI officer at WSO2 — the subsidiary of EQT that offers open source, customizable AI stacks for enterprises — said enterprises will need an agent control plane that sits apart from individual frameworks, harnesses and runtimes because agents combine the unpredictability of LLMs with the ability to take actions that have consequences.
“Teams want the freedom to use the best model and framework for each job — Claude for coding, Gemini for writing, LangGraph or CrewAI for dynamic modular behavior — and that heterogeneity makes consistent governance untenable in integrated platforms that lock into one ecosystem,” Khalaf said.
From LLMOps to Agent Ops
Khalaf said the industry is also moving from MLOps to LLMOps to “Agent Ops,” where governance has to cover the whole agent, not just the model call.
“A guardrail on an LLM call can catch hallucination or toxic output, but it will not catch an agent thrashing in an unbreakable, costly loop, which is why governance now has to extend out from the LLM interaction to the scope of the agent,” she said.
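Her distinction can be made concrete with a small sketch: a per-call filter inspects one output, while an agent-scope guardrail tracks the whole run. The code below is illustrative only, with hypothetical names, not any vendor's API.

```python
# Illustrative only: an agent-level guardrail that a per-call LLM filter
# cannot provide -- a step budget plus a loop detector around the run.
from collections import Counter

class AgentGuardrail:
    def __init__(self, max_steps=20, max_repeats=3):
        self.max_steps = max_steps
        self.max_repeats = max_repeats
        self.steps = 0
        self.action_counts = Counter()

    def check(self, action):
        """Halt a thrashing agent before it burns budget or loops forever."""
        self.steps += 1
        self.action_counts[action] += 1
        if self.steps > self.max_steps:
            raise RuntimeError("step budget exceeded")
        if self.action_counts[action] > self.max_repeats:
            raise RuntimeError(f"loop detected: {action!r} repeated")

guard = AgentGuardrail(max_steps=10, max_repeats=2)
guard.check("search(docs)")      # fine
guard.check("search(docs)")      # fine
try:
    guard.check("search(docs)")  # third identical action trips the guard
except RuntimeError as e:
    print(e)
```

No single-call filter would fire here — each individual action looks harmless; only state kept across the whole agent run reveals the loop.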
The practical implication is that enterprises need to separate policy and control from the agent logic itself. Khalaf pointed to the recent example of an agent deleting a production database despite being told not to, arguing that the failure showed the limits of relying on prompt-level instructions where hard identity and access controls are needed.
“Pulling guardrails, evals, policies, bindings, and agent identity out of the core agent logic allows them to be configured per deployment and per environment, owned by the appropriate teams in security, product, and compliance, without fragmenting the governance layer as different teams choose different models and frameworks,” Khalaf said.
MCP is open. The runtime may still be sticky
That is where Anthropic’s Model Context Protocol, or MCP, complicates the story. MCP is not a walled garden; Anthropic introduced it as an open standard for connecting AI systems to data and tools, and Anthropic’s documentation describes MCP as an open-source standard for connecting AI applications to external systems.
But openness at the protocol layer does not automatically eliminate lock-in at the runtime layer. An enterprise could use an open protocol to connect tools while still becoming dependent on a provider’s managed sessions, logs, sandboxes, permissions model, workflow state and deployment environment. In other words, MCP may reduce integration friction, while managed agent infrastructure could still increase switching costs.
Khalaf said Microsoft’s lead likely reflects its M365 and Azure distribution, while Anthropic’s emerging foothold could reflect a different architectural bet around open protocols such as MCP. But she argued the long-term direction is not a single-provider stack.
“Enterprises serious about running agents in production will end up multi-vendor across these layers,” Khalaf said, “which is why the open and interoperable control plane matters more than the current percentages might suggest.”
The next cycle may be cross-vendor collaboration
That same tension — between provider-native convenience and cross-vendor reality — is where Arick Goomanovsky, CEO and cofounder of universal AI agent orchestrator startup BAND, sees the next competitive cycle forming.
“Enterprises now run agents everywhere: individual assistants and coding agents, multi-agent systems in production, agents embedded in Agentforce and ServiceNow, and third-party agents consumed as agent-as-a-service,” Goomanovsky said. “None of them collaborate across those boundaries by default.”
Goomanovsky argues that the missing layer is not just orchestration inside a single model provider, but a cross-vendor collaboration layer that lets agents from different ecosystems act together.
“What’s emerging in parallel is demand for an agentic collaboration harness – an interaction layer that lets agents from Microsoft, OpenAI, Anthropic, and internal teams operate as one workforce,” he said. “Orchestration inside any single vendor is still a walled garden so the next competitive cycle is cross-vendor agent collaboration.”
Independent frameworks face an enterprise packaging problem
There is also a warning sign for independent orchestration frameworks. LangChain and LangGraph fell from 5.4% to 1.4% as the primary orchestration platform in the qualified enterprise sample.
External orchestration abstracted entirely from model providers also fell from 8.9% to 2.9%.
Scott Likens, Global Chief AI Engineer at professional services giant PwC, has a front row seat to this trend as the company spearheads and assists clients with their AI transformations.
As he told VentureBeat in a statement: “Right now, most enterprises are still operating in fragmented environments, with orchestration spread across platforms, business applications, and internally developed tooling. Over time, the market will likely move toward more unified orchestration models, but interoperability, governance and security will remain critical because enterprises are unlikely to standardize on a single agent ecosystem.”
The report argues that fully independent orchestration frameworks may not yet have the enterprise packaging — security certifications, support, compliance documentation and vendor accountability — that procurement teams require.
That does not mean open frameworks are irrelevant. It does suggest that enterprise buyers may increasingly consume open or developer-first orchestration through managed products, cloud-provider partnerships or internal control planes rather than as standalone frameworks.
The agent market starts to look like cloud infrastructure
This is where the agent market starts to look less like the early chatbot market and more like enterprise cloud infrastructure. The winning vendors will not only have capable models. They will have identity integration, permission controls, audit logs, observability, workflow tooling, sandboxing, evaluation and a credible answer to who owns the control plane.
Indeed, the orchestration layer is only one part of the stack enterprises must fill in, and they may decide to run different orchestration layers for agents working in different departments and functions.
As Nithya Lakshmanan, Chief Product Officer at revenue team AI orchestration startup Outreach.ai wrote in a statement to VentureBeat: “General-purpose orchestration platforms coordinate agent activity well, but they don’t carry the workflow-specific context that determines whether an agent’s action is correct for a given situation. In revenue workflows, an agent acting on incomplete deal history or missing buyer context will underperform and erode trust with users. The teams getting the most out of multi-agent systems are treating domain-specific data as the governance layer, with orchestration sitting on top. Most enterprises have chosen their orchestration stack, and what they’re now figuring out is how those platforms get access to the workflow context they need to make agents useful inside specific business functions.”
That is why Anthropic — which is increasingly launching its own domain-specific agents for finance and design, among other categories — is worth following closely. The company does not need to win the entire orchestration market tomorrow for its strategy to matter. It only needs to persuade a growing set of Claude enterprise customers to let Anthropic handle more of the surrounding machinery: tools, workflows, memory, execution and governance.
If it succeeds, Claude becomes more than a model in a multi-model portfolio. It becomes part of the infrastructure where enterprise work gets done.
That would put Anthropic in a more direct fight with OpenAI and Microsoft — not just over model quality, but over the operating layer of AI agents.
The narrow but important read
The safe interpretation of the VB Pulse data is narrow but important: Anthropic is not yet a major enterprise orchestration platform. Microsoft is. OpenAI is much closer. But Anthropic has registered its first measurable foothold at the orchestration layer, just as the market is deciding who should control agent execution.
For enterprise buyers, that may be the question that matters most in 2026. Not which model is best, but which provider gets to run the agent — and how hard it will be to leave once the agent is running.
One of the most entertaining moments in VC this week was a piece of rage-bait marketing from General Catalyst.
In a now-viral post on X that parodies the old Mac vs. PC commercials, the venture firm — better known as GC — posted a “VC vs GC” video on Wednesday. The VC was played by a tall actor in a baggy shirt and vest with a distinctly large, bald head — an apparent dig at Andreessen Horowitz co-founder Marc Andreessen. (But the real Andreessen never looks that disheveled).
The GC character was played by a man with a thick head of dark hair, white kicks, and a tendency to stare deeply into the camera. He was clearly supposed to represent actor Justin Long’s cooler, “hipper” Mac character from the original commercials, in contrast to John Hodgman’s straight-laced “square” PC persona.
GC asks VC about his robotic dog.
VC explains, “This is Woof AI” and then extols the virtues of the artificial companion (you don’t need to walk it or break the news to the kids when it dies!) and declares, “You’ll never want a real dog after this.” VC mentions that his firm is leading the seed round and pitches GC to join the cap table.
GC explains how people like real dogs and remarks, “I’d love to hear more, but we actually have a really high bar around responsibility for these things.”
Then VC kicks the AI dog and the dog chases him off the screen. The post has now been viewed 2.4 million times with hundreds of shares and comments, and thousands of likes.
I’d have to read so far between the lines that I’d be off the page and peering into another book to unpack this, but I’ll try anyway. The message, roughly: Other VCs, and a16z in particular, will fund anything. GC won’t. (I asked about this. GC hasn’t responded.)
It’s a pointed argument if so, and not entirely without basis. Andreessen’s firm frequently invests in companies that are considered controversial, like the surveillance startup Flock Safety, AI notetaker Cluely, and Adam Neumann’s Flow. But the same measure could just as easily be applied to General Catalyst. GC’s portfolio includes Anduril, Percepta, and Polymarket.
My takeaway is that GC wanted to show an a16z-type character kicking a dog, without anyone actually kicking a real dog, because that would be a major problem.
Many commenters seemed to find the video, and the choice to post it, cringe. Plenty liked and loved it, too.
Compulsive X user Andreessen himself couldn’t resist responding, many, many times. He said it made GC look “smarmy” and said, “Stay tuned for our upcoming ad campaign, ‘We’re the VC who doesn’t sneer at your idea.’” He kept going from there. My personal favorite was: “The thing they got right is the relative heights.”
As others noted, you know you’ve hit the right rage bait when the target takes it.
There were plenty of a16z partners and staffers who came to Andreessen’s defense, too. So much so that their reactions drew lots of comments. My personal favorite in this category was from VSC Ventures VC Jay Kapoor: “GC vs. A16Z beef is like Kendrick vs. Drake for people who know what a 409A valuation is.”
When you purchase through links in our articles, we may earn a small commission. This doesn’t affect our editorial independence.
Investors can’t seem to get enough of RJ Scaringe or his ideas.
In less than a decade, the serial entrepreneur best known for his EV company Rivian has raised more than $12.3 billion from venture capital firms, as well as from strategic and institutional investors for his three — and counting — startups. If the latest $400 million raise for his new venture Mind Robotics is an indicator, investors are still happily piling in.
Outsized raises for newly minted startups have become more common in recent years. But those hundred-million-plus seed rounds have generally been reserved for buzzy defense tech startups or AI companies founded by former OpenAI or Anthropic employees.
Those supersized seeds certainly weren’t flowing toward something as niche as an electric micromobility startup. And yet in 2025, Scaringe raised $105 million for exactly that — a startup called Also, which he founded that same year. The total has since surpassed $300 million, with DoorDash among its backers.
Jiten Behl, partner at Eclipse and former chief growth officer at Rivian, has spent years watching and learning from Scaringe. His firm is now one of Scaringe’s biggest backers, leading rounds in both Also and Mind Robotics — Scaringe’s industrial AI and robotics startup that he also founded last year.
Storytelling and communication are among his superpowers, according to Behl, who joined Rivian when the company had just a handful of employees.
“When RJ explains a certain issue, topic, opportunity, vision, he just has this very unique ability to communicate it so effectively, and it comes across so credible,” Behl said. “He’s not trying to undersell the difficulty or oversell the opportunity, and that’s an art.”
Scaringe isn’t the only serial entrepreneur to repeatedly attract massive amounts of capital, but founders who can raise billions across multiple ventures remain rare. A self-professed car enthusiast who earned his doctorate in mechanical engineering from MIT, Scaringe joins a small cadre of entrepreneurs that includes Tesla CEO and SpaceX co-founder Elon Musk, OpenAI CEO Sam Altman, Anduril and Oculus founder Palmer Luckey, and Jack Dorsey, who founded Square (now called Block) and Twitter.
The difference, at least in the view of some investors TechCrunch spoke to, is that he is able to separate selling the idea from selling himself. “He is very comfortable and confident in his own personality, and he’s not trying to be an Elon,” Behl said, noting that many have tried to make the comparison over the years.
“It’s not about him,” another insider familiar with Scaringe’s companies told TechCrunch. “When you talk to him, he has enthusiasm about the product that is completely external.”
Of course, there is confidence and even a little ego, the same source mused, but “it doesn’t weigh on you.” The source also added that Scaringe has a unique ability to make you feel like the most special person in the room — a sentiment others echoed.
Giving that kind of undivided attention to an investor, supplier, or exec at a manufacturer is a challenge at the scale Scaringe is attempting. He is running three companies, often traveling between Palo Alto, Irvine, Rivian’s factory in Normal, Illinois, and a second factory soon to open in Georgia. And then there is family — Scaringe has three sons with his ex-wife.
Joe Fath, another partner at Eclipse, credits Scaringe’s open-mindedness and collaborative nature for helping him attract investment and juggle these connected yet disparate businesses.
Scaringe also “has the rare combination of being a truly great engineer while also having an exceptional instinct for product design,” said Fath, who previously worked at a major Rivian backer, T. Rowe Price. “Very few founders can operate at that level technically while also understanding what resonates emotionally with customers — both consumers and commercial buyers. That combination is incredibly uncommon and has clearly been part of what makes Rivian’s products, and now Also and Mind’s, so differentiated.”
The pace of Scaringe’s fundraising over the past eight years is particularly notable and doesn’t seem to be slowing.
More than $11 billion, and by far the largest slice of VC and strategic capital, went into Rivian — most of it between 2018 and its blockbuster IPO in 2021. That’s a startling timeline, especially considering the company, initially called Mainstream Motors, had existed since 2009. For years, Rivian operated as a small, unknown entity until its breakout moment in late 2018 at the Los Angeles Auto Show, when it revealed prototypes of its all-electric R1T truck and R1S SUV.
The money soon flowed, and from every direction. In early 2019 and just a couple of months after that reveal, Rivian raised a $700 million funding round led by Amazon. U.S. automaker Ford would invest $500 million and make plans to collaborate on a since-scrapped future EV program. Cox Automotive contributed $350 million. Rivian would close out the year with a $1.3 billion round — its fourth in 2019 — led by funds and accounts advised by T. Rowe Price Associates, with additional participation from Amazon, Ford, and funds managed by BlackRock.
In July 2020, Rivian raised $2.5 billion and another $2.65 billion six months later. As whispers of an IPO got louder, Rivian closed another $2.5 billion private funding round led by Amazon’s Climate Pledge Fund, D1 Capital Partners, Ford Motor, and funds and accounts advised by T. Rowe Price Associates Inc. Third Point, Fidelity Management and Research Company, Dragoneer Investment Group, and Coatue also participated.
Then the IPO came. Rivian raised nearly $12 billion in gross proceeds after locking in $78 per share. Its market cap hit $100 billion when it debuted on Nasdaq in November 2021. Today, it stands at $18.2 billion, a significant comedown that also reflects the broader struggles of the EV sector.
The ability to raise that much capital, despite those headwinds, is exceptional. But Scaringe didn’t stop with Rivian. If anything, the pace has accelerated. Also and Mind Robotics have together raised more than $1.3 billion so far, with Mind Robotics moving especially fast: $115 million in its first year, $500 million in March, and another $400 million just this week.
Rivian also continues to land notable backers through high-profile deals like the $5.8 billion joint venture with Volkswagen Group and a robotaxi partnership valued at up to $1.25 billion with Uber.
“Now, the big question is, how much can he do?” Behl said. “That’s a question [that] already assumes that he’s reaching his limit. The thing is, he doesn’t look at it that way. His perspective is that there is huge value to be created, there is huge impact to be created, and I just have to do it.”
Watch the 2026 Eurovision Grand Final tonight at 8pm (May 16) to see 25 acts compete for the glitziest crown in music. The winner will, as ever, be chosen by wildly unfair judging based on ancient ties between nations and the prevailing political climate.
But after two pulsating semi-finals, who exactly is in the mix to come out on top? The general consensus is that it will probably be Finland (Linda Lampenius & Pete Parkkonen with “Liekinheitin”), Greece (Akylas and “Ferto”) or Denmark (Søren Torpegaard’s “Før vi går hjem”).
Australian contestant Delta Goodrem, however, might “Eclipse” them all. Sorry, I’ll get my coat. Here’s how to watch the 2026 Eurovision Grand Final online and for free.
How to watch the 2026 Eurovision Grand Final for FREE
Australian viewers can watch for FREE on SBS on Demand. And selected countries can also watch Eurovision free on YouTube.
UK viewers can watch the 2026 Eurovision Grand Final live on BBC iPlayer on Saturday, May 16 from 8pm UK time (TV licence required).
Traveling abroad? Use this VPN (and save 75%) to access your usual streaming services from anywhere – including Spain, Ireland and the Netherlands.
How to watch the 2026 Eurovision Grand Final from anywhere
If you’re outside the UK, regional restrictions mean you won’t be able to watch the 2026 Eurovision Grand Final as normal. Downloading a VPN allows you to stream geo-blocked services online – our favorite is NordVPN.
Use a VPN to watch the 2026 Eurovision Grand Final from anywhere:
How to watch the 2026 Eurovision Grand Final in the UK
How to watch the 2026 Eurovision Grand Final online around the world
How to watch the 2026 Eurovision Grand Final online in the US
Eurovision fans in the United States can watch the 2026 Eurovision Grand Final on Peacock.
Tip: get Peacock for free when you sign up to Walmart Plus ($1 for first 30 days).
If you don’t have access to Peacock, you can also watch along on the official Eurovision YouTube channel.
Brit abroad? Anyone from the UK travelling overseas who wants to watch their usual free streaming services from abroad can do so by using a VPN.
How to watch the 2026 Eurovision Grand Final in Australia
In Australia, watch the 2026 Eurovision Grand Final on Saturday, May 16 for FREE on SBS and SBS On Demand.
SBS is “Australia’s exclusive home of the 2026 Eurovision Song Contest”. Expect live streams, behind-the-scenes extras, music videos and more.
Not at home? Anyone travelling overseas who wants to watch their usual free streaming services from abroad can do so by using a VPN.
Can I watch the Eurovision 2026 Grand Final in Ireland, Italy, Spain?
Irish state broadcaster RTE has confirmed that it will not broadcast this year’s Eurovision Song Contest. RTE joins Spain, the Netherlands, Italy and Slovenia in boycotting the event due to Israel’s inclusion. However, you can use this VPN to watch your usual free streams from abroad – like SBS in Australia.
Who is in the 2026 Eurovision Song Contest Grand Final?
And here is the running order for Saturday’s Grand Final…
Denmark: Søren Torpegaard Lund — “Før Vi Går Hjem”
Germany: Sarah Engels — “Fire”
Israel: Noam Bettan — “Michelle”
Belgium: ESSYLA — “Dancing on the Ice”
Albania: Alis — “Nân”
Greece: Akylas — “Ferto”
Ukraine: LELÉKA — “Ridnym”
Australia: Delta Goodrem — “Eclipse”
Serbia: LAVINA — “Kraj Mene”
Malta: AIDAN — “Bella”
Czechia: Daniel Zizka — “CROSSROADS”
Bulgaria: DARA — “Bangaranga”
Croatia: LELEK — “Andromeda”
United Kingdom: LOOK MUM NO COMPUTER — “Eins, Zwei, Drei”
France: Monroe — “Regarde !”
Moldova: Satoshi — “Viva, Moldova!”
Finland: Linda Lampenius x Pete Parkkonen — “Liekinheitin”
Poland: ALICJA — “Pray”
Lithuania: Lion Ceccah — “Sólo Quiero Más”
Sweden: FELICIA — “My System”
Cyprus: Antigoni — “JALLA”
Italy: Sal Da Vinci — “Per Sempre Sì”
Norway: JONAS LOVV — “YA YA YA”
Romania: Alexandra Căpitănescu — “Choke Me”
Austria: COSMÓ — “Tanzschein”
We test and review VPN services in the context of legal recreational uses. For example: 1. Accessing a service from another country (subject to the terms and conditions of that service). 2. Protecting your online security and strengthening your online privacy when abroad. We do not support or condone the illegal or malicious use of VPN services. Consuming pirated content that is paid-for is neither endorsed nor approved by Future Publishing.
BrianFagioli writes: Kioxia and Dell Technologies say they have built a 2U server configuration capable of scaling to 9.8PB of flash storage, which is the sort of density that would have sounded impossible just a few years ago. The setup combines a Dell PowerEdge R7725xd Server with 40 Kioxia LC9 Series 245.76TB NVMe SSDs and AMD EPYC processors. According to Kioxia, matching the same capacity with more common 30.72TB SSDs would require seven additional servers and another 280 drives.
The companies are pitching the hardware squarely at AI and hyperscale workloads, where storage is rapidly becoming a bottleneck alongside compute. Kioxia claims the denser configuration can dramatically reduce power consumption and rack space requirements while remaining air cooled. The announcement also highlights how quickly enterprise storage capacities are escalating as organizations race to support larger AI models, massive datasets, and increasingly demanding data pipelines.
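The capacity claims in the announcement are easy to sanity-check. The figures below come straight from the article (40 drives of 245.76TB each, compared against 30.72TB drives):

```python
# Sanity-check the capacity figures from the Kioxia/Dell announcement.

DENSE_DRIVE_TB = 245.76   # Kioxia LC9 Series NVMe SSD
COMMON_DRIVE_TB = 30.72   # "more common" SSD capacity cited for comparison
DRIVES_PER_SERVER = 40    # LC9 drives in one Dell PowerEdge R7725xd

total_tb = DENSE_DRIVE_TB * DRIVES_PER_SERVER
print(f"Total capacity: {total_tb} TB (~{total_tb / 1000:.1f} PB)")

# How many 30.72TB drives would be needed for the same capacity?
common_drives = total_tb / COMMON_DRIVE_TB
print(f"Equivalent 30.72TB drives: {common_drives:.0f}")
print(f"Extra drives vs. the dense build: {common_drives - DRIVES_PER_SERVER:.0f}")
```

The arithmetic matches the press claims: roughly 9.8PB per server, and 320 common drives (280 more than the 40-drive dense build, which at 40 drives per server works out to the seven additional servers cited).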
Unfettered design disappeared from the automotive world decades ago. Safety requirements, governmental regulations, and advances in aerodynamics have reduced what was once an artistic discipline into an engineering discipline. As a result, many modern cars are beginning to look similar. The amorphous crossover, the standard pickup truck, and the bland sedan come in shades of grey, navy, and black. Once upon a time, there were fewer restrictions, and manufacturers had a little more freedom to exercise creativity.
As a result, plenty of the classic cars and pickups of yesteryear lure automotive fans with unique aesthetics that are impossible to replicate today. In honor of the good ol’ days, we check out the history and performance behind designs that have gained iconic status five decades after they hit the market. These aren’t necessarily the best pickup trucks to come out of the 1960s, but they are definitely amongst the coolest looking.
1960 Studebaker Champ
Studebaker Automobiles isn’t the first manufacturer that comes to mind when you think pickup trucks. Founded in 1852, the brand got its start building wagons before entering the automobile space. By 1960, however, the once-proud brand was entering its final decade.
Pickup trucks were undergoing a transformation during the 1960s. Once purely utilitarian, by the late 1950s, manufacturers were turning toward car-like designs, with more comfortable interiors and smoother rides. The Studebaker Champ is one example of this evolutionary stage of pickup design.
The Studebaker Champ pickup truck debuted in 1960, but it wasn’t an all-new design. It saved money by using components and sheet metal from the pre-existing Studebaker Lark compact, essentially hitching a pickup bed to the Lark’s front end. With a pair of engine options, including 170- and 245-cubic-inch six-cylinders making 90 and 118 horsepower, respectively, the bubble-fendered pickup came in ½- and ¾-ton models.
Not only was the Champ a warmed-over Frankenstein of parts, but its nameplate was reminiscent of the Studebaker Champion sedan produced from 1939 to 1958. Alas, the Champ was not enough to save Studebaker, which went out of business in 1966. But we still have the unique looks and lines of the short-lived but distinctive Champ.
1963 Ford Falcon Ranchero
Mark Roger Bailey/Shutterstock
Ford got into the car-truck combination business with the Ranchero in 1957. Ultimately overshadowed by the Chevrolet El Camino that arrived in 1959, the Ranchero nonetheless holds a special place in the classic pickup portion of our hearts.
Inspiration for the Ranchero came from the Land Down Under. The Australian market was nuts for what was called coupe-utility vehicles, or utes. Ford wanted to capitalize on its success with the so-called utes in North America. It tapped its car division, which built the Ford Falcon, to build the Ranchero. The Ranchero was produced for seven generations between 1957 and 1979. The second generation arrived for the 1960 model year, retaining a certain straitlaced ’50s aesthetic that marks a transition between ’50s and ’60s design mores.
The Ranchero could hold more payload than the El Camino despite its 144-cubic-inch six-cylinder engine being smaller than Chevrolet’s V8 options. With pickup trucks increasingly skewing toward lane-filling behemoths, maybe Ford can look into bringing back the car-truck combo. Except, as of 2026, it doesn’t sell a single traditional sedan to convert.
1965 Chevrolet C10
Gestalt Imagery/Shutterstock
The Chevrolet C10 may be the most quintessential pickup truck in history. Its 39-year career began when it debuted in 1960. Set up to compete with Ford’s successful (and even longer-running) F-line, it was a competitive unit that put to use everything Chevrolet had learned building pickups since 1918.
In 1965, the C10 was still in its first generation. It was only available with a standard cab, though buyers could choose between 6.5- and 8-foot beds. It was more farm truck than highway cruiser, with inline-six and V8 engine options ranging from 135 to 220 horsepower. An odd overbite hood hangs over signal lamps set into a grille that stretches from headlight to headlight, almost making the truck look like it’s smiling. A trim cabin and flat lines running to the bed (except for the gorgeous sidestep models — another characteristic missing from modern pickups) give it a look that suggests it was once as comfortable in the dirt as it is now on the pedestal at car shows.
The first-gen C10 retains a distinctive Americana vibe, evoking greasers and drive-in movies. Chevrolet wanted to differentiate its new C10 line from its 1950s products, taking a clean-sheet approach to introduce radical design changes. The resulting truck is certainly outdated now, but it holds a place in history as a bygone era of American manufacturing.
1965 Jeep Forward Control Series
Savage Camper/YouTube
Jeep recently re-entered the pickup truck game by resurrecting its Gladiator nameplate in 2020, but it’s not the Gladiator we’re looking back at. Jeep once was a major player in pickups, and its Forward Control (FC) series was a friendly little pickup truck designed as a practical hauler.
Cab-forward design allowed truck makers to maximize the available space of the wheelbase by placing the engine beneath the cab rather than under a long hood. Volkswagen, Ford, and Chevrolet all got in on the action, but our favorite interpretation belongs to Jeep. The Jeep Forward Control Series hit the market in 1957 and had essentially run its course by 1966. It offered two wheelbase choices and engines ranging from a 72-horsepower four-cylinder to a 115-horsepower inline-six.
The Jeep Forward Control Series doesn’t look like much of anything on the road today. It was a utilitarian hauler with superior visibility and a distinctly Jeep grille — though that’s about the only design cue that is recognizably Jeep. The FC ultimately faced competition from the likes of the Chevrolet Corvair Rampside and Volkswagen Transporter pickup. About 30,000 FCs rolled off the assembly line during its production run, making it somewhat difficult to find today.
1968 Dodge Power Wagon
The Power Wagon was based on Dodge trucks that served during World War II, and if that’s not enough of a proving ground for you, then you must be pretty rough on your trucks. The nameplate debuted in 1946 for the post-war civilian market. America was facing an extended period of growth, and Dodge had just the truck to get it done.
High on utility and low on comfort, the Power Wagon was used (and revered) by government agencies for rough-and-tumble work. Dodge has made plenty of hay out of its high-output Hemi V8s over the past several decades, but the Power Wagon was primarily known for inline-six engines. Rugged and reliable, it was put to the test over the years by the U.S. Navy, the Park Service, the Department of Fish and Wildlife, and others.
By the 1960s, the Power Wagon line was mid-stride — the last model would roll off the line in 1980 — but it hit a high point in design. The final year of the first generation was 1968, after which the Power Wagon was designated export-only as part of a government program, despite protests from the U.S. Forest Service, which loved the Mopar workhorse. Part of the reason (aside from emissions) was that its design was still based on the 1946 aesthetic, which itself dated to pre-war styles. The result was a truck that was hopelessly outdated by contemporary standards, but looks pretty darn cool to us today. In fact, we’re lobbying Dodge to bring this classic pickup truck back to the masses.
Hot afternoons demand something cold and sweet right when the craving strikes, yet store pints cost plenty and rarely match what fresh ingredients deliver at home. Traditional machines take time to churn and leave bowls to clean afterward, so many people stick with whatever sits in the freezer section. The Cuisinart FastFreeze Ice Cream Maker (ICE-FD10), priced at $97.56 (was $120), changes that routine entirely by handling the heavy work in under a minute once the base sits ready.
Preparation is so simple that all you have to do is pour a few ingredients into a half-pint cup and freeze for a day. From there, simply twist the wand onto the cup, choose a preset from the five solid possibilities, and push it down, and you’ll have a treat ready to go. After a few spins, ice cream becomes silky smooth, sorbet remains bright and fruity, slushies become icy and ideal for hot days, and milkshakes blend up smooth without the need for any pre-freezer prep at all. To top it all off, each serving is the perfect size for one person, with no leftovers to clutter up the freezer or waste.
5-IN-1 FROZEN DESSERT MAKER: Cuisinart FastFreeze Ice Cream Maker delivers 5-in-1 functionality to make half a pint of frozen desserts in minutes…
EASY TO USE: Automatic ice cream machine with five preset programs makes frozen dessert styles instantly, including ice cream, milkshakes, slushies…
HEALTH-CONSCIOUS TREATS: Health-conscious users can make frozen treats including non-dairy ice cream, protein milk shakes or fruit-based sorbets.
Storage is a breeze because the entire wand disassembles into pieces that fit in a kitchen drawer among your other culinary tools. The cups stack nicely in the freezer and are dishwasher safe, while the blade needs only a brief rinse. The machine is also quiet enough that it won’t disturb anyone nearby, unlike the large, heavy machines that tend to rumble along.
Customization is the name of the game with this device, especially if you don’t have much free time. Start with a mango and cream foundation, then add some actual mango chunks at the end to create a mango ice cream that tastes like it came directly from the shop. Want a post-workout treat that tastes like dessert? Throw in some protein powder and peanut butter, and you’re ready to go. Cookie dough and candy bits mix in well without getting crushed, and the half-pint size is ideal for experimenting with new combinations without committing to a whole recipe. Recipes in the manual or online suggestions will provide some inspiration to get you started, but you can also wing it with whatever you have in the cupboard and still come out with something wonderful.
In terms of cost, it’s a no-brainer because it pays for itself quickly when you consider how much you’ll save on store-bought ice cream. A single store-bought pint might cost several dollars, whereas making your own allows you to use components you already have on hand. Over the course of a few weeks, the machine will have more than paid for itself simply by reducing impulse purchases and improving control over sugar and nutritional ingredients.
Google may cut the free storage for new Gmail accounts from 15GB to 5GB, according to a report from Android Authority. Those who want a storage upgrade from those 5GB accounts would need to provide a phone number to Google to unlock the extra gigabytes.
A Google representative confirmed that it’s trying out new account options.
“We’re testing a new storage policy for new accounts created in select regions that will help us continue to provide a high-quality storage service to our users, while encouraging users to improve their account security and data recovery,” the representative said in a statement to CNET.
Typically, a verified phone number blocks multiaccount storage abuse and secures Google profiles with a reliable recovery method. Some observers speculate that the move could also be a way for Google to encourage more people to subscribe to paid cloud storage plans under Google One.
It’s unclear if the regions where this is being tested include the US. Android Authority reported that accounts with only 5GB of storage were primarily in African countries.
Google has been expanding the tiers of paid accounts it offers, combining Gemini AI features into bundles. It recently added three new tiers focused on AI features, starting at $8 a month with 200GB of storage included.
Google’s storage space
When Gmail debuted in 2004, it offered users a full 1GB of storage, which fundamentally changed the way many people used email: They could keep everything and search for what they needed.
The next year, Google doubled the amount of free storage to 2GB. The storage kept ticking up, to 7GB, then 10GB and finally to 15GB in 2013, when Google Drive, Google Photos and Gmail merged into a shared pool of data for users.
Why did Google do it? In 2013, CNET wrote: “Google — as we all know — is in the business of making money. If Google is offering you more storage, then there is something that extra storage helps you do that will help Google make more money.”
One reason Gmail succeeded over other email services in the early days was lowering the barrier to entry, as Google increased free access to services across the platform, making it less likely that customers would leave for competitors.
These days, Google is battling its competitors on the AI front, which explains why it’s increasingly bundling Gemini AI features with the email, photo and document services users have come to depend on.
AMD says FSR 4.1 will finally bring its newer hardware-accelerated upscaling technology to older Radeon GPUs. “The rollout will begin in July with RDNA3- and 3.5-based GPUs, which include the Radeon RX 7000 series, as well as integrated GPUs like the Radeon 890M and Radeon 8060S,” reports Ars Technica. “In ‘early 2027,’ support will also be extended to the RDNA2 architecture, which includes the Radeon RX 6000 series, integrated GPUs like the Radeon 680M, and the Steam Deck’s GPU. This would also open the door to supporting FSR 4 on the PlayStation 5 and Xbox Series X and S, all of which also use RDNA2-based GPUs.” From the report: [AMD Computing and Graphics SVP Jack Huynh’s] short video presentation didn’t get into performance comparisons, but did mention that AMD had to work to get FSR 4’s superior hardware-backed upscaling working on its older graphics architectures. RDNA4 includes AI accelerators that support the FP8 data format in the hardware, and porting FSR 4 to older GPUs meant getting it running on the integer-based INT8 hardware in the RDNA3 and RDNA2-based GPUs.
This may mean that FSR 4.1 running on an RDNA3 or RDNA2-based GPU may come with a larger performance hit relative to RDNA4 cards, or that image quality may differ slightly. Modders have already worked to get FSR 4 working on INT8-supporting GPUs, and the older GPUs reportedly see a 10 to 20 percent performance hit relative to FSR 3.1 running on the same hardware. AMD’s official implementation may or may not improve on these numbers.
[…] Any games that support FSR 4 should be able to support FSR 4.1 running on Radeon 7000-series cards; users will presumably be able to install a driver update in July that enables the new feature. Games that support the older FSR 3.1 can also be forced to use FSR 4 in the Radeon graphics driver.
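The FP8-versus-INT8 distinction above is about the number formats the GPU’s matrix hardware can accelerate. A toy illustration of the kind of integer quantization involved — purely illustrative, not AMD’s actual FSR 4 implementation — is symmetric per-tensor INT8 quantization of floating-point values:

```python
def quantize_int8(values):
    """Symmetric per-tensor INT8 quantization: map floats onto [-127, 127]."""
    scale = max(abs(v) for v in values) / 127.0
    quantized = [max(-127, min(127, round(v / scale))) for v in values]
    return quantized, scale

def dequantize(quantized, scale):
    """Recover approximate float values from the INT8 codes."""
    return [q * scale for q in quantized]

weights = [0.82, -1.27, 0.003, 0.5]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)

# Round-trip error is bounded by half the quantization step (scale / 2),
# which is the kind of precision loss an FP8-to-INT8 port has to manage.
for w, a in zip(weights, approx):
    assert abs(w - a) <= scale / 2
```

The small round-trip error here hints at why AMD says image quality "may differ slightly" on the older integer-only hardware.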
The medtech space, like most STEM fields, has evolved exponentially, so what skills might help you keep your head above water?
The medtech space is incredibly diverse, with careers in a range of areas, many requiring unique skills or a resume of cross-compatible abilities. That is to say, it can be difficult for a student selecting a college course, or indeed a graduate weighing postgraduate degrees, to identify the skills most suited to a future career in the medtech industry.
Well, SiliconRepublic.com is here to help. While this list is by no means exhaustive, it will give you an idea of some of the skills you should prioritise if you want to develop a broad range of abilities in an ever-evolving and highly skilled field. Without further ado, here are some of the most crucial skills in any medtech career.
Regulation
It is fair to say that regardless of which aspect of medtech you specialise in, or which avenue your professional life goes down, you are going to require a degree of knowledge in how the sector is regulated. The regulatory landscape is undergoing significant transformation as evolving frameworks in Europe and the US make an impact.
EUDAMED, the European Database on Medical Devices, is set to come into effect in late May and is just one of the critical frameworks students and professionals in this area will have to become familiar with.
Europe is also in the process of revising its Medical Device Regulation and In Vitro Diagnostic Regulation policies, a task that began in late 2025.
The point is, medical frameworks and requirements are always going to be subject to revision and change. To that point, it’s the job of a professional in this space not just to understand the rules that govern their own country, but to have a greater understanding of the global ecosystem.
AI and automation
To what extent AI and automation are going to play a role in modern-day healthcare is unclear. Some would say it will only be used when needed and others would argue that it is becoming a significant element of the medtech scene. Regardless of where you fall between the two schools of thought, knowing how to wield AI and automation in healthcare is undoubtedly a useful and potentially necessary addition to a medtech skillset.
Since their inception, AI and automated technologies have been used to create wearables that monitor health, accelerate drug discovery, and administer treatments and therapies for conditions that impact quality of life.
People considering a career that amalgamates AI, medtech and entrepreneurship could benefit from researching how AI impacts the medtech space, the application of AI in medical devices, and how to bring a device to market and commercialise it. For more technical roles, an understanding of robotics, automation, engineering and programming languages will be crucial.
Quantum
To what extent quantum might impact the medtech space is also up for debate, as the field is still in some parts theoretical. But what is not theoretical is that quantum computing has the potential to address many of the globe’s most pressing health-related challenges, in that it could accelerate drug discovery, improve diagnostics, personalise treatment and aid research far more quickly than current methods allow.
With that in mind, skills in this space could certainly give a student or professional an edge when it comes time to secure a position or further a career.
If this sounds like an opportunity you would be interested in, start studying quantum mechanics, quantum computing, quantum-related cybersecurity and ensure you have a strong understanding of both the ethics involved and any regulation covering quantum and the medtech space. A basis in maths and physics is also going to be a great help. Quantum is in some ways a new frontier for STEM professionals, so this is certainly a skill worthy of a future-focused, adventurous and ambitious person.
Soft skills
You can’t talk about cross-functional skills without mentioning soft skills. You may not think soft skills rank as highly as technical abilities when it comes to preparing for a future career, but you would be wrong.
Skills that empower communication, that advance learning, that establish and build upon networks, that create opportunities for you in competitive landscapes, are immensely valuable and should not be overlooked.
In the medtech sector, employers will likely value employees who can work independently but also as part of a team. They will prize presentation skills and acknowledge those who contribute not just to the research, but to the wider team as motivators and leaders. So don’t neglect soft skills as you become a technical wizard.
Don’t miss out on the knowledge you need to succeed. Sign up for the Daily Brief, Silicon Republic’s digest of need-to-know sci-tech news.
A critical vulnerability in the Funnel Builder plugin for WordPress is being actively exploited to inject malicious JavaScript snippets into WooCommerce checkout pages.
The flaw has not received an official identifier and can be leveraged without authentication. It affects all versions of the plugin before 3.15.0.3.
Funnel Builder is a WordPress plugin for WooCommerce developed by FunnelKit, primarily used to customize checkout pages, with features like one-click upsells and landing pages designed to optimize conversion rates.
E-commerce security company Sansec detected the malicious activity and noticed that the payload (analytics-reports[.]com/wss/jquery-lib.js) is disguised as a fake Google Tag Manager/Google Analytics script that opens a WebSocket connection to an external location (wss://protect-wss[.]com/ws).
An attacker can exploit the flaw to modify the plugin’s global settings via an unprotected, publicly exposed checkout endpoint. This allows them to inject arbitrary JavaScript into the plugin’s “External Scripts” setting, causing malicious code to execute on every checkout page.
According to Sansec, the attacker-controlled server delivers a customized payment card skimmer that steals the following information:
Credit card numbers
CVVs
Billing addresses
Other customer information
Payment card skimmers enable threat actors to make fraudulent online purchases, while stolen records often end up sold individually or in bulk on dark web portals known as carding markets.
FunnelKit addressed the vulnerability in version 3.15.0.3 of Funnel Builder, released yesterday.
A security advisory from the vendor, seen by Sansec, confirms the malicious activity, saying “we identified an issue that allowed bad actors to inject scripts.”
The vendor recommends that website owners and administrators prioritize updating to the latest version from the WordPress dashboard and also review Settings > Checkout > External Scripts for potential rogue scripts the attacker may have added.
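Beyond updating and reviewing settings, administrators can run a quick automated check for the indicator domains Sansec published (shown defanged with brackets in the report above). A minimal sketch — the sample snippet and `scan_checkout` URL are illustrative; substitute your own store’s checkout page when running it for real:

```python
from urllib.request import urlopen

# Indicator domains from Sansec's report (defanged in the article text).
INDICATORS = ["analytics-reports.com", "protect-wss.com"]

def find_indicators(html: str, indicators=INDICATORS):
    """Return any indicator strings present in the supplied HTML."""
    return [i for i in indicators if i in html]

def scan_checkout(url: str):
    """Fetch a checkout page and scan its HTML for known indicators."""
    html = urlopen(url).read().decode("utf-8", errors="replace")
    return find_indicators(html)

# Example against a captured snippet (no network access needed):
snippet = '<script src="https://analytics-reports.com/wss/jquery-lib.js"></script>'
print(find_indicators(snippet))  # -> ['analytics-reports.com']
```

An empty result is not proof of a clean site — it only means these specific published indicators are absent — so it complements, rather than replaces, the vendor’s recommended settings review.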
Automated pentesting tools deliver real value, but they were built to answer one question: can an attacker move through the network? They were not built to test whether your controls block threats, your detection rules fire, or your cloud configs hold.
This guide covers the six surfaces you actually need to validate.