While there are still some affordable manual cars you can buy new, cars equipped with manual transmissions remain a very small percentage of the total market. If you drive one, know that starting out in second gear is generally a bad idea, mostly because it causes additional clutch wear. That’s why drivers should start in first gear in the vast majority of situations.
But there is one situation in which starting out in second gear is the right choice, as explained by Jason Fenske of Engineering Explained: starting out on a downward incline, where the car will naturally accelerate down the hill as you release the parking brake. Once the car starts to accelerate as it rolls, you can engage second gear while the car is in motion. The vehicle’s momentum going down the hill brings you to a speed at which you would normally be shifting into second gear anyway, so there is no additional wear on your clutch’s friction surfaces.
There is another set of driving circumstances in which you might be tempted to start in second gear, and that is starting on a slippery surface like snow or ice. But as Fenske explains in the same video, this will cause additional clutch wear and should be avoided.
Why are there so few manual transmissions and so many automatics today?
There are surely manual cars that leave their automatic counterparts in the dust, but the real reason manual transmissions are disappearing from so many vehicles is that automatic transmissions deliver better fuel economy. This is partly thanks to automatics’ additional gear ratios: some modern transmissions have eight, nine, or even 10 speeds. Another solution is the continuously variable transmission (CVT), which has no traditional fixed gears and achieves high fuel efficiency by constantly adjusting to keep the engine in its most efficient RPM band. Manufacturers want the highest possible fuel economy figures for their vehicles, so they have switched to automatic transmissions to achieve them. Even so, manual transmissions are still popular in Europe.
There are other good reasons to drive a manual transmission, even though manuals accounted for only 0.7% of all new cars sold last year, according to Motor1.com data. Driving a stick is a more hands-on experience, involving you much more deeply in the act of driving. You have direct control over which gear the car is in and how much to accelerate within that gear. You also decide exactly when to shift down and when to shift up, something an automatic does without your intervention.
That being said, driving a manual transmission is definitely not for everyone. Beginners and those easily overwhelmed by the mechanics of driving may be better off first learning the rules of the road in an automatic transmission-equipped vehicle. Then they can drive a manual if they so choose.
BrianFagioli writes: Kioxia and Dell Technologies say they have built a 2U server configuration capable of scaling to 9.8PB of flash storage, which is the sort of density that would have sounded impossible just a few years ago. The setup combines a Dell PowerEdge R7725xd Server with 40 Kioxia LC9 Series 245.76TB NVMe SSDs and AMD EPYC processors. According to Kioxia, matching the same capacity with more common 30.72TB SSDs would require seven additional servers and another 280 drives.
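As a quick sanity check on those figures, here is the arithmetic in Python (a minimal sketch assuming 40 drives per 2U server, as in the announced configuration, and the decimal 1PB = 1,000TB convention):

```python
# Back-of-the-envelope check of the capacity figures quoted above.
# Drive capacities and the 40-drive count come from the article; the
# decimal petabyte convention (1 PB = 1000 TB) is our assumption.
lc9_tb = 245.76          # TB per Kioxia LC9 SSD
common_tb = 30.72        # TB per "more common" SSD
drives_per_server = 40

total_tb = lc9_tb * drives_per_server
print(f"Total: {total_tb:,.1f} TB = {total_tb / 1000:.2f} PB")        # 9,830.4 TB = 9.83 PB

drives_needed = total_tb / common_tb                                  # 320 drives
print(f"Extra drives: {drives_needed - drives_per_server:.0f}")       # 280
print(f"Extra servers: {drives_needed / drives_per_server - 1:.0f}")  # 7
```

The numbers line up: matching the capacity with 30.72TB drives takes 320 of them spread across eight 40-drive servers, versus a single server of LC9s.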
The companies are pitching the hardware squarely at AI and hyperscale workloads, where storage is rapidly becoming a bottleneck alongside compute. Kioxia claims the denser configuration can dramatically reduce power consumption and rack space requirements while remaining air cooled. The announcement also highlights how quickly enterprise storage capacities are escalating as organizations race to support larger AI models, massive datasets, and increasingly demanding data pipelines.
Unfettered design disappeared from the automotive world decades ago. Safety requirements, governmental regulations, and advances in aerodynamics have reduced what was once an artistic discipline into an engineering discipline. As a result, many modern cars are beginning to look similar. The amorphous crossover, the standard pickup truck, and the bland sedan come in shades of grey, navy, and black. Once upon a time, there were fewer restrictions, and manufacturers had a little more freedom to exercise creativity.
As a result, plenty of the classic cars and pickups of yesteryear lure automotive fans with unique aesthetics that are impossible to replicate today. In honor of the good ol’ days, we check out the history and performance behind designs that have gained iconic status five decades after they hit the market. These aren’t necessarily the best pickup trucks to come out of the 1960s, but they are definitely amongst the coolest looking.
1960 Studebaker Champ
Studebaker Automobiles isn’t the first manufacturer that comes to mind when you think of pickup trucks. Founded in 1852, the brand got its start building wagons before entering the automobile space. By 1960, however, the once-proud brand was entering its final decade.
Pickup trucks were undergoing a transformation during the 1960s. Once purely utilitarian, they began taking on more car-like designs in the late 1950s as manufacturers added more comfortable interiors and smoother rides. The Studebaker Champ is one example of this evolutionary stage of pickup design.
The Studebaker Champ pickup truck debuted in 1960, but it wasn’t an all-new design. It saved money by using components and sheet metal from the pre-existing Studebaker Lark compact, essentially hitching a pickup bed to the Lark’s front end. Engine options were 170- and 245-cubic-inch six-cylinders making 90 and 118 horsepower, respectively, and the bubble-fendered pickup came in ½- and ¾-ton models.
Not only was the Champ a warmed-over Frankenstein of parts, but its nameplate was reminiscent of the Studebaker Champion sedan produced from 1939 to 1958. Alas, the Champ was not enough to save Studebaker, which went out of business in 1966. But we still have the unique looks and lines of the short-lived but distinctive Champ.
1963 Ford Falcon Ranchero
Ford got into the car-truck combination business with the Ranchero in 1957. Ultimately overshadowed by the Chevrolet El Camino that arrived in 1959, the Ranchero nonetheless holds a special place in the classic pickup portion of our hearts.
Inspiration for the Ranchero came from the Land Down Under. The Australian market was nuts for what was called coupe-utility vehicles, or utes. Ford wanted to capitalize on its success with the so-called utes in North America. It tapped its car division, which built the Ford Falcon, to build the Ranchero. The Ranchero was produced for seven generations between 1957 and 1979. The second generation arrived for the 1960 model year, retaining a certain straitlaced ’50s aesthetic that marks a transition between ’50s and ’60s design mores.
The Ranchero could hold more payload than the El Camino despite its 144-cubic-inch six-cylinder engine being smaller than Chevrolet’s V8 options. With pickup trucks increasingly skewing toward lane-filling behemoths, maybe Ford can look into bringing back the car-truck combo. Except, as of 2026, it doesn’t sell a single traditional sedan to convert.
1965 Chevrolet C10
The Chevrolet C10 may be the most quintessential pickup truck in history. Its 39-year career began when it debuted in 1960. Set up to compete with Ford’s successful (and even longer-running) F-Series, it put to use everything Chevrolet had learned building pickups since 1918.
In 1965, the C10 was still in its first generation. It was only available with a standard cab, though buyers could choose between 6.5- and 8-foot beds. It was more farm truck than highway cruiser, with inline-six and V8 engine options ranging from 135 to 220 horsepower. An odd overbite hood hangs over a grille that stretches from headlight to headlight, with signal lamps tucked underneath, almost making the truck look like it’s smiling. A trim cabin and flat lines running to the bed (except for the gorgeous sidestep models — another characteristic missing from modern pickups) give it a look that suggests it was once as comfortable in the dirt as it is now on the pedestal at car shows.
The first-gen C10 retains a distinctive Americana vibe, evoking greasers and drive-in movies. Chevrolet wanted to differentiate its new C10 line from its 1950s products, taking a clean-sheet approach to introduce radical design changes. The resulting truck is certainly outdated now, but it holds a place in history as a relic of a bygone era of American manufacturing.
1965 Jeep Forward Control Series
Jeep recently re-entered the pickup truck game by resurrecting its Gladiator nameplate in 2020, but it’s not the Gladiator we’re looking back at. Jeep was once a major player in pickups, and its Forward Control (FC) series was a friendly little pickup truck designed as a practical hauler.
Cab-forward design allowed truck makers to maximize the available space of the wheelbase by placing the engine beneath the cab rather than under a long hood. Volkswagen, Ford, and Chevrolet all got in on the action, but our favorite interpretation belongs to Jeep. The Jeep Forward Control Series hit the market in 1957 and had essentially run its course by 1966. It offered two wheelbase choices and engines ranging from a 72-horsepower four-cylinder to a 115-horsepower inline-six.
The Jeep Forward Control Series doesn’t look like anything else on the road today. It was a utilitarian hauler with superior visibility and a distinctly Jeep grille — though that’s about the only design cue that is recognizably Jeep. The FC ultimately faced competition from the likes of the Chevrolet Corvair Rampside and Volkswagen Transporter pickup. About 30,000 FCs rolled off the assembly line during the truck’s production run, making one somewhat difficult to find today.
1968 Dodge Power Wagon
The Power Wagon was based on Dodge trucks that served during World War II, and if that’s not enough of a proving ground for you, then you must be pretty rough on your trucks. The nameplate debuted in 1946 for the post-war civilian market. America was facing an extended period of growth, and Dodge had just the truck to get the work done.
High on utility and low on comfort, the Power Wagon was used (and revered) by government agencies for rough-and-tumble work. Dodge has made plenty of hay out of its high-output Hemi V8s over the past several decades, but the Power Wagon was primarily known for inline-six engines. Rugged and reliable, it was put to the test over the years by the U.S. Navy, the Park Service, the Department of Fish and Wildlife, and others.
By the 1960s, the Power Wagon line was mid-stride — the last model would roll off the line in 1980 — but it hit a high point in design. The final year of the first generation was 1968, after which the Power Wagon was designated export-only as part of a government program, despite protests from the U.S. Forest Service, which loved the Mopar workhorse. Part of the reason (aside from emissions) was that its design was still based on the 1946 aesthetic, which itself dated to pre-war styles. The result was a truck that was hopelessly outdated by contemporary standards, but looks pretty darn cool to us today. In fact, we’re lobbying Dodge to bring this classic pickup truck back to the masses.
Hot afternoons demand something cold and sweet right when the craving strikes, yet store pints cost plenty and rarely match what fresh ingredients deliver at home. Traditional machines take time to churn and leave bowls to clean afterward, so many people stick with whatever sits in the freezer section. The Cuisinart FastFreeze Ice Cream Maker (ICE-FD10), priced at $97.56 (was $120), changes that routine entirely by handling the heavy work in under a minute once a pre-frozen cup is ready.
Preparation is simple: pour a few ingredients into a half-pint cup and freeze it for a day. From there, twist the wand onto the cup, choose one of the five presets, and push down, and you’ll have a treat ready to go. After a few spins, ice cream comes out silky smooth, sorbet stays bright and fruity, slushies turn icy and ideal for hot days, and milkshakes blend up smooth with no pre-freezing prep at all. To top it all off, each serving is sized for one person, with no leftovers to clutter up the freezer or go to waste.
Storage is a breeze because the entire wand disassembles into pieces that fit in a kitchen drawer among the other culinary tools. The cups stack neatly in the freezer and go in the dishwasher afterward, while cleaning the blade takes only a brief rinse. The unit is also quiet enough to run without disturbing the room, unlike the large, heavy machines that rumble through a churn.
Customization is the name of the game with this device, especially if you don’t have much free time. Start with a mango-and-cream base, then add some real mango chunks at the end to create a mango ice cream that tastes like it came straight from the shop. Want a post-workout treat that tastes like dessert? Throw in some protein powder and peanut butter, and you’re ready to go. Cookie dough and candy bits mix in well without getting pulverized, and the half-pint size is ideal for experimenting with new combinations without committing to a whole recipe. Recipes in the manual and suggestions online will give you some inspiration to get started, but you can also wing it with whatever you have in the cupboard and still come out with something wonderful.
In terms of cost, it’s a no-brainer: it pays for itself quickly when you consider how much you’ll save on store-bought ice cream. A single store-bought pint might cost several dollars, whereas making your own uses ingredients you already have on hand. Over the course of a few weeks, the machine will have more than paid for itself simply by reducing impulse purchases and giving you control over sugar and other ingredients.
Google may cut the free storage for new Gmail accounts from 15GB to 5GB, according to a report from Android Authority. Those who want a storage upgrade from those 5GB accounts would need to provide a phone number to Google to unlock the extra gigabytes.
A Google representative confirmed that it’s trying out new account options.
“We’re testing a new storage policy for new accounts created in select regions that will help us continue to provide a high-quality storage service to our users, while encouraging users to improve their account security and data recovery,” the representative said in a statement to CNET.
Typically, a verified phone number blocks multiaccount storage abuse and secures Google profiles with a reliable recovery method. Some online speculate that the move could also be a way for Google to encourage more people to subscribe to paid cloud storage plans under Google One.
It’s unclear if the regions where this is being tested include the US. Android Authority reported that accounts with only 5GB of storage were primarily in African countries.
Google has been expanding the tiers of paid accounts it offers, combining Gemini AI features into bundles. It recently added three new tiers focused on AI features, starting at $8 a month with 200GB of storage included.
Google’s storage space
When Gmail debuted in 2004, it offered users a full 1GB of storage, which fundamentally changed the way many people used email: They could keep everything and search for what they needed.
The next year, Google doubled the amount of free storage to 2GB. The storage kept ticking up, to 7GB, then 10GB and finally to 15GB in 2013, when Google Drive, Google Photos and Gmail merged into a shared pool of storage for users.
Why did Google do it? In 2013, CNET wrote: “Google — as we all know — is in the business of making money. If Google is offering you more storage, then there is something that extra storage helps you do that will help Google make more money.”
One reason Gmail succeeded over other email services in the early days was its low barrier to entry: Google kept expanding free access to services across the platform, making it less likely that customers would leave for competitors.
These days, Google is battling its competitors on the AI front, which explains why it’s increasingly bundling Gemini AI features with the email, photo and document services users have come to depend on.
AMD says FSR 4.1 will finally bring its newer hardware-accelerated upscaling technology to older Radeon GPUs. “The rollout will begin in July with RDNA3- and 3.5-based GPUs, which include the Radeon RX 7000 series, as well as integrated GPUs like the Radeon 890M and Radeon 8060S,” reports Ars Technica. “In ‘early 2027,’ support will also be extended to the RDNA2 architecture, which includes the Radeon RX 6000 series, integrated GPUs like the Radeon 680M, and the Steam Deck’s GPU. This would also open the door to supporting FSR 4 on the PlayStation 5 and Xbox Series X and S, all of which also use RDNA2-based GPUs.” From the report: [AMD Computing and Graphics SVP Jack Huynh’s] short video presentation didn’t get into performance comparisons, but did mention that AMD had to work to get FSR 4’s superior hardware-backed upscaling working on its older graphics architectures. RDNA4 includes AI accelerators that support the FP8 data format in the hardware, and porting FSR 4 to older GPUs meant getting it running on the integer-based INT8 hardware in the RDNA3 and RDNA2-based GPUs.
This may mean that FSR 4.1 running on an RDNA3 or RDNA2-based GPU may come with a larger performance hit relative to RDNA4 cards, or that image quality may differ slightly. Modders have already worked to get FSR 4 working on INT8-supporting GPUs, and the older GPUs reportedly see a 10 to 20 percent performance hit relative to FSR 3.1 running on the same hardware. AMD’s official implementation may or may not improve on these numbers.
[…] Any games that support FSR 4 should be able to support FSR 4.1 running on Radeon 7000-series cards; users will presumably be able to install a driver update in July that enables the new feature. Games that support the older FSR 3.1 can also be forced to use FSR 4 in the Radeon graphics driver.
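For readers unfamiliar with the FP8/INT8 distinction the report mentions, the sketch below is illustrative only (it is NumPy, not FSR or shader code): INT8 hardware works on a fixed integer grid, so a port has to pick an explicit scale factor and round every value onto that grid, which is one plausible source of the performance and image-quality differences described above.

```python
# Illustrative symmetric per-tensor INT8 quantization in NumPy.
# This is not FSR code; it only shows the kind of float-to-integer
# mapping an INT8 port of an FP8-trained network has to perform.
import numpy as np

def quantize_int8(x: np.ndarray) -> tuple[np.ndarray, float]:
    """Map floats onto the INT8 grid: scale so the largest magnitude
    hits 127, then round and clip."""
    scale = float(np.abs(x).max()) / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float values from the integer grid."""
    return q.astype(np.float32) * scale

weights = (np.random.randn(1024) * 0.05).astype(np.float32)
q, scale = quantize_int8(weights)
max_err = float(np.abs(dequantize(q, scale) - weights).max())
print(f"scale={scale:.6f}, max rounding error={max_err:.6f}")
```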
The medtech space, like most STEM fields, has evolved exponentially, so what skills might help you keep your head above water?
The medtech space is incredibly diverse, with careers in a range of areas, many of them requiring unique skills or a resume of cross-compatible abilities. That is to say, it can be difficult for a student selecting a college course, or indeed a graduate looking at post-bachelor degrees, to identify the skills most suited to a future career in the medtech industry.
Well, SiliconRepublic.com is here to help. While this list is by no means exhaustive, it will give you an idea of some of the skills you should prioritise if you want to develop a broad range of abilities in an ever-evolving and highly skilled field. Without further ado, here are some of the most crucial skills in any medtech career.
Regulation
It is fair to say that regardless of which aspect of medtech you specialise in, or which avenue your professional life goes down, you are going to require a degree of knowledge in how the sector is regulated. The regulatory landscape is undergoing significant transformation as evolving frameworks in Europe and the US make an impact.
EUDAMED, the European Database on Medical Devices, is set to come into effect in late May and is just one of the critical frameworks students and professionals in this area will have to become familiar with.
Europe is also in the process of revising its Medical Device Regulation and In Vitro Diagnostic Regulation policies, a task that began in late 2025.
The point is that medical frameworks and requirements are always going to be subject to revision and change. It’s the job of a professional in this space not just to understand the rules that govern their own country, but to have a broader understanding of the global ecosystem.
AI and automation
To what extent AI and automation will play a role in modern-day healthcare is unclear. Some would say they will be used only when needed, and others would argue they are already becoming a significant element of the medtech scene. Regardless of where you fall between the two schools of thought, knowing how to wield AI and automation in healthcare is undoubtedly a useful and potentially necessary addition to a medtech skillset.
Since their inception, AI and automated technologies have been used to create wearables that monitor health, accelerate drug discovery, and administer treatments and therapies for conditions that impact quality of life.
People considering a career that amalgamates AI, medtech and entrepreneurship could benefit from researching how AI impacts the medtech space, the application of AI in medical devices, and how to bring a device to market and commercialise it. For more technical roles, an understanding of robotics, automation, engineering and programming languages will be crucial.
Quantum
To what extent quantum might impact the medtech space is also up for debate, as the field is still in some parts theoretical. But what is not theoretical is that quantum computing has the potential to address many of the globe’s most pressing health-related challenges, in that it could accelerate drug discovery, improve diagnostics, personalise treatment and aid research far more quickly than current methods allow.
With that in mind, skills in this space could certainly give a student or professional an edge when it comes time to secure a position or advance a career.
If this sounds like an opportunity you would be interested in, start studying quantum mechanics, quantum computing and quantum-related cybersecurity, and ensure you have a strong understanding of both the ethics involved and any regulation covering quantum and the medtech space. A grounding in maths and physics is also going to be a great help. Quantum is in some ways a new frontier for STEM professionals, so this is certainly a skill worthy of a future-focused, adventurous and ambitious person.
Soft skills
You can’t talk about cross-functional skills without mentioning soft skills. You may not think soft skills rank as highly as technical abilities when it comes to preparing for a future career, but you would be wrong.
Skills that empower communication, that advance learning, that establish and build upon networks, that create opportunities for you in competitive landscapes, are immensely valuable and should not be overlooked.
In the medtech sector, employers will likely value employees who can work independently but also function as part of a team. They will prize presentation skills and acknowledge those who contribute not just to the research, but to the wider team as motivators and leaders. So don’t neglect soft skills as you become a technical wizard.
A critical vulnerability in the Funnel Builder plugin for WordPress is being actively exploited to inject malicious JavaScript snippets into WooCommerce checkout pages.
The flaw has not received an official identifier and can be leveraged without authentication. It affects all versions of the plugin before 3.15.0.3.
Funnel Builder is a WordPress plugin for WooCommerce developed by FunnelKit, primarily used to customize checkout pages, with features like one-click upsells and landing pages and tools to optimize conversion rates.
E-commerce security company Sansec detected the malicious activity and noticed that the payload (analytics-reports[.]com/wss/jquery-lib.js) is disguised as a fake Google Tag Manager/Google Analytics script that opens a WebSocket connection to an external location (wss://protect-wss[.]com/ws).
An attacker can exploit the flaw to modify the plugin’s global settings via an unprotected, publicly exposed checkout endpoint. This allows them to inject arbitrary JavaScript into the plugin’s “External Scripts” setting, causing malicious code to execute on every checkout page.
According to Sansec, the attacker-controlled server delivers a customized payment card skimmer that steals the following information:
Credit card numbers
CVVs
Billing addresses
Other customer information
Payment card skimmers enable threat actors to make fraudulent online purchases, while stolen records often end up sold individually or in bulk on dark web portals known as carding markets.
FunnelKit addressed the vulnerability in version 3.15.0.3 of Funnel Builder, released yesterday.
A security advisory from the vendor, seen by Sansec, confirms the malicious activity, saying “we identified an issue that allowed bad actors to inject scripts.”
The vendor recommends that website owners and administrators prioritize updating to the latest version from the WordPress dashboard and also review Settings > Checkout > External Scripts for potential rogue scripts the attacker may have added.
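For administrators who track plugin versions outside the WordPress dashboard, a minimal sketch like the following can flag installs that predate the fix. The fixed version string comes from the advisory; the helper itself, and the `packaging` dependency, are illustrative assumptions:

```python
# Hypothetical helper: flag Funnel Builder versions older than the
# patched release named in the advisory (3.15.0.3).
from packaging.version import Version

FIXED_VERSION = Version("3.15.0.3")  # from the vendor advisory

def is_vulnerable(installed: str) -> bool:
    """Return True if the installed version predates the fix."""
    return Version(installed) < FIXED_VERSION

if __name__ == "__main__":
    for v in ("3.14.2", "3.15.0.2", "3.15.0.3"):
        status = "vulnerable" if is_vulnerable(v) else "patched"
        print(f"Funnel Builder {v}: {status}")
```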
Automated pentesting tools deliver real value, but they were built to answer one question: can an attacker move through the network? They were not built to test whether your controls block threats, your detection rules fire, or your cloud configs hold.
This guide covers the 6 surfaces you actually need to validate.
Many people do their best to protect themselves from cybercrimes by prioritizing cybersecurity on their laptops, smartphones and other devices. But the real risk may be the theft of your personal data through other means.
A survey commissioned by the cybersecurity company NordVPN, released Friday, found that cybercriminals are shifting how they gain entry, and many of us are ignoring the signs. Although attackers used to mainly target devices — such as by slipping malicious files onto a computer — the focus is now on manipulating people through scam calls, phishing messages that imitate popular companies and fake login pages that steal passwords.
In a survey of 1,200 Americans, NordVPN found that 91% of people are concerned about cybersecurity scams. Some 56% worry more about broader threats, such as identity theft or fraud, than about scam calls, which they are more likely to encounter regularly. On a daily basis, 46% of people say they experience scam calls, and 17% say they personally deal with identity theft and fraud.
“People picture cybercrime as stolen accounts or drained bank balances, but criminals succeed much earlier in the process,” NordVPN Chief Technology Officer Marijus Briedis said in a statement. “The real moment of attack is when someone feels rushed, scared or pressured into trusting too quickly. From that point, it takes just a few clicks to go from a phone call to a stolen identity.”
In other words, many people are unaware of the major risks that come with scam calls or phishing emails. “They’re the starting point for exactly the kinds of identity theft and financial fraud people are most afraid of,” Briedis said.
Scam calls have increased significantly in the past year, but there are ways to counter them. One way is to use the iOS call screening feature on the iPhone and ignore calls from unknown numbers.
NordVPN recommends several steps you can take to protect yourself from cybercrime. If you encounter a suspicious text message, email or call, pause before taking any action. According to NordVPN, scammers like to rush you, and messages that seem urgent should be treated as suspicious.
“Modern attacks depend on people reacting faster than they can think,” Briedis said.
Other steps you can take include:
Verify information before responding.
Ensure that links and addresses are legitimate before clicking them.
Enable multifactor authentication whenever possible and use strong passwords.
Only accept downloads from trusted sources.
Download security software that can handle multiple cyber threats simultaneously.
A representative for NordVPN did not immediately respond to a request for comment.
In a recent press release, the Court of Justice of the European Union (CJEU/Curia) explained its decision in a case involving Meta and the Italian regulatory authority (C-797/23). The EU’s highest judicial body stated that member states may grant local publishers the right to fair compensation, requiring online service providers…
New VB Pulse data shows Microsoft and OpenAI leading enterprise agent orchestration, but Anthropic’s first measurable foothold points to a larger fight over who controls the infrastructure where AI agents run.
For the last two years, the enterprise AI race has mostly been framed as a model war: OpenAI’s GPT series versus Anthropic’s Claude versus Google’s Gemini, with smaller and open-source alternatives also coming in from the U.S. and China.
But the next strategic fight may not be over which model answers a prompt best. It may be over who controls the layer where agents plan, call tools, access data, run workflows and prove to security teams that they did not do anything they were not supposed to do.
New VB Pulse survey data suggests the category is already taking shape. Our independent Enterprise Agentic Orchestration tracker, a survey that records the preferences of qualified, verified technical decision-maker respondents at enterprises at regular intervals, found that Microsoft Copilot Studio and Azure AI Studio led with 38.6% primary-platform adoption in February, up from 35.7% in January.
OpenAI’s Assistants and Responses API held second place, rising from 23.2% to 25.7%.
Anthropic remained far smaller, but it made its first appearance in the tracker: moving from 0% in January to 5.7% in February for Anthropic tool use and workflows.
VB Pulse Enterprise Agentic Orchestration: change in respondents’ primary agent orchestration platform from Jan-Feb 2026. Credit: VentureBeat
The underlying move is small — four respondents out of a total of 70 in this cohort, with more to come — but strategically interesting because it marks the first sign in this tracker of Claude usage moving from the model layer into native orchestration.
That distinction matters. Enterprises are not merely choosing chatbots. They are deciding where the live operational machinery of AI work will sit: inside Microsoft’s stack, inside OpenAI’s API layer, inside Anthropic’s managed runtime, inside an open framework, or across a hybrid mix of all of them.
“This is the convergence moment for enterprise AI,” said Tom Findling, CEO and cofounder of AI cybersecurity startup Conifers, in a statement to VentureBeat. “Models and agent frameworks have matured enough together that enterprises are now shifting focus beyond model quality to the control plane around it. In security operations, we’re seeing the competitive advantage move toward platforms that can orchestrate agents, leverage enterprise context, and provide governance and auditability across customer environments.”
Anthropic’s number is still small to start — but the increase is not
The Anthropic number, by itself, should not be overread. A move from zero to 5.7% is not a juggernaut. It is not proof that Anthropic has captured enterprise orchestration.
It is not even enough to say Anthropic has a durable lead in any part of this market. Microsoft owns the early enterprise distribution advantage, and OpenAI has a much larger installed base in orchestration than Anthropic.
But small numbers can matter when they appear at the start of a new market structure. Anthropic’s emergence in orchestration comes as the broader VB Pulse data shows Claude also gaining massive enterprise adoption at the model layer.
In our VB Pulse Q1 Foundation Models and Intelligence Platforms tracker, Anthropic rose from 23.9% in January to 28.6% in February and then even more dramatically to 56.2% in March among qualified enterprise respondents, with the March reading flagged as directional only, because the sample was only 16 respondents.
VB Pulse Foundation Models and Intelligence Platforms comparison chart Jan-March 2026. Credit: VentureBeat
The story, then, is not that Anthropic is winning orchestration today. It is that Anthropic’s model momentum may be starting to spill into the orchestration layer.
That is where the strategic stakes get higher.
A model is easier to swap than an agent runtime
A model is relatively easy to swap, at least in theory. A company can route one workload to Claude, another to GPT, another to Gemini and another to a smaller open model.
In fact, the VB Pulse Foundation Models tracker over the same Q1 period shows that multi-model strategy is the enterprise consensus: respondents increasingly report adopting multiple models and building orchestration layers that route across them by task, cost and risk profile.
An agent runtime is different. Once a company’s workflows, tool permissions, credentials, audit logs, memory, sandboxed execution and operational monitoring live inside one provider’s environment, switching providers becomes less like changing models and more like changing infrastructure.
That is the real reason Anthropic’s 5.7% foothold is worth watching
Anthropic has already made clear that it wants to provide more than the model. Its Claude Managed Agents documentation describes a public beta for a managed agent harness with secure sandboxing, built-in tools and API-run sessions, while Anthropic’s engineering post frames the architecture around decoupling the model from the surrounding agent machinery: the session, the harness and the sandbox.
In plain English, Anthropic is trying to host the environment where Claude agents remember context, use tools, run code, operate inside sandboxes and persist across long-running workflows. That is no longer just inference. That is operational infrastructure.
The pitch is obvious: most enterprises do not want to stitch together their own agent stack from scratch. They want agents that can act, but they also want permission boundaries, audit trails, workflow reliability and ways to stop the system when something goes wrong.
Security is becoming the buying criterion
The VB Pulse orchestration tracker shows that buyers are prioritizing exactly those concerns. Security and permissions ranked as the top orchestration platform selection criterion in both January and February, at 39.3% and 37.1%.
VB Pulse Enterprise Agentic Orchestration, Q1 2026 chart of top selection criteria for agent orchestration solutions. Credit: VentureBeat
Control over agent execution rose from 17.9% to 22.9%, while flexibility across models and tools fell from 35.7% to 25.7%. The market appears to be shifting from optionality toward governance.
That shift is not surprising. A chatbot can be wrong and still remain mostly contained. An agent that can send emails, modify documents, query databases, call APIs or execute workflows has a much larger blast radius. The enterprise question is not only whether the agent is smart enough.
It is who gave it permission, what it touched, what it changed, whether those actions were logged, and whether the company can unwind the damage if something goes wrong.
Ev Kontsevoy, cofounder and CEO of Teleport, an identity and digital infrastructure solutions company, argues that the industry is still putting too much emphasis on orchestration itself and not enough on identity: “The race to own the agent orchestration layer is real,” Kontsevoy said. “It’s also solving the wrong problem first. Orchestration without identity only multiplies chaos. Without identity, you don’t know what an agent can access, what it actually did, or how to revoke its access when it operates outside policy. A unified identity layer is a prerequisite to deploying agents — one or many — in infrastructure.”
Syam Nair, Chief Product Officer at the intelligent data infrastructure company NetApp, believes data management is key in all cases to secure AI agent orchestration across the enterprise. As he said in a statement to VentureBeat: “Effective agent management requires built-in intelligence and a continuously updated understanding of both data and, critically, its metadata. This visibility allows organizations to define and enforce clear policies so data is used only by the right agents, for the right purposes. Making this work at scale is a cross-functional effort. Security, storage, and data science teams must work together to implement policies that safeguard company data, while creating a strong data foundation for AI.”
He continued: “The CIOs and technology leaders that are successful are the ones who take the input, policies, and vision from all these teams into account as they build a data infrastructure that minimizes risk and drives business value.”
Microsoft has the distribution edge
That is why Microsoft’s early lead makes sense. Copilot Studio and Azure AI Studio sit inside an enterprise stack many companies already use: Microsoft 365, Teams, Entra ID, Azure and existing procurement relationships.
The VB Pulse Orchestration Tracker for Q1 2026 describes Microsoft as the enterprise default, with no other platform within 13 percentage points in February.
David Weston, CVP, AI Security, Microsoft, provided some insight on why, writing in a statement to VentureBeat: “Without a unified control layer, you start to see fragmentation – agents operating in silos, inconsistent governance, and gaps in security. What customers are asking for is a way to bring order to that complexity. With Agent 365, we’re providing a single control plane to observe, govern, and secure agents across Microsoft, partner, and third-party ecosystems, all grounded in enterprise data and identity.”
OpenAI’s second-place position is also unsurprising. Its Assistants and Responses API gave developers an early way to build agent-like systems using OpenAI’s models and tooling. In the orchestration tracker, OpenAI is not surging, but it is still ticking up steadily: 23.2% in January to 25.7% in February.
Anthropic is the newcomer at the orchestration layer. But its timing may be favorable. The VB Pulse Foundation Models tracker for Q1 2026 suggests enterprises increasingly see Claude as a fit for higher-stakes workloads where safety, instruction following, long context and governance matter.
The orchestration tracker suggests those same buyers are now moving from agent experiments toward production workflows, where security, permissions and task reliability become the gating issues.
That creates a possible path for Anthropic: not to beat Microsoft as the default enterprise platform, at least not immediately, but to become the agent runtime for companies that already trust Claude for sensitive or complex workloads.
The orchestration tracker found that a hybrid control plane — combining provider-native orchestration with external orchestration — was the leading expected architecture, holding around 35% to 36% across the two substantive waves.
Provider-managed-only approaches grew modestly but remained a minority. The report’s conclusion is blunt: enterprises are not willing to give full orchestration control to any single provider.
It makes total sense as enterprises seek to leverage the “best-in-breed” models, harnesses, and tools from multiple vendors, especially as their needs differ widely across sector, business, and size.
“Most enterprises will operate in a multi-model, multi-agent environment, which makes an independent control plane essential,” agreed Felix Van de Maele, CEO of Collibra, a company specializing in unified governance for data and AI, in a statement to VentureBeat. “That is why we built AI Command Center: to give organizations the visibility, governance, and real-time oversight needed to manage AI systems and agents across the full lifecycle.”
That caution shows up in the risk data. When asked about risks if agent control lives inside a model provider platform, respondents cited security and permissioning limitations as the top concern. Vendor lock-in was the second-largest concern and the only one that increased from January to February, rising from 23.2% to 25.7%.
VB Pulse Enterprise Agentic Orchestration Q1 2026 chart of top concerns over the period. Credit: VentureBeat
This is the tension at the heart of the agent market. Enterprises want managed infrastructure because building reliable agents is hard. But the more a provider manages, the more it may own.
Dr. Rania Khalaf, chief AI officer at WSO2 — the subsidiary of EQT that offers open source, customizable AI stacks for enterprises — said enterprises will need an agent control plane that sits apart from individual frameworks, harnesses and runtimes because agents combine the unpredictability of LLMs with the ability to take actions that have consequences.
“Teams want the freedom to use the best model and framework for each job — Claude for coding, Gemini for writing, LangGraph or CrewAI for dynamic modular behavior — and that heterogeneity makes consistent governance untenable in integrated platforms that lock into one ecosystem,” Khalaf said.
From LLMOps to Agent Ops
Khalaf said the industry is also moving from MLOps to LLMOps to “Agent Ops,” where governance has to cover the whole agent, not just the model call.
“A guardrail on an LLM call can catch hallucination or toxic output, but it will not catch an agent thrashing in an unbreakable, costly loop, which is why governance now has to extend out from the LLM interaction to the scope of the agent,” she said.
The practical implication is that enterprises need to separate policy and control from the agent logic itself. Khalaf pointed to the recent example of an agent deleting a production database despite being told not to, arguing that the failure showed the limits of relying on prompt-level instructions where hard identity and access controls are needed.
“Pulling guardrails, evals, policies, bindings, and agent identity out of the core agent logic allows them to be configured per deployment and per environment, owned by the appropriate teams in security, product, and compliance, without fragmenting the governance layer as different teams choose different models and frameworks,” Khalaf said.
MCP is open. The runtime may still be sticky
That is where Anthropic’s Model Context Protocol, or MCP, complicates the story. MCP is not a walled garden; Anthropic introduced it as an open standard for connecting AI systems to data and tools, and Anthropic’s documentation describes MCP as an open-source standard for connecting AI applications to external systems.
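To make that concrete, here is a minimal, hypothetical MCP tool server built with Anthropic’s open-source Python SDK (installed via `pip install mcp`). The server name and the stubbed tool are invented for illustration; the point is that any MCP-capable client, not just Claude, can connect to it:

```python
# A minimal MCP server sketch using the open-source Python SDK.
# The "inventory-demo" name and lookup_sku tool are hypothetical.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("inventory-demo")

@mcp.tool()
def lookup_sku(sku: str) -> str:
    """Return stock information for a SKU (stubbed for illustration)."""
    return f"SKU {sku}: 12 units in stock"

if __name__ == "__main__":
    mcp.run()  # defaults to the SDK's stdio transport
```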
But openness at the protocol layer does not automatically eliminate lock-in at the runtime layer. An enterprise could use an open protocol to connect tools while still becoming dependent on a provider’s managed sessions, logs, sandboxes, permissions model, workflow state and deployment environment. In other words, MCP may reduce integration friction, while managed agent infrastructure could still increase switching costs.
Khalaf said Microsoft’s lead likely reflects its M365 and Azure distribution, while Anthropic’s emerging foothold could reflect a different architectural bet around open protocols such as MCP. But she argued the long-term direction is not a single-provider stack.
“Enterprises serious about running agents in production will end up multi-vendor across these layers,” Khalaf said, “which is why the open and interoperable control plane matters more than the current percentages might suggest.”
The next cycle may be cross-vendor collaboration
That same tension — between provider-native convenience and cross-vendor reality — is where Arick Goomanovsky, CEO and cofounder of universal AI agent orchestrator startup BAND, sees the next competitive cycle forming.
“Enterprises now run agents everywhere: individual assistants and coding agents, multi-agent systems in production, agents embedded in Agentforce and ServiceNow, and third-party agents consumed as agent-as-a-service,” Goomanovsky said. “None of them collaborate across those boundaries by default.”
Goomanovsky argues that the missing layer is not just orchestration inside a single model provider, but a cross-vendor collaboration layer that lets agents from different ecosystems act together.
“What’s emerging in parallel is demand for an agentic collaboration harness – an interaction layer that lets agents from Microsoft, OpenAI, Anthropic, and internal teams operate as one workforce,” he said. “Orchestration inside any single vendor is still a walled garden so the next competitive cycle is cross-vendor agent collaboration.”
Independent frameworks face an enterprise packaging problem
There is also a warning sign for independent orchestration frameworks. LangChain and LangGraph fell from 5.4% to 1.4% as the primary orchestration platform in the qualified enterprise sample.
External orchestration abstracted entirely from model providers also fell from 8.9% to 2.9%.
Scott Likens, Global Chief AI Engineer at professional services giant PwC, has a front row seat to this trend as the company spearheads and assists clients with their AI transformations.
As he told VentureBeat in a statement: “Right now, most enterprises are still operating in fragmented environments, with orchestration spread across platforms, business applications, and internally developed tooling. Over time, the market will likely move toward more unified orchestration models, but interoperability, governance and security will remain critical because enterprises are unlikely to standardize on a single agent ecosystem.”
The report argues that fully independent orchestration frameworks may not yet have the enterprise packaging — security certifications, support, compliance documentation and vendor accountability — that procurement teams require.
That does not mean open frameworks are irrelevant. It does suggest that enterprise buyers may increasingly consume open or developer-first orchestration through managed products, cloud-provider partnerships or internal control planes rather than as standalone frameworks.
The agent market starts to look like cloud infrastructure
This is where the agent market starts to look less like the early chatbot market and more like enterprise cloud infrastructure. The winning vendors will not only have capable models. They will have identity integration, permission controls, audit logs, observability, workflow tooling, sandboxing, evaluation and a credible answer to who owns the control plane.
Indeed, the orchestration layer is but one part of the stack that the enterprise must fill in, and enterprises may actually decide to have different orchestration layers for agents working in different departments and functions.
As Nithya Lakshmanan, Chief Product Officer at revenue team AI orchestration startup Outreach.ai wrote in a statement to VentureBeat: “General-purpose orchestration platforms coordinate agent activity well, but they don’t carry the workflow-specific context that determines whether an agent’s action is correct for a given situation. In revenue workflows, an agent acting on incomplete deal history or missing buyer context will underperform and erode trust with users. The teams getting the most out of multi-agent systems are treating domain-specific data as the governance layer, with orchestration sitting on top. Most enterprises have chosen their orchestration stack, and what they’re now figuring out is how those platforms get access to the workflow context they need to make agents useful inside specific business functions.”
That is why Anthropic — which is increasingly launching its own domain-specific agents for finance and design, among other categories — is worth following closely. The company does not need to win the entire orchestration market tomorrow for its strategy to matter. It only needs to persuade a growing set of Claude enterprise customers to let Anthropic handle more of the surrounding machinery: tools, workflows, memory, execution and governance.
If it succeeds, Claude becomes more than a model in a multi-model portfolio. It becomes part of the infrastructure where enterprise work gets done.
That would put Anthropic in a more direct fight with OpenAI and Microsoft — not just over model quality, but over the operating layer of AI agents.
The narrow but important read
The safe interpretation of the VB Pulse data is narrow but important: Anthropic is not yet a major enterprise orchestration platform. Microsoft is. OpenAI is much closer. But Anthropic has registered its first measurable foothold at the orchestration layer, just as the market is deciding who should control agent execution.
For enterprise buyers, that may be the question that matters most in 2026. Not which model is best, but which provider gets to run the agent — and how hard it will be to leave once the agent is running.