
The BCI User Experience: Living With Brain Implants


Scott Imbrie vividly remembers the first time he used a robotic arm to shake someone’s hand and felt the robotic limb as if it were his own. “I still get goosebumps when I think about that initial contact,” he says. “It’s just unexplainable.” The moment came courtesy of a brain implant: an array of electrodes that lets him control a robotic arm and relays tactile sensations back to his brain.

Getting there took decades. In 1985, Imbrie had woken up in the hospital after a car accident with a broken neck and a doctor telling him he’d never use his hands or legs again. His response was an expletive, he says—and a decision. “I’m not going to allow someone to tell me what I can and can’t do.” With the determination of a headstrong 22-year-old, Imbrie gradually regained the ability to walk and some limited arm movement. Aware of how unusual his recovery was, the Illinois native wanted to help others in similar situations and began looking for research projects related to spinal cord injuries. For decades, though, he wasn’t the right fit, until in 2020 he was finally accepted into a University of Chicago trial.


Two photos. The first shows a man sitting in a chair with a large robotic arm extending in front of him. The second is a close-up of implants on the surface of a brain.  Scott Imbrie has shaken hands with a robotic arm controlled by a brain implant. The electrodes record neural signals that enable him to move the device and receive tactile feedback. Top: 60 Minutes/CBS News; Bottom: University of Chicago

Imbrie is part of a rarefied group: More people have gone to space than have received advanced brain-computer interfaces (BCIs) like his. But a growing number of companies are now attempting to move the devices out of neuroscience labs and into mainstream medical care, where they could help millions of people with paralysis and other neurological conditions. Some companies even hope that BCIs will eventually become a consumer technology.

None of that will be possible without people like Imbrie. He’s a member of the BCI Pioneers Coalition, an advocacy group founded in 2018 by Ian Burkhart, the first quadriplegic to regain hand movement using a brain implant.


That life-changing experience convinced Burkhart that BCIs will make the leap from lab to real world only if users help shape the technology by sharing their perspectives on what works, what doesn’t, and how the devices fit into daily life. The coalition aims to ensure that companies, clinicians, and regulators hear directly from trial participants.

Two images. The first is a photo of a man sitting in a wheelchair; attached to the top of his head is a device with a cable attached. The second is a medical image showing the location of electrodes in the brain.  Ian Burkhart founded the BCI Pioneers Coalition to ensure that companies developing brain implants hear directly from the people using them. Left: Andrew Spear/Redux; Right: Ian Burkhart

The group also serves as a peer-support network for trial participants. That’s crucial, because despite the steady drumbeat of miraculous results from BCI trials, receiving a brain implant comes with significant risks. Surgical complications, such as bleeding or infection in the brain, are possible. Even more concerning is the potential psychological toll if the implant fails to work as expected or if life-changing improvements are eventually withdrawn.

Researchers spell out these risks up front, and many prospective participants are put off, says John Downey, an assistant professor of neurological surgery at the University of Chicago and the lead on Imbrie’s clinical trial. “I would say, the number of people I talk to about doing it is probably 10 to 20 times the number of people that actually end up doing it,” he says.

What Happens in a BCI Trial?

BCI pioneers arrive at their unique status via a number of paths, including spinal cord injuries, stroke-induced paralysis, and amyotrophic lateral sclerosis (ALS). The implants they receive come from Blackrock Neurotech, Neuralink, Synchron, and other companies, and are being tested for restoring limb function, controlling computers and robotic arms, and even restoring speech.


Many of the implants record signals from the motor cortex—the part of the brain that controls voluntary movements—to move external devices. Some others target the somatosensory cortex, which processes sensory signals from the body, including touch, pain, temperature, and limb position, to re-create tactile sensation.

Ease of use depends heavily on the application. Restoring function to a user’s own limbs or controlling robotic arms involves the most difficult learning curve. In early sessions, participants watch a virtual arm reach for objects while they imagine or attempt the same movement. Researchers record related brain signals and use them to train “decoder” software, which translates neural activity into control signals for a robotic arm or stimulation patterns for the user’s nerves or muscles.
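As described above, the calibration step amounts to supervised learning: neural activity recorded during imagined movements is paired with the intended movement, and a mapping is fit between the two. The sketch below, in Python with synthetic data, shows the simplest published form of such a decoder, a regularized linear regression from per-channel firing rates to 2-D velocities. The channel count, the linear model, and all variable names are illustrative assumptions, not any lab's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for one calibration session: firing rates from 96
# electrode channels over 500 time bins, paired with the 2-D velocity of
# the virtual arm the participant was watching and imagining moving.
n_bins, n_channels = 500, 96
true_map = rng.normal(size=(n_channels, 2))
firing_rates = rng.poisson(5.0, size=(n_bins, n_channels)).astype(float)
velocities = firing_rates @ true_map + rng.normal(scale=0.5, size=(n_bins, 2))

# Fit a linear decoder with ridge regression:
# W = (X^T X + lambda I)^(-1) X^T Y
lam = 1.0
X, Y = firing_rates, velocities
W = np.linalg.solve(X.T @ X + lam * np.eye(n_channels), X.T @ Y)

# Decoding: a fresh bin of neural activity becomes a velocity command
# for the robotic arm or cursor.
new_bin = rng.poisson(5.0, size=n_channels).astype(float)
command = new_bin @ W
print(command.shape)  # (2,)
```

Real decoders are often Kalman filters or neural networks trained on the same kind of paired data, but the calibration loop is the same: record, fit, deploy.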

Paralyzed in a 2010 swimming accident, Burkhart took part in a trial conducted by Battelle Memorial Institute and Ohio State University from 2014 to 2021. His implant recorded signals from his motor cortex as he attempted to move his hand, and the system relayed those commands to electrodes in his arm that stimulated the muscles controlling his fingers.

A man seated at a desk has electronics wrapped around his right arm. He’s holding a device shaped like a guitar and looking at a screen showing the fretboard of a guitar. Ian Burkhart, who is paralyzed from the chest down, received a brain implant that routed neural signals through a computer to his paralyzed muscles, enabling him to play a video game. Battelle

Getting the system to work seamlessly took time, says Burkhart, and initially required intense concentration. Eventually, he could shift his focus from each individual finger movement to the overall task, allowing him to swipe a credit card, pour from a bottle, and even play Guitar Hero.


Training a decoder is also not a one-and-done process. Systems must be regularly recalibrated to account for “neural drift”—the gradual shift in a person’s neural activity patterns over time. For complex tasks like robotic arm control, researchers may have to essentially train an entirely new decoder before each session, which can take up to an hour.

A man sits in a wheelchair surrounded by screens and electrical equipment. A device is attached to the top of his head, and a wire extends from it. Two other men stand in the room wearing masks.  Austin Beggin says that testing a BCI is hard work, but he adds that moments like petting his dog make it all worth it. Daniel Lozada/The New York Times/Redux

Even after the system is ready, using the device can be taxing, says Austin Beggin, who was paralyzed in a swimming accident in 2015 and now participates in a Case Western Reserve University trial aimed at restoring hand movement. “The mental work of just trying to do something like shaking hands or feeding yourself is 100-fold versus you guys that don’t even think about it,” he says.

It’s also a serious time commitment. Beggin travels more than 2 hours from his home in Lima, Ohio, to Cleveland for two weeks every month to take part in experiments. All the equipment is set up in the house he stays in, and he typically works with the researchers for 3 to 4 hours a day. The majority of the experiments are not actually task-focused, he says, and instead are aimed at adjusting the control software or better understanding his neural responses to different stimuli.

But the BCI users say the hard work is worth it. Beyond the hope of restoring lost function, many feel a strong moral obligation to advance a technology that could help others. Beggin compares the pioneers to the early astronauts who laid the groundwork for the lunar landings. “We’re some of the first astronauts just to get shot up for a couple of hours and come back down to earth,” he says.


The Emotional Impact of BCIs

Speak to BCI early adopters and a pattern emerges: The biggest benefits are often more emotional than practical. Using a robotic arm to feed oneself or control a computer is clearly useful, but many pioneers say the most meaningful moments are the ones the experiment wasn’t even trying to produce. Beggin counts shaking his parents’ hands for the first time since his injury and stroking his pet dachshund as among his favorite moments. “That stuff is absolutely incredible,” he says.

Neuralink participant Alex Conley, who broke his neck in a car accident in 2021, uses his implant to control both a robotic arm and computers, enabling him to open doors, feed himself, and handle a smartphone. But he says the biggest boost has come from using computer-aided design software.

A former mechanic, Conley began using the software within days of receiving his implant to design parts that could be fabricated on a 3D printer. He has designed everything from replacement parts for his uncle’s power tools to bumpers for his brother-in-law’s truck. “I was a very big problem solver before my accident, I was able to fix people’s things,” he says. “This gives me that same little burst of joy.”

Two photos show former U.S. president Barack Obama with a man seated in a wheelchair that has a robotic arm mounted to it. The first photo shows their whole bodies, the second is a close-up of a fist bump between Obama and the robotic hand. BCI user Nathan Copeland used a robotic arm to get a fist bump from then-President Barack Obama in 2016. Jim Watson/AFP/Getty Images

The outside world often underestimates those little wins, says Nathan Copeland, who holds the record for the longest-running functional brain implant. After breaking his neck in a car accident in 2004, he joined a University of Pittsburgh BCI trial in 2015 and has since used the device to control both computers and a robotic arm.


After he uploaded a video to Reddit of himself playing Final Fantasy XIV, one commenter criticized him for not using his device for more practical tasks. Copeland says people don’t understand that those lighthearted activities also matter. “A lot of tasks that people think are mundane or frivolous are probably the tasks that have the most impact on someone that can’t do them,” he says. “Agency and freedom of expression, I think, are the things that impact a person’s life the most.”

Nathan Copeland plays Final Fantasy XIV using his brain implant to control the game character.

When Brain Implants Become Life-Changing

This perspective resonates with Neuralink’s first user, Noland Arbaugh—paralyzed from the neck down after a swimming accident in 2016. After receiving his implant in January 2024, he was able to control a cursor within minutes of the device being switched on. A few days later, the engineers let him play the video game Civilization VI, and the technology’s potential suddenly felt real. “I played it for 8 hours or 12 hours straight,” he says. “It made me feel so independent and so free.”

A man seated in a wheelchair looks at the screen of a laptop that’s mounted on his wheelchair. Before receiving his Neuralink implant, Noland Arbaugh used mouth-operated devices to control a computer. He says the BCI is more reliable and enables him to do many more things on his own. Rebecca Noble/The New York Times/Redux

But the technology is also providing more practical benefits. Before his implant, Arbaugh relied on a mouth-held typing stick and a mouth-controlled joystick called a quadstick, which uses sip-or-puff sensors to issue commands. But the fiddliness of this equipment required constant caregiver support. The Neuralink implant has dramatically increased the number of things he can do independently. He says he finds great value in not needing his family “to come in and help me 100 times a day.”


For Casey Harrell, the technology has been even more transformative. Diagnosed with ALS in 2020, the climate activist had just welcomed a baby daughter and was in the midst of a major campaign, pressuring a financial firm to divest from companies that had poor environmental records.

Three photos. The first shows a person in a wheelchair outdoors, surrounded by green foliage and soft sunlight. The second is a close-up of a bald head with wired brain-computer interface sensors attached, in front of a monitor. The third shows a person using a brain-computer interface to control text on a monitor. Casey Harrell was able to communicate again within 30 minutes of his BCI being switched on. The device translates his neural signals quickly enough for him to hold conversations. Ian Bates/The New York Times/Redux

“Every morning we’d wake up and there’d be a new thing he couldn’t do, a new part of his body that didn’t work,” says his wife, Levana Saxon. Most alarming was his rapid loss of speech, which, among other things, left him unable to indicate when he was in pain. Then a relative alerted him to a clinical trial at the University of California, Davis, using BCIs to restore speech. He immediately signed up.

The device, implanted in July 2023, records from the brain region that controls muscles involved in talking and translates these signals into instructions for a voice synthesizer. Within 30 minutes of it being switched on, Harrell could communicate again. “I was absolutely overwhelmed with the thought of how this would impact my life and allow me to talk to my family and friends and better interact with my daughter,” he says. “It just was so overwhelming that I began to cry.”

While earlier assistive technology limited him to short, direct commands, Harrell says the BCI is fast enough that he can hold a proper conversation, and he’s been able to resume work part-time.


What’s Holding BCI Technology Back?

BCI technology still has limits. Most trial participants using Blackrock Neurotech implants can operate their devices only in the lab because the systems rely on wired connections and racks of computer hardware. Some users, including Copeland and Harrell, have had the equipment installed at home, but they still can’t leave the house with it. “That would be a big unlock if I was able to do so,” says Harrell.

The academic nature of many trials creates additional constraints. Pressure to publish and secure funding pushes researchers to demonstrate peak performance on narrow tasks rather than build more versatile and reliable systems, says Mariska Vansteensel, who runs BCI studies at the University Medical Center Utrecht in the Netherlands. She says that investigating the technology’s limits or repeating an experiment in new patients is “less rewarded in terms of funding.”

In a clinical trial, Scott Imbrie uses a BCI to control a robotic arm, using signals from his motor cortex to make it move a block. University of Chicago

One of Imbrie’s biggest frustrations is the rapid turnover in experiments. Just as he begins to get proficient at one task, he’s asked to switch to the next. Study designs also mean that much of the users’ time is spent on mundane work required to fine-tune the system.


Perhaps the biggest issue is that trials are often time-limited. That’s partly because scar tissue from the body’s immune response to the implant can gradually degrade signal quality. But constraints on funding and researcher availability can also make it impossible for users to keep using their BCIs after their trials end, even when the technology is still functional.

Ian Burkhart’s BCI enables him to grasp objects, pour from a bottle, and swipe a credit card.

Burkhart has firsthand experience. His trial was extended, but the implant was eventually removed after he got an infection. He always knew the trial would end, but it was nonetheless challenging. “It was a little bit of a tease where I got to see the capability of the restoration of function,” he says. “Now I’m just back to where I was.”

The Push to Commercialize BCIs

Progress is being made in transitioning the technology from experimental research devices to fully-fledged medical products that could help users in their everyday lives. Most academic BCI research has relied on Blackrock Neurotech’s Utah Arrays, which typically feature 96 needlelike electrodes that penetrate the brain’s surface. The implant is connected to a skull-mounted pedestal that’s wired to external hardware. But some of the newer devices are sleeker and less invasive.


Neuralink’s implant houses its electronics and rechargeable battery in a coin-size unit, roughly the diameter of a quarter or a one-euro piece, connected to flexible electrode threads inserted into the brain by a robotic “sewing machine.” The implant is mounted in a hole cut into the skull and charges and transfers data wirelessly. Synchron takes a different approach, threading a stent-like implant through blood vessels into the motor cortex. This “stentrode” connects by wire to a unit in the chest that powers the implant and transmits data wirelessly.

Two photos. The first shows a bearded man in a red T-shirt using a laptop at a kitchen table. The second shows him using a large on-screen keyboard to type messages on a tablet computer. Rodney Gorham can use his Synchron implant to control not just a computer, but also smart devices in his home like an air conditioner, fan, and smart speaker. Rodney Decker

Neuralink’s decoder runs on a laptop, while Synchron deploys a smartphone-size signal processing unit as a wireless bridge to the user’s devices, which allows them to use their implants at home and on the move. The companies have also developed adaptive decoders that use machine learning to adjust to neural drift on the fly, reducing the need for recalibration.
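The adaptive decoders mentioned here are proprietary, but the general idea, continually nudging the decoder's weights as neural statistics drift instead of refitting from scratch each session, can be illustrated with a small gradient-style update. This is a generic sketch, not Neuralink's or Synchron's actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)
n_channels = 96

# Decoder weights from an initial calibration (random placeholder here).
W = rng.normal(scale=0.1, size=(n_channels, 2))

def adapt_step(W, x, y_intended, lr=1e-5):
    """One online update: nudge the decoder toward the inferred intent.

    x           -- firing rates for the current time bin, shape (n_channels,)
    y_intended  -- estimated intended velocity (e.g. "toward the target
                   the user is reaching for"), shape (2,)
    A single gradient step on squared error; stands in for whatever
    proprietary adaptation the vendors actually run.
    """
    error = x @ W - y_intended          # decoding error, shape (2,)
    return W - lr * np.outer(x, error)  # rank-1 weight correction

# Simulated use: as neural statistics drift, small continual updates
# keep the decoder tracking without a full recalibration session.
for _ in range(200):
    x = rng.poisson(5.0, size=n_channels).astype(float)
    y_intended = rng.normal(size=2)
    W = adapt_step(W, x, y_intended)

print(W.shape)  # (96, 2)
```

The design point is that each update is cheap and happens during normal use, which is what lets these systems avoid the hour-long per-session retraining described earlier.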

Making these devices truly user-friendly will require technology that can interpret user context, says Kurt Haggstrom, Synchron’s chief commercial officer—including mood, attention levels, and environmental factors like background noise and location. This approach will require AI that analyzes neural signals alongside other data streams such as audio and visual input.

Last year, Synchron took a first step by pairing its implant with an Apple Vision Pro headset. When trial participant Rodney Gorham looked at devices such as a fan, a smart speaker, and an air conditioner, the headset overlaid a menu that enabled him to adjust the device’s settings using his implant.


Rodney Gorham uses his Synchron implant to turn on music, feed his dog, and more. Synchron BCI

Another way to reduce cognitive load is to detect higher-order signals of intent in neural data rather than low-level motor commands, says Florian Solzbacher, cofounder and chief scientific officer of Blackrock Neurotech. For instance, rather than manually navigating to an email app and typing, the user could simply think about sending an email, and the system would open the app with the message already prepopulated, he says.

Durability may prove a thornier problem to solve, UChicago’s Downey says. Current implants last around a decade—well short of a lifelong solution. And with limited real estate in the brain, replacement is only possible once or twice, he says.

Rapid technological progress also raises difficult decisions about whether to get a BCI implant now or wait for a more advanced device. This was a major concern for Gorham’s wife, Caroline. “I was hesitant. I didn’t want him to go on the trial but maybe a future one,” she says. “It was my fear of missing out on future upgrades.”


Will Brain Implants Ever Become Consumer Tech?

Some executives have raised the prospect of BCIs eventually becoming consumer devices. Neuralink founder Elon Musk has been particularly vocal, suggesting that the company’s implants could replace smartphones, let people save and replay memories, or even achieve “symbiosis” with AI.

This kind of talk inspires mixed feelings in users. The hype brings visibility and funding, says Beggin, but could divert attention from medical users’ needs. Copeland worries that consumer branding could strip the devices of insurance coverage and that rising demand may make it harder to access qualified surgeons.

A man, seen in profile, sits in a wheelchair. Noland Arbaugh, the first recipient of Neuralink’s BCI, says that using the implant to control a computer made him feel independent and free. Steve Craft/Guardian/eyevine/Redux

There are also concerns about how data collected by BCI companies will be handled if the devices go mainstream. As a trial participant, Arbaugh says he’s comfortable signing away his data rights to advance the technology, but he thinks stronger legal protections will be needed in the future. “Does that data still belong to Neuralink? Does it belong to each person? And can that data be sold?” he asks.

Blackrock’s Solzbacher says the company remains focused on the medical applications of the technology. But he also believes it is building a “universal interface to any kind of a computerized system” that may have broader applications in the future. And he says the company owes it to users not to limit them to a bare-bones assistive technology. “Why would somebody who’s got a medical condition want to get less than something that somebody who’s able-bodied would possibly also take?” says Solzbacher.


The ever-optimistic Imbrie heartily agrees. Medical devices are invariably expensive, he says, but targeting consumer applications could push companies to keep devices simple and affordable while continuing to add features. “I truly believe that making it a consumer-available product will just enhance the product’s capabilities for the medical field,” he says.

Imbrie is on a mission to refocus the conversation around BCIs on the positives. While concerns about risks are valid, he worries that the alarming language often used to describe brain implants discourages people from volunteering for trials that could help them.

“I remember laying there in the bed and not being able to move,” he says, “and it was really dehumanizing having to ask someone to do everything for you. As humans, we want to be independent.”


Beef: When to Watch Season 2 on Netflix


Expect more bottled-up frustration, resentment and insecurity this week as Netflix‘s anthology dramedy, Beef, returns for a second season with a fresh cast and setting. 

After picking up a trio of Emmy awards for its debut season, which starred Steven Yeun and Ali Wong, director and screenwriter Lee Sung Jin starts from a clean slate with an entirely different talent lineup. Great Gatsby star Carey Mulligan and Frankenstein’s Oscar Isaac lead the story as Joshua and Lindsay Martín, a couple whose marriage is more than rocky. Cailee Spaeny and Charles Melton play a newly engaged couple, Ashley and Austin, who both work under the Martíns.

Produced by A24 and featuring a soundtrack by Billie Eilish’s brother Finneas, Beef is set for more dark humor as season 2 delivers another high-stakes feud. Read on to find out more on how to stream Beef’s second season this month.


What’s the plot for Beef Season 2?

As with Season 1, this sophomore run of Beef centers on a big argument. But this time it’s between Joshua and Lindsay, who are running an exclusive Southern California country club together.

What starts off as a seemingly contained dispute quickly spirals, however, dragging in their friends and employees, including two staff members — the Gen Z couple played by Melton and Spaeny who witness their bosses’ shocking confrontation. 

The fallout from the argument triggers a chain of events, leading to both couples fighting for the approval of the club’s powerful billionaire owner played by Youn Yuh-jung. 


How to watch Beef Season 2

Beef’s entire eight-episode second season will arrive on April 16 on Netflix. The streaming service recently raised prices on its subscription plans, which now range from $9 to $27 a month. 


Netflix offers three plans in the US, ranging from $9 a month for its ad-supported plan to $20 or $27 a month if you want to stream without ads.


In the UK, the current pricing is £6 for Standard with Ads, £13 for Standard (Ad-Free) and £19 for Premium (Ad-Free). For Australian viewers, Standard with Ads comes in at AU$10, Standard (Ad-Free) is priced at AU$21, while Premium (Ad-Free) will set you back AU$30.



April updates trigger BitLocker key prompts on some servers



Microsoft confirmed on Tuesday that some Windows Server 2025 devices will boot into BitLocker recovery after installing the April 2026 KB5082063 Windows security update.

BitLocker is a Windows security feature that encrypts storage drives to prevent data theft. Windows computers typically enter BitLocker recovery mode after hardware changes or events such as TPM (Trusted Platform Module) updates, to regain access to protected drives that have not been unlocked via the default unlock mechanism.

“Some devices with an unrecommended BitLocker Group Policy configuration might be required to enter their BitLocker recovery key on the first restart after installing this update,” Microsoft said.


“In this scenario, the BitLocker recovery key only needs to be entered once — subsequent restarts will not trigger a BitLocker recovery screen, as long as the group policy configuration remains unchanged.”

However, as the company explained, this only happens for very specific configurations, on systems where all the following conditions are met:

  1. BitLocker is enabled on the OS drive.
  2. The Group Policy “Configure TPM platform validation profile for native UEFI firmware configurations” is configured, and PCR7 is included in the validation profile (or the equivalent registry key is set manually).
  3. System Information (msinfo32.exe) reports that the Secure Boot State PCR7 Binding is “Not Possible”.
  4. The Windows UEFI CA 2023 certificate is present in the device’s Secure Boot Signature Database (DB), making the device eligible for the 2023‑signed Windows Boot Manager to be made the default.
  5. The device is not already running the 2023-signed Windows Boot Manager.
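The five conditions reduce to a conjunction, with the last condition negated. Purely as an illustration of that gating logic (the function name and signature are hypothetical, not part of any Microsoft tooling):

```python
def kb5082063_recovery_prompt_expected(
    bitlocker_on_os_drive: bool,
    pcr7_in_tpm_validation_profile: bool,
    pcr7_binding_not_possible: bool,
    uefi_ca_2023_in_secure_boot_db: bool,
    running_2023_boot_manager: bool,
) -> bool:
    """True when a device matches all five listed conditions.

    Arguments map one-to-one to conditions 1 through 5 above.
    """
    return (
        bitlocker_on_os_drive
        and pcr7_in_tpm_validation_profile
        and pcr7_binding_not_possible
        and uefi_ca_2023_in_secure_boot_db
        and not running_2023_boot_manager  # condition 5 is a negation
    )

# A device already running the 2023-signed Boot Manager is not affected:
print(kb5082063_recovery_prompt_expected(True, True, True, True, True))   # False
# One still on the older Boot Manager, with everything else true, is:
print(kb5082063_recovery_prompt_expected(True, True, True, True, False))  # True
```

In practice admins would verify conditions 2 through 4 via Group Policy, msinfo32, and the Secure Boot signature database rather than passing booleans, but the decision about whether a device is affected is exactly this conjunction.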

Microsoft added that this known issue is unlikely to affect personal devices, as impacted configurations are typically found on systems managed by enterprise IT teams.

BitLocker recovery screen (Microsoft)

The company is now working on a solution to this issue and has shared temporary workarounds that allow installation of this month’s security updates.

Admins are advised to remove the Group Policy configuration before deploying the KB5082063 update, and to ensure that BitLocker bindings use the PCR7 profile by following the steps Microsoft outlines.

Those who can’t remove the PCR7 group policy before installing can apply a Known Issue Rollback (KIR) on affected devices to prevent the automatic switch to the 2023 Boot Manager and to avoid triggering BitLocker recovery.

In May 2025, Microsoft released emergency updates to address a similar issue that was causing Windows 10 systems to boot into BitLocker recovery after installing the May 2025 security updates.

One year earlier, in August 2024, Microsoft fixed another known issue triggering BitLocker recovery prompts across all supported Windows versions after installing the July 2024 Windows security updates.


In August 2022, Windows devices also became stuck at a BitLocker recovery prompt after installing the KB5012170 security update.




Google leaders including Demis Hassabis push back on claim of uneven AI adoption internally


A viral post on X from veteran programmer and former Google engineer Steve Yegge set off a rhetorical firestorm this week, drawing sharp public rebuttals from some of Google’s most prominent AI leaders and reopening a sensitive question for the company: how deeply are its own engineers really using the latest generation of AI coding tools?

The debate began after Yegge summarized what he said was the view of a friend, a current and longtime Google employee (or Googler), who claimed that internal AI adoption at the Gemini maker looks much more ordinary and less cutting-edge than outsiders might expect.

Yegge said his Googler friend claimed Google engineering mirrors an “average” industry pattern of a 20%-60%-20% split: a small group of outright AI refusers (20%), a much larger middle still relying mainly on simpler chat and coding-assistant workflows (60%), and another small group of AI-first, cutting-edge engineers using agentic tools extensively and mastering them (20%).

A VentureBeat search of X using its parent company’s AI assistant Grok found that Yegge’s April 13 post spread quickly, topping 4,500 likes, 205 quote posts, 458 replies and 1.9 million views as of April 14.


We’ve reached out to Google for comment on the claims and will update when we receive a response.

A veteran, outspoken Googler voice

Why did the opinion of Yegge’s unnamed Googler friend land so hard? In part because Yegge is not just another commentator taking shots from the sidelines.

He spent about 13 years at Google after earlier stints at Amazon and GeoWorks, later joined Grab, and then became head of engineering at Sourcegraph in 2022. He has long been known in software circles for widely read essays on programming and engineering culture, and for an earlier internal Google memo that accidentally became public in 2011 and drew broad media attention.

That history helps explain why engineers and executives still take his critiques seriously, even when they reject them.


Yegge has built a reputation over many years as a blunt insider-outsider voice on software culture, someone with enough standing in the industry that his judgments can travel fast, especially when they touch nerves inside big technology companies.

Wikipedia’s summary of his career notes his long Google tenure and the outsized attention his blog posts and prior Google critiques have received.

Unpacking Yegge’s friend’s argument

In this case, Yegge’s argument was not simply that Google uses too little AI. It was that the company’s adoption may be uneven, culturally constrained and less transformed than its branding implies.

His friend supposedly argued that some Googlers could not use Anthropic’s Claude Code because it was framed as “the enemy,” and that Gemini was not yet sufficient for the fullest agentic coding workflows. He contrasted Google with what he described as a smaller set of companies moving much faster.


Pushback from Hassabis and current Googlers

The first major pushback came from Demis Hassabis, the co-founder and CEO of Google DeepMind, who replied directly and forcefully. “Maybe tell your buddy to do some actual work and to stop spreading absolute nonsense. This post is completely false and just pure clickbait,” Hassabis wrote.

Other Google leaders followed with lengthier defenses.

Addy Osmani, a director at Google Cloud AI, wrote that Yegge’s account “doesn’t match the state of agentic coding at our company.” He added, “Over 40K SWEs use agentic coding weekly here.”

Osmani said Googlers have access to internal tools and systems including “custom models, skills, CLIs and MCPs,” and pushed back on the idea that Google employees are sealed off from outside models, writing that “folks can even use @AnthropicAI’s models on Vertex” and concluding that “Google is anything but average.”


Other current Google employees reinforced that message. Jaana Dogan, a software engineer at Google, wrote in a quote tweet: “Everyone I work with uses @antigravity like every second of the day,” later following up with another X post stating: “Unpopular opinion: If you think tokens burned is a productivity metric, no one should take you seriously. Imagine you are a top 0.0001% writer and they are only counting the tokens you produce.”

Paige Bailey, a DevX engineering lead at Google DeepMind, said teams had agents “running 24/7.”

Several other Google and DeepMind figures also challenged Yegge’s characterization, some disputing the factual basis of his claims and others suggesting he lacked visibility into current internal usage.

Yegge’s rebuttal

Yegge, for his part, did not retreat. In a follow-up to Hassabis, he wrote, “I’m not trying to misrepresent anyone,” but argued that by his own standard for advanced AI adoption, Google still does not appear to be doing especially well.


He pointed to token usage and the replacement of older development habits with truly agentic workflows as the more meaningful benchmark, and said he would be willing to retract his criticism if Google could show its engineers were operating at that level.

AI adoption vs. AI transformation

That leaves the core dispute unresolved, but clearer. This is less a fight over whether Google engineers use AI at all than a fight over what should count as meaningful adoption.

Googlers are pointing to scale, weekly usage and the availability of internal and external tools. Yegge is arguing that those measures may capture broad exposure without proving a deeper change, an AI transformation, in how engineering work gets done. The clash reflects a wider industry split between visible usage metrics and more transformative, power-user behavior.

For Google, the subject is especially sensitive. Yegge has criticized the company before, including in a 2018 essay explaining why he left, where he argued Google had become too risk-averse and had lost much of its ability to innovate.


If his latest critique had come from a lesser-known poster, it might have faded. Coming from a former longtime Google engineer with a record of memorable public criticism, it instead drew direct responses from some of the company’s top AI figures — and turned a single post into a broader public argument about whether Google’s AI leadership is as deep internally as it looks from the outside.


The CDC Doesn’t Want You To See A CDC Report On How Effective COVID Vaccines Are


from the why-not? dept

One of the most remarkable staffing decisions of the second Trump administration has been to allow the anti-vaxxers to run America’s health agencies. While RFK Jr. and his cadre of lieutenants prattle on about how they’re going to be super transparent, data-driven stewards of American health, their actions have put the lie to all of it. Vague statements about diseases, disinformation about the cause and source of other diseases, and missed appointment deadlines for key personnel have all combined to create an HHS full of chaos, confusion, and the storied practice of CYA.

But these are ideological people, with some of the stupidest possible ideologies governing their actions. That’s how you get a situation where the CDC produces a report on the efficacy of recent COVID vaccines, following commonly used methodology, only to have the acting CDC director bury the report because he doesn’t like what it finds.

CDC scientists and insiders told the Post that the COVID-19 vaccine study went through the agency’s standard scientific review process and was slated for publication on March 19 in the agency’s Morbidity and Mortality Weekly Report (MMWR). But acting CDC director Jay Bhattacharya blocked the scheduled publication and is holding the study, claiming he has concerns about its methodology.

According to a summary the Post obtained, the study concluded that between September and December of last year, healthy adults vaccinated with a 2025–2026 COVID-19 vaccine saw the risk of emergency department or urgent care visits cut by 50 percent, and the risk of COVID-19-associated hospitalizations cut by 55 percent, compared with healthy adults who did not get this season’s shot.

Bhattacharya reportedly took issue with the test-negative design of the study, which is a well-established method to examine real-world data on vaccine effectiveness. This type of observational study looks at people who have symptoms related to the disease of interest (in this case, COVID-19) and have the same test-seeking behavior. Those who test positive for the disease of interest become positive cases in the study, and those who test negative are test-negative controls. Researchers then compare the two groups based on vaccination status.
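The arithmetic behind a test-negative estimate is simple enough to sketch. The counts below are hypothetical, purely to illustrate how vaccine effectiveness falls out of the comparison (it is one minus the odds ratio of vaccination among cases versus controls):

```python
# Test-negative design in miniature: vaccine effectiveness (VE) is
# estimated as 1 minus the odds ratio of vaccination among cases
# (test-positive) versus controls (test-negative).
# All counts below are hypothetical, for illustration only.

def vaccine_effectiveness(vacc_pos, unvacc_pos, vacc_neg, unvacc_neg):
    """VE = 1 - (odds of vaccination in cases / odds of vaccination in controls)."""
    odds_cases = vacc_pos / unvacc_pos      # vaccinated vs. unvaccinated among positives
    odds_controls = vacc_neg / unvacc_neg   # vaccinated vs. unvaccinated among negatives
    return 1 - odds_cases / odds_controls

# Hypothetical clinic visits: 100 vaccinated and 400 unvaccinated people
# test positive; 450 vaccinated and 900 unvaccinated test negative.
ve = vaccine_effectiveness(100, 400, 450, 900)
print(f"Estimated VE: {ve:.0%}")  # 1 - (0.25 / 0.5) = 50%
```

With those made-up counts the estimate happens to land at 50 percent, the same order of protection the blocked study reported for emergency department visits.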


So there is no misunderstanding, Bhattacharya is completely full of shit here. He is not holding this CDC study back because he’s concerned about the test-negative methodology. I know that because the same CDC, under the same acting director, published a study on the efficacy of flu shots that used the same test-negative method last month, a week before this same COVID vaccine study was to be published. If Bhattacharya had a problem with the method for the COVID study, why didn’t he have the same problem with the flu shot study?

The answer is obvious: the methodology is not the issue here. Instead, the problem is the mRNA nature of most COVID vaccines. That, combined with Bhattacharya’s criticisms of COVID vaccines specifically, and his support for changing their schedule and availability, means this study is at odds with his ideology and might be embarrassing for him personally.

Dan Jernigan, who headed CDC’s influenza division for six years and resigned last year in protest of Kennedy’s political interference at the agency, suggested to the Post that stalling the paper fits with Kennedy’s anti-vaccine agenda.

“The secretary has already taken steps to try and remove the availability of the vaccine from children and others, so if you’re putting out an MMWR that the vaccine is effective at preventing hospitalizations and medical care visits … that message is not in line with the direction you’ve been taking with the removal of the vaccine,” he said.

And there you have it: politics and personal CYA injected into matters of public health. Having those in charge of our health agencies prioritize their own personal stature over science, literally burying studies simply because they don’t like what they show, is certainly not going to lead to a healthier America.


Filed Under: anti-vaxxers, cdc, covid, covid vaccines, jay bhattacharya, rfk jr.


‘A total disappointment’: Some Galaxy Watch owners are seeing major battery drains after the latest software update



  • Many Galaxy Watch owners are reporting a battery drain issue
  • It seems to have stemmed from a recent software update
  • Samsung hasn’t yet commented but Google Play Services might be the culprit

Samsung makes some of the best smartwatches around, but we’re seeing multiple reports from Galaxy Watch owners about rapidly draining battery life — and the issue seems to be tied to a recent software update.

As spotted by Android Authority and others, users have taken to Reddit to complain about a sudden drop in battery life. Given the number of comments and upvotes the original thread has received, it seems quite a few people are having problems.


Do data and AI talent needs conflict with a workforce seeking stability?


The report shows that many organisations plan to increase the size of their data teams; however, many professionals say they are unlikely to change employers in 2026.

The Data Salaries Job Sentiment Analysis 2026 report, published by the Analytics Institute and SAS, examines the growing challenges facing organisations looking to expand their data capabilities. The companies spoke to 167 employees and Analytics Institute members across Ireland to collect the information.

What was discovered is that there is potentially an emerging AI skills mobility issue in which the heightened demand for data and AI expertise is colliding with a workforce that is increasingly choosing stability over job movement. 

The report found that, while 64pc of organisations have plans to increase the size of their data teams throughout 2026, 70pc of professionals who contributed their information explained that they are unlikely to change employers this year. 


“AI technologies are beginning to reshape roles within the data profession, but their impact is more evolutionary than disruptive,” said Lorcan Malone, chief executive of the Analytics Institute. 

He added, “Rather than replacing expertise, AI is augmenting it, increasing the need for strong governance, critical thinking and the ability to translate outputs into meaningful business outcomes. This makes continuous learning, adaptability, and targeted upskilling more important than ever.”

However, he noted retention has become a defining theme within the space, as job mobility has started to slow, tenure is increasing and professionals are placing greater emphasis on meaningful work, career progression and strong leadership. This, he explained, is leading to the creation of environments that support long-term development where engagement is central to sustaining high-performing data teams. 

Job satisfaction was found to be of key importance to employees across the sector, with nearly half of participants agreeing that they enjoy their role a lot or moderately, and the remainder stating that they enjoy their role “a little”. Meaningful work (65pc), a supportive boss (49pc) and hybrid work (38pc) were also found to be among the most important workplace features or benefits.


Of the minority of professionals who are open to moving, perhaps surprisingly, salary alone was not the key motivator. The study explained that 41pc of professionals would move for a greater challenge. Many want to work on AI and data projects that have real impact, mainly on projects that go beyond pilots or cost-cutting exercises and drive genuine innovation.

“With only around 30pc of professionals willing to move, organisations face competition for a limited group of talent,” said Alan McGlinn, the director for financial services UK&I and Ireland country lead at SAS.

He said, “If companies are poaching talent from the same group, the pond could become very small. To grow capabilities and succeed with AI and data initiatives, companies need to invest in internal skills development, data literacy and training.”

How important is data strategy?

The share of organisations that see data as central to strategy rose from 45pc last year to 49pc in 2026, while the number of those who see data as important but not yet central declined slightly, from 36pc to 33pc. This, the report said, suggests that more companies are prioritising data within strategic decision-making.


Only 0.66pc reported that data is not important, which the report said highlights its “near-universal value”. Data visualisation and business intelligence reporting were found to be among the most critical technical skills in the sector, with 74pc of respondents identifying them as essential, as were project management (43pc) and machine learning and AI (33pc).

Data also suggested there are evolving trends in the tools used by data professionals. Excel at 77pc and SQL at 71pc remain the most widely used technologies, while Python adoption has grown significantly to 53pc, which the report said is reflective of the increasing use of open-source analytics tools across organisations.

Malone said, “Overall, the report shows that as organisations increasingly rely on data-driven insight to inform strategy and improve performance, the demand for skilled professionals remains strong. 

“Companies that can attract and retain individuals with both technical expertise and commercial understanding and develop skills internally where hiring is difficult, will be best positioned to unlock the full value of their data and AI investments.”




FCC just handed Netgear a de facto router monopoly in the US


The Federal Communications Commission has announced that Netgear has been given conditional approval that effectively exempts it from a previous ban on foreign-made networking routers. The conditional approval gives the company a de facto — though potentially temporary — monopoly on the selling and servicing of new consumer routers in the US.

“We’re pleased to share that Netgear is the first retail consumer router company to receive conditional approval from the Federal Communications Commission (FCC) as a trusted consumer router company,” Netgear CEO CJ Prober said in a statement. “As a US founded and headquartered company, Netgear is aligned with the vision for a more secure digital future for our customers. For the last thirty years, we have been, and continue to be, committed to leading the consumer router category for the United States and setting the bar for quality, performance, innovation and security.”

Both Netgear’s lines of Nighthawk and Orbi mesh routers are covered by the approval until October 1, 2027, which appears to mean that the company can continue to offer software updates to both lines and presumably release and sell new models in the future.

The FCC dramatically expanded the Covered List, a collection of communications equipment seen as posing a risk to national security, to cover all foreign-made routers in March 2026. The decision prevents companies that make routers outside of the US from introducing new foreign-made models and from pushing certain software updates to existing models after March 1, 2027. Confusingly, though, it doesn’t require anyone to replace their existing router or prevent those companies from selling routers they’ve already made. Receiving conditional approval is the definitive way companies can get off the list, but part of the FCC’s requirements for approval is that the company offer a plan to bring some or all of its manufacturing to the US — a theoretically costly decision.


Engadget has contacted Netgear for information about the US manufacturing plan it included in its application for conditional approval. We’ll update this article if we hear back.

The vast majority of router companies, even ones that are headquartered in the US like Netgear, build their routers in Asia. It’s not clear what makes Netgear’s currently foreign-made routers safer than, say, an Amazon Eero 7 or a Google Nest WiFi Pro. Until other companies are given conditional approval, though, Netgear is in a unique position.


Best GoPro Camera (2026): Compact, Budget, Accessories


The Top 5 GoPro Hero Cameras Compared

GoPros to Avoid

GoPro doesn’t sell anything older than the Hero 12, but there are plenty of Hero 11s and even Hero 10s out there for sale on the internet. We suggest avoiding them. They may work fine, but modern accessories designed for later models won’t work, and these cameras have likely been through the wringer. (They are action cameras, after all.)



Hero 11 Black

GoPro no longer sells the Hero 11, but it’s still commonly available on Amazon and other retailers. Unfortunately, it’s usually the same price as the Hero 12 (around $300) and therefore not worth buying.

The Best GoPro Accessories

Photograph: Scott Gilbertson

Should you buy a bundle? Generally, I say no. Get the camera, figure it out, and see how you end up using it. When you find yourself trying to solve a problem, start looking for an accessory. Here are some of my favorite things that I’ve tested and used, but if you have favorites you think I should try, drop a comment below.


A Good MicroSD card for $50: According to GoPro’s recommendations, you want a microSD card with a V30 or UHS-3 rating. That said, GoPros can be finicky about SD cards. I’ve had good luck with, and recommend, the Samsung linked here. Another card I’ve used extensively is the Sandisk Extreme Pro.

GoPro Media Mod for $100: By far my most-used accessory, the media mod does add some bulk, but in most cases this is more than made up for by the fact that you can plug in a real microphone (I use mine with a Rode Wireless). Sound quality is radically improved with this one. This may be less necessary if you get the Hero 12 or later, since those models do have support for Bluetooth mics.


GoPro Handlebar Mount for $40: I’ve been doing a lot more riding lately, and this mount pretty much lives on my bike these days. It’s been rock solid in my testing, and beats any of the third-party mounts I’ve tested.

GoPro Tripod Mount Adapters for $30: Unless you have the Hero 12 or 13, which have a tripod mount built-in, you’ll need a few of these to mount your GoPro to a tripod like the GorillaPod.


GoPro Floaty for $35: If you’re getting anywhere near the water, grab one of these. Trust me, you will drop your GoPro, and when you do, you will be glad you have this (unless the water is clear and you’re a good free diver). GoPro also makes a Floating Hand Grip ($23), which not only floats but has a leash for diving or surfing.

GoPro Selfie Stick for $80: This 48-inch extension pole collapses up surprisingly small and isn’t very heavy. It’s the best selfie stick I’ve used. I rarely use it for selfies, but it makes a great monopod on soft ground, like a sandy beach.

DaVinci Resolve Studio for $300: This is my video editing software of choice. There is a free version, but I got tired of converting media to fit the restrictions of the free version. Best money I ever spent when it comes to making better videos.

Do More With Your GoPro

GoPro Hero 13 Black in White with the Anamorphic Lens attached. Photograph: Scott Gilbertson

So you bought a GoPro, now what? Well aside from reading the manual and learning how to control it, the best thing to do is get out there and experiment. Here are a few suggestions and things I use my GoPro for regularly.

GoPro Labs: GoPro Labs is an alternative firmware for your GoPro Hero camera that enables all sorts of features and experiments you can’t get with the stock firmware. There is some risk of instability and bugs, but I’ve been using the Labs firmware for five years now and never had an issue. It’s like adding 10 new features to your GoPro for free. I’ll reference several of my favorites in the sections below, but you can see the full list of things you can do with GoPro Labs on the GoPro Labs website.

TimeLapse Videos: Since mounting the GoPro on my bike, this has become my most-used feature. GoPro’s time lapse is incredibly easy to use (compared to most mirrorless cameras, anyway), and with the Labs firmware you can do really long time lapse shots, over 24 hours if you have a battery pack to help power it.


Raise the Bitrate: By default the Hero 13 Black does not record at the highest bitrate. This is likely the reason your video looks mushy and not as clear and sharp as it should. Change that by going to the ProTune settings and picking “high” for the bitrate. If you want to go crazy, you can use Labs to raise your bitrate all the way to 200 (the “high” setting in the stock firmware is 100). The caveat is that, depending on your SD card, you may not be able to record that high. But every bit helps. The trade-off is that cranking up the bitrate does chew through battery and can also lead to overheating, so if you’re shooting in very warm conditions, you might want to dial this down. Some footage is better than no footage because the camera overheated and shut off.
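For a sense of what those bitrate settings demand from an SD card, here is a quick back-of-the-envelope conversion (a V30 rating guarantees 30 MB/s of sustained writes; camera bitrates are quoted in megabits per second):

```python
# Rough bitrate math for the settings above. A V30 card guarantees a
# sustained 30 MB/s write speed; bitrates are quoted in megabits/s.

def mbps_to_mb_per_s(mbps):
    """Convert megabits per second to megabytes per second."""
    return mbps / 8  # 8 bits per byte

def gb_per_minute(mbps):
    """Approximate storage consumed per minute of recording, in GB."""
    return mbps / 8 * 60 / 1000  # MB/s * 60 s, divided into GB

for bitrate in (100, 200):  # stock "high" vs. the Labs maximum, in Mbps
    print(f"{bitrate} Mbps -> {mbps_to_mb_per_s(bitrate):.1f} MB/s, "
          f"~{gb_per_minute(bitrate):.2f} GB per minute")
# 200 Mbps works out to 25 MB/s, inside a V30 card's 30 MB/s floor --
# but only if the card actually sustains its rating.
```

In other words, even the Labs-maxed 200 Mbps setting stays under the V30 guarantee on paper, which is why card quality, not the spec sheet, is usually what trips people up.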

Learn Manual Exposure: The Hero 13 Black gives you full control over exposure, so take advantage of it. Play with the exposure compensation especially (called EV Comp in the settings). Try dialing it down to -1 for midday shots. Also play with max ISO. The lower you can keep this, the better your footage will look, though because the GoPro kinda sucks in low light, there are some limits here.

Improve Sound with the Media Mod: It bears repeating, but the Media Mod is the best way to get good sound out of the GoPro without investing in Bluetooth mics (which is impractical in many mounting scenarios anyway). The only time the media mod leaves my GoPro is when I’m in the water (sadly, the Hero with the media mod installed is not at all waterproof).

Is Now a Good Time to Buy?


Earlier this week, GoPro announced its new Mission 1 cameras, which offer cinema-ready features in an action camera form. The Mission 1 cameras have a new image processor, the GP3. The last time GoPro updated its processor was in 2021, with the release of the Hero 10, which used the GP2.

The GP3 is a 5-nanometer system on a chip (SoC), which matches what we saw in Insta360 and DJI’s action cameras released late last year. More interesting is the claim that the GP3 will have “more than 2X the pixel processing power,” which would be what you want to handle 8K (or higher) footage. The Hero 13 Black is limited to 5.3K video.
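As a rough sanity check on that “2X” claim, here is the pixel-rate arithmetic, assuming standard 8K UHD (7680 x 4320) for the new cameras and GoPro’s 5.3K (5312 x 2988) for the Hero 13 Black:

```python
# Back-of-the-envelope check on the "2X pixel processing" claim:
# pixels per second at each camera's top 16:9 resolution at 60 fps.
# Resolutions assumed: 8K UHD = 7680x4320; GoPro's 5.3K = 5312x2988.

def pixels_per_second(width, height, fps):
    """Raw pixel throughput the processor must handle."""
    return width * height * fps

hero13 = pixels_per_second(5312, 2988, 60)    # Hero 13 Black at 5.3K60
mission1 = pixels_per_second(7680, 4320, 60)  # Mission 1 Pro at 8K60

print(f"5.3K60: {hero13 / 1e9:.2f} Gpx/s")
print(f"8K60:   {mission1 / 1e9:.2f} Gpx/s ({mission1 / hero13:.1f}x)")
# 8K60 pushes roughly 2.1x the pixels of 5.3K60, so "more than 2X" is
# about what the jump to 8K demands.
```

That ratio lands right where GoPro’s claim needs it to: roughly doubling pixel throughput is the minimum required just to move from 5.3K60 to 8K60.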

The Mission 1 cameras will have better low light performance, which GoPro has sucked at in the past, while retaining the small camera form factor. It’s worth noting, however, that GoPro has yet to mention pricing, and the Mission 1 and Mission 1 Pro aren’t available for preorder until May 21. The Mission 1 Pro ILS, along with some of the bundles, will not arrive until later this year (GoPro says Q3).

If you want a GoPro for the start of the summer, the Hero 13 Black is still fine. Otherwise, I will update this guide once I’ve tested one, or all, of the Mission cameras.



GoPro’s New MISSION 1 Line of Cinema Cameras Boast 8K Recording and Swappable Lens System


GoPro Mission 1 Cinema Camera
GoPro unveiled its MISSION 1 series today, a trio of compact cinema cameras designed to tackle both extreme action and demanding film work in a single robust chassis. The lineup includes the regular MISSION 1, MISSION 1 PRO, and MISSION 1 PRO ILS, all of which share the same cutting-edge 50-megapixel one-inch sensor, as well as a new GP3 CPU that pushes resolution, performance, and battery life further than anything GoPro has done before.



Each of the three models uses the same core components to give excellent low-light performance and up to 14 stops of dynamic range right out of the box. Individual pixels measure 1.6 micrometers at maximum resolution and combine to generate effective pixels of 3.2 micrometers while shooting 4K film, capturing substantially more light than rival action cameras’ small sensors. The GP3 chip handles the heavy lifting with power efficiency, preventing the cameras from overheating even while filming long, high-resolution clips, and allowing them to function for hours on a single battery charge.
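That 1.6-to-3.2 micrometer jump is consistent with 2x2 pixel binning; the binning factor is our assumption, since GoPro doesn’t name it, but the arithmetic is simple:

```python
# The 1.6 um -> 3.2 um jump is consistent with 2x2 pixel binning:
# doubling the pixel pitch quadruples the light-gathering area.
# (The binning factor is assumed; the article only quotes the two sizes.)

native_pitch_um = 1.6  # individual pixel size at full 50 MP resolution
bin_factor = 2         # assumed 2x2 binning when downsampling for 4K

binned_pitch_um = native_pitch_um * bin_factor
area_gain = bin_factor ** 2

print(f"Effective pixel pitch: {binned_pitch_um} um")   # 3.2 um, as quoted
print(f"Light per effective pixel: {area_gain}x native")
```

That 4x light-gathering area per effective pixel is where the claimed low-light advantage over smaller-sensor action cameras comes from.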


The video options are quite impressive, especially since the MISSION 1 PRO and MISSION 1 PRO ILS can record 8K at 60 frames per second in the regular 16:9 aspect, 4K at 240 frames per second, and 1080p at 960 frames per second for silky-smooth slow-motion views. Both models feature open-gate 4:3 recording at 8K30 and 4K120 resolutions, providing filmmakers greater options when cropping later. The MISSION 1 base model steps things back a bit to 8K at 30 frames per second, 4K at 120 frames per second, and 1080p at 240 frames per second, but retains the 4K120 open-gate option. Every model can shoot 50-megapixel RAW photographs at up to 60 frames per second in bursts, and all can capture 10-bit color in either a GP-Log2 profile for grading or HLG HDR for extremely bright highlights.



Battery life has just become a whole lot better, with the upgraded Enduro 2 cell easily lasting over 5 hours at 1080p30 and more than three hours at 4K30 on any of the models – and charging faster than previous generations. Even at the highest settings, such as 8K60 on the PRO models, the cameras will gladly run for more than an hour as long as there is enough airflow, thanks to the better thermal design. Storage remains simple with a microSD card, and the bodies are still waterproof down to a generous 20 meters without the need for an additional case.


The physical changes make daily use easier, since each camera now has a larger OLED screen on the rear and a front display for quick checks. The buttons are higher up, making them easier to press with gloves on, and the overall construction feels sturdy enough to survive rigorous use. Stabilization is implemented via the popular HyperSmooth method, which has been tuned to work well with larger sensor data sets. Audio has also advanced significantly, with four microphones on board capable of recording crisp 32-bit float sound, wind suppression, and compatibility for wireless mics or USB-C connected choices.

The Mission 1 Pro ILS includes a Micro Four Thirds lens mount, allowing you to use any prime lens while still benefiting from the camera’s full-fat HyperSmooth stabilisation. This opens up a world of possibilities for telephoto shooting, macro work, and custom glass, all without sacrificing the compact size or sturdy build quality that we’ve come to expect from GoPro. Meanwhile, the other two models include a fixed wide-angle lens with a 159-degree field of view and a retractable hood to reduce glare.

Mounting-wise, GoPro sticks with what we know and love, with built-in fingers and a magnetic latch that fits perfectly into their usual range of grips, cages, and housings. If you want to get a little more fancy, there’s a new point-and-shoot grip that includes cold-shoe mounts and a standard thread for easy setup. If you need even more flexibility, the extra media mods provide more input options.

GoPro plans to launch pre-orders for the Mission 1 and Mission 1 Pro on May 21, with the first units entering stores on May 28. The Mission 1 Pro ILS will be released later, in the third quarter, and we’ll know precisely how much each model costs once the specifics are finalized. For the time being, the company is pitching the entire line as a more affordable way to get your hands on some seriously compact cinema equipment.

Anthropic’s Claude Managed Agents gives enterprises a new one-stop shop but raises vendor ‘lock-in’ risk


Anthropic announced a new platform last week, Claude Managed Agents, which aims to cut out the more complex parts of AI agent deployment for enterprises and competes with existing orchestration frameworks.

Claude Managed Agents is also an architectural shift: enterprises, already burdened with orchestrating an increasing number of agents, can now choose to embed the orchestration logic in the AI model layer.

While this comes with some potential advantages, such as speed (Anthropic proposes its customers can deploy agents in days instead of weeks or months), it also hands more control over the enterprise’s AI agent deployments and operations to the model provider — in this case, Anthropic — potentially resulting in greater “lock-in” for the enterprise customer, leaving it more subject to Anthropic’s terms, conditions, and any subsequent platform changes.

But maybe that is worth it for your enterprise: Anthropic further claims that its platform “handles the complexity” by letting users define agent tasks, tools and guardrails with a built-in orchestration harness, without the enterprise needing to build sandboxed code execution, checkpointing, credential management, scoped permissions or end-to-end tracing itself.


The framework manages state, execution graphs and routing and brings managed agents to a vendor-controlled runtime loop.

Even before the release of Claude Managed Agents, new directional VentureBeat research showed that Anthropic was gaining traction at the orchestration level as enterprises adopted its native tooling. Claude Managed Agents represents a new attempt by the firm to widen its footprint as the orchestration method of choice for organizations.

Anthropic is surging in orchestration interest

Orchestration has emerged as an important segment for enterprises to address as they scale AI systems and deploy agentic workflows. 

VentureBeat directional research of several dozen firms for the first quarter of 2026 found that enterprises mostly chose existing frameworks, such as Microsoft’s Copilot Studio/Azure AI Studio, with 38.6% of respondents in February reporting using Microsoft’s platform. VentureBeat surveyed 56 organizations with more than 100 employees in January and 70 in February.


OpenAI closely followed at 25.7%. Both showed strong growth between the first two months of the year.


Anthropic, driven by increased interest in its offerings, such as Claude Code, over the past year, is putting up a fight. 

Adoption of the Anthropic tool-use and workflows API increased from 0% to 5.7% between January and February. This tracks closely with the growing adoption of Anthropic’s foundation models, showing that enterprises using Claude turn to the company’s native orchestration tooling instead of adding a third-party framework. 

While VentureBeat surveyed before the launch of Claude Managed Agents, we can extrapolate that the new tool will build on that growth, especially if it promises a more straightforward way to deploy agents.

Collapsing the external orchestration layer

Enterprises may find a streamlined, internal harness for agents compelling, but it does mean giving up certain controls.


Session data is stored in a database managed by Anthropic, increasing the risk that enterprises become locked into a system run by a single company. This may be less desirable for some firms, and it conflicts with their desire to move away from the locked-in software-as-a-service (SaaS) applications in their current stacks, a shift many hope AI will facilitate.

The specter of vendor lock-in means agent execution becomes model-driven rather than directed by the organization, happens in an environment enterprises don’t fully control, and yields behavior that is harder to guarantee.

It also opens the possibility of giving agents conflicting instructions, especially if prompting them with more context is the only control users can exert.

Agents could end up with two control planes: one defined by the enterprise's orchestration system through instructions, and another embedded as a skill in the Claude runtime.


This could pose an issue for highly sensitive and regulated workflows, such as financial analysis or customer-facing tasks. 

Pricing, control and competitive set

Balancing control with ease is one thing; enterprises must also weigh the cost structure of Claude Managed Agents.

Claude Managed Agents introduces a hybrid pricing model that blends token-based billing with a usage-based runtime fee.

This makes Managed Agents more dynamic, though less predictable, when estimating costs. Enterprises are charged a standard rate of $0.08 per hour while agents are actively running.


For example, at a $0.70-per-hour rate, processing 10,000 support tickets could cost up to $37, depending on how long each agent runs and how many steps it takes to complete a task.
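A hybrid bill of this shape is just the runtime fee plus token charges. The sketch below parameterizes both; the $0.08 hourly rate comes from the article, while the token prices and volumes are placeholders, since the article does not publish Managed Agents token rates.

```python
def hybrid_cost(runtime_hours, hourly_rate,
                input_tokens, output_tokens,
                input_price_per_m, output_price_per_m):
    """Hybrid pricing: usage-based runtime fee plus token-based billing."""
    runtime_fee = runtime_hours * hourly_rate
    token_fee = (input_tokens / 1_000_000) * input_price_per_m \
              + (output_tokens / 1_000_000) * output_price_per_m
    return runtime_fee + token_fee

# Standard runtime rate from the article: $0.08/hour.
# Token prices and volumes below are illustrative, not published figures.
cost = hybrid_cost(runtime_hours=10, hourly_rate=0.08,
                   input_tokens=5_000_000, output_tokens=1_000_000,
                   input_price_per_m=3.00, output_price_per_m=15.00)
print(f"${cost:.2f}")  # 0.80 runtime + 15.00 input + 15.00 output = $30.80
```

The unpredictability the article notes falls out of the first term: runtime hours depend on how long each agent runs, which the enterprise does not directly control.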

Microsoft, currently the leader according to VentureBeat’s directional survey, offers several orchestration offerings. Copilot Studio uses a capacity-based billing structure, so enterprises pay for blocks of interactions between users and agents rather than the number of steps an agent takes.

Microsoft's approach tends to be more predictable than Anthropic's: Copilot Studio starts at $200 per month for 25,000 messages.

Compared with competitors like OpenAI's Agents SDK, the picture is murkier. The Agents SDK is technically free to use as an open-source project, but OpenAI bills for the underlying API usage. Agents built and orchestrated with the Agents SDK using GPT-5.4, for example, will cost $2.50 per 1 million input tokens and $15 per 1 million output tokens.
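Putting the three pricing shapes side by side makes the difference concrete. The rates below are the ones quoted above; the per-task token counts and runtime hours are illustrative assumptions chosen only to show the arithmetic.

```python
# Microsoft Copilot Studio: capacity blocks ($200/month for 25,000 messages).
per_message = 200 / 25_000  # $0.008 per message, regardless of agent steps

# OpenAI Agents SDK (free) plus GPT-5.4 API rates quoted above.
def openai_task_cost(input_tokens, output_tokens):
    return input_tokens / 1e6 * 2.50 + output_tokens / 1e6 * 15.00

# Anthropic Managed Agents runtime fee at the standard $0.08/hour rate
# (token charges, billed separately, omitted here).
def anthropic_runtime_fee(hours):
    return hours * 0.08

print(per_message)                      # 0.008 per message
print(openai_task_cost(40_000, 4_000))  # 0.10 input + 0.06 output = 0.16
print(anthropic_runtime_fee(2))         # 0.16 for two agent-hours
```

The comparison shows why predictability differs: Microsoft's cost is fixed per message, OpenAI's scales with tokens consumed, and Anthropic's scales with wall-clock agent runtime.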


The enterprise decision

Claude Managed Agents does give a reprieve to enterprises that find deploying production agents too complicated: it reduces engineering overhead while adding speed and simplicity in a fast-changing enterprise environment.

But that comes with a choice: lose control, observability and portability and risk further vendor lock-in.

Anthropic has just made a case for its ecosystem becoming not only the foundation model of choice for enterprises but also their orchestration infrastructure. That makes it all the more imperative for enterprises to weigh that ease against reduced control.
