The U.S. Cybersecurity and Infrastructure Security Agency (CISA) has given government agencies four days to secure their systems against another Catalyst SD-WAN Manager vulnerability it flagged as actively exploited in attacks.
Catalyst SD-WAN Manager (formerly known as vManage) is network management software that helps admins monitor and manage up to 6,000 Catalyst SD-WAN devices from a single dashboard.
Cisco patched this information disclosure vulnerability (CVE-2026-20133) in late February, saying that it allows unauthenticated remote attackers to access sensitive information on unpatched devices.
“This vulnerability is due to insufficient file system access restrictions. An attacker could exploit this vulnerability by accessing the API of an affected system,” Cisco said at the time. “A successful exploit could allow the attacker to read sensitive information on the underlying operating system.”
One week later, the company revealed that two other security flaws it had patched the same day (CVE-2026-20128 and CVE-2026-20122) were being exploited in the wild.
Federal agencies ordered to patch by Friday
On Monday, CISA added CVE-2026-20133 to its Known Exploited Vulnerabilities (KEV) Catalog, “based on evidence of active exploitation,” and ordered Federal Civilian Executive Branch (FCEB) agencies to secure their networks by Friday, April 24.
“Please adhere to CISA’s guidelines to assess exposure and mitigate risks associated with Cisco SD-WAN devices as outlined in CISA’s Emergency Directive 26-03 and CISA’s Hunt & Hardening Guidance for Cisco SD-WAN Devices,” CISA said. “Adhere to the applicable BOD 22-01 guidance for cloud services or discontinue use of the product if mitigations are not available.”
Cisco has yet to confirm the U.S. cybersecurity agency’s report that the flaw is being exploited in attacks, with its security advisory still saying that its Product Security Incident Response Team (PSIRT) is “not aware of any public announcements or malicious use of the vulnerabilities that are described in CVE-2026-20133.”
In February, Cisco also tagged a critical authentication bypass vulnerability (CVE-2026-20127) as exploited in zero-day attacks that were enabling threat actors to add malicious rogue peers to targeted networks since at least 2023.
More recently, in early March, the company released security updates to address two maximum-severity vulnerabilities in its Secure Firewall Management Center (FMC) software that can allow attackers to gain root access to the underlying operating system and execute arbitrary Java code with root privileges.
Over the last several years, CISA has tagged 91 Cisco vulnerabilities as exploited in the wild, six of which have been used by various ransomware operations.
A rare decay exposes cracks in physics that refuse easy explanation
The Standard Model shows strain under one of its toughest tests
Four-sigma anomaly hints something subtle may be missing in physics
Scientists at the Large Hadron Collider (LHC) have found something strange inside a particle decay process called an electroweak penguin decay, which could signal a major problem for modern physics.
The LHC is a 27-kilometer circular tunnel buried under the French-Swiss border where proton beams smash together at nearly the speed of light, recreating conditions similar to those just after the Big Bang.
Experiments like LHCb analyze the collision debris to look for cracks in the Standard Model, the rulebook for particle physics that has passed every test for over 50 years despite being known to be incomplete.
How scientists spotted the glitch in a million-to-one event
In their experiment, the researchers observed a B meson, a short-lived particle, breaking apart into three other particles.
This transformation is extremely rare, happening only once in every million B meson collisions.
That rarity makes it a powerful tool for spotting hidden influences from unknown particles.
Think of it like hearing a faint whisper in a noisy stadium. The whisper might be nothing, or it might be the most important message you have ever heard.
The scientists measured two things: the angles at which the particles fly apart, and how often the decay happens.
Both measurements disagreed with what the Standard Model of physics predicts, which sounds impressive, but physicists demand much higher certainty for a formal discovery.
The odds of this disagreement being a random fluke are about 1 in 16,000, as the current finding sits at four sigma.
The gold standard for a discovery is five sigma, which is a 1 in 1.7 million chance of being wrong.
Imagine rolling a die and getting the same number six times in a row. That is unusual, but not impossible.
Now imagine rolling the same number 20 times in a row. That would make you seriously question whether the die is fair. That is the difference between four sigma and five sigma.
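The sigma figures quoted below correspond to two-sided tail probabilities of a normal distribution, which is how the "1 in 16,000" and "1 in 1.7 million" odds are derived. A minimal sketch (the function name is illustrative, and the assumption of two-sided p-values is ours, chosen because it reproduces the article's figures):

```python
import math

def two_sided_odds(sigma: float) -> float:
    """Return 1/p, where p is the two-sided normal tail probability at `sigma`."""
    p = math.erfc(sigma / math.sqrt(2))  # P(|Z| > sigma) for a standard normal Z
    return 1 / p

# Four sigma: the chance of a random fluke is roughly 1 in 16,000
print(round(two_sided_odds(4)))  # ≈ 15,787
# Five sigma, the discovery threshold: roughly 1 in 1.7 million
print(round(two_sided_odds(5)))  # ≈ 1,744,278
```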
There are several possible explanations if this anomaly turns out to be real.
One idea involves particles called leptoquarks, which would unite two different types of matter: leptons and quarks.
Another possibility is the existence of heavier versions of particles we already know about, extending the Standard Model rather than replacing it.
This kind of indirect evidence has happened before in physics. Radioactivity was discovered 80 years before scientists found the particles responsible for it.
This shows that you can detect something’s effects long before you can see it directly.
The current anomaly could be a similar early warning. The LHCb experiment analyzed about 650 billion B meson decays between 2011 and 2018 to find this penguin process.
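A quick back-of-the-envelope check on those numbers, assuming the one-in-a-million rarity applies directly to the full sample (a simplification on our part; the actual analysis involves detector efficiencies and selection cuts):

```python
total_decays = 650e9    # B meson decays analysed by LHCb between 2011 and 2018
decay_rate = 1e-6       # roughly one in a million undergo this penguin decay

# Expected number of candidate penguin decays in the dataset
expected_signal = total_decays * decay_rate
print(f"{expected_signal:,.0f} candidate decays")  # 650,000 candidate decays
```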
Since then, the team has already collected three times more data, which will help confirm or rule out the anomaly.
Future upgrades in the 2030s will increase the dataset by 15 times, giving physicists the statistical power needed to reach a definitive conclusion.
The main complication comes from something called “charming penguins.” These are Standard Model processes involving charm quarks that are very hard to calculate precisely.
Recent estimates suggest these effects are not large enough to explain the anomaly. But the calculations are so tricky that physicists cannot be completely sure yet.
Think of it like trying to measure the thickness of a hair with a ruler. The ruler is simply not precise enough for the job.
The current available data is like that ruler. It is pointing in an interesting direction, but we need a sharper tool to be certain.
The four-sigma tension is genuinely exciting, but particle physics has seen promising anomalies disappear before.
More data and better calculations could still bring the results back into line with the Standard Model.
Last year, CMS, an independent LHC experiment, published results in agreement with the current study, albeit with less precision.
Together, both studies make the strongest combined case yet that something genuinely new may be operating at the most fundamental level of reality, but both share similar uncertainties.
For now, the Standard Model remains standing, but for the first time in decades it appears to be wobbling.
Whether that wobble is the beginning of a collapse or just a statistical mirage will be decided by the next few years of data.
Either outcome will teach us something profound about how science progresses when the most successful theory in history meets its first real test.
OpenAI is building a systems integrator channel for Codex, enlisting large consulting firms to carry the coding agent into organisations it cannot reach through direct sales. Cognizant and CGI are the first named SI partners in the programme, announced on the same day. Codex has grown 6x among ChatGPT Business and Enterprise users since January.
OpenAI has launched a formal partner programme for Codex, its AI coding and software development agent, enlisting a select group of global systems integrators to deploy the product inside enterprise clients that lack the internal capability to implement and govern it themselves.
The first named partners, Cognizant (NASDAQ: CTSH) and CGI (NYSE: GIB), each announced their inclusion in the programme on 21 April, coinciding with OpenAI’s own blog post setting out the enterprise push.
Both firms describe being part of “a select group” of SIs chosen for their track record in deploying AI at enterprise scale. The programme is a distribution bet as much as a product one.
OpenAI’s direct sales organisation can reach technology-forward enterprises with dedicated engineering teams, but large-scale rollouts into complex, regulated, or legacy-heavy environments require the change management, systems integration, and industry-specific compliance expertise that consulting firms carry at scale.
Cognizant, with $21.1 billion in annual revenue and operations across financial services, healthcare, and manufacturing, is embedding Codex into its Software Engineering Group as a standardised capability, both for its own delivery and as a tool it takes to clients.
CGI, whose engineers already use Codex in volume across government, public safety, and commercial sectors, gains early access to new Codex capabilities as part of the expanded agreement.
OpenAI’s chief revenue officer, Denise Dresser, framed the partnership in terms of the gap between early Codex adoption and repeatable deployment at scale.
“As enterprises move quickly to put Codex to work, we’re working with leading partners like Cognizant to help more organisations move from early usage to repeatable deployment,” she said.
The programme extends Codex’s scope beyond code generation: both partners are positioning it for legacy code modernisation, vulnerability detection, code review automation, and broader agentic workflow use cases beyond software development.
The backdrop to the announcement is a pattern of rapid enterprise adoption that has strained the product’s earlier model of direct-access usage. Codex now has 3 million weekly active developers, up from 2 million in mid-March and 1.6 million at the time of the desktop app launch in February.
Within ChatGPT Business and Enterprise, the number of Codex users grew 6x between January and April. OpenAI’s enterprise segment now accounts for more than 40% of its revenue and is on track to reach parity with consumer revenue by the end of 2026.
Named enterprise users include Notion, Ramp, Braintrust, GitHub, Nextdoor, Wonderful, Cisco, and Nvidia, among others.
The Codex partner programme builds on a broader enterprise alliance strategy OpenAI announced in February, when it unveiled Frontier Alliances with McKinsey, Boston Consulting Group, Accenture, and Capgemini, oriented around its Frontier agent platform rather than Codex specifically.
The distinction matters: Frontier Alliances are positioned as strategy-and-deployment partnerships for OpenAI’s enterprise agent infrastructure, while the Codex partner programme is a more targeted engineering-and-delivery play aimed at software teams.
Both tracks reflect the same underlying ambition: to use incumbent consulting relationships to accelerate adoption in the parts of the enterprise market that are slow to self-serve.
The dynamics of this channel push are uncomfortable for some established software vendors. Fortune has reported that investors in SaaS companies including Salesforce, Workday, and ServiceNow have repriced their stakes in part on the concern that enterprises will use AI coding agents such as Codex and Anthropic’s Claude Code to build bespoke software, eliminating the need for standard SaaS products.
Enlisting the same SI firms those vendors have historically depended on for sales and implementation accelerates that dynamic.
Accenture, Capgemini, Cognizant, and CGI each serve large incumbent software vendors and AI-native platforms simultaneously; the degree to which they tilt their Codex workloads away from existing enterprise software implementations will be the commercial signal to watch.
YouTube is cracking down on celebrity deepfakes, and this time around, it is not just talking about the problem in vague platform-safety terms. In a new blog post, YouTube announced that it is expanding its likeness detection technology to the entertainment industry.
So now, the tools will be accessible to talent agencies and management companies for the celebrities they represent. This tool works in a way that is similar to Content ID, but rather than matching copyrighted media, it looks for AI-generated content using a person’s likeness and gives eligible participants the ability to find that content and request removal.
Why this is YouTube’s answer to AI celebrity fakes
The Content ID comparison here is key, since that is exactly how YouTube wants people to think about this. If the system works well, it could give high-profile people a much faster way to spot fake videos using their face before those clips spread too far.
And yes, this is clearly about celebrity fakes first. YouTube’s expanded program is aimed at the entertainment industry right now, with support from major talent agencies and management companies, including CAA, UTA, WME, and Untitled Management.
The company has worked with those groups to refine how the tool should serve talent, which suggests this has been shaped around the practical needs of public figures rather than launched as a generic moderation experiment.
One notable detail in the announcement is that celebrities and entertainers are eligible to access the tool even if they do not have a YouTube channel. In other words, it isn’t just a creator perk and functions more like a platform-wide control system. Deepfake scams, fake endorsements, and manipulated celebrity clips are no longer fringe internet weirdness. They’re a real part of online dangers.
How far is YouTube taking this?
As of right now, the announcement is focused on the entertainment industry. YouTube did not announce a broad public rollout that protects regular users. We also have no details regarding how fast the detection system is or how proactive the company will be against these deepfakes.
This year, Garmin has been working quite hard to improve its offline presence. And keeping with that momentum, the company has opened a new exclusive brand store in New Delhi. Unlike a typical retail outlet, Garmin’s new store focuses on giving customers a hands-on feel of its products. Visitors can try out a wide range of GPS-enabled smartwatches, including the Fenix series for endurance users, Forerunner models for training insights, Instinct for rugged outdoor use, and the premium MARQ collection.
The store also features wellness-focused options like the Venu and Vívoactive series, catering to users looking for everyday fitness tracking alongside advanced health insights.
More Than Just Smartwatches
Garmin is also using this space to showcase its broader ecosystem. This includes golf tech like launch monitors and simulators, indoor cycling solutions from the Tacx lineup (including the Neo Bike Plus), and handheld GPS devices built for navigation in challenging environments.
The idea is simple: instead of just selling devices, Garmin wants users to see how its products work together across different use cases—from fitness tracking to outdoor exploration and sports performance.
With this launch, Garmin continues to expand its offline footprint in India, complementing its presence across multi-brand outlets such as Just in Time, Helios, Reliance Digital, and Malabar Watches. The company is also active online through its website, Amazon, and Flipkart.
Firms like Bank of America, Citi, Wells Fargo, and others are reporting strong profits while reducing head count and automating more work. “All of them credited A.I. to some degree … in areas ranging from the so-called back office, where tens of thousands of employees fill out paperwork to comply with various laws and regulations, to the front office, where seven-figure salaried professionals put together complicated financial transactions for corporate clients,” reports the New York Times. From the report: Less than four months ago, Bank of America’s chief executive, Brian T. Moynihan, volunteered in a TV interview what he would say to his 210,000 employees about the chance of artificial intelligence replacing human work. “You don’t have to worry,” he said. “It’s not a threat to their jobs.” Last week, after Bank of America reported $8.6 billion in profit for the first quarter — $1.6 billion more than the same period a year earlier — Mr. Moynihan struck a different tone. The bank’s bottom line, he said, was helped by shedding 1,000 jobs through attrition by “eliminating work and applying technology,” which he repeatedly specified was artificial intelligence. He predicted more of that in the months and years to come. “A.I. gives us places to go we haven’t gone,” Mr. Moynihan said.
The veneer of Wall Street’s longstanding assertion — that A.I. will enhance human work, not replace it — is rapidly peeling away, as evidenced by the current quarterly earnings season. JPMorgan Chase, Citi, Bank of America, Goldman Sachs, Morgan Stanley and Wells Fargo racked up $47 billion in collective profits, up 18 percent, while shedding 15,000 employees. All of them credited A.I. to some degree with helping cut jobs and automate work in areas ranging from the so-called back office, where tens of thousands of employees fill out paperwork to comply with various laws and regulations, to the front office, where seven-figure salaried professionals put together complicated financial transactions for corporate clients.
Unlike executives in Silicon Valley, few major financial figures are stating outright that A.I. is eliminating jobs. Citi, for example, has pledged to shrink its work force by 20,000 people through what one executive described to financial analysts last week as the company’s “productivity and efficiency journey.” The bank is paying for A.I. software from Anthropic, Google, Microsoft and OpenAI, to automatically read legal documents, approve account openings, send invoices for trades and organize sensitive customer data, among other tasks, according to public statements by bank executives and two people familiar with Citi’s systems. Among the recent job cuts at Citi were scores of employees who were part of the bank’s “A.I. Champions and Accelerators” program, according to the two people, who were not permitted by the bank to speak publicly. The program involves Citi employees who perform their day jobs while also working to persuade their colleagues to adopt A.I. technologies.
Last fall, an Alabama police officer decided he wasn’t going to allow a 62-year-old woman to exercise her First Amendment rights — not if she was going to do so from inside an inflatable penis costume.
Yes, these are sentences we actually have to write here at Techdirt — things that seem so implausible you’d expect them to be generated from the sloppiest of AI prompts. It’s a real thing, though. It happened to Fairhope, Alabama resident Renea Gamble. It was inflicted by Fairhope PD officer Andrew Babb, who took apparently personal offense at Gamble’s inflatable penis costume and her “No Dick-Tator” sign she carried during a “No Kings” protest.
You can watch the arrest in all of its ingloriousness below. It’s alternately comical and horrifying. Horrifying, because it involves officers assaulting a 62-year-old grandmother. Comical, because multiple attempts are made to fit the person and costume into a police cruiser before deciding it might be easier if the person and costume were separated… which then leads to an officer discovering it’s kind of difficult to shove a non-resisting inflatable penis costume into the trunk of a police car.
This arrest and resulting prosecution gained national attention. Rather than encourage the city to drop the prosecution, it seemingly emboldened it. Prosecutors waited until people had moved on to the next outrage before dropping additional charges on Renea Gamble, including “disturbing the peace” and “giving a false name to law enforcement.” (The latter charge stemmed from Gamble telling the arresting officers her name was “Auntie Fa.”)
Officer Babb — as captured by his own recording — presented a very subjective take on the First Amendment when arresting Gamble. He not only demanded Gamble explain what he was supposed to tell his own kids if they happened to see her costume (wtaf?), but said her particular form of expression was inherently unlawful because Fairhope was “a family town.”
The officer was as wrong about free speech as the town officials who supported this arrest and prosecution. Fairhope Mayor Sherry Sullivan called the costume an “obscene display.” City council president Jack Burrell said the costume “violated community standards,” without bothering to assess what the community’s standards actually were.
In December, a Mobile-based talk radio station held a listener poll to choose its annual Alabamian of the Year, with “Inflatable Fairhope Protest Penis” receiving the most votes.
Far more fortunate is the disposition of Renea Gamble’s criminal case. As AL.com reports, it has been tossed by municipal judge Haymes Snedeker. However, Snedeker’s acquittal comes with some caveats that will make it a bit more difficult for Gamble to pursue a civil rights lawsuit in this particular venue:
Judge Haymes Snedeker, after a trial lasting more than two hours, said he did not believe Fairhope Police Cpl. Andrew Babb was attempting to suppress 62-year-old Renea Gamble’s free speech rights during their encounter at the anti-Trump protest. He also said there may have been enough probable cause for Babb to arrest her.
However, Snedeker said he was not 99.9% certain that Gamble should be convicted of crimes stemming from the actions that led to her arrest. She was found not guilty of misdemeanor charges of disorderly conduct and resisting arrest, as well as municipal violations for disturbing the peace and giving a false name to law enforcement.
Snedeker gives the officer too much credit, especially when his own statements during the arrest made it clear he was singling Gamble out because he didn’t agree with her particular form of free expression. The recording shows Babb wanted to manhandle this penis because he was employed by a “family town” and didn’t want to have to explain to his kids what this costume might represent. He didn’t present anything approaching legal justification prior to pinning Gamble to the ground and handcuffing her.
The judge said all of this despite the officer’s testimony being completely undercut by the recording of the arrest.
Babb testified that he was using de-escalation techniques he was trained to employ as a police officer. He said he was concerned about safety and viewed Gamble’s costume as an “obstruction.” He said he did not arrest her because he was personally offended by the costume or her anti-Trump message.
[…]
[Gamble’s lawyer David] Gespass disagreed, arguing that body camera footage revealed the true nature of the arrest. In the footage, Babb tells Gamble that her costume would not be tolerated in a town that “has values.”
“That’s all he talked about when he was confronting her was, ‘I am not going to put up with this in my town,’” Gespass said. “He said nothing about her causing any problems with traffic. Certainly, if you watch the video, he is not de-escalating anything. He approached her aggressively.”
That wasn’t the only stupid thing said by the government. Here’s the prosecutor attempting to salvage an obviously bogus prosecution:
“There is no constitutional right to wear a total erect penis on the side of the road,” he said. “I’m sorry.”
Hmm. Seems wrong. Pretty sure in this context it’s protected speech. And all of these qualifiers suggest no prosecution would be happening if Gamble had simply let a little bit of the air out of the costume to appear a bit more flaccid.
Both the cop and the prosecutor (Marcus McDowell) are welcome to say dumb things in their own defense during testimony. For the judge to suggest this arrest might have been supported by probable cause demands a better explanation than what was given here. If the standard is only that one cop felt something violated the law, the First Amendment is meaningless. It’s the sort of thing that tells citizens their rights only matter once they’re violated… and even then, they still may not mean much. The judge blew the call here and the local cops know it. Gamble still has a target on her back and the cops have the judicial leeway to keep arresting protesters they personally don’t like.
The US president told CNBC on Tuesday that Anthropic is ‘shaping up’ following a White House meeting last Friday at which the company’s CEO Dario Amodei discussed its Mythos AI model with Chief of Staff Susie Wiles and Treasury Secretary Scott Bessent.
The Pentagon’s blacklisting of Anthropic remains in legal limbo, with a federal appeals court and a San Francisco district court having reached conflicting conclusions.
President Donald Trump told CNBC’s Squawk Box on Tuesday that a deal allowing Anthropic’s AI models to be used within the Department of Defense is “possible,” describing the company as “shaping up.”
“They came to the White House a few days ago, and we had some very good talks with them, and I think they’re shaping up,” Trump said.
“They’re very smart, and I think they can be of great use.” The comments mark a striking rhetorical reversal from a president who, in late February, posted on Truth Social ordering all federal agencies to “IMMEDIATELY CEASE all use of Anthropic’s technology” and declared that his administration would “not do business with them again.”
Trump’s remarks follow a White House meeting on Friday 18 April at which Anthropic CEO Dario Amodei met Chief of Staff Susie Wiles and Treasury Secretary Scott Bessent to discuss the company’s new Mythos model, a frontier AI system Anthropic has described as highly capable at cybersecurity tasks and has so far made available only to a small group of organisations.
The White House described the conversation as “productive and constructive.” Anthropic said Amodei had a “productive discussion” with administration officials about how the company and the US government can “work together on key shared priorities such as cybersecurity, America’s lead in the AI race, and AI safety.”
When reporters asked Trump about the meeting on a runway in Phoenix, he responded “Who?” and said he had “no idea” Amodei had been there.
The meeting took place against the backdrop of a dispute that has few precedents in the relationship between Washington and the technology industry.
In July 2025, Anthropic signed a $200 million contract with the Pentagon, becoming the first AI lab to have its models approved for use on the DOD’s classified networks.
But as negotiations over Claude’s deployment on the department’s GenAI.mil platform began in September, talks broke down. The Pentagon demanded that Anthropic grant unfettered access to its models for all lawful purposes.
Anthropic drew two firm lines: its AI would not be used in fully autonomous weapons systems that select targets without human intervention, and it would not be used for domestic mass surveillance of Americans.
Defense Secretary Pete Hegseth responded by designating Anthropic a “supply chain risk to national security” in late February 2026, a label previously reserved for companies associated with foreign adversaries.
The formal designation, confirmed to Anthropic’s leadership on 5 March, required defense contractors to certify they were not using Anthropic’s models in work with the military. Trump amplified the measure with his Truth Social directive.
The designation was, as Anthropic argued in subsequent litigation, unprecedented: as US District Judge Rita Lin noted in a stinging 43-page ruling that granted Anthropic a preliminary injunction in late March, it appeared to be directed not at a genuine national security threat but at punishing the company for “bringing public scrutiny to the government’s contracting position”: “classic illegal First Amendment retaliation,” she wrote.
The legal situation remains split. A federal appeals court in Washington DC denied Anthropic’s request to temporarily block the supply chain risk designation on 8 April. Judge Lin’s preliminary injunction in San Francisco, from a separate but related case, bars enforcement of Trump’s Truth Social ban on Claude across the rest of the government.
The practical effect is that Anthropic is excluded from Pentagon contracts but can continue working with other government agencies while both cases proceed. The DOD has continued to use Claude during the US-Iran war, which began before the blacklisting took effect.
What appears to have shifted the White House’s posture is Mythos. Parts of the intelligence community and CISA, the Cybersecurity and Infrastructure Security Agency, have been testing the model.
The White House Office of Management and Budget is setting up protocols to allow federal agencies to access a controlled version.
Treasury Secretary Bessent’s presence at Friday’s meeting was read by sources close to the negotiations as a signal that the economic and financial security arguments for Mythos access had reached the most senior levels of the administration.
As one administration source told Axios: “It would be grossly irresponsible for the US government to deprive itself of the technological leaps that the new model presents. It would be a gift to China.”
Whether any resumption of the Anthropic–Pentagon relationship is possible remains uncertain. Trump’s Tuesday comments refer to talks that have been promising but have not yet produced a deal.
The appeals court ruling on the supply chain risk designation still stands. Hegseth has not withdrawn his position. Anthropic, meanwhile, has engaged Ballard Partners, the lobbying firm where Wiles previously worked, for advocacy around Department of War procurement, a move that signals it understands the political dynamics as well as the legal ones.
The company’s annualised revenue has reached $30 billion and it is considering an IPO; the supply-chain risk designation damages enterprise credibility even where it does not block commercial deals.
Investors are aggressively courting AI researchers to build startups that can make AI more reliable and efficient.
Yu Su, an Ohio State professor leading an AI agent lab, said he initially resisted the pressure from VCs to commercialize his work. He finally took the leap last year and spun out his work into a startup when he saw that foundational model advances could make agents truly personalized.
NeoCognition, a startup Su describes as a research lab developing self-learning AI agents, has just emerged from stealth with $40 million in seed funding. The round was co-led by Cambium Capital and Walden Catalyst Ventures, with participation from Vista Equity Partners and angels, including Intel CEO Lip-Bu Tan and Databricks co-founder Ion Stoica.
“Today’s agents are generalists,” Su (pictured right) told TechCrunch. “Every time you ask them to do a task, you take a leap of faith.”
According to Su, the issue lies in a lack of consistency. Current agents, whether from Claude Code, OpenClaw or Perplexity’s computer tools, successfully complete tasks as intended only about 50% of the time, he said.
Since agents are still so unreliable, they cannot yet be trusted as independent workers, Su told TechCrunch. NeoCognition intends to change that by developing an agent system that can self-learn to become an expert in any domain, similar to how humans learn.
Su argues that while human intelligence is broad, its real power is our ability to specialize. When we enter a new environment or profession, we can rapidly master its unique rules, relationships, and consequences.
NeoCognition is building agents to mirror this exact approach.
“For humans, our continued learning process is essentially the process of building a world model for any profession, any environment,” Su said. “We believe for agents to become experts, they need to learn autonomously to build a model of any given micro world.”
Su views this capacity for rapid specialization as the critical missing link to getting AI to work reliably on its own.
While agents can already be trained for autonomous tasks, they must be custom-engineered for a specific vertical. NeoCognition’s pitch is different: it is building generalist agents capable of self-learning and specializing in any domain.
NeoCognition intends to sell its agent systems primarily to enterprises, including established SaaS companies, which can use them to build agent-workers or to enhance existing product offerings.
Su highlighted that an investment from Vista Equity Partners is especially valuable for this reason. As one of the largest private equity firms in the software space, Vista can provide NeoCognition with direct access to a vast portfolio of companies looking to modernize their products with AI.
NeoCognition currently has about 15 employees, the majority of whom hold PhDs.
Reuters reports that Meta plans to start collecting U.S.-based employees’ mouse movements, clicks, keystrokes, and occasional screen snapshots to train AI agents that can better learn how humans use computers. The tool, called Model Capability Initiative (MCI), will reportedly “not be used for performance assessments or any other purpose besides model training and that safeguards were in place to protect ‘sensitive content.'” From the report: Meta CTO Andrew Bosworth told employees in a separate memo shared on Monday that the company would step up internal data collection as part of those “AI for Work” efforts, now re-branded as Agent Transformation Accelerator (ATA). “The vision we are building towards is one where our agents primarily do the work and our role is to direct, review and help them improve,” Bosworth said. The aim, he added, was for agents to “automatically see where we felt the need to intervene so they can be better next time.” Bosworth did not explicitly spell out how those agents would be trained, but said Meta would be “rigorous” about “building up data and evals for all the types of interactions we have as we go about our work.”
Meta spokesperson Andy Stone acknowledged that the MCI data would be among the inputs. […] “If we’re building agents to help people complete everyday tasks using computers, our models need real examples of how people actually use them — things like mouse movements, clicking buttons, and navigating dropdown menus,” said Stone.
In a headphone market split between legacy brands that barely change and boutique players that release new models at a relentless pace, it’s easy to overlook the ones playing a longer, quieter game. Shure has built a reputation on consistency. Campfire Audio operates at the other extreme. Most brands fall somewhere in between.
That leaves a gap, and that’s where Audio-Technica operates. The Tokyo-based manufacturer doesn’t chase headlines or flood the market, but it brings decades of experience across both professional and consumer audio. At AXPONA 2026, that approach stands out. For newer enthusiasts, it’s a reminder that some of the most established names in personal audio are not always the loudest.
Why the Japanese Audio Brand Still Matters
Founded in Tokyo by Hideo Matsushita, Audio-Technica set out to make high quality audio accessible to a wider audience. The company began with phono cartridges in 1962 and expanded steadily into headphones, microphones, turntables, and wireless systems for broadcast and live sound. That pro side of the business still matters, even if it doesn’t always get the attention.
Today, Audio-Technica is one of the largest audio companies in Japan. Outside of cartridges, though, it can still fly under the radar for a lot of listeners. At the show, that low profile was hard to miss. Instead of setting up in the Ear Gear section where the headphone crowd tends to gather, the brand took a series of smaller rooms off the main path. Easy to walk past if you weren’t looking for them.
Audio-Technica NARUKAMI HPA-KG NARU Tube Headphone Amplifier at AXPONA 2026
That’s a shame, because the setup was one of the more complete at the show. Visitors could move from cartridge demos to a full spread of headphones, covering everything from entry-level models to the flagship end of the spectrum, including the $108,000 NARUKAMI HPA-KG NARU Tube Headphone Amplifier and its matching headphones.
Audio-Technica offers a headphone lineup that can stand alongside Sony, Beyerdynamic, and Sennheiser, with models that cover a wide range of prices and use cases. That includes everything from entry-level wired designs to high-end open- and closed-back headphones, along with more niche offerings like wireless in-ear models tied to Star Wars characters. It is a broad catalog, but it rarely gets presented as aggressively as its competitors’.
At AXPONA 2026, that range was on full display. I spent time with the flagship ATH-ADX7000 ($3,499 at Crutchfield), along with several of the step down open back models, and moved over to the closed back side with the Narukami system and the ATH-AWKG ($4,499 at Amazon), plus a few sub-flagship options.
The ADX7000 was not new territory. We have already reviewed it favorably, and both Editor-in-Chief Ian White and Editor-at-Large Chris Boylan placed it among their top three headphones from CanJam NYC 2026. That context matters because it frames the rest of the lineup. The flagship is not just competitive. It sets the tone for everything below it.
The house tuning of Audio-Technica headphones leans a bit brighter than what you typically get from Beyerdynamic or Sennheiser, with a noticeable lift in the presence region. Vocals come forward, strings have a bit more bite, and that works especially well with string quartets, concertos, vocal tracks, or a cappella arrangements. It’s not trying to sound polite. It’s trying to keep things engaging.
The upside is consistency. That same tuning shows up from the top of the line down to the entry level models. As you move up, you get more resolution, better control, and a cleaner presentation, but the core voicing doesn’t shift. The idea that Hideo Matsushita started with is still intact. You’re not relearning the sound every time you move up the ladder.
For those who haven’t spent time with the brand, the ATH-AD500X (open-back) and ATH-A550Z (closed-back) are easy entry points at around $150. They won’t match the technical performance of the flagships, but they give you a clear sense of what Audio-Technica is aiming for without asking for a major commitment.
I was also able to speak with a representative from Audio-Technica about reviewing the newly released X-series models, which push the price of their open-back headphones down to as little as $59. That’s a meaningful shift for a brand that has traditionally started higher up the ladder. It opens the door for a lot more people to hear what that house sound is about without much risk.
I’m looking forward to spending time with those. The Audio-Technica models I already have get a lot of use with classical and jazz, and they offer a different perspective compared to the darker tuning you get from some of the established German brands. It’s not better or worse. Just a different take that a lot of listeners might find more engaging. The plan is to start with the X-series and work up the line so readers can see how that tuning evolves as the price climbs.
At the other end of the spectrum sits the NARUKAMI HPA-KG NARU Tube Headphone Amplifier and matching headphones. Only two units are currently in North America, which raises an obvious question. Were the other 23 already sold? At $108,000, in under two years, that would be quite a statement. Audio-Technica spent a decade developing that system and went through 11 prototypes before bringing it to market. It’s hard to justify on paper, but that’s not really the point. The design, build quality, and sonic performance are about as far as this category can be pushed right now.
And in the context of AXPONA 2026, it almost felt reasonable. There were plenty of speaker systems in the building that cost a lot more. Getting more time with it would require another trip to the show. I’m not expecting a loaner to show up anytime soon, but there’s no harm in asking.