This year, Garmin has been working hard to improve its offline presence. In keeping with that momentum, the company has opened a new exclusive brand store in New Delhi. Unlike a typical retail outlet, Garmin’s new store focuses on giving customers a hands-on feel of its products. Visitors can try out a wide range of GPS-enabled smartwatches, including the Fenix series for endurance users, Forerunner models for training insights, Instinct for rugged outdoor use, and the premium MARQ collection.
The store also features wellness-focused options like the Venu and Vívoactive series, catering to users looking for everyday fitness tracking alongside advanced health insights.
More Than Just Smartwatches
Garmin is also using this space to showcase its broader ecosystem. This includes golf tech like launch monitors and simulators, indoor cycling solutions from the Tacx lineup (including the Neo Bike Plus), and handheld GPS devices built for navigation in challenging environments.
The idea is simple: instead of just selling devices, Garmin wants users to see how its products work together across different use cases—from fitness tracking to outdoor exploration and sports performance.
With this launch, Garmin continues to expand its offline footprint in India, complementing its presence across multi-brand outlets such as Just in Time, Helios, Reliance Digital, and Malabar Watches. The company is also active online through its website, Amazon, and Flipkart.
This latest investment is in addition to the $8bn Amazon has already invested in the AI company.
In line with a strategy to expand AI infrastructure, Amazon has announced plans to invest up to $25bn into Anthropic – $5bn now and as much as $20bn in the future. To date, Amazon has invested $8bn in Anthropic and the AI start-up has also committed to spending more than $100bn over the next 10 years on Amazon’s cloud technologies.
This will include current and future generations of Trainium, Amazon’s custom AI chip, and tens of millions of cores of Graviton, Amazon’s custom CPU. Additionally, Anthropic will secure up to 5GW of capacity to train and power its AI models, including significant Trainium3 capacity, which is expected to come online this year.
Commenting on the announcement, Andy Jassy, the CEO of Amazon, said: “Anthropic’s commitment to run its large language models on AWS Trainium for the next decade reflects the progress we’ve made together on custom silicon, as we continue delivering the technology and infrastructure our customers need to build with generative AI.”
The news comes hot on the heels of Anthropic’s plans to release Mythos, the company’s latest model, to UK financial institutions. The model was launched as part of a limited release earlier this month, with access granted to big businesses and financial organisations to bolster their security. Reportedly, Mythos vastly outperforms other AI models in vulnerability detection and exploitation.
Amazon has been investing heavily in AI infrastructure as of late, with a $50bn contribution to a recent OpenAI funding round that closed at $110bn. As part of the round, Nvidia invested $30bn and SoftBank invested $30bn. The investment brought OpenAI from a $500bn valuation to a $730bn pre-money valuation.
OpenAI also has an additional deal with Amazon in which the organisation will utilise 2GW of computing capacity powered by Amazon’s in-house Trainium chips.
According to the data, Ireland’s jobs market is holding up, but confidence is wavering as employers become more cautious.
The Employment and Recruitment Federation, supported by Icon Accounting, has published the Irish Labour Market Annual Survey. This report explores Ireland’s jobs market and the impact that temporary and contract roles are having on the wider landscape.
The Federation’s research found that while Ireland’s jobs market is holding steady, “employer confidence is becoming more measured, with temporary and contract roles now overtaking permanent recruitment in a clear sign of growing caution across the market”.
The report suggests that this is indicative of a landscape in which organisations are still actively recruiting, but with a far more defensive mindset as they navigate the pressures of rising costs, uncertainty and talent constraints.
In 2025, permanent recruitment accounted for 44pc of net fee income, while temporary and contracting roles together represented 48pc. The Employment and Recruitment Federation said this reflects a move by employers towards greater flexibility, with employers becoming more selective and more controlled in how they build teams, particularly where longer-term commitments are required.
“That matters because it tells us something important about the broader economy,” said Siobhán Kinsella, the president of the Employment and Recruitment Federation. “Demand is still there, but businesses are making more guarded decisions around cost, growth and commitment.”
Uncertain future
The report comes at a time when the Irish jobs market is experiencing relatively low unemployment and employment itself is growing steadily, but sentiment is, according to the research, “becoming more mixed”. More than half of the companies that contributed to the report said they have concerns about the shape of the economy and demand over the next 12 months.
Issues with attracting and retaining key talent are also weighing on organisations, as seven out of 10 agencies said that skills availability remains the biggest challenge in the market, with the sharpest shortages reported in healthcare, engineering, accountancy and finance, construction, and IT.
Kinsella said: “This is a market where businesses still need people but are under more pressure in how they hire. The challenge now is not simply filling roles. It is balancing growth ambitions with cost control, uncertainty and ongoing difficulty accessing the right skills.
“As students begin reviewing CAO options ahead of the Change of Mind period, the findings also point to a longer-term pipeline issue for Ireland, particularly in areas such as accountancy and finance, engineering, healthcare and technology where demand remains strong and shortages remain persistent.
“That creates a more fragile dynamic underneath the headline numbers. The labour market is still performing, but employers are no longer behaving with the same level of confidence they were a year or two ago.”
Also commenting on the report, David Shanahan, a director at Irish recruitment agency IT Search, which is a member of the Vertical Markets Group, noted that his own organisation’s research found that the volume of tech roles across the Irish market has increased from 6,082 in March 2025 to 6,810 in March 2026.
He noted, however, that there are some “important nuances” to make note of. “In areas such as data and cybersecurity, hiring is heavily contract focused. However, across AI, software engineering and DevOps, hiring is more evenly split than it might appear.
“Contract roles are largely tied to project and programme delivery, while permanent hiring is driven by product-led and commercial software companies, where the focus is on building and scaling their own technology platforms.”
Tenth time still isn’t the charm. One day after Tim Cook announced that he was handing the reins to John Ternus, an analyst who has beaten this drum before is again saying that a sale of Disney to Apple can and must happen. That sale is even less likely to happen now than it was the last nine times we’ve updated this story.
It may have made Apple Park, but Apple is not going to take over Disney’s magic kingdom
The rumor that Apple will buy Disney is as old as the iPod, and it has lasted through a couple of Disney CEOs now. You’d think that analysts would have figured out that it isn’t going to happen. Or at least they should have begun to see that clickbait headlines about why Apple must buy Disney have to be losing their pull as the years go by and Apple keeps on doing nothing of the sort.
A rare decay exposes cracks in physics that refuse easy explanation
The Standard Model shows strain under one of its toughest tests
Four-sigma anomaly hints something subtle may be missing in physics
Scientists at the Large Hadron Collider (LHC) have found something strange inside a particle decay process called an electroweak penguin decay, which could signal a major problem for modern physics.
The LHC is a 27-kilometer circular tunnel buried under the French-Swiss border where proton beams smash together at nearly the speed of light, recreating conditions similar to those just after the Big Bang.
Experiments like LHCb analyze the collision debris to look for cracks in the Standard Model, the rulebook for particle physics that has passed every test for over 50 years despite being known to be incomplete.
How scientists spotted the glitch in a million-to-one event
In their experiment, the researchers observed a B meson, a short-lived particle, breaking apart into three other particles.
This transformation is extremely rare, happening only about once in every million B meson decays.
That rarity makes it a powerful tool for spotting hidden influences from unknown particles.
Think of it like hearing a faint whisper in a noisy stadium. The whisper might be nothing, or it might be the most important message you have ever heard.
The scientists measured two things: the angles at which the particles fly apart, and how often the decay happens.
Both measurements disagreed with what the Standard Model of physics predicts, which sounds impressive, but physicists demand much higher certainty for a formal discovery.
The odds of this disagreement being a random fluke are about 1 in 16,000, as the current finding sits at four sigma.
The gold standard for a discovery is five sigma, which is a 1 in 1.7 million chance of being wrong.
Imagine rolling a die and getting the same number six times in a row. That is unusual, but not impossible.
Now imagine rolling the same number 20 times in a row. That would make you seriously question whether the die is fair. That is the difference between four sigma and five sigma.
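The sigma figures quoted above map onto the odds via the two-sided tail probability of a normal distribution. As a rough check (a sketch using only Python’s standard library, not anything from the LHCb analysis itself):

```python
import math

def sigma_to_odds(sigma: float) -> float:
    """Return N such that a two-sided fluctuation of at least
    `sigma` standard deviations occurs about 1 time in N."""
    # Two-sided tail probability of a standard normal:
    # p = erfc(sigma / sqrt(2))
    p = math.erfc(sigma / math.sqrt(2))
    return 1.0 / p

# The four-sigma anomaly: roughly a 1-in-16,000 fluke
print(f"4 sigma: about 1 in {sigma_to_odds(4):,.0f}")

# The five-sigma discovery threshold: roughly 1 in 1.7 million
print(f"5 sigma: about 1 in {sigma_to_odds(5):,.0f}")
```

Running this reproduces the article’s numbers: about 1 in 16,000 at four sigma, and about 1 in 1.7 million at five sigma.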
There are several possible explanations if this anomaly turns out to be real.
One idea involves particles called leptoquarks, which would unite two different types of matter: leptons and quarks.
Another possibility is the existence of heavier versions of particles we already know about, extending the Standard Model rather than replacing it.
This kind of indirect evidence has happened before in physics. Radioactivity was discovered 80 years before scientists found the particles responsible for it.
This shows that you can detect something’s effects long before you can see it directly.
The current anomaly could be a similar early warning. The LHCb experiment analyzed about 650 billion B meson decays between 2011 and 2018 to find this penguin process.
Since then, the team has already collected three times more data, which will help confirm or rule out the anomaly.
Future upgrades in the 2030s will increase the dataset by 15 times, giving physicists the statistical power needed to reach a definitive conclusion.
The main complication comes from something called “charming penguins.” These are Standard Model processes involving charm quarks that are very hard to calculate precisely.
Recent estimates suggest these effects are not large enough to explain the anomaly. But the calculations are so tricky that physicists cannot be completely sure yet.
Think of it like trying to measure the thickness of a hair with a ruler. The ruler is simply not precise enough for the job.
The current available data is like that ruler. It is pointing in an interesting direction, but we need a sharper tool to be certain.
The four-sigma tension is genuinely exciting, but particle physics has seen promising anomalies disappear before.
More data and better calculations could still bring the results back into line with the Standard Model.
Last year, an independent LHC experiment known as CMS published results in agreement with the current study, albeit with less precision.
Together, both studies make the strongest combined case yet that something genuinely new may be operating at the most fundamental level of reality, but both share similar uncertainties.
For now, the Standard Model remains standing, but for the first time in decades it appears to be wobbling.
Whether that wobble is the beginning of a collapse or just a statistical mirage will be decided by the next few years of data.
Either outcome will teach us something profound about how science progresses when the most successful theory in history meets its first real test.
OpenAI is building a systems integrator channel for Codex, enlisting large consulting firms to carry the coding agent into organisations it cannot reach through direct sales. Cognizant and CGI are the first named SI partners in the programme, announced on the same day. Codex has grown 6x among ChatGPT Business and Enterprise users since January.
OpenAI has launched a formal partner programme for Codex, its AI coding and software development agent, enlisting a select group of global systems integrators to deploy the product inside enterprise clients that lack the internal capability to implement and govern it themselves.
The first named partners, Cognizant (NASDAQ: CTSH) and CGI (NYSE: GIB), each announced their inclusion in the programme on 21 April, coinciding with OpenAI’s own blog post setting out the enterprise push.
Both firms describe being part of “a select group” of SIs chosen for their track record in deploying AI at enterprise scale. The programme is a distribution bet as much as a product one.
OpenAI’s direct sales organisation can reach technology-forward enterprises with dedicated engineering teams, but large-scale rollouts into complex, regulated, or legacy-heavy environments require the change management, systems integration, and industry-specific compliance expertise that consulting firms carry at scale.
Cognizant, with $21.1 billion in annual revenue and operations across financial services, healthcare, and manufacturing, is embedding Codex into its Software Engineering Group as a standardised capability, both for its own delivery and as a tool it takes to clients.
CGI, whose engineers already use Codex in volume across government, public safety, and commercial sectors, gains early access to new Codex capabilities as part of the expanded agreement.
OpenAI’s chief revenue officer, Denise Dresser, framed the partnership in terms of the gap between early Codex adoption and repeatable deployment at scale.
“As enterprises move quickly to put Codex to work, we’re working with leading partners like Cognizant to help more organisations move from early usage to repeatable deployment,” she said.
The programme extends Codex’s scope beyond code generation: both partners are positioning it for legacy code modernisation, vulnerability detection, code review automation, and broader agentic workflow use cases beyond software development.
The backdrop to the announcement is a pattern of rapid enterprise adoption that has strained the product’s earlier model of direct-access usage. Codex now has 3 million weekly active developers, up from 2 million in mid-March and 1.6 million at the time of the desktop app launch in February.
Within ChatGPT Business and Enterprise, the number of Codex users grew 6x between January and April. OpenAI’s enterprise segment now accounts for more than 40% of its revenue and is on track to reach parity with consumer revenue by the end of 2026.
Named enterprise users include Notion, Ramp, Braintrust, GitHub, Nextdoor, Wonderful, Cisco, and Nvidia, among others.
The Codex partner programme builds on a broader enterprise alliance strategy OpenAI announced in February, when it unveiled Frontier Alliances with McKinsey, Boston Consulting Group, Accenture, and Capgemini, oriented around its Frontier agent platform rather than Codex specifically.
The distinction matters: Frontier Alliances are positioned as strategy-and-deployment partnerships for OpenAI’s enterprise agent infrastructure, while the Codex partner programme is a more targeted engineering-and-delivery play aimed at software teams.
Both tracks reflect the same underlying ambition: to use incumbent consulting relationships to accelerate adoption in the parts of the enterprise market that are slow to self-serve.
The dynamics of this channel push are uncomfortable for some established software vendors. Fortune has reported that investors in SaaS companies including Salesforce, Workday, and ServiceNow have repriced their stakes in part on the concern that enterprises will use AI coding agents such as Codex and Anthropic’s Claude Code to build bespoke software, eliminating the need for standard SaaS products.
Enlisting the same SI firms those vendors have historically depended on for sales and implementation accelerates that dynamic.
Accenture, Capgemini, Cognizant, and CGI each serve large incumbent software vendors and AI-native platforms simultaneously; the degree to which they tilt their Codex workloads away from existing enterprise software implementations will be the commercial signal to watch.
YouTube is cracking down on celebrity deepfakes, and this time around, it is not just talking about the problem in vague platform-safety terms. In a new blog post, YouTube announced that it is expanding its likeness detection technology to the entertainment industry.
So now, the tools will be accessible to talent agencies and management companies for the celebrities they represent. This tool works in a way that is similar to Content ID, but rather than matching copyrighted media, it looks for AI-generated content using a person’s likeness and gives eligible participants the ability to find that content and request removal.
Why this is YouTube’s answer to AI celebrity fakes
The Content ID comparison here is key, since that is exactly how YouTube wants people to think about this. If the system works well, it could give high-profile people a much faster way to spot fake videos using their faces before those clips spread too far.
And yes, this is clearly about celebrity fakes first. YouTube’s expanded program is aimed at the entertainment industry right now, with support from major talent agencies and management companies, including CAA, UTA, WME, and Untitled Management.
The company has worked with those groups to refine how the tool should serve talent, which suggests this has been shaped around the practical needs of public figures rather than launched as a generic moderation experiment.
Deepfake Tom Hanks on Instagram (Tom Hanks via Instagram)
One notable detail in the announcement is that celebrities and entertainers are eligible to access the tool even if they do not have a YouTube channel. In other words, it isn’t just a creator perk and functions more like a platform-wide control system. Deepfake scams, fake endorsements, and manipulated celebrity clips are no longer fringe internet weirdness. They’re a real part of online dangers.
How far is YouTube taking this?
As of right now, the announcement is focused on the entertainment industry. YouTube did not announce a broad public rollout that protects regular users. We also have no details regarding how fast the detection system is or how proactive the company will be against these deepfakes.
Firms like Bank of America, Citi, Wells Fargo, and others are reporting strong profits while reducing head count and automating more work. “All of them credited A.I. to some degree … in areas ranging from the so-called back office, where tens of thousands of employees fill out paperwork to comply with various laws and regulations, to the front office, where seven-figure salaried professionals put together complicated financial transactions for corporate clients,” reports the New York Times. From the report: Less than four months ago, Bank of America’s chief executive, Brian T. Moynihan, volunteered in a TV interview what he would say to his 210,000 employees about the chance of artificial intelligence replacing human work. “You don’t have to worry,” he said. “It’s not a threat to their jobs.” Last week, after Bank of America reported $8.6 billion in profit for the first quarter — $1.6 billion more than the same period a year earlier — Mr. Moynihan struck a different tone. The bank’s bottom line, he said, was helped by shedding 1,000 jobs through attrition by “eliminating work and applying technology,” which he repeatedly specified was artificial intelligence. He predicted more of that in the months and years to come. “A.I. gives us places to go we haven’t gone,” Mr. Moynihan said.
The veneer of Wall Street’s longstanding assertion — that A.I. will enhance human work, not replace it — is rapidly peeling away, as evidenced by the current quarterly earnings season. JPMorgan Chase, Citi, Bank of America, Goldman Sachs, Morgan Stanley and Wells Fargo racked up $47 billion in collective profits, up 18 percent, while shedding 15,000 employees. All of them credited A.I. to some degree with helping cut jobs and automate work in areas ranging from the so-called back office, where tens of thousands of employees fill out paperwork to comply with various laws and regulations, to the front office, where seven-figure salaried professionals put together complicated financial transactions for corporate clients.
Unlike executives in Silicon Valley, few major financial figures are stating outright that A.I. is eliminating jobs. Citi, for example, has pledged to shrink its work force by 20,000 people through what one executive described to financial analysts last week as the company’s “productivity and efficiency journey.” The bank is paying for A.I. software from Anthropic, Google, Microsoft and OpenAI, to automatically read legal documents, approve account openings, send invoices for trades and organize sensitive customer data, among other tasks, according to public statements by bank executives and two people familiar with Citi’s systems. Among the recent job cuts at Citi were scores of employees who were part of the bank’s “A.I. Champions and Accelerators” program, according to the two people, who were not permitted by the bank to speak publicly. The program involves Citi employees who perform their day jobs while also working to persuade their colleagues to adopt A.I. technologies.
Last fall, an Alabama police officer decided he wasn’t going to allow a 62-year-old woman to exercise her First Amendment rights — not if she was going to do so from inside an inflatable penis costume.
Yes, these are sentences we actually have to write here at Techdirt — things that seem so implausible you’d expect them to be generated from the sloppiest of AI prompts. It’s a real thing, though. It happened to Fairhope, Alabama resident Renea Gamble. It was inflicted by Fairhope PD officer Andrew Babb, who took apparently personal offense at Gamble’s inflatable penis costume and her “No Dick-Tator” sign she carried during a “No Kings” protest.
You can watch the arrest in all of its ingloriousness below. It’s alternately comical and horrifying. Horrifying, because it involves officers assaulting a 62-year-old grandmother. Comical, because multiple attempts are made to fit the person and costume into a police cruiser before deciding it might be easier if the person and costume were separated… which then leads to an officer discovering it’s kind of difficult to shove a non-resisting inflatable penis costume into the trunk of a police car.
This arrest and resulting prosecution gained national attention. Rather than encourage the city to drop the prosecution, it seemingly emboldened it. Prosecutors waited until people had moved on to the next outrage before dropping additional charges on Renea Gamble, including “disturbing the peace” and “giving a false name to law enforcement.” (The latter charge stemmed from Gamble telling the arresting officers her name was “Auntie Fa.”)
Officer Babb — as captured by his own recording — presented a very subjective take on the First Amendment when arresting Gamble. He not only demanded Gamble explain what he was supposed to tell his own kids if they happened to see her costume (wtaf?), but said her particular form of expression was inherently unlawful because Fairhope was “a family town.”
The officer was as wrong about free speech as the town officials who supported this arrest and prosecution. Fairhope mayor Sherry Sullivan called the costume an “obscene display.” City council president Jack Burrell said the costume “violated community standards,” without bothering to assess what the community’s standards actually were.
In December, a Mobile-based talk radio station held a listener poll to choose its annual Alabamian of the Year, with “Inflatable Fairhope Protest Penis” receiving the most votes.
Far more fortunate is the disposition of Renea Gamble’s criminal case. As AL.com reports, it has been tossed by municipal judge Haymes Snedeker. However, Snedeker’s acquittal comes with some caveats that will make it a bit more difficult for Gamble to pursue a civil rights lawsuit in this particular venue:
Judge Haymes Snedeker, after a trial lasting more than two hours, said he did not believe Fairhope Police Cpl. Andrew Babb was attempting to suppress 62-year-old Renea Gamble’s free speech rights during their encounter at the anti-Trump protest. He also said there may have been enough probable cause for Babb to arrest her.
However, Snedeker said he was not 99.9% certain that Gamble should be convicted of crimes stemming from the actions that led to her arrest. She was found not guilty of misdemeanor charges of disorderly conduct and resisting arrest, as well as a municipal violation for disturbing the peace and giving a false name to law enforcement.
Snedeker gives the officer too much credit, especially when his own statements during the arrest made it clear he was singling Gamble out because he didn’t agree with her particular form of free expression. The recording shows Babb wanted to manhandle this penis because he was employed by a “family town” and didn’t want to have to explain to his kids what this costume might represent. He didn’t present anything approaching legal justification prior to pinning Gamble to the ground and handcuffing her.
The judge said all of this despite the officer’s testimony being completely undercut by the recording of the arrest.
Babb testified that he was using de-escalation techniques he was trained to employ as a police officer. He said he was concerned about safety and viewed Gamble’s costume as an “obstruction.” He said he did not arrest her because he was personally offended by the costume or her anti-Trump message.
[…]
[Gamble’s lawyer David] Gespass disagreed, arguing that body camera footage revealed the true nature of the arrest. In the footage, Babb tells Gamble that her costume would not be tolerated in a town that “has values.”
“That’s all he talked about when he was confronting her was, ‘I am not going to put up with this in my town,’” Gespass said. “He said nothing about her causing any problems with traffic. Certainly, if you watch the video, he is not de-escalating anything. He approached her aggressively.”
That wasn’t the only stupid thing said by the government. Here’s the prosecutor attempting to salvage an obviously bogus prosecution:
“There is no constitutional right to wear a total erect penis on the side of the road,” he said. “I’m sorry.”
Hmm. Seems wrong. Pretty sure in this context it’s protected speech. And all of these qualifiers suggest no prosecution would be happening if Gamble had simply let a little bit of the air out of the costume to appear a bit more flaccid.
Both the cop and the prosecutor (Marcus McDowell) are welcome to say dumb things in their own defense during testimony. For the judge to suggest this arrest might have been supported by probable cause demands a better explanation than what was given here. If the standard is only that one cop felt something violated the law, the First Amendment is meaningless. It’s the sort of thing that tells citizens their rights only matter once they’re violated… and even then, they still may not mean much. The judge blew the call here and the local cops know it. Gamble still has a target on her back and the cops have the judicial leeway to keep arresting protesters they personally don’t like.
The US president told CNBC on Tuesday that Anthropic is ‘shaping up’ following a White House meeting last Friday at which the company’s CEO Dario Amodei discussed its Mythos AI model with Chief of Staff Susie Wiles and Treasury Secretary Scott Bessent.
The Pentagon’s blacklisting of Anthropic remains in legal limbo, with a federal appeals court and a San Francisco district court having reached conflicting conclusions.
President Donald Trump told CNBC’s Squawk Box on Tuesday that a deal allowing Anthropic’s AI models to be used within the Department of Defense is “possible,” describing the company as “shaping up.”
“They came to the White House a few days ago, and we had some very good talks with them, and I think they’re shaping up,” Trump said.
“They’re very smart, and I think they can be of great use.” The comments mark a striking rhetorical reversal from a president who, in late February, posted on Truth Social ordering all federal agencies to “IMMEDIATELY CEASE all use of Anthropic’s technology” and declared that his administration would “not do business with them again.”
Trump’s remarks follow a White House meeting on Friday 18 April at which Anthropic CEO Dario Amodei met Chief of Staff Susie Wiles and Treasury Secretary Scott Bessent to discuss the company’s new Mythos model, a frontier AI system Anthropic has described as highly capable at cybersecurity tasks and has so far made available only to a small group of organisations.
The White House described the conversation as “productive and constructive.” Anthropic said Amodei had a “productive discussion” with administration officials about how the company and the US government can “work together on key shared priorities such as cybersecurity, America’s lead in the AI race, and AI safety.”
When reporters asked Trump about the meeting on a runway in Phoenix, he responded “Who?” and said he had “no idea” Amodei had been there.
The meeting took place against the backdrop of a dispute that has few precedents in the relationship between Washington and the technology industry.
In July 2025, Anthropic signed a $200 million contract with the Pentagon, becoming the first AI lab to have its models approved for use on the DOD’s classified networks.
But as negotiations over Claude’s deployment on the department’s GenAI.mil platform began in September, talks broke down. The Pentagon demanded that Anthropic grant unfettered access to its models for all lawful purposes.
Anthropic drew two firm lines: its AI would not be used in fully autonomous weapons systems that select targets without human intervention, and it would not be used for domestic mass surveillance of Americans.
Defense Secretary Pete Hegseth responded by designating Anthropic a “supply chain risk to national security” in late February 2026, a label previously reserved for companies associated with foreign adversaries.
The formal designation, confirmed to Anthropic’s leadership on 5 March, required defense contractors to certify they were not using Anthropic’s models in work with the military. Trump amplified the measure with his Truth Social directive.
The designation was, as Anthropic argued in subsequent litigation, unprecedented. In a stinging 43-page ruling granting Anthropic a preliminary injunction in late March, US District Judge Rita Lin noted that it appeared to be directed not at a genuine national security threat but at punishing the company for “bringing public scrutiny to the government’s contracting position.” That, she wrote, was “classic illegal First Amendment retaliation.”
The legal situation remains split. A federal appeals court in Washington DC denied Anthropic’s request to temporarily block the supply chain risk designation on 8 April. Judge Lin’s preliminary injunction in San Francisco, from a separate but related case, bars enforcement of Trump’s Truth Social ban on Claude across the rest of the government.
The practical effect is that Anthropic is excluded from Pentagon contracts but can continue working with other government agencies while both cases proceed. The DOD has continued to use Claude during the US-Iran war, which began before the blacklisting took effect.
What appears to have shifted the White House’s posture is Mythos. Parts of the intelligence community and CISA, the Cybersecurity and Infrastructure Security Agency, have been testing the model.
The White House Office of Management and Budget is setting up protocols to allow federal agencies to access a controlled version.
Treasury Secretary Bessent’s presence at Friday’s meeting was read by sources close to the negotiations as a signal that the economic and financial security arguments for Mythos access had reached the most senior levels of the administration.
As one administration source told Axios: “It would be grossly irresponsible for the US government to deprive itself of the technological leaps that the new model presents. It would be a gift to China.”
Whether any resumption of the Anthropic–Pentagon relationship is possible remains uncertain. Trump’s Tuesday comments refer to talks that have been promising but have not yet produced a deal.
The appeals court ruling on the supply chain risk designation still stands. Hegseth has not withdrawn his position. Anthropic, meanwhile, has engaged Ballard Partners, the lobbying firm where Wiles previously worked, for advocacy around Department of War procurement, a move that signals it understands the political dynamics as well as the legal ones.
The company’s annualised revenue has reached $30 billion and it is considering an IPO; the supply-chain risk designation damages enterprise credibility even where it does not block commercial deals.
Investors are aggressively courting AI researchers to build startups that can make AI more reliable and efficient.
Yu Su, an Ohio State professor leading an AI agent lab, said he initially resisted the pressure from VCs to commercialize his work. He finally took the leap last year and spun out his work into a startup when he saw that foundational model advances could make agents truly personalized.
NeoCognition, a startup Su describes as a research lab developing self-learning AI agents, has just emerged from stealth with $40 million in seed funding. The round was co-led by Cambium Capital and Walden Catalyst Ventures, with participation from Vista Equity Partners and angels, including Intel CEO Lip-Bu Tan and Databricks co-founder Ion Stoica.
“Today’s agents are generalists,” Su told TechCrunch. “Every time you ask them to do a task, you take a leap of faith.”
According to Su, the issue lies in a lack of consistency. Current agents, whether Claude Code, OpenClaw, or Perplexity’s computer tools, successfully complete tasks as intended only about 50% of the time, he said.
Since agents are still so unreliable, they are not ready to be trusted, independent workers, Su told TechCrunch. NeoCognition intends to change that by developing an agent system that can self-learn to become an expert in any domain, similar to how humans learn.
Su argues that while human intelligence is broad, its real power is our ability to specialize. When we enter a new environment or profession, we can rapidly master its unique rules, relationships, and consequences.
NeoCognition is building agents to mirror this exact approach.
“For humans, our continued learning process is essentially the process of building a world model for any profession, any environment,” Su said. “We believe for agents to become experts, they need to learn autonomously to build a model of any given micro world.”
Su views this capacity for rapid specialization as the critical missing link to getting AI to work reliably on its own.
While it is possible today to train agents for autonomous tasks, Su says, they must be custom-engineered for a specific vertical. NeoCognition’s pitch is that its agents are generalists capable of self-learning and specializing in any domain.
NeoCognition intends to sell its agent systems primarily to enterprises, including established SaaS companies, which can use them to build agent-workers or to enhance existing product offerings.
Su highlighted that an investment from Vista Equity Partners is especially valuable for this reason. As one of the largest private equity firms in the software space, Vista can provide NeoCognition with direct access to a vast portfolio of companies looking to modernize their products with AI.
NeoCognition currently has about 15 employees, the majority of whom hold PhDs.