A 7,000-word “doomsday” thought experiment from Citrini Research helped trigger an 800-point drop in the Dow, “painting a dark portrait of a future in which technological change inspires a race to the bottom in white-collar knowledge work,” reports the Wall Street Journal. From the report: Concerns of hyperscalers overspending are out. Worries of software-industry disruption don’t go far enough. The “global intelligence crisis” is about to hit. The new, broader question: What if AI is so bullish for the economy that it is actually bearish? “For the entirety of modern economic history, human intelligence has been the scarce input,” Citrini wrote in a post it described as a scenario dated June 2028, not a prediction. “We are now experiencing the unwind of that premium.”
Many of Monday’s moves roughly aligned with the situation outlined by Citrini, in which fast-advancing AI tools allow spending cuts across industries, sparking mass white-collar unemployment and in turn leading to financial contagion. Software firms DataDog, CrowdStrike and Zscaler each plunged more than 9%. International Business Machines’ 13% decline was its worst one-day performance since 2000. American Express, KKR and Blackstone — all name-checked by Citrini — tumbled. That anxiety, coupled with renewed uncertainty about trade policy from Washington, weighed down major indexes Monday. The Dow Jones Industrial Average led declines, falling 1.7%, or 822 points. The S&P 500 shed 1%, while the Nasdaq composite retreated 1.1%.
[…] Monday’s market swings extended a run of AI-linked volatility. A small research outfit that has garnered a huge Substack following for macro and thematic stock research, Citrini said in its new post that software firms, payment processors and other companies formed “one long daisy chain of correlated bets on white-collar productivity growth” that AI is poised to disrupt. […] Shares in DoorDash also veered 6.6% lower Monday after Citrini’s Substack note called the delivery app a “poster child” for how new tools would upend companies that monetize interpersonal friction. In the research firm’s scenario, AI agents would help both drivers and customers navigate food deliveries at much lower costs.
Peter O’Brien has received the 2025 Semi European Award, which recognises those who have had an impact on global semiconductor innovation.
Tyndall National Institute’s photonics expert Prof Peter O’Brien has been honoured by the global semiconductor industry for his work in the sector.
O’Brien is the head of research for photonics packaging and systems integration at the University College Cork-based deep-tech institute. He has received the 2025 Semi European Award, which recognises leaders whose work has had a significant impact on global semiconductor innovation.
Semi is a global industry association representing companies and research organisations across the semiconductor and electronics development and manufacturing supply chain.
O’Brien has been recognised for his contributions to photonics electronic packaging, his leadership in Europe’s semiconductor pilot lines, and his work in developing specialised training programmes for up-and-coming researchers in the field.
“It is a great honour to receive the Semi European Award for 2025,” said O’Brien. “Through this award, I would like to recognise my many collaborators around the world. Working together, we accelerate research and development, turning early ideas into impactful breakthroughs.”
Prof William Scanlon, the CEO of Tyndall, added: “Prof O’Brien’s leadership and vision have placed Tyndall at the forefront of advanced packaging globally, and his contributions are shaping Europe’s semiconductor future.”
Meanwhile, Eric Beyne, a senior fellow at the Belgium-based nanoelectronics and digital tech research and innovation hub IMEC, received the Special Service Award at the ceremony earlier this month for his contributions to high-density interconnection and packaging technologies, and helping advance next-gen semiconductor integration techniques.
“We are honoured to recognise Peter O’Brien and Eric Beyne for their outstanding contributions to advancing semiconductor innovation and strengthening Europe’s technology ecosystem,” said Semi Europe president Laith Altimime.
“Their leadership and vision have helped drive transformative progress across the industry while inspiring the next generation of engineers and researchers, reflecting the spirit of collaboration and innovation that continues to propel the semiconductor industry toward a more resilient, digital and sustainable future.”
Tyndall has made several major announcements this year. The Cork-based research institute recently announced a €100m expansion project.
Picture a VP of finance at a large retailer. She asks the company’s new AI analytics agent a simple question: “What was our revenue last quarter?” The answer comes back in seconds.
Confident.
Clean.
Wrong.
That exact scenario happens more often than many organizations would care to admit. AtScale, which helps organizations deploy governed analytics environments with consistent semantics, has found that increasing model parameterization alone cannot address the AI governance and context issues enterprises face.
When AI systems query inconsistent or ungoverned data, adding more model complexity doesn’t contain the problem; it compounds it. Organizations across industries have moved quickly on agentic AI, deploying systems that analyze data, generate insights, and trigger automated workflows. AI models have scaled up accordingly: larger parameter counts, more computing power, and additional features. The underlying assumption has been that if the model gets large enough, its results will eventually be reliable.
However, there are indications that this assumption may not hold up. Recent TDWI research found that nearly half of respondents characterized their AI governance initiatives as either immature or very immature. This may have more to do with data lineage and the business definitions on which these models are based than with the models’ capabilities.
Why bigger models don’t solve governance
The AI industry tends to operate on an unexamined assumption about what drives better performance: that as models become more advanced, they will somehow self-correct their own errors. In enterprise analytics, that assumption can fall apart quickly.
While scale may improve the breadth of reasoning in a model, it doesn’t automatically enforce which definition of gross margin the business has agreed to use. It doesn’t resolve metric inconsistencies that have lived in separate dashboards for years. And it also doesn’t produce traceable lineage on its own.
Governance problems don’t resolve at scale. Business rules buried in individual tools, inconsistent definitions across teams, and outputs with no audit trail are structural issues, and a larger model doesn’t fix structure. It just produces unreliable answers more fluently.
At AtScale, there’s a consistent theme among our clients: When inconsistent data definitions followed organizations into their AI layer, the problems didn’t stop there. They propagated forward, typically at greater speed and with less transparency than the previous layer had offered.
Performance and responsibility are separate jobs. A model reasons. A governance layer defines what the model reasons over, constrains how it applies business logic, and ensures outputs can be traced back to a source of record. One cannot substitute for the other.
The real risk: Unconstrained agents in enterprise environments
The problem with AI agents is seldom the model itself. It’s what the model is working with, and whether anyone can see what it did.
Without common context, AI agents can read data differently on different systems. In large enterprises, even small differences in definitions can lead to different results. Structural risks typically stem from four main causes:
Ambiguous definitions: Agents pull from sources where the same metric means different things to different teams.
Conflicting metrics: Two agents drawing on different departments’ numbers give two answers, with no clear way to tell which is right.
Opaque reasoning: Outputs arrive without a clear lineage showing how a decision was made.
Audit gaps: When outputs can’t be traced back to a governed source of record, there’s no reliable way to catch errors, assign accountability, or course-correct.
These are not signs that AI is not working. They show that the infrastructure around AI hasn’t kept up.
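The “two agents, two answers” failure described above is easy to reproduce. As a minimal illustration (all data and both definitions are invented for this sketch, not taken from any AtScale deployment):

```python
# Two teams' private definitions of "revenue" applied to the same orders.
# All data and definitions here are invented for illustration.
orders = [
    {"amount": 100.0, "refunded": 0.0},
    {"amount": 250.0, "refunded": 50.0},
    {"amount": 75.0,  "refunded": 75.0},
]

def revenue_sales(rows):
    """Sales team's agent: gross bookings, refunds ignored."""
    return sum(r["amount"] for r in rows)

def revenue_finance(rows):
    """Finance team's agent: revenue net of refunds."""
    return sum(r["amount"] - r["refunded"] for r in rows)

# Two "agents" answer the same question from the same data...
print(revenue_sales(orders))    # 425.0
print(revenue_finance(orders))  # 300.0
# ...and disagree, with nothing in either output revealing why.
```

Neither answer is wrong by its own team’s definition; the divergence only becomes visible when the two outputs meet.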
What guardrails actually mean in AI analytics
Guardrails are often viewed as a limitation. However, in many cases, guardrails are the very conditions that permit AI agents to operate with greater confidence.
Guardrails can help align AI-generated outputs with established business logic. They also create a structure in which autonomous agents can operate; this way, as autonomy increases, so does reliability. In analytics, guardrails typically exist in several specific formats:
Shared data definitions: A single definition of terms such as revenue, churn, or margin that are shared across all systems.
Business logic constraints: Rules governing how calculations are to be performed, regardless of the tools or agents performing those calculations.
Lineage visibility: The capability to identify where any output originated from.
Access controls: Defined permissions determining what data an agent can query.
Standardization of metrics: Consistent definitions applicable across departments and platforms.
The intention isn’t to impede AI’s performance. It’s to offer AI a base upon which it can stand.
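The guardrails listed above can be sketched in a few lines. The following toy registry is a conceptual illustration only; the class and method names are hypothetical and do not reflect AtScale’s actual API:

```python
# Minimal sketch of a governed metric registry: one shared definition,
# lineage on every answer, and an access check. Names are hypothetical
# and do not reflect any vendor's actual API.
class MetricRegistry:
    def __init__(self):
        self._metrics = {}

    def define(self, name, fn, source, allowed_roles):
        if name in self._metrics:
            # Shared data definitions: exactly one definition per metric.
            raise ValueError(f"metric {name!r} already defined")
        self._metrics[name] = {"fn": fn, "source": source, "roles": set(allowed_roles)}

    def query(self, name, rows, role):
        m = self._metrics[name]
        if role not in m["roles"]:            # access controls
            raise PermissionError(f"{role} may not query {name}")
        value = m["fn"](rows)                 # business logic constraint
        return {"metric": name, "value": value, "source": m["source"]}  # lineage

registry = MetricRegistry()
registry.define(
    "revenue",
    lambda rows: sum(r["amount"] - r["refunded"] for r in rows),
    source="warehouse.orders",
    allowed_roles={"finance_agent", "sales_agent"},
)

rows = [{"amount": 100.0, "refunded": 25.0}, {"amount": 50.0, "refunded": 0.0}]
# Both agents now get the same governed answer, with lineage attached.
a = registry.query("revenue", rows, role="sales_agent")
b = registry.query("revenue", rows, role="finance_agent")
assert a["value"] == b["value"] == 125.0
```

Any agent outside the allowed roles gets a `PermissionError` rather than a plausible-looking number, which is the point: the constraint is enforced before inference, not reconciled after it.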
The role of the semantic layer as a constraint framework
A semantic layer sits between data and the applications and AI agents that use it, defining business concepts, implementing logical processes, and providing a common framework of terms for all applications and AI agents to draw upon.
A semantic layer does not manipulate or duplicate data; it defines what the data represents. By asking questions of a governed semantic layer rather than the base table, AI agents can generate output based on business-defined logic, rather than on inference. The distinction of this output becomes particularly important when multiple AI agents across multiple systems must produce similar outputs.
From AtScale’s perspective, the semantic layer serves as a context boundary that can help ensure AI agents interpret data according to shared business definitions. It functions less like a guardrail and more like a common language that ensures all systems operate with a shared understanding.
Governance is an architectural question, not a model question
Enterprise organizations realize that AI governance is less about building the largest model and more about making an environment where the chosen model can work well. A well-designed and governed architecture (with shared definitions for concepts, traceable logic, and a shared context across all systems) will likely deliver better, more reliable results than a larger model running in an uncontrolled data environment.
Scaling models without improving semantic clarity tends to add complexity, not reduce it. As each additional tool, system, or workflow is added to an uncontrolled environment, the opportunities for divergence increase.
In this sense, responsible AI is an infrastructure challenge. Organizations with successful AI deployments treat the meaning of their data as a design decision, made before the model is even chosen.
Economic and operational implications
Governance gaps do not stay abstract for long. They tend to show up in the budget.
Ambiguity in data meaning increases operational friction: agents that produce inconsistent outputs require human review, reconciliation cycles, and rework that compounds across teams and tools. When lineage is not clear, audits cost more. Retrofitting controls after deployment typically costs more than building the right architecture from the start.
In complex enterprise settings, costs can show up in predictable ways: redundant validation when outputs don’t match across systems, excess compute triggered by unclear queries, and slower analysis as teams pause to figure out which answer is actually reliable. Clear semantic constraints can mean fewer validation cycles, and that operational value is becoming easier to measure.
The path forward: Constrained autonomy
AI agents aren’t a future consideration; they’re already in use. What’s still catching up is the infrastructure around them. Agents without clear context and constraints tend to operate beyond what the organization can actually govern. That gap doesn’t close on its own.
The differentiator in enterprise AI, AtScale contends, won’t be model scale; it will be the clarity of the environment models operate in. As agents become more common in business workflows, how well the semantic layer is defined may matter more than how large the model is.
This shift toward governed context and constrained autonomy is explored in more detail in AtScale’s 2026 State of the Semantic Layer report, which examines how open standards, interoperability, and semantic governance are shaping the next phase of enterprise intelligence.
DoorDash has launched a new option for its gig economy workers to earn some extra cash. The delivery service introduced Tasks, which it describes as “short activities Dashers can complete between deliveries or in their own time.” It gives taking pictures of restaurant dishes or recording video of unscripted conversations in languages other than English as examples. These materials will be used to train artificial intelligence and robotics models.
A representative from DoorDash told Bloomberg News that it will use Tasks content for evaluating its in-house AI models as well as those made by its partner companies in retail, insurance, hospitality and tech. DoorDash is piloting a standalone app for Tasks where Dashers will submit their content. The blog post notes that pay will be displayed upfront, and compensation will vary based on the complexity of the activity.
This idea isn’t new. We’ve seen other startups in AI and robotics offering payment for content filmed by regular people. Considering how many lawsuits are underway against AI companies that have already benefited from unauthorized use of copyrighted materials, at least this approach lets people be directly compensated for training content.
Searching for a reason to buy collectors’ editions of Mattel products? With the toy drops unveiled during Mattel Creations Revealed event Thursday, you may be able to make a case for yourself. Homing in on fandoms ranging from anime to Barbie to Monster High, the company shared a fresh lineup of releases. Masters of the Universe fans: lock in.
You can stack Mattel Bricks to create Eternia’s legendary Castle Grayskull with a new set (pictured above) that’s the first of its kind for the toy giant. Get into your display — or play — with Masters of the Universe Nano figures depicting characters like He-Man, Evil-Lyn, Skeletor, Battle Cat and Teela. Available in the Brick Shop starting April 25, the set will retail for $65. For fans who want a bit of extra nostalgia, you can also buy the light-up Laser Power He-Man figure for $30 to add to your display.
Mattel has been on somewhat of a roll with its Monster High Skullector series, with doll collabs that feature iconic movie and TV favorites like Coraline, Wednesday and Morticia, Pennywise and Alien. Skeletor has been added to the collection and is quite the baddie. Just check out the outfit, high-heeled boots and signature smirk. Her staff speaks as well, but I won’t spoil it by telling you the catchphrase. The price? $65.
If the He-Man franchise isn’t for you, maybe you’ll be into the drop for Naruto. Burn up a Hot Wheels track with the Nissan Silvia S15 model that carries Naruto emblazoned on both sides. Making its debut as part of a partnership between Mattel and anime franchises, the car is priced at $25.
We’ve seen Barbie go couture in the past, and Mattel has just revealed its atelier design that takes handcrafted fashion up a notch. The new poseable Grand Couture Silkstone Barbie doll retails for $342, stands at 14.5 inches and wears a ruffled coat (with a train) with a shimmery embroidered dress underneath. Drop earrings and pink boots complete the high-end look.
Not to be sidelined at today’s event, Ken was spotlighted with the release of his own Uno card deck that features a variety of, well, Kens. Celebrate his 65th anniversary with the $13 deck, and you can decide if you’ve had Ken-ough of draw fours and skips.
One of the other pop culture moments — and figures — commemorated with a toy reveal today came in the form of WWE star Stone Cold Steve Austin. He’s been immortalized as a Mattel action figure that captures when he coined his 3:16 catchphrase 30 years ago, with the Elite Collection addition coming equipped with a crown, table and throne. Get it for your WWE collection for $30.
Navia Benefit Solutions, Inc. (Navia) is informing nearly 2.7 million individuals of a data breach that exposed their sensitive information to attackers.
An investigation into the incident revealed that the hackers had access to the organization’s systems between December 22, 2025, and January 15, 2026. However, the company discovered the suspicious activity on January 23.
Navia says that it responded immediately and launched an inquiry to determine the potential impact of the incident.
“The investigation determined that an unauthorized actor accessed and acquired certain information between December 22, 2025, and January 15, 2026,” the company says in the notification to impacted individuals.
Navia is a consumer-focused administrator of benefits that provides services to more than 10,000 employers across the U.S.
The company provides software and customer services for the administration of Flexible Spending Accounts (FSA), Health Savings Accounts (HSA), Health Reimbursement Arrangements (HRA), Commuter Benefits and COBRA Services.
It also handles lifestyle accounts, education benefits, compliance and risk services, and retirement-related offerings.
According to the company, the investigation into the breach revealed that the hacker accessed and may have exfiltrated the following types of data:
Full name
Date of birth
Social Security Number (SSN)
Phone number
Email address
Participation in HRA (Health Reimbursement Arrangements)
FSA (Flexible Spending Accounts) information
Consolidated Omnibus Budget Reconciliation Act (COBRA) enrollment information
Navia underlines that the data breach did not expose details about claims or financial information. Nevertheless, the exposed data is enough for threat actors to deploy phishing and social engineering attacks aimed at affected individuals.
The company states that it has reviewed its security posture and data retention policies to identify potential weaknesses that can be improved, and has notified federal law enforcement about the incident.
Customers whose information was exposed will be covered by a free 12-month identity protection and credit monitoring service from Kroll. Letter recipients are also encouraged to consider placing a fraud alert and security freeze on their credit files.
At the time of writing, no ransomware group has claimed the Navia data breach.
In many of our portable, mobile, and desktop computers, we’re used to solid-state storage. It’s fast and low power, and current supply-chain price hikes notwithstanding, affordable in the grand scheme of things. It wasn’t always this way, though: a couple of decades ago a large flash drive was prohibitively expensive. Hard drive manufacturers did their best to fill the gap with tiny spinning-rust storage devices, which led to the smallest of them all: the Toshiba MK4001MTD. It crammed 4 GB onto a 0.85″ platter and could be found in a few devices such as high-end Nokia phones.
Breaking out the Nokia’s hard drive interface.
The drive’s connector is a pattern of pads on a flexible PCB, one he couldn’t help noticing had a striking resemblance to an obscure SD card variant. Hooking it up to an SD reader didn’t work unfortunately, so a battered Nokia was called into service. It was found to be using something electrically similar to the SD cards, but with the ATA protocol familiar from the world of full-size hard drives.
The interface uses the PIO capability of the RP2040, and the board makes a tidy peripheral in itself. We’re guessing not many of you have one of these drives, but perhaps if you do, those early 2000s phone pics aren’t lost for good after all.
The build is based around the ESP32-2432S028—also known as the CYD, or Cheap Yellow Display, for the integrated 320 x 240 LCD screen. [Jordan] took this all-in-one device and wrapped it in an attractive 3D-printed housing in the shape of an old-school CRT monitor, just… teenier. A special lever mechanism was built into the enclosure to allow the front panel controls to activate the tactile buttons on the CYD board. The ESP32 is programmed to check Open-Meteo feeds for forecasts and current weather data, while also querying a webcam feed and satellite and radar JPEGs from available weather services. These are then displayed on screen in a way that largely resembles the Windows 95 UI design language, with pages for current conditions, future forecasts, wind speeds, and the like.
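The firmware itself runs on the ESP32, but the Open-Meteo request it makes is easy to sketch on a desktop. The following Python snippet is an illustration of that kind of query, not the project’s actual code; the coordinates are placeholders:

```python
# Desktop sketch of the kind of Open-Meteo current-weather query the CYD
# build makes. Coordinates are placeholders; the ESP32 firmware is not shown.
import json
import urllib.parse
import urllib.request

BASE = "https://api.open-meteo.com/v1/forecast"

def build_url(lat, lon):
    """Assemble a current-weather request URL for Open-Meteo's free API."""
    params = {"latitude": lat, "longitude": lon, "current_weather": "true"}
    return BASE + "?" + urllib.parse.urlencode(params)

def fetch_current_weather(lat, lon):
    """Fetch and decode the current-weather block (requires network access)."""
    with urllib.request.urlopen(build_url(lat, lon), timeout=10) as resp:
        data = json.load(resp)
    # The response includes temperature, windspeed, and a weather code
    # the display firmware can map to an icon.
    return data["current_weather"]
```

Calling `fetch_current_weather(52.52, 13.41)` would return Berlin’s current conditions as a small JSON object, which is all a 320 x 240 dashboard needs per refresh.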
Crypto-powered gift card store Bitrefill says that the attack it suffered at the beginning of the month was likely perpetrated by North Korean hackers of the Bluenoroff group.
During the investigation, the platform observed indicators similar to those seen in previous attacks attributed to the North Korean threat actor, including tactics, malware, and IP and email addresses.
“Based on indicators observed during the investigation – including the modus operandi, the malware used, on-chain tracing and reused IP + email addresses (!) – we find many similarities between this attack and past cyberattacks by the DPRK Lazarus / Bluenoroff group against other companies in the crypto industries,” reads Bitrefill’s statement.
Bitrefill is a mid-sized e-commerce platform that enables people to pay in cryptocurrency for gift cards at stores in 150 countries. The gift cards can be used to pay for anything from clothing, food and groceries, health and beauty products to bills, services, gas, transportation, and electronics.
The platform supports more than 600 mobile operators and thousands of brands worldwide.
On March 1st, Bitrefill announced technical issues affecting access to its website and app. A day later, the company disclosed that it had identified a security issue and took all services offline.
Although user balances were not affected, services are still being gradually restored.
The breach was discovered after Bitrefill noticed suspicious supplier purchasing patterns, exploitation of gift card stock and supply lines, and draining of some “hot” wallets.
The investigation the firm launched to determine the cause revealed that the attack originated on a compromised employee’s laptop.
The attackers stole legacy credentials and used them to access a snapshot with production secrets, later escalating access to the larger Bitrefill infrastructure, including parts of the database and some cryptocurrency wallets.
About 18,500 purchase records containing customer email addresses, IP addresses, and cryptocurrency payment addresses were exposed in the breach. For 1,000 purchases, customer names were also exposed.
Although this information is stored in encrypted form, Bitrefill notes that the attackers may have obtained the decryption keys.
Bitrefill says this was the most serious cyberattack it has suffered in its ten years of existence, but it survived with minimal losses, which will be covered from its capital.
Ultimately, Bitrefill believes that attackers were after cryptocurrency and gift card inventory, not customer information.
BlueNoroff, also known as APT38, is a cluster of the Lazarus group that has been active since at least 2014. It typically targets financial organizations, with a more recent focus on the cryptocurrency industry, the objective being crypto theft.
Meanwhile, it is expanding security reviews and pen-testing, tightening access controls, improving logging and monitoring, and refining automated shutdown mechanisms.
At this time, most of its services have returned to normal operational status, and customers aren’t required to take any action beyond treating incoming communications with extra caution.
GMKtec EVO-T2 mini PC reaches 180 TOPS using combined CPU, GPU, and NPU acceleration
Its PCIe 5.0 storage delivers data speeds exceeding 10GB per second
Local AI models run without relying on external cloud infrastructure
At a recent launch event, GMKtec introduced the GMKtec EVO-T2, a compact desktop system built for local AI computing.
According to the company, the device integrates third-generation Intel Core Ultra processors and claims up to 180 TOPS of compute capability.
It combines CPU, GPU, and NPU resources, and enables local execution of large language models up to 70B parameters without dependence on external cloud infrastructure.
Compute architecture and AI workloads
The EVO-T2 is based on Intel’s Panther Lake architecture and is manufactured using the 18A process node, incorporating RibbonFET transistors and backside power delivery.
These design elements are associated with improved efficiency and transistor density, although most performance data referenced remains tied to internal benchmarks.
The company claims that complex workloads such as code generation and document processing can be executed rapidly with this device.
For some tasks, GMKtec says the EVO-T2 completes them within seconds under controlled conditions.
Graphics capabilities are handled by the integrated Intel Arc B390 GPU, which includes twelve Xe cores and support for DirectX 12 Ultimate, real-time ray tracing, and AI-assisted upscaling.
This configuration allows the system to extend beyond AI inference into areas such as rendering and visual content workflows.
Despite its small footprint, the device includes dual M.2 storage slots supporting PCIe 5.0 and PCIe 4.0, with total capacity reaching up to 16TB.
PCIe 5.0 SSDs are theoretically capable of sequential speeds exceeding 10GB/s, with some surpassing 15GB/s, while PCIe 4.0 drives typically reach around 7GB/s under optimal conditions.
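Those headline figures line up with the raw link math. As a quick back-of-the-envelope check (theoretical maxima for a four-lane NVMe link, before protocol overhead):

```python
# Back-of-the-envelope theoretical bandwidth for an x4 NVMe link.
# PCIe 4.0 and 5.0 use 128b/130b line encoding; real drives land
# somewhat below these maxima due to protocol overhead.
def x4_bandwidth_gbps(gt_per_s):
    """Theoretical GB/s for four lanes at the given per-lane rate (GT/s)."""
    per_lane = gt_per_s * (128 / 130) / 8   # GT/s -> GB/s after encoding
    return 4 * per_lane

gen4 = x4_bandwidth_gbps(16.0)  # PCIe 4.0: 16 GT/s per lane -> ~7.9 GB/s
gen5 = x4_bandwidth_gbps(32.0)  # PCIe 5.0: 32 GT/s per lane -> ~15.8 GB/s
print(f"Gen4 x4 ~ {gen4:.1f} GB/s, Gen5 x4 ~ {gen5:.1f} GB/s")
```

The roughly 15.8GB/s ceiling for Gen5 x4 explains why the fastest drives quoted “exceed 15GB/s” but no x4 drive advertises meaningfully more.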
For connectivity, it includes USB4 with 40Gbps bandwidth and OCuLink support for external GPUs.
In addition, the system supports dual Ethernet configurations, offering both 10GbE and 2.5GbE networking.
To address memory constraints, Phison collaborated with GMKtec to integrate aiDAPTIV+ AI SSD technology.
This system dynamically extends available memory by distributing workloads between DRAM and storage, allowing large models to be segmented during execution.
Active portions are processed on the GPU, while less active data remains stored across memory and SSD layers.
This “pseudo-memory” mechanism is described as reducing bottlenecks when processing large models.
However, its long-term performance implications under sustained workloads have not been independently verified.
GMKtec states that it “effectively breaks through traditional DRAM limitations,” a claim that may require independent validation.
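aiDAPTIV+’s internals are not public, so as a conceptual sketch only: the tiering idea described above resembles a small cache of “hot” model layers in fast memory, backed by a larger slow store, with least-recently-used layers evicted when the fast tier fills.

```python
# Conceptual sketch of DRAM/SSD layer tiering -- NOT aiDAPTIV+'s actual
# design. Hot layers live in a small LRU set in fast memory; everything
# else stays on the slow tier until touched.
from collections import OrderedDict

class TieredLayerStore:
    def __init__(self, layers_on_ssd, dram_budget):
        self.ssd = dict(layers_on_ssd)        # slow tier: all layers
        self.dram = OrderedDict()             # fast tier: recently used layers
        self.budget = dram_budget             # max layers held in fast memory

    def load(self, layer_id):
        if layer_id in self.dram:
            self.dram.move_to_end(layer_id)   # hot hit: refresh recency
        else:
            self.dram[layer_id] = self.ssd[layer_id]  # stage in from slow tier
            if len(self.dram) > self.budget:
                self.dram.popitem(last=False) # evict least recently used layer
        return self.dram[layer_id]

store = TieredLayerStore({i: f"weights_{i}" for i in range(4)}, dram_budget=2)
for i in (0, 1, 2):                           # touching layer 2 evicts layer 0
    store.load(i)
assert list(store.dram) == [1, 2]
```

The trade-off is the same one the article flags: sequential layer access hides the staging latency well, but access patterns that thrash the fast tier would expose the SSD’s much higher latency.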
The system ships with a pre-configured AI environment, allowing immediate access to AI tools and models without manual setup.
OpenClaw enables the EVO-T2 to run autonomous AI agents locally, performing tasks from data processing to content generation without relying on cloud services.
Every few years, a piece of open-source software arrives that rewires how the industry thinks about computing. Linux did it for servers. Docker did it for deployment. OpenClaw — the autonomous AI agent platform that went from niche curiosity to the fastest-growing open-source project in history in a matter of weeks — may be doing it for software itself.
Nvidia CEO and co-founder Jensen Huang made his position plain at GTC 2026 this week: “OpenClaw is the operating system for personal AI. This is the moment the industry has been waiting for — the beginning of a new renaissance in software.” And Nvidia wants to be the company that makes it enterprise-ready.
At its annual GTC 2026 conference in San Jose this week, Nvidia unveiled NemoClaw, a software stack that integrates directly with OpenClaw and installs in a single command. Along with it came Nvidia OpenShell, an open-source security runtime designed to give autonomous AI agents — or “claws”, as the industry is increasingly calling them — the guardrails they need to operate inside real enterprise environments. Alongside both, the company announced an expanded Nvidia Agent Toolkit, a full-stack platform for building and running production-grade agentic workflows.
The message from Jensen Huang was unambiguous. “Claude Code and OpenClaw have sparked the agent inflection point — extending AI beyond generation and reasoning into action,” the Nvidia CEO said ahead of the conference. “Employees will be supercharged by teams of frontier, specialized and custom-built agents they deploy and manage.” Watch my video overview of it below and read on for more:
Why ‘claws’ — and why it matters that Nvidia is using the word
The terminology shift happening inside enterprise AI circles is subtle but significant. Internally, teams building with OpenClaw and similar platforms have taken to calling individual autonomous agents claws — a nod to the platform name, but also a useful shorthand for a new class of software that differs fundamentally from the chatbots and copilots of the last two years.
As Kari Briski, Nvidia’s VP of generative AI software, put it during a Sunday briefing: “Claws are autonomous agents that can plan, act, and execute tasks on their own — they’ve gone from just thinking and executing on tasks to achieving entire missions.”
That framing matters for IT decision-makers. Claws are not just assistants. They are persistent, tool-using programs that can write code, browse the web, manipulate files, call APIs, and chain actions together over hours or days without human input. The productivity upside is substantial. So is the attack surface. Which is precisely the problem Nvidia is positioning NemoClaw to solve.
The enterprise demand is not hypothetical. Harrison Chase, founder of LangChain — whose open-source agent frameworks have been downloaded more than a billion times — put it bluntly in a recent episode of VentureBeat's Beyond the Pilot podcast: "I guarantee that every enterprise developer out there wants to put a safe version of OpenClaw onto their computer or expose it to their users." The bottleneck, he made clear, has never been interest. It has been the absence of a credible security and governance layer underneath it. NemoClaw is Nvidia's answer to that gap — and notably, LangChain is one of the launch partners for the Agent Toolkit and OpenShell integration.
What NemoClaw actually does — and what it doesn’t replace
NemoClaw is not a competitor to OpenClaw (or the now many alternatives). It is best understood as an enterprise wrapper around it — a distribution that ships with the components a security-conscious organization actually needs before letting an autonomous agent near production systems.
The stack has two core components. The first is Nvidia Nemotron, the company's family of open models, which can run locally on dedicated hardware rather than routing queries through external APIs. Nemotron-3-Super scored highest among all open models on PinchBench, a benchmark that tests the types of tasks and tool calls OpenClaw relies on.
The second is OpenShell, the new open-source security runtime that runs each claw inside an isolated sandbox — effectively a Docker container with configurable policy controls written in YAML. Administrators can define precisely which files an agent can access, which network connections it can make, and which cloud services it can call. Everything outside those bounds is blocked.
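Nvidia has not published OpenShell's policy schema, but the controls described above (file, network, and cloud-service scoping, with everything else denied) suggest a shape roughly like the following. Every key and value here is an illustrative assumption, not the actual OpenShell format:

```yaml
# Hypothetical OpenShell policy sketch. Field names are illustrative
# assumptions based on Nvidia's description, not the real schema.
claw: invoice-extractor
sandbox:
  filesystem:
    allow:
      - /data/invoices/**        # read-write working set
    read_only:
      - /etc/claw/config.yaml
  network:
    allow:
      - host: api.internal.example.com
        ports: [443]
  cloud_services:
    allow:
      - service: object-storage
        buckets: [invoices-prod]
default: deny                     # anything not listed is blocked
```

The deny-by-default final rule is the load-bearing part: an agent that gains a new capability mid-run still cannot reach anything an administrator did not explicitly grant.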
Nvidia describes OpenShell as providing the missing infrastructure layer beneath claws — giving them the access they need to be productive while enforcing policy-based security, network, and privacy guardrails.
For organizations that have been watching OpenClaw’s rise with a mixture of excitement and dread, this is a meaningful development. OpenClaw’s early iterations were, by general consensus, a security liability — powerful and fast-moving, but essentially unconstrained. NemoClaw is the first attempt by a major hardware vendor to make that power manageable at enterprise scale.
The hardware angle: always-on agents need dedicated compute
One aspect of NemoClaw that deserves more attention than it has received is the hardware strategy underneath it. Claws, by design, are always-on — they do not wait for a human to open a browser tab. They run continuously, monitoring inboxes, executing tasks, building tools, and completing multi-step workflows around the clock.
That requires dedicated compute that does not compete with the rest of the organization’s workloads. Nvidia has a clear interest in pointing enterprises toward its own hardware for this purpose.
NemoClaw is designed to run on Nvidia GeForce RTX PCs and laptops, RTX PRO workstations, and the company’s DGX Spark and DGX Station AI supercomputers. The hybrid architecture allows agents to use locally-running Nemotron models for sensitive workloads, with a privacy router directing queries to frontier cloud models when higher capability is needed — without exposing private data to those external endpoints.
It is an elegant solution to a real problem: many enterprises are not yet ready to send customer data, internal documents, or proprietary code to cloud AI providers, but they still need model capability that exceeds what runs locally. NemoClaw’s privacy router architecture threads that needle, at least in principle.
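Nvidia has not detailed how the privacy router decides where a query goes, but the pattern it describes reduces to a classification step in front of two endpoints. A minimal sketch of that idea, with entirely hypothetical names and a deliberately crude keyword-based sensitivity check standing in for whatever classifier Nvidia actually uses:

```python
# Sketch of a privacy-routing pattern like the one Nvidia describes.
# All names are illustrative; this is not NemoClaw's actual API.
from dataclasses import dataclass

# Stand-in for a real sensitivity classifier.
SENSITIVE_MARKERS = ("customer", "ssn", "internal", "proprietary")

@dataclass
class Route:
    target: str   # "local" (Nemotron on-box) or "cloud" (frontier model)
    reason: str

def route(prompt: str, needs_frontier: bool) -> Route:
    """Keep sensitive prompts local; escalate only non-sensitive ones."""
    sensitive = any(m in prompt.lower() for m in SENSITIVE_MARKERS)
    if sensitive:
        return Route("local", "prompt contains sensitive markers")
    if needs_frontier:
        return Route("cloud", "non-sensitive and needs frontier capability")
    return Route("local", "local model is sufficient")
```

The key property is that the sensitivity check runs before any network call, so escalation to a cloud model is impossible for data the policy flags.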
What claws actually look like in the enterprise
Before evaluating the platform, it helps to understand what a claw doing real work looks like in practice. Two partner integrations announced alongside NemoClaw offer the clearest window into where this is heading.
Box is perhaps the most illustrative case for organizations that manage large volumes of unstructured enterprise content.
Box is integrating Nvidia Agent Toolkit to enable claws that use the Box file system as their primary working environment, with pre-built skills for Invoice Extraction, Contract Lifecycle Management, RFP sourcing, and GTM workflows.
The architecture supports hierarchical agent management: a parent claw — such as a Client Onboarding Agent — can spin up specialized sub-agents to handle discrete tasks, all governed by the same OpenShell Policy Engine.
Critically, an agent's access to files in Box follows the exact same permissions model that governs human employees — enforced through OpenShell's gateway layer before any data is exchanged. Every action is logged and attributable; no shadow copies accumulate in agent memory. As Box puts it in its announcement blog, "organizations need to know which agent touched which file, when, and why — and they need the ability to revoke access instantly if something goes wrong."
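The permission-mirroring pattern Box describes is simple to state in code: the agent never carries its own entitlements, every access is resolved against the human principal's ACL, and every attempt (allowed or not) lands in an audit trail. A toy sketch under those assumptions; none of these names are Box's or Nvidia's actual APIs:

```python
# Toy sketch of permission mirroring with an audit trail, as described
# in the Box integration. All names are illustrative assumptions.
from datetime import datetime, timezone

# user -> set of file paths that user may read
ACL = {
    "alice": {"/box/contracts/acme.pdf"},
}
AUDIT_LOG: list[dict] = []

def agent_read(agent_id: str, on_behalf_of: str, path: str) -> bool:
    """Resolve access via the human principal's ACL and log the attempt."""
    allowed = path in ACL.get(on_behalf_of, set())
    AUDIT_LOG.append({
        "agent": agent_id,
        "user": on_behalf_of,
        "path": path,
        "allowed": allowed,
        "at": datetime.now(timezone.utc).isoformat(),
    })
    return allowed
```

Revoking the user's access instantly revokes the agent's, which is exactly the property Box's quote is asking for.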
Cisco’s integration offers perhaps the most visceral illustration of what OpenShell guardrails enable in practice. The Cisco security team has published a scenario in which a zero-day vulnerability advisory drops on a Friday evening.
Rather than triggering a weekend-long manual scramble — pulling asset lists, pinging on-call engineers, mapping blast radius — a claw running inside OpenShell autonomously queries the configuration database, maps impacted devices against the network topology, generates a prioritized remediation plan, and produces an audit-grade trace of every decision it made.
Cisco AI Defense verifies every tool call against approved policy in real time. The entire response completes in roughly an hour, with a complete record that satisfies compliance requirements.
“We are not trusting the model to do the right thing,” the Cisco team noted in their technical writeup. “We are constraining it so that the right thing is the only thing it can do.”
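Mechanically, "constraining it so the right thing is the only thing it can do" means interposing a verifier between the model's intended tool call and its execution, as Cisco AI Defense is described as doing. A minimal sketch of that interposition pattern; the tool names and API are hypothetical, not Cisco's:

```python
# Minimal sketch of real-time tool-call verification in the spirit of the
# Cisco AI Defense description. Tool names and API are illustrative only.
from typing import Callable

# Allowlist: tool -> set of approved actions.
APPROVED = {
    "query_config_db": {"read"},
    "map_topology": {"read"},
    "draft_remediation_plan": {"write_report"},
}

class PolicyViolation(Exception):
    pass

def verify(tool: str, action: str) -> None:
    if action not in APPROVED.get(tool, set()):
        raise PolicyViolation(f"{tool}:{action} is not approved")

def call_tool(tool: str, action: str, run: Callable[[], object]) -> object:
    verify(tool, action)   # blocks before any side effect occurs
    return run()
```

Because verification happens before the call executes, a misbehaving or prompt-injected model can propose anything it likes, but only allowlisted actions ever reach a real system.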
An ecosystem play: the partners behind the stack
Nvidia is not building this alone. The Agent Toolkit and OpenShell announcements came with a significant roster of enterprise partners — Box, Cisco, Atlassian, Salesforce, SAP, Adobe, CrowdStrike, Cohesity, IQVIA, ServiceNow, and more than a dozen others — whose integration depth signals how seriously the broader software industry is treating the agentic shift.
On the infrastructure side, OpenShell is available today on build.nvidia.com, supported by cloud inference providers including CoreWeave, Together AI, Fireworks, and DigitalOcean, and deployable on-premises on servers from Cisco, Dell, HPE, Lenovo, and Supermicro. Agents built within OpenShell can also continuously acquire new skills using coding agents including Claude Code, Codex, and Cursor — with every newly acquired capability subject to the same policy controls as the original deployment.
Separately, Nvidia announced the Nemotron Coalition — a collaborative initiative bringing together Mistral AI, Perplexity, Cursor, and LangChain to co-develop open frontier models. The coalition’s first project is a base model co-developed with Mistral that will underpin the upcoming Nemotron 4 family, aimed specifically at agentic use cases.
What enterprise leaders should be watching
The NemoClaw announcement marks a turning point in how enterprise AI is likely to be discussed in boardrooms and procurement meetings over the next twelve months. The question is no longer whether organizations will deploy autonomous agents. The industry has clearly moved past that debate. The question is now how — with what controls, on what hardware, using which models, and with what audit trail.
Nvidia’s answer is a vertically integrated stack that spans silicon, runtime, model, and security policy. For IT leaders evaluating their agentic roadmap, NemoClaw represents a significant attempt to provide all four layers from a single vendor, with meaningful third-party security integrations already in place.
The risks are not trivial. OpenShell’s YAML-based policy model will require operational maturity that most organizations are still building. Claws that can self-evolve and acquire new skills — as Nvidia’s architecture explicitly enables — raise governance questions that no sandbox can fully resolve. And the concentration of agentic infrastructure in a single vendor’s stack carries familiar platform risks.
That said, the direction is clear. Claws are coming to the enterprise. Nvidia just made its bet on being the platform they run on — and the guardrails that keep them in bounds.