
Tech

The authorization problem that could break enterprise AI


When an AI agent needs to log into your CRM, pull records from your database, and send an email on your behalf, whose identity is it using? And what happens when no one knows the answer? Alex Stamos, chief product officer at Corridor, and Nancy Wang, CTO at 1Password, joined the VB AI Impact Salon Series to dig into the new identity challenges that come along with the benefits of agentic AI.

“At a high level, it’s not just who this agent belongs to or which organization this agent belongs to, but what is the authority under which this agent is acting, which then translates into authorization and access,” Wang said.

How 1Password ended up at the center of the agent identity problem

Wang traced 1Password’s path into this territory through its own product history. The company started as a consumer password manager, and its enterprise footprint grew organically as employees brought tools they already trusted into their workplaces.

“Once those people got used to the interface, and really enjoyed the security and privacy standards that we provide as guarantees for our customers, then they brought it into the enterprise,” she said. The same dynamic is now happening with AI, she added. “Agents also have secrets, or passwords, just like humans do.”

Internally, 1Password is navigating the same tension it helps customers manage: how to let engineers move fast without creating a security mess. Wang said the company actively tracks the ratio of incidents to AI-generated code as engineers use tools like Claude Code and Cursor. “That’s a metric we track intently to make sure we’re generating quality code.”

How developers are incurring major security risks

Stamos said one of the most common behaviors Corridor observes, and one of the riskiest, is developers pasting credentials directly into prompts. Corridor flags it and steers the developer back toward proper secrets management.

“The standard thing is you just go grab an API key or take your username and password and you just paste it into the prompt,” he said. “We find this all the time because we’re hooked in and grabbing the prompt.”

Wang described 1Password’s approach as working on the output side: scanning code as it is written and vaulting any plaintext credentials before they persist. That cut-and-paste tendency directly shapes 1Password’s design philosophy, which is to avoid security tooling that creates friction.
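To make the idea concrete, here is a minimal Python sketch of output-side secret scanning. The patterns and function names are illustrative assumptions, not 1Password's actual ruleset; production scanners pair much larger pattern libraries with entropy heuristics before vaulting a match.

```python
import re

# Illustrative detection rules only; real scanners use far richer rulesets.
SECRET_PATTERNS = [
    ("aws_access_key", re.compile(r"AKIA[0-9A-Z]{16}")),
    ("generic_api_key", re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9_\-]{20,}['\"]")),
    ("password_assignment", re.compile(r"(?i)password\s*[:=]\s*['\"][^'\"]{8,}['\"]")),
]

def scan_for_secrets(source: str):
    """Return (kind, line_number) pairs for likely plaintext credentials,
    so each finding can be vaulted before the code persists."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for kind, pattern in SECRET_PATTERNS:
            if pattern.search(line):
                findings.append((kind, lineno))
    return findings
```

Each finding would then be replaced with a vault reference rather than left in the source.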

“If it’s too hard to use, to bootstrap, to get onboarded, it’s not going to be secure because frankly people will just bypass it and not use it,” she said.

Why you cannot treat a coding agent like a traditional security scanner

Another challenge in building feedback loops between security agents and coding models is false positives. Large language models are agreeable to a fault, and a false positive from a security scanner can derail an entire coding session.

“If you tell it this is a flaw, it’ll be like, yes sir, it’s a total flaw!” Stamos said. But, he added, “You cannot screw up and have a false positive, because if you tell it that and you’re wrong, you will completely ruin its ability to write correct code.”

That tradeoff between precision and recall is structurally different from what traditional static analysis tools are designed to optimize for, and it has taken significant engineering to get right at the necessary latency, on the order of a few hundred milliseconds per scan.

Authentication is easy, but authorization is where things get hard

“An agent typically has a lot more access than any other software in your environment,” noted Spiros Xanthos, founder and CEO at Resolve AI, in an earlier session at the event. “So, it is understandable why security teams are very concerned about that. Because if that attack vector gets utilized, then it can both result in a data breach, but even worse, maybe you have something in there that can take action on behalf of an attacker.”

So how do you give autonomous agents scoped, auditable, time-limited identities? Wang pointed to SPIFFE and SPIRE, workload identity standards developed for containerized environments, as candidates being tested in agentic contexts. But she acknowledged the fit is rough.

“We’re kind of force-fitting a square peg into a round hole,” she said.

But authentication is only half of it. Once an agent has a credential, what is it actually allowed to do? Here’s where the principle of least privilege should be applied to tasks rather than roles.

“You wouldn’t want to give a human a key card to an entire building that has access to every room in the building,” she explained. “You also don’t want to give an agent the keys to the kingdom, an API key to do whatever it needs to do forever. It needs to be time-bound and also bound to the task you want that agent to do.”
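The quote above sketches a policy. As a rough Python illustration (the names and in-memory design are hypothetical; a real deployment would lean on standards such as SPIFFE/SPIRE or OAuth token exchange rather than hand-rolled tokens), a task-scoped, time-bound credential might look like:

```python
import secrets
import time
from dataclasses import dataclass, field

@dataclass
class AgentCredential:
    agent_id: str
    task: str              # the single task this credential authorizes
    scopes: frozenset      # e.g. {"crm:read", "email:send"}
    expires_at: float      # absolute expiry, epoch seconds
    token: str = field(default_factory=lambda: secrets.token_urlsafe(32))

def issue(agent_id: str, task: str, scopes, ttl_seconds: int = 300) -> AgentCredential:
    """Mint a short-lived credential bound to one agent and one task."""
    return AgentCredential(agent_id, task, frozenset(scopes),
                           time.time() + ttl_seconds)

def authorize(cred: AgentCredential, task: str, scope: str) -> bool:
    """Allow an action only if the task matches, the scope was granted,
    and the credential has not expired."""
    return (cred.task == task
            and scope in cred.scopes
            and time.time() < cred.expires_at)
```

A credential issued for one task cannot be reused for another, and it expires on its own after the TTL, the time-bound, task-bound model Wang describes.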

In enterprise environments, granting scoped access won’t be enough; organizations will also need to know which agent acted, under what authority, and with which credentials.

Stamos pointed to OIDC extensions as the current frontrunner in standards conversations, while dismissing the crop of proprietary solutions.

“There are 50 startups that believe their proprietary patented solution will be the winner,” he said. “None of those will win, by the way, so I would not recommend.”

At a billion users, edge cases are not edge cases anymore

On the consumer side, Stamos predicted the identity problem will consolidate around a small number of trusted providers, most likely the platforms that already anchor consumer authentication. Drawing on his time as CISO at Facebook, where the team handled roughly 700,000 account takeovers per day, he reframed what scale does to the concept of an edge case.

“When you’re the CISO of a company that has a billion users, corner case is something that means real human harm,” he explained. “And so identity, for normal people, for agents, going forward is going to be a humongous problem.”

Ultimately, the challenges CTOs face on the agent side stem from incomplete standards for agent identity, improvised tooling, and enterprises deploying agents faster than the frameworks meant to govern them can be written. The path forward requires building identity infrastructure from scratch around what agents actually are, not retrofitting what was built for the humans who created them.


AI analytics agents need guardrails, not more model size


Picture a VP of finance at a large retailer. She asks the company’s new AI analytics agent a simple question: “What was our revenue last quarter?” The answer comes back in seconds.

Confident.

Clean.

Wrong.


That exact scenario happens more frequently than many organizations would care to admit. AtScale, which helps organizations deploy governed analytics environments with semantic consistency, has found that increasing model parameterization alone cannot address the AI governance and context issues enterprises face.

When AI systems query inconsistent or ungoverned data, adding more model complexity doesn’t contain the problem; it compounds it. Organizations across industries have moved quickly to develop agentic AI, deploying systems that analyze data, generate insights, and trigger automated workflows. In response, AI vendors have raced to keep up with larger model parameters, increased computing power, and additional features. The underlying assumption has been that as long as the model gets large enough, the results will eventually be reliable.

However, there are indications that this assumption may not hold up. Recent TDWI research found that nearly half of respondents characterized their AI governance initiatives as either immature or very immature. This may have more to do with data lineage and the business definitions on which these models are based than with the models’ capabilities.

Why bigger models don’t solve governance

The AI industry tends to operate on an unexamined assumption about what drives better performance: as we build more advanced models, they will somehow self-correct their performance errors. In enterprise analytics, that assumption can fall apart quickly.

While scale may improve the breadth of reasoning in a model, it doesn’t automatically enforce which definition of gross margin the business has agreed to use. It doesn’t resolve metric inconsistencies that have lived in separate dashboards for years. And it also doesn’t produce traceable lineage on its own.

Governance problems don’t resolve at scale. Business rules buried in individual tools, inconsistent definitions across teams, and outputs with no audit trail are structural issues, and a larger model doesn’t fix structure. It just produces unreliable answers more fluently.

At AtScale, there’s a consistent theme among our clients: When inconsistent data definitions followed organizations into their AI layer, the problems didn’t stop there. They propagated forward, typically at greater speed and with less transparency than the previous layer had offered.

Performance and responsibility are separate jobs. A model reasons. A governance layer defines what the model reasons over, constrains how it applies business logic, and ensures outputs can be traced back to a source of record. One cannot substitute for the other.

The real risk: Unconstrained agents in enterprise environments

The problem with AI agents is seldom the model itself. It’s what the model is working with, and whether anyone can see what it did.

Without common context, AI agents may read data differently on different systems. In large enterprises, even small differences in definitions can lead to different results. Structural risks typically stem from four main causes:

  • Ambiguous definitions: Agents pull from sources where the same metric can mean different things to different teams.
  • Conflicting metrics: Two agents give two answers, and it’s not clear which one is right.
  • Opaque reasoning: Outputs carry no clear lineage showing how a decision was made.
  • Audit gaps: When outputs can’t be traced back to a governed source of record, there’s no reliable way to catch errors, assign accountability, or course-correct.

These are not signs that AI is not working. They show that the infrastructure around AI hasn’t kept up.

What guardrails actually mean in AI analytics

Guardrails are often viewed as a limitation. However, in many cases, guardrails are the very conditions that permit AI agents to operate with greater confidence.

Guardrails can help align AI-generated outputs with established business logic. They also create a structure in which autonomous agents can operate; this way, as autonomy increases, so does reliability. In analytics, guardrails typically exist in several specific formats:

  • Shared data definitions: Single, agreed definitions of terms such as revenue, churn, and margin, used across all systems.
  • Business logic constraints: Rules governing how calculations are to be performed, regardless of the tools or agents performing those calculations.
  • Lineage visibility: The capability to identify where any output originated from.
  • Access controls: Defined permissions determining what data an agent can query.
  • Standardization of metrics: Consistent definitions applicable across departments and platforms.

The intention isn’t to impede AI’s performance. It’s to offer AI a base upon which it can stand.

The role of the semantic layer as a constraint framework

A semantic layer sits between data and the applications and AI agents that use it, defining business concepts, implementing logical processes, and providing a common framework of terms for all applications and AI agents to draw upon.

A semantic layer does not manipulate or duplicate data; it defines what the data represents. By querying a governed semantic layer rather than base tables, AI agents generate output based on business-defined logic rather than on inference. That distinction becomes particularly important when multiple AI agents across multiple systems must produce consistent outputs.
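The mechanism can be sketched in a few lines of Python. The registry, metric, and formula below are invented for illustration and are not AtScale's implementation: every agent resolves "revenue" through the same governed definition, which also carries its lineage.

```python
# A toy semantic layer: metric names map to one governed definition each.
SEMANTIC_LAYER = {
    "revenue": {
        # Business-agreed formula: only closed orders count toward revenue.
        "formula": lambda rows: sum(r["amount"] for r in rows if r["status"] == "closed"),
        "source": "sales.orders",  # lineage: the governed source of record
    },
}

def query_metric(metric: str, rows):
    """Resolve a metric through its governed definition, or fail loudly
    instead of letting an agent infer its own formula."""
    if metric not in SEMANTIC_LAYER:
        raise KeyError(f"{metric!r} has no governed definition")
    definition = SEMANTIC_LAYER[metric]
    return {"value": definition["formula"](rows), "lineage": definition["source"]}
```

Every agent asking for "revenue" gets the same number with the same lineage, and an undefined metric raises an error rather than being silently improvised.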

From AtScale’s perspective, the semantic layer serves as a context boundary that helps ensure AI agents interpret data according to shared business definitions. It is less a guardrail than a common language that ensures all systems operate with a common understanding.

Governance is an architectural question, not a model question

Enterprise organizations realize that AI governance is less about building the largest model and more about making an environment where the chosen model can work well. A well-designed and governed architecture (with shared definitions for concepts, traceable logic, and a shared context across all systems) will likely deliver better, more reliable results than a larger model running in an uncontrolled data environment.

Scaling models without improving semantic clarity tends to add complexity, not reduce it. As each additional tool, system, or workflow is added to an uncontrolled environment, the opportunities for divergence increase.

In this sense, responsible AI is an infrastructure challenge. Organizations with successful AI deployments treat the meaning of their data as a design decision, made before the model is even chosen.

Economic and operational implications

Governance gaps do not stay abstract for long. They tend to show up in the budget.

Ambiguity in data meaning increases operational friction: agents that produce inconsistent outputs require human review, reconciliation cycles, and rework that compounds across teams and tools. When lineage is unclear, audits cost more. Retrofitting controls after deployment typically costs more than building the right architecture from the start.

In complex enterprise settings, costs can show up in predictable ways: redundant validation when outputs don’t match across systems, excess compute triggered by unclear queries, and slower analysis as teams pause to figure out which answer is actually reliable. Clear semantic constraints can mean fewer validation cycles, and that operational value is becoming easier to measure.

The path forward: Constrained autonomy

AI agents aren’t a future consideration; they’re already in use. What’s still catching up is the infrastructure around them. Agents without clear context and constraints tend to operate beyond what the organization can actually govern. That gap doesn’t close on its own.

The differentiator in enterprise AI, AtScale contends, won’t be model scale; it will be the clarity of the environment models operate in. As agents become more common in business workflows, how well the semantic layer is defined may matter more than how large the model is.

This shift toward governed context and constrained autonomy is explored in more detail in AtScale’s 2026 State of the Semantic Layer report, which examines how open standards, interoperability, and semantic governance are shaping the next phase of enterprise intelligence.


DoorDash will start paying gig workers for creating content to train AI models


DoorDash has launched a new option for its gig economy workers to earn some extra cash. The delivery service introduced Tasks, which it describes as “short activities Dashers can complete between deliveries or in their own time.” Examples include taking pictures of restaurant dishes or recording video of unscripted conversations in languages other than English. These materials will be used to train artificial intelligence and robotics models.

A representative from DoorDash told Bloomberg News that it will use Tasks content for evaluating its in-house AI models as well as those made by its partner companies in retail, insurance, hospitality and tech. DoorDash is piloting a standalone app for Tasks where Dashers will submit their content. The blog post notes that pay will be displayed upfront, and compensation will vary based on the complexity of the activity.

This idea isn’t new. We’ve seen other startups in AI and robotics offering payment for content filmed by regular people. Considering how many lawsuits are underway against AI companies that have already benefited from unauthorized use of copyrighted materials, at least this approach lets people be directly compensated for training content.


Look at What Mattel’s Cooking Up: New Castle Grayskull Bricks, Naruto Hot Wheels, Monster High Skeletor


Searching for a reason to buy collectors’ editions of Mattel products? With the toy drops unveiled during the Mattel Creations Revealed event Thursday, you may be able to make a case for yourself. Homing in on fandoms ranging from anime to Barbie to Monster High, the company shared a fresh lineup of releases. Masters of the Universe fans: lock in.

You can stack Mattel Bricks to create Eternia’s legendary Castle Grayskull with a new set (pictured above) that’s the first of its kind for the toy giant. Get into your display — or play — with Masters of the Universe Nano figures depicting characters like He-Man, Evil-Lyn, Skeletor, Battle Cat and Teela. Available in the Brick Shop starting April 25, the set will retail for $65. For fans who want a bit of extra nostalgia, you can also buy the light-up Laser Power He-Man figure for $30 to add to your display.

Mattel has been on somewhat of a roll with its Monster High Skullector series, with doll collabs that feature iconic movie and TV favorites like Coraline, Wednesday and Morticia, Pennywise and Alien. Skeletor has been added to the collection and is quite the baddie. Just check out the outfit, high-heeled boots and signature smirk. Her staff speaks as well, but I won’t spoil it by telling you the catchphrase. The price? $65.


If the He-Man franchise isn’t for you, maybe you’ll be into the drop for Naruto. Burn up a Hot Wheels track with the Nissan Silvia S15 model that carries Naruto emblazoned on both sides. Making its debut as part of a partnership between Mattel and anime franchises, the car is priced at $25.


We’ve seen Barbie go couture in the past, and Mattel has just revealed its atelier design that takes handcrafted fashion up a notch. The new poseable Grand Couture Silkstone Barbie doll retails for $342, stands at 14.5 inches and wears a ruffled coat (with a train) with a shimmery embroidered dress underneath. Drop earrings and pink boots complete the high-end look.

Not to be sidelined at today’s event, Ken was spotlighted with the release of his own Uno card deck that features a variety of, well, Kens. Celebrate his 65th anniversary with the $13 deck, and you can decide if you’ve had Ken-ough of draw fours and skips.


One of the other pop culture moments — and figures — commemorated with a toy reveal today came in the form of WWE star Stone Cold Steve Austin. He’s been immortalized as a Mattel action figure that captures when he coined his 3:16 catchphrase 30 years ago, with the Elite Collection addition coming equipped with a crown, table and throne. Get it for your WWE collection for $30.


Navia discloses data breach impacting 2.7 million people


Navia Benefit Solutions, Inc. (Navia) is informing nearly 2.7 million individuals of a data breach that exposed their sensitive information to attackers.

An investigation into the incident revealed that the hackers had access to the organization’s systems between December 22, 2025, and January 15, 2026, though the company did not discover the suspicious activity until January 23.

Navia says that it responded immediately and launched an inquiry to determine the potential impact of the incident.

“The investigation determined that an unauthorized actor accessed and acquired certain information between December 22, 2025, and January 15, 2026,” the company says in the notification to impacted individuals.

Navia is a consumer-focused administrator of benefits that provides services to more than 10,000 employers across the U.S.

The company provides software and customer services for the administration of Flexible Spending Accounts (FSA), Health Savings Accounts (HSA), Health Reimbursement Arrangements (HRA), Commuter Benefits and COBRA Services.

It also helps handle commuter benefits, lifestyle accounts, education benefits, compliance/risk services, and retirement-related offerings.

According to the company, the investigation into the breach revealed that the hacker accessed and may have exfiltrated the following types of data:

  • Full name
  • Date of birth
  • Social Security Number (SSN)
  • Phone number
  • Email address
  • Participation in HRA (Health Reimbursement Arrangements)
  • FSA (Flexible Spending Accounts) information
  • Consolidated Omnibus Budget Reconciliation Act (COBRA) enrollment information

Navia underlines that the data breach did not expose details about claims or financial information. Nevertheless, the exposed data is enough for threat actors to deploy phishing and social engineering attacks aimed at affected individuals.

The company states that it has reviewed its security posture and data retention policies to identify potential weaknesses that can be improved, and has notified federal law enforcement about the incident.

Customers whose information was exposed will be covered by a free 12-month identity protection and credit monitoring service from Kroll. Letter recipients are also encouraged to consider placing a fraud alert and security freeze on their credit files.

At the time of writing, no ransomware group has claimed the Navia data breach.


Reading The World’s Smallest Hard Drive


You have a tiny twenty-year-old hard drive with a weird interface. How do you read it? If you’re [Will Whang], by reverse engineering, and building an interface board.

In many of our portable, mobile, and desktop computers, we’re used to solid-state storage. It’s fast and low power and, current supply-chain price hikes notwithstanding, affordable in the grand scheme of things. It wasn’t always this way, though: a couple of decades ago, a large flash drive was prohibitively expensive. Hard drive manufacturers did their best to fill the gap with tiny spinning-rust storage devices, which led to the smallest of them all: the Toshiba MK4001MTD. It crammed 4 GB onto a 0.85″ platter and could be found in a few devices such as high-end Nokia phones.

Breaking out the Nokia’s hard drive interface.

The drive’s connector is a pattern of pads on a flexible PCB, one he couldn’t help noticing had a striking resemblance to an obscure SD card variant. Hooking it up to an SD reader didn’t work unfortunately, so a battered Nokia was called into service. It was found to be using something electrically similar to the SD cards, but with the ATA protocol familiar from the world of full-size hard drives.

The interface uses the PIO capability of the RP2040, and the board makes a tidy peripheral in itself. We’re guessing not many of you have one of these drives, but perhaps if you do, those early 2000s phone pics aren’t lost for good after all.

These drives are rare enough that this is the first time we’ve featured one here at Hackaday, but we’ve certainly ventured into hard drive technology before.


Retro Weather Display Acts Like It’s Windows 95


Sometimes you really need to know what the weather is doing, but you don’t want to look at your phone. For times like those, this neat weather display from [Jordan] might come in handy with its throwback retro vibe.

The build is based around the ESP32-2432S028—also known as the CYD, or Cheap Yellow Display, for the integrated 320 x 240 LCD screen. [Jordan] took this all-in-one device and wrapped it in an attractive 3D-printed housing in the shape of an old-school CRT monitor, just… teenier. A special lever mechanism built into the enclosure lets the front panel controls actuate the tactile buttons on the CYD board. The ESP32 is programmed to check Open-Meteo feeds for forecasts and current weather data, while also querying a webcam feed and satellite and radar JPEGs from available weather services. These are then displayed on screen in a style that largely resembles the Windows 95 UI design language, with pages for current conditions, future forecasts, wind speeds, and the like.
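For a flavor of the data the display works with, here is a hedged Python sketch of an Open-Meteo current-weather request; the coordinates are arbitrary placeholders, and the actual project does the equivalent in ESP32 firmware rather than Python.

```python
import json
from urllib.parse import urlencode

BASE_URL = "https://api.open-meteo.com/v1/forecast"

def build_request(lat: float, lon: float) -> str:
    """Build an Open-Meteo URL asking for current weather at a location."""
    params = {"latitude": lat, "longitude": lon, "current_weather": "true"}
    return f"{BASE_URL}?{urlencode(params)}"

def parse_current(payload: str):
    """Pull temperature and wind speed from an Open-Meteo JSON response."""
    current = json.loads(payload)["current_weather"]
    return current["temperature"], current["windspeed"]
```

The firmware fetches such a URL over HTTP, parses the JSON the same way, and draws the values into its Windows 95-style panes.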

We’ve seen some fun weather displays over the years, from graphing types to the purely beautiful. If you’ve found a fun way to display the weather (or change it) don’t hesitate to notify the tipsline. Particularly in the latter case.


Bitrefill blames North Korean Lazarus group for cyberattack


Crypto-powered gift card store Bitrefill says that the attack it suffered at the beginning of the month was likely perpetrated by North Korean hackers of the Bluenoroff group.

During the investigation, the platform observed indicators similar to previous attacks attributed to the North Korean threat actor, like tactics, malware, IP and email addresses.

“Based on indicators observed during the investigation  – including the modus operandi, the malware used, on-chain tracing and reused IP + email addresses (!) – we find many similarities between this attack and past cyberattacks by the DPRK Lazarus / Bluenoroff group against other companies in the crypto industries,” reads Bitrefill’s statement.

Bitrefill is a mid-sized e-commerce platform that enables people to pay in cryptocurrency for gift cards at stores in 150 countries. The gift cards can be used to pay for anything from clothing, food and groceries, health and beauty products to bills, services, gas, transportation, and electronics.

The platform supports more than 600 mobile operators and thousands of brands worldwide.

On March 1st, Bitrefill announced technical issues affecting access to its website and app. A day later, the company disclosed that it had identified a security issue and took all services offline.

Although user balances were not affected, the gradual restoration of all services continues to this day.

The breach was discovered after Bitrefill noticed suspicious supplier purchasing patterns, exploitation of gift card stock and supply lines, and draining of some “hot” wallets.

The investigation the firm launched to determine the cause revealed that the attack originated on a compromised employee’s laptop.

The attackers stole legacy credentials and used them to access a snapshot with production secrets, later escalating access to the larger Bitrefill infrastructure, including parts of the database and some cryptocurrency wallets.

About 18,500 purchase records containing customer email addresses, IP addresses, and cryptocurrency payment addresses were exposed in the breach. For 1,000 purchases, customer names were also exposed.

Although this information is stored in encrypted form, Bitrefill notes that the attackers may have obtained the decryption keys.

Bitrefill says this was the most serious cyberattack it has suffered in its ten years of existence, but it survived with minimal losses, which will be covered from its capital.

Ultimately, Bitrefill believes that attackers were after cryptocurrency and gift card inventory, not customer information.

BlueNoroff, also known as APT38, is a cluster of the Lazarus group that has been active since at least 2014. It typically targets financial organizations, with a more recent focus on the cryptocurrency industry, the objective being crypto theft.

Meanwhile, it is expanding security reviews and pen-testing, tightening access controls, improving logging and monitoring, and refining automated shutdown mechanisms.

At this time, most of its services have returned to normal operational status, and customers aren’t required to take any action beyond treating incoming communications with extra caution.


Intel’s latest push gains traction as GMKtec unveils EVO-T2 mini PC with 180 TOPS and unusual memory workaround



  • GMKtec EVO-T2 mini PC reaches 180 TOPS using combined CPU, GPU, and NPU acceleration
  • Its PCIe 5.0 storage introduces data speeds exceeding 10GB per second
  • Local AI models run without relying on external cloud infrastructure

At a recent launch event, GMKtec introduced the EVO-T2, a compact desktop system built for local AI computing.

According to the company, the device integrates third-generation Intel Core Ultra processors and claims up to 180 TOPS of compute capability.


Nvidia lets its ‘claws’ out: NemoClaw brings security, scale to the agent platform taking over AI


Every few years, a piece of open-source software arrives that rewires how the industry thinks about computing. Linux did it for servers. Docker did it for deployment. OpenClaw — the autonomous AI agent platform that went from niche curiosity to the fastest-growing open-source project in history in a matter of weeks — may be doing it for software itself.

Nvidia CEO and co-founder Jensen Huang made his position plain at GTC 2026 this week: “OpenClaw is the operating system for personal AI. This is the moment the industry has been waiting for — the beginning of a new renaissance in software.” And Nvidia wants to be the company that makes it enterprise-ready.

At its GTC 2026 conference in San Jose this week, Nvidia unveiled NemoClaw, a software stack that integrates directly with OpenClaw and installs in a single command. Along with it came Nvidia OpenShell, an open-source security runtime designed to give autonomous AI agents — or “claws”, as the industry is increasingly calling them — the guardrails they need to operate inside real enterprise environments. Alongside both, the company announced an expanded Nvidia Agent Toolkit, a full-stack platform for building and running production-grade agentic workflows.

The message from Jensen Huang was unambiguous. “Claude Code and OpenClaw have sparked the agent inflection point — extending AI beyond generation and reasoning into action,” the Nvidia CEO said ahead of the conference. “Employees will be supercharged by teams of frontier, specialized and custom-built agents they deploy and manage.”


Why ‘claws’ — and why it matters that Nvidia is using the word

The terminology shift happening inside enterprise AI circles is subtle but significant. Internally, teams building with OpenClaw and similar platforms have taken to calling individual autonomous agents claws — a nod to the platform name, but also a useful shorthand for a new class of software that differs fundamentally from the chatbots and copilots of the last two years.

As Kari Briski, Nvidia’s VP of generative AI software, put it during a Sunday briefing: “Claws are autonomous agents that can plan, act, and execute tasks on their own — they’ve gone from just thinking and executing on tasks to achieving entire missions.”

That framing matters for IT decision-makers. Claws are not just assistants. They are persistent, tool-using programs that can write code, browse the web, manipulate files, call APIs, and chain actions together over hours or days without human input. The productivity upside is substantial. So is the attack surface. Which is precisely the problem Nvidia is positioning NemoClaw to solve.


The enterprise demand is not hypothetical. Harrison Chase, founder of LangChain — whose open-source agent frameworks have been downloaded more than a billion times — put it bluntly in a recent episode of VentureBeat’s Beyond the Pilot podcast: “I guarantee that every enterprise developer out there wants to put a safe version of OpenClaw onto their computer or expose it to their users.” The bottleneck, he made clear, has never been interest. It has been the absence of a credible security and governance layer underneath it. NemoClaw is Nvidia’s answer to that gap — and notably, LangChain is one of the launch partners for the Agent Toolkit and OpenShell integration.

What NemoClaw actually does — and what it doesn’t replace

NemoClaw is not a competitor to OpenClaw (or its many alternatives). It is best understood as an enterprise wrapper around it — a distribution that ships with the components a security-conscious organization actually needs before letting an autonomous agent near production systems.

The stack has two core components. The first is Nvidia Nemotron, Nvidia’s family of open models, which can run locally on dedicated hardware rather than routing queries through external APIs. Nemotron-3-Super scored highest of all open models on PinchBench, a benchmark that tests the kinds of tasks and tool calls OpenClaw depends on.

The second is OpenShell, the new open-source security runtime that runs each claw inside an isolated sandbox — effectively a Docker container with configurable policy controls written in YAML. Administrators can define precisely which files an agent can access, which network connections it can make, and which cloud services it can call. Everything outside those bounds is blocked.
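Nvidia has not published OpenShell's actual YAML schema here, but the default-deny allowlist behavior the article describes — explicit file, network, and cloud-service permissions, with everything else blocked — can be sketched as follows. The field names and patterns are illustrative assumptions, not the real OpenShell configuration:

```python
# Hypothetical sketch of the kind of allowlist OpenShell's YAML policies describe.
# Field names and patterns are illustrative assumptions, not the actual schema.
from fnmatch import fnmatch

POLICY = {
    "files": ["/workspace/*", "/tmp/agent-*"],    # paths the claw may touch
    "network": ["api.internal.example:443"],      # allowed host:port pairs
    "cloud_services": ["build.nvidia.com"],       # callable external endpoints
}

def allowed(kind: str, target: str, policy: dict = POLICY) -> bool:
    """Default-deny check: a request passes only if it matches an allowlist entry."""
    return any(fnmatch(target, pattern) for pattern in policy.get(kind, []))
```

The key design property is the default deny: an unlisted resource is blocked without needing an explicit rule against it.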


Nvidia describes OpenShell as providing the missing infrastructure layer beneath claws — giving them the access they need to be productive while enforcing policy-based security, network, and privacy guardrails.

For organizations that have been watching OpenClaw’s rise with a mixture of excitement and dread, this is a meaningful development. OpenClaw’s early iterations were, by general consensus, a security liability — powerful and fast-moving, but essentially unconstrained. NemoClaw is the first attempt by a major hardware vendor to make that power manageable at enterprise scale.

The hardware angle: always-on agents need dedicated compute

One aspect of NemoClaw that deserves more attention than it has received is the hardware strategy underneath it. Claws, by design, are always-on — they do not wait for a human to open a browser tab. They run continuously, monitoring inboxes, executing tasks, building tools, and completing multi-step workflows around the clock.

That requires dedicated compute that does not compete with the rest of the organization’s workloads. Nvidia has a clear interest in pointing enterprises toward its own hardware for this purpose.


NemoClaw is designed to run on Nvidia GeForce RTX PCs and laptops, RTX PRO workstations, and the company’s DGX Spark and DGX Station AI supercomputers. The hybrid architecture allows agents to use locally-running Nemotron models for sensitive workloads, with a privacy router directing queries to frontier cloud models when higher capability is needed — without exposing private data to those external endpoints.

It is an elegant solution to a real problem: many enterprises are not yet ready to send customer data, internal documents, or proprietary code to cloud AI providers, but they still need model capability that exceeds what runs locally. NemoClaw’s privacy router architecture threads that needle, at least in principle.
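As a rough illustration of the routing decision described above — the rule set and model names below are assumptions, not Nvidia's implementation:

```python
# Illustrative sketch of a privacy-router decision, as described for NemoClaw's
# hybrid architecture. Classification inputs and model names are assumptions.
def route(contains_private_data: bool, needs_frontier: bool) -> str:
    """Pick an inference target: private data always stays on the local model."""
    if contains_private_data:
        return "local-nemotron"        # sensitive workloads never leave the box
    if needs_frontier:
        return "cloud-frontier-model"  # higher capability when privacy allows
    return "local-nemotron"            # default to local for cost and latency
```

The hard part in practice is the first argument: reliably classifying whether a prompt or its attached context contains private data, which is where such a router would succeed or fail.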

What claws actually look like in the enterprise 

Before evaluating the platform, it helps to understand what a claw doing real work looks like in practice. Two partner integrations announced alongside NemoClaw offer the clearest window into where this is heading.

Box is perhaps the most illustrative case for organizations that manage large volumes of unstructured enterprise content.


Box is integrating Nvidia Agent Toolkit to enable claws that use the Box file system as their primary working environment, with pre-built skills for Invoice Extraction, Contract Lifecycle Management, RFP sourcing, and GTM workflows.

The architecture supports hierarchical agent management: a parent claw — such as a Client Onboarding Agent — can spin up specialized sub-agents to handle discrete tasks, all governed by the same OpenShell Policy Engine.

Critically, an agent’s access to files in Box follows the exact same permissions model that governs human employees — enforced through OpenShell’s gateway layer before any data is exchanged. Every action is logged and attributable; no shadow copies accumulate in agent memory. As Box puts it in its announcement blog, “organizations need to know which agent touched which file, when, and why — and they need the ability to revoke access instantly if something goes wrong.”
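The hierarchical pattern Box describes — a parent claw spawning sub-agents that inherit its permissions, with every action attributable in one audit trail — can be sketched minimally. The class and field names here are illustrative, not Box or Nvidia APIs:

```python
# Minimal sketch of hierarchical agent governance: sub-agents inherit the
# parent's policy and write to a shared audit log. Names are illustrative.
from dataclasses import dataclass, field

@dataclass
class Claw:
    name: str
    policy: frozenset                       # permissions, inherited downward
    audit_log: list = field(default_factory=list)

    def spawn(self, name: str) -> "Claw":
        # Sub-agents share the parent's policy and audit trail, so every
        # action stays inside one governance boundary and one record.
        return Claw(name, self.policy, self.audit_log)

    def act(self, permission: str, target: str) -> bool:
        ok = permission in self.policy
        self.audit_log.append((self.name, permission, target, ok))
        return ok
```

Because the sub-agent cannot widen its own policy at spawn time, revoking the parent's access revokes the whole tree — the property the Box quote is pointing at.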

Cisco’s integration offers perhaps the most visceral illustration of what OpenShell guardrails enable in practice. The Cisco security team has published a scenario in which a zero-day vulnerability advisory drops on a Friday evening.


Rather than triggering a weekend-long manual scramble — pulling asset lists, pinging on-call engineers, mapping blast radius — a claw running inside OpenShell autonomously queries the configuration database, maps impacted devices against the network topology, generates a prioritized remediation plan, and produces an audit-grade trace of every decision it made.

Cisco AI Defense verifies every tool call against approved policy in real time. The entire response completes in roughly an hour, with a complete record that satisfies compliance requirements.

“We are not trusting the model to do the right thing,” the Cisco team noted in their technical writeup. “We are constraining it so that the right thing is the only thing it can do.”
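That "constrain, don't trust" pattern — verifying every tool call against approved policy before it executes, with a trace of each decision — can be sketched as a guard function. The tool names and interface below are assumptions for illustration, not Cisco AI Defense's actual API:

```python
# Hedged sketch of per-call policy verification with an audit trace.
# The approved-policy table and call names are illustrative assumptions.
APPROVED = {("query", "config-db"), ("read", "network-topology")}

def guarded_call(tool: str, resource: str, trace: list) -> str:
    """Run a tool call only if the (tool, resource) pair is explicitly approved."""
    if (tool, resource) not in APPROVED:
        trace.append(("DENIED", tool, resource))
        raise PermissionError(f"{tool} on {resource} is outside approved policy")
    trace.append(("ALLOWED", tool, resource))
    return f"{tool}:{resource}:ok"   # stand-in for the real tool execution
```

The denied attempt still lands in the trace, which is what makes the resulting record audit-grade rather than just a success log.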

An ecosystem play: the partners behind the stack

Nvidia is not building this alone. The Agent Toolkit and OpenShell announcements came with a significant roster of enterprise partners — Box, Cisco, Atlassian, Salesforce, SAP, Adobe, CrowdStrike, Cohesity, IQVIA, ServiceNow, and more than a dozen others — whose integration depth signals how seriously the broader software industry is treating the agentic shift.


On the infrastructure side, OpenShell is available today on build.nvidia.com, supported by cloud inference providers including CoreWeave, Together AI, Fireworks, and DigitalOcean, and deployable on-premises on servers from Cisco, Dell, HPE, Lenovo, and Supermicro. Agents built within OpenShell can also continuously acquire new skills using coding agents including Claude Code, Codex, and Cursor — with every newly acquired capability subject to the same policy controls as the original deployment.

Separately, Nvidia announced the Nemotron Coalition — a collaborative initiative bringing together Mistral AI, Perplexity, Cursor, and LangChain to co-develop open frontier models. The coalition’s first project is a base model co-developed with Mistral that will underpin the upcoming Nemotron 4 family, aimed specifically at agentic use cases.

What enterprise leaders should be watching

The NemoClaw announcement marks a turning point in how enterprise AI is likely to be discussed in boardrooms and procurement meetings over the next twelve months. The question is no longer whether organizations will deploy autonomous agents. The industry has clearly moved past that debate. The question is now how — with what controls, on what hardware, using which models, and with what audit trail.

Nvidia’s answer is a vertically integrated stack that spans silicon, runtime, model, and security policy. For IT leaders evaluating their agentic roadmap, NemoClaw represents a significant attempt to provide all four layers from a single vendor, with meaningful third-party security integrations already in place.


The risks are not trivial. OpenShell’s YAML-based policy model will require operational maturity that most organizations are still building. Claws that can self-evolve and acquire new skills — as Nvidia’s architecture explicitly enables — raise governance questions that no sandbox can fully resolve. And the concentration of agentic infrastructure in a single vendor’s stack carries familiar platform risks.

That said, the direction is clear. Claws are coming to the enterprise. Nvidia just made its bet on being the platform they run on — and the guardrails that keep them in bounds.


Bowers & Wilkins Refreshes Pi8 and Px7 S3 with New Finishes: Is Color the New Innovation in Premium Headphones?


Bowers & Wilkins isn’t pretending this is a breakthrough, and that’s exactly the point. The British luxury audio brand has expanded its flagship Pi8 true wireless earbuds and Px7 S3 noise-cancelling headphones with a slate of new premium finishes, leaning into a trend that’s been quietly reshaping the high-end audio category: color as innovation. The Pi8 now arrives in Dark Burgundy and Pale Mauve, bringing the total to six finishes, while the Px7 S3 adds a new Vintage Maroon option to its growing lineup.

If that sounds familiar, it should. Last year, I pointed out how a long list of premium audio brands had started treating industrial design and colorways not as afterthoughts, but as a legitimate product-cycle strategy, extending relevance without touching the underlying acoustics. Bowers & Wilkins is now fully committed to that playbook. The hardware hasn’t changed and it didn’t need to, but the visual refresh keeps both models firmly in the conversation in a market that’s running out of meaningful spec-sheet upgrades.

Available starting March 19, the new finishes don’t come cheap: $499 for the Pi8 in Pale Mauve or Dark Burgundy, and $479 for the Px7 S3 in Vintage Maroon. Same award-winning sound, new wardrobe. Whether that counts as innovation or just smart business depends on how easily you’re seduced by a better shade of red.

What Are the Bowers & Wilkins Pi8?

Bowers & Wilkins Pi8 in new Dark Burgundy color for 2026

The Bowers & Wilkins Pi8 are the company’s flagship true wireless earbuds, positioned as a no-compromise attempt to deliver genuine high-end sound in a category that usually prioritizes convenience over fidelity. In our review, the Pi8 stand out for their refined tuning, clarity, and sense of control, offering a presentation that feels closer to a compact hi-fi system than a typical pair of wireless earbuds. They’re designed for listeners who actually pay attention to what they’re hearing and not just how easily it connects.

At their core, the Pi8 combine carbon cone drivers, advanced DSP, and support for aptX Lossless to push beyond the limitations that have traditionally defined Bluetooth audio. Bowers & Wilkins also includes a smart charging case with retransmission capability, allowing wired sources to be streamed directly to the earbuds, an unusually practical feature that adds real-world flexibility. It’s a more thoughtful approach than most, focusing on how people actually use their gear rather than chasing feature checklists.

Bowers & Wilkins Pi8 in new Pale Mauve color for 2026

That said, the Pi8 don’t try to win on every front. As we noted in our review, the emphasis is clearly on sound quality, materials, and overall refinement, rather than class-leading noise cancellation or mass-market pricing. If overall sound quality, comfort, and strong but not class-leading ANC matter most, these are among the best options available.

What Are the Bowers & Wilkins Px7 S3?

The Bowers & Wilkins Px7 S3 are the brand’s latest over-ear wireless noise-cancelling headphones, sitting just below the Px8 S2 but very much aimed at the same crowd that shops Sony, Bose, and Sennheiser at the top of the category. In our review, they come across as a deliberate refinement of what Bowers & Wilkins has been building for over a decade: premium materials, a more mature design language, and a clear focus on sound quality first. This isn’t a lifestyle headphone trying to fake it. It’s a high-end hi-fi product that just happens to be wireless.

Bowers & Wilkins Px7 S3 in new Vintage Maroon color for 2026

Where the Px7 S3 separates itself is in how it sounds relative to its competition. As noted in the review, it delivers audiophile-grade clarity, deep and controlled bass, and a level of detail that outpaces most rivals in this price range, including the usual suspects from Sony, Bose, and Apple. Bowers & Wilkins has also refined the internal driver design and overall tuning, while adding modern essentials like aptX Lossless and a more flexible EQ. The result is a presentation that feels more composed and revealing than what you typically get from mainstream ANC headphones.

That said, like the Pi8, the Px7 S3 doesn’t try to dominate every category. The review makes it clear that while ANC is improved and competitive, it’s not the class leader, and comfort is very good without being the lightest or most effortless in the segment. This is a headphone built around priorities: sound quality, build, and long-term listening satisfaction. If that’s what matters most, it’s one of the strongest all-around options available right now and one of the few that still feels like it was tuned by people who actually prioritise sound quality over ANC and connectivity features.

The Bottom Line

New colors are not innovation, but they do make the Pi8 and Px7 S3 feel fresher and harder to ignore. More importantly, this kind of refresh signals longevity: these models are not going anywhere. Same excellent sound, now with a little more swagger.


Tip: These new finishes add to the existing colors, which include black, white, blue, and jade green. Currently, the new colors are only available at the Bowers & Wilkins website.



