
Tech

What is the release date for The Pitt season 2 episode 11 on HBO Max?


It feels like a long old while since Baby Jane Doe (who’s looking good and taking formula well) was the talking point of The Pitt season 2. The shift has gotten more difficult over the last few hours, but the worst is yet to come.

The ER is currently under digital lockdown to prevent a cyber attack, meaning no computer records can be accessed, the number of patients practically doubles every five seconds, and replacement Dr. Al-Hashimi (Sepideh Moafi) isn’t making life easier for anyone.


Tech

Seattle puts Microsoft Copilot expansion on hold as new mayor takes stock of AI


The downtown Seattle skyline. (GeekWire Photo / Lisa Stiffler)

Five months after releasing its “responsible AI plan” providing guidelines for the municipality’s use of artificial intelligence, the City of Seattle has tapped the brakes on the tech’s official deployment for city employees.

Mayor Katie Wilson last month paused the planned citywide rollout of Microsoft Copilot, as first reported Monday in The Seattle Times. Her predecessor, Mayor Bruce Harrell, had approved the launch before leaving office in December.

“While implementation of the technology is delayed, the education and governance work continues,” Megan Erb, spokesperson for the Seattle Information Technology Department, told GeekWire. “The City is still conducting educational roadshows for departments, as well as working to advance our foundational work in data governance and data readiness.” 

In September, Seattle released its AI plan, which covers training and skill-building opportunities for city employees, and establishes a framework to facilitate and evaluate the use of AI tools in city operations. The city also conducted a pilot test of Copilot with 500 employees. The technology is available at no additional cost for Microsoft 365 users under Seattle’s enterprise agreement.

Participants reported:

  • Collectively saving more than 450 hours of work per week on tasks such as drafting communications, preparing reports, analyzing documents, and conducting research.
  • Finding the tool most helpful for writing more clearly, summarizing documents and meeting notes faster, and quickly accessing policies and regulations.
  • 83% said Copilot Chat provided “business value.”
  • 79% described the user experience as positive.

Seattle has been a leader in efforts to adopt next-gen AI tools, and says it issued the nation’s first generative AI policy in fall 2023. Even before the recently released AI plan, Seattle already had policies requiring “human-in-the-loop” oversight, meaning employees must review generative AI outputs before official use and disclose when work is AI assisted. The city also identified prohibited applications, such as AI in hiring decisions and facial recognition, due to concerns about bias and reliability.

Concerns about municipal AI regulations and oversight are widespread. An investigative series published earlier this year by the news organization Cascade PBS found that multiple Washington cities had limited guardrails around AI use, raising public trust and privacy concerns. Seattle was not among the cities scrutinized.

Seattle leaders in the past have framed their effort as a balance between embracing new technology and upholding their fundamental obligation to serve the public, emphasizing that AI is a tool — not a replacement for employees.

Erb said the delayed deployment of Copilot is a part of a “phased approach” to ensure “the City responsibly tests and adopts artificial intelligence tools, meets all privacy and security requirements, and deploys solutions that provide clear benefits to employees while upholding the City’s Responsible AI commitments.” 

Rob Lloyd, Seattle’s chief technology officer, resigned last month, effective March 27, to become executive director of the Center for Digital Government. The city is recruiting a replacement.


In December, the city appointed Lisa Qian as its first AI Officer. Her experience includes serving as a senior manager of data science at LinkedIn, as well as leadership positions at other tech companies.

During the fall budget process, the Seattle City Council asked the Seattle IT Department to provide quarterly reports on the use of AI, and that information will be submitted April 1.

The city previously identified 41 priority projects in which AI could potentially improve government performance and public services. Updates on those efforts will be included in the upcoming report, Erb said.


Tech

Polyphonic Tunes On The Sharp PC-E500


If you’re a diehard fan of the chiptune scene, you’ve probably heard endless beautiful compositions on the Nintendo Game Boy and Commodore 64, and a few phat FM tracks from the Segas of later years. What the scene has yet to see is a breakout artist ripping hot tracks on the Sharp PC-E500. If you wanted to be the first, though, you’d find a good starting point in this 3-voice music driver for the vintage pocket computer.

This comes to us from [gikonekos], who dug up the “PLAY3” code from the Japanese magazine “Pocket Computer Journal” published in November 1993. Over on GitHub, the original articles have been scanned, and the assembly source code for the PLAY3 driver has been reconstructed. There’s also documentation of how the driver actually works, along with verification against RAM dumps from actual Sharp PC-E500 hardware. The driver itself runs as a machine code extension to the BASIC interpreter on the machine. The “PLAY” command can then be used to specify a string of notes to play at a given tempo and octave. Polyphony is simulated using time-division sound generation, with output via the device’s rather pathetic single piezo buzzer.
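
The time-division trick described above can be sketched in a few lines. This is an illustrative Python model, not the actual PLAY3 assembly: the idea is that the driver rapidly cycles the single buzzer through the active voices in short slices, and the ear blends the slices into a chord. The slice length here is an invented placeholder; real timing depends on the PC-E500 hardware.

```python
SLICE_MS = 4  # hypothetical slice length, for illustration only

def interleave(voices, total_ms, slice_ms=SLICE_MS):
    """Return the (voice_index, frequency) slices a single buzzer would play.

    Each voice is a frequency in Hz, or None if the voice is silent.
    Active voices take turns driving the buzzer, one slice at a time.
    """
    schedule = []
    t = 0
    i = 0
    while t < total_ms:
        active = [(idx, f) for idx, f in enumerate(voices) if f is not None]
        if not active:
            break
        schedule.append(active[i % len(active)])
        t += slice_ms
        i += 1
    return schedule

# A C major triad held for 24 ms: the three voices round-robin through
# the buzzer twice each, which the ear perceives as a (buzzy) chord.
chord = [261, 329, 392]  # C4, E4, G4 in Hz
print(interleave(chord, 24))
```

A real driver would also have to toggle the piezo at each slice's frequency, which is why the interrupt timing on the original hardware is so tight.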

It’s very cool to see this code preserved for the future. That said, don’t expect to see it on stage at the next Boston Bitdown or anything—as this example video shows, it’s not exactly the punchiest chiptune monster out there. We’ll probably stick to our luscious fake-bit creations for now, while Nintendo hardware remains the bedrock of the movement.


Tech

Ramp acquires Juno, a corporate travel expense startup backed by Madrona


Juno co-founders Devon Tivona (left) and Sam Felsenthal. (Juno Photo)

Fintech giant Ramp announced the acquisition of Juno, a startup founded in 2024 that built a corporate travel platform to help manage non-employee expenses.

Terms of the deal were not disclosed. Juno will maintain its brand and employees.

Juno’s platform guides coordinators and their guests through booking, logistics, payments, reimbursements, and reconciliation.

“A bad candidate travel experience can cost you a hire,” Ramp CTO Karim Atiyeh said in a statement. “Juno built something strong in a category that matters. Our job now is to give them leverage and stay out of the way.”

Portland, Ore.-based Devon Tivona and Denver-based Sam Felsenthal co-founded Juno and will continue in their leadership roles as co-CEOs. They previously co-founded Pana, another corporate guest travel platform, which was acquired by Coupa in 2021.


Juno raised a $2 million seed round last year led by Seattle-based Madrona, with participation from Bungalow Ventures.

“Joining Ramp gives Devon and Sam the resources to pursue the vision they’ve been working toward all along: guest travel, payments, and expenses operating as one coherent system,” Madrona Managing Director Steve Singh wrote on LinkedIn.

Singh co-founded the travel and expense management giant Concur, which was acquired by SAP in 2014 for $8.3 billion. He led a group of investors in the April 2024 acquisition of Direct Travel Inc., a Colorado-based corporate travel management company, and is executive chairman of Otto, a Seattle-based startup developing an AI virtual assistant for business travel booking.

Singh also serves as executive chairman at Spotnana, a travel-as-a-service technology platform (he’s currently also interim CEO); Troop, a group meetings and events company; and Center, a corporate card and expense management platform that was acquired by American Express. 


Tech

Nvidia launches enterprise AI agent platform with Adobe, Salesforce, SAP among 17 adopters at GTC 2026


Jensen Huang walked onto the GTC stage Monday wearing his trademark leather jacket and carrying, as it turned out, the blueprints for a new kind of monopoly.

The Nvidia CEO unveiled the Agent Toolkit, an open-source platform for building autonomous AI agents, and then rattled off the names of the companies that will use it: Adobe, Salesforce, SAP, ServiceNow, Siemens, CrowdStrike, Atlassian, Cadence, Synopsys, IQVIA, Palantir, Box, Cohesity, Dassault Systèmes, Red Hat, Cisco and Amdocs. Seventeen enterprise software companies, touching virtually every industry and every Fortune 500 corporation, all agreeing to build their next generation of AI products on a shared foundation that Nvidia designed, Nvidia optimizes and Nvidia maintains.

The toolkit provides the models, the runtime, the security framework and the optimization libraries that AI agents need to operate autonomously inside organizations — resolving customer service tickets, designing semiconductors, managing clinical trials, orchestrating marketing campaigns. Each component is open source. Each is optimized for Nvidia hardware. The combination means that as AI agents proliferate across the corporate world, they will generate demand for Nvidia GPUs not because companies choose to buy them but because the software they depend on was engineered to require them.

“The enterprise software industry will evolve into specialized agentic platforms,” Huang told the crowd, “and the IT industry is on the brink of its next great expansion.” What he left unsaid is that Nvidia has just positioned itself as the tollbooth at the entrance to that expansion — open to all, owned by one.


Inside Nvidia’s Agent Toolkit: the software stack designed to power every corporate AI worker

To grasp the significance of Monday’s announcements, it helps to understand the problem Nvidia is solving.

Building an enterprise AI agent today is an exercise in frustration. A company that wants to deploy an autonomous system — one that can, say, monitor a telecommunications network and proactively resolve customer issues before anyone calls to complain — must assemble a language model, a retrieval system, a security layer, an orchestration framework and a runtime environment, typically from different vendors whose products were never designed to work together.

Nvidia’s Agent Toolkit collapses that complexity into a unified platform. It includes Nemotron, a family of open models optimized for agentic reasoning; AI-Q, an open blueprint that lets agents perceive, reason and act on enterprise knowledge; OpenShell, an open-source runtime enforcing policy-based security, network and privacy guardrails; and cuOpt, an optimization skill library. Developers can use the toolkit to create specialized AI agents that act autonomously while using and building other software to complete tasks.

The AI-Q component addresses a pain point that has dogged enterprise AI adoption: cost. Its hybrid architecture routes complex orchestration tasks to frontier models while delegating research tasks to Nemotron’s open models, which Nvidia says can cut query costs by more than 50 percent while maintaining top-tier accuracy. Nvidia used the AI-Q Blueprint to build what it claims is the top-ranking AI agent on both the DeepResearch Bench and DeepResearch Bench II leaderboards — benchmarks that, if they hold under independent validation, position the toolkit as not merely convenient but competitively necessary.
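
Nvidia has not published AI-Q's routing logic here, but the hybrid idea can be illustrated with a toy cost-aware router. Everything in this sketch is invented for illustration: the task categories, the relative per-token prices, and the complexity heuristic.

```python
# Hypothetical sketch of hybrid routing: expensive frontier model for
# complex orchestration steps, cheap open model for routine research.
FRONTIER_COST = 10.0  # assumed relative cost per 1K tokens
OPEN_COST = 1.0

def route(task):
    """Pick a model tier for a task like {'kind': ..., 'tokens': ...}."""
    complex_kinds = {"plan", "orchestrate", "tool_choice"}
    tier = "frontier" if task["kind"] in complex_kinds else "open"
    rate = FRONTIER_COST if tier == "frontier" else OPEN_COST
    return tier, rate * task["tokens"] / 1000

workload = [
    {"kind": "plan", "tokens": 2000},      # one orchestration step
    {"kind": "research", "tokens": 8000},  # bulk retrieval/summarization
    {"kind": "research", "tokens": 8000},
]
hybrid_cost = sum(cost for _, cost in map(route, workload))
frontier_only = sum(FRONTIER_COST * t["tokens"] / 1000 for t in workload)
print(hybrid_cost, frontier_only)  # delegation cuts spend well past 50%
```

With these made-up numbers the hybrid plan costs 36 units versus 180 for frontier-only, which shows how a ">50% cost cut" claim is at least arithmetically plausible when most tokens flow through the cheaper tier.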


OpenShell tackles what has been the single biggest obstacle in every boardroom conversation about letting AI agents loose inside corporate systems: trust. The runtime creates isolated sandboxes that enforce strict policies around data access, network reach and privacy boundaries. Nvidia is collaborating with Cisco, CrowdStrike, Google, Microsoft Security and TrendAI to integrate OpenShell with their existing security tools — a calculated move that enlists the cybersecurity industry as a validation layer for Nvidia’s approach rather than a competing one.

The partner list that reads like the Fortune 500: who signed on and what they’re building

The breadth of Monday’s enterprise adoption announcements reveals Nvidia’s ambitions more clearly than any specification sheet could.

Adobe, in a simultaneously announced strategic partnership, will adopt Agent Toolkit software as the foundation for running hybrid, long-running creativity, productivity and marketing agents. Shantanu Narayen, Adobe’s chair and CEO, said the companies will bring together “our Firefly models, CUDA libraries into our applications, 3D digital twins for marketing, and Agent Toolkit and Nemotron to our agentic frameworks to deliver high-quality, controllable and enterprise-grade AI workflows of the future.” The partnership runs deep: Adobe will explore OpenShell and Nemotron as foundations for personalized, secure agentic loops, and will evaluate the toolkit for large-scale workflows powered by Adobe Experience Platform. Nvidia will provide engineering expertise, early access to software and targeted go-to-market support.

Salesforce’s integration may be the one enterprise IT leaders parse most carefully. The company is working with Nvidia Agent Toolkit software including Nemotron models, enabling customers to build, customize and deploy AI agents using Agentforce for service, sales and marketing. The collaboration introduces a reference architecture where employees can use Slack as the primary conversational interface and orchestration layer for Agentforce agents — powered by Nvidia infrastructure — that participate directly in business workflows and pull from data stores in both on-premises and cloud environments. For the millions of knowledge workers who already conduct their professional lives inside Slack, this turns a messaging app into the command center for corporate AI.


SAP, whose software underpins the financial and operational plumbing of most Global 2000 companies, is using open Agent Toolkit software, including NeMo, to enable AI agents through Joule Studio on SAP Business Technology Platform, letting customers and partners design agents tailored to their own business needs. ServiceNow’s Autonomous Workforce of AI Specialists leverages Agent Toolkit software, the AI-Q Blueprint and a combination of closed and open models, including Nemotron and ServiceNow’s own Apriel models — a hybrid approach that suggests the toolkit is designed not to replace existing AI investments but to become the connective tissue between them.

From chip design to clinical trials: how agentic AI is reshaping specialized industries

The partner list extends well beyond horizontal software platforms into deeply specialized verticals where autonomous agents could compress timelines measured in years.

In semiconductor design — where a single advanced chip can cost billions of dollars and take half a decade to develop — three of the four major electronic design automation companies are building agents on Nvidia’s stack. Cadence will leverage Agent Toolkit and Nemotron with its ChipStack AI SuperAgent for semiconductor design and verification. Siemens is launching its Fuse EDA AI Agent, which uses Nemotron to autonomously orchestrate workflows across its entire electronic design automation portfolio, from design conception through manufacturing sign-off. Synopsys is building a multi-agent framework powered by its AgentEngineer technology using Nemotron and Nemo Agent Toolkit.

Healthcare and life sciences present perhaps the most consequential use case. IQVIA is integrating Nemotron and other Agent Toolkit software with IQVIA.ai, a unified agentic AI platform designed to help life sciences organizations work more efficiently across clinical, commercial and real-world operations. The scale is already significant: IQVIA has deployed more than 150 agents across internal teams and client environments, including 19 of the top 20 pharmaceutical companies.


The security sector is embedding itself into the architecture from the ground floor. CrowdStrike unveiled a Secure-by-Design AI Blueprint that embeds its Falcon platform protection directly into Nvidia AI agent architectures — including agents built on AI-Q and OpenShell — and is advancing agentic managed detection and response using Nemotron reasoning models. Cisco AI Defense will provide AI security protection for OpenShell, adding controls and guardrails to govern agent actions. These are not aftermarket bolt-ons; they are foundational integrations that signal the security industry views Nvidia’s agent platform as the substrate it needs to protect.

Dassault Systèmes is exploring Agent Toolkit software and Nemotron for its role-based AI agents, called Virtual Companions, on its 3DEXPERIENCE agentic platform. Atlassian is working with the toolkit as it evolves its Rovo AI agentic strategy for Jira and Confluence. Box is using it to enable enterprise agents to securely execute long-running business processes. Palantir is developing AI agents on Nemotron that run on its sovereign AI Operating System Reference Architecture.

The open-source gambit: why giving software away is Nvidia’s most aggressive business move

There is something almost paradoxical about a company with a multi-trillion-dollar market capitalization giving away its most strategically important software. But Nvidia’s open-source approach to Agent Toolkit is less an act of generosity than a carefully constructed competitive moat.

OpenShell is open source. Nemotron models are open. AI-Q blueprints are publicly available. LangChain, the agent engineering company whose open-source frameworks have been downloaded over 1 billion times, is working with Nvidia to integrate Agent Toolkit components into the LangChain deep agent library for developing advanced, accurate enterprise AI agents at scale. When the most popular independent framework for building AI agents absorbs your toolkit, you have transcended the category of vendor and entered the category of infrastructure.


But openness in AI has a way of being strategically selective. The models are open, but they are optimized for Nvidia’s CUDA libraries — the proprietary software layer that has locked developers into Nvidia GPUs for two decades. The runtime is open, but it integrates most deeply with Nvidia’s security partners. The blueprints are open, but they perform best on Nvidia hardware. Developers can explore Agent Toolkit and OpenShell on build.nvidia.com today, running on inference providers and Nvidia Cloud Partners including Baseten, CoreWeave, DeepInfra, DigitalOcean and others — all of which run Nvidia GPUs.

The strategy has a historical analog in Google’s approach to Android: give away the operating system to ensure that the entire mobile ecosystem generates demand for your core services. Nvidia is giving away the agent operating system to ensure that the entire enterprise AI ecosystem generates demand for its core product — the GPU. Every Salesforce agent running Nemotron, every SAP workflow orchestrated through OpenShell, every Adobe creative pipeline accelerated by CUDA creates another strand of dependency on Nvidia silicon.

This also explains the Nemotron Coalition announced Monday — a global collaboration of model builders including Mistral AI, Cursor, LangChain, Perplexity, Reflection AI, Sarvam and Thinking Machines Lab, all working to advance open frontier models. The coalition’s first project will be a base model codeveloped by Mistral AI and Nvidia, trained on Nvidia DGX Cloud, that will underpin the upcoming Nemotron 4 family. By seeding the open model ecosystem with Nvidia-optimized foundations, the company ensures that even models it does not build will run best on its hardware.

What could go wrong: the risks enterprise buyers should weigh before going all-in

For all the ambition on display Monday, several realities temper the narrative.


Adoption announcements are not deployment announcements. Many of the partner disclosures use carefully hedged language — “exploring,” “evaluating,” “working with” — that is standard in embargoed press releases but should not be confused with production systems serving millions of users. Adobe’s own forward-looking statements note that “due to the non-binding nature of the agreement, there are no assurances that Adobe will successfully negotiate and execute definitive documentation with Nvidia on favorable terms or at all.” The gap between a GTC keynote demonstration and an enterprise-grade rollout remains substantial.

Nvidia is not the only company chasing this market. Microsoft, with its Copilot ecosystem and Azure AI infrastructure, pursues a parallel strategy with the advantage of owning the operating systems and productivity software that most enterprises already use. Google, through Gemini and its cloud platform, has its own agent vision. Amazon, via Bedrock and AWS, is building comparable primitives. The question is not whether enterprise AI agents will be built on some platform but whether the market will consolidate around one stack or fragment across several.

The security claims, while architecturally sound, remain unproven at scale. OpenShell’s policy-based guardrails are a promising design pattern, but autonomous agents operating in complex enterprise environments will inevitably encounter edge cases that no policy framework has anticipated. CrowdStrike’s Secure-by-Design AI Blueprint and Cisco AI Defense’s OpenShell integration are exactly the kind of layered defense enterprise buyers will demand — but both are newly unveiled, not battle-hardened through years of adversarial testing. Deploying agents that can autonomously access data, execute code and interact with production systems introduces a threat surface that the industry has barely begun to map.

And there is the question of whether enterprises are ready for agents at all. The technology may be available, but organizational readiness — the governance structures, the change management, the regulatory frameworks, the human trust — often lags years behind what the platforms can deliver.


Beyond agents: the full scope of what Nvidia announced at GTC 2026

Monday’s Agent Toolkit announcement did not arrive in isolation. It landed amid an avalanche of product launches that, taken together, describe a company remaking itself at every layer of the computing stack.

Nvidia unveiled the Vera Rubin platform — seven new chips in full production, including the Vera CPU purpose-built for agentic AI, the Rubin GPU, and the newly integrated Groq 3 LPU inference accelerator — designed to power every phase of AI from pretraining to real-time agentic inference. The Vera Rubin NVL72 rack integrates 72 Rubin GPUs and 36 Vera CPUs, delivering what Nvidia claims is up to 10x higher inference throughput per watt at one-tenth the cost per token compared with the Blackwell platform. Dynamo 1.0, an open-source inference operating system that Nvidia describes as the “operating system for AI factories,” entered production with adoption from AWS, Microsoft Azure, Google Cloud and Oracle Cloud Infrastructure alongside companies like Cursor, Perplexity, PayPal and Pinterest.

The BlueField-4 STX storage architecture promises up to 5x token throughput for the long-context reasoning that agents demand, with early adopters including CoreWeave, Crusoe, Lambda, Mistral AI and Nebius. BYD, Geely, Isuzu and Nissan announced Level 4 autonomous vehicle programs on Nvidia’s DRIVE Hyperion platform, and Uber disclosed plans to launch Nvidia-powered robotaxis across 28 cities and four continents by 2028, beginning with Los Angeles and San Francisco in the first half of 2027.

Roche, the pharmaceutical giant, announced it is deploying more than 3,500 Nvidia Blackwell GPUs across hybrid cloud and on-premises environments in the U.S. and Europe — what it calls the largest announced GPU footprint available to a pharmaceutical company. Nvidia also launched physical AI tools for healthcare robotics, with CMR Surgical, Johnson & Johnson MedTech and others adopting the platform, and released Open-H, the world’s largest healthcare robotics dataset with over 700 hours of surgical video. And Nvidia even announced a Space Module based on the Vera Rubin architecture, promising to bring data-center-class AI to orbital environments.


The real meaning of GTC 2026: Nvidia is no longer selling picks and shovels

Strip away the product specifications and benchmark claims and what emerges from GTC 2026 is a single, clarifying thesis: Nvidia believes the era of AI agents will be larger than the era of AI models, and it intends to own the platform layer of that transition the way it already owns the hardware layer of the current one.

The 17 enterprise software companies that signed on Monday are making a bet of their own. They are wagering that building on Nvidia’s agent infrastructure will let them move faster than building alone — and that the benefits of a shared platform outweigh the risks of shared dependency. For Salesforce, it means Agentforce agents that can draw from both cloud and on-premises data through a single Slack interface. For Adobe, it means creative AI pipelines that span image, video, 3D and document intelligence. For SAP, it means agents woven into the transactional fabric of global commerce. Each partnership is rational on its own terms. Together, they form something larger: an industry-wide endorsement of Nvidia as the default substrate for enterprise intelligence.

Huang, who opened his career designing graphics chips for video games, closed his keynote by gesturing toward a future in which AI agents do not just assist human workers but operate as autonomous colleagues — reasoning through problems, building their own tools, learning from their mistakes. He compared the moment to the birth of the personal computer, the dawn of the internet, the rise of mobile computing.

Technology executives have a professional obligation to describe every product cycle as a revolution. But here is what made Monday different: this time, 17 of the world’s most important software companies showed up to agree with him. Whether they did so out of conviction or out of a calculated fear of being left behind may be the most important question in enterprise technology — and it is one that only the next few years can answer.


Tech

OpenClaw can bypass your EDR, DLP and IAM without triggering a single alert


An attacker embeds a single instruction inside a forwarded email. An OpenClaw agent summarizes that email as part of a normal task. The hidden instruction tells the agent to forward credentials to an external endpoint. The agent complies — through a sanctioned API call, using its own OAuth tokens.

The firewall logs HTTP 200. EDR records a normal process. No signature fires. Nothing went wrong by any definition your security stack understands.

That is the problem. Six independent security teams shipped six OpenClaw defense tools in 14 days. Three attack surfaces survived every one of them.

The exposure picture is already worse than most security teams know. Token Security found that 22% of its enterprise customers have employees running OpenClaw without IT approval, and Bitsight counted more than 30,000 publicly exposed instances in two weeks, up from roughly 1,000. Snyk’s ToxicSkills audit adds another dimension: 36% of all ClawHub skills contain security flaws.

Jamieson O’Reilly, founder of Dvuln and now security adviser to the OpenClaw project, has been one of the researchers pushing fixes hardest from inside. His credential leakage research on exposed instances was among the earliest warnings the community received. Since then, he has worked directly with founder Peter Steinberger to ship dual-layer malicious skill detection and is now driving a capabilities specification proposal through the agentskills standards body.


The team is clear-eyed about the security gaps, he told VentureBeat. “It wasn’t designed from the ground up to be as secure as possible,” O’Reilly said. “That’s understandable given the origins, and we’re owning it without excuses.”

None of it closes the three gaps that matter most.

Three attack surfaces your stack cannot see

The first is runtime semantic exfiltration. The attack encodes malicious behavior in meaning, not in binary patterns, which is exactly what the current defense stack cannot see.

Palo Alto Networks mapped OpenClaw to every category in the OWASP Top 10 for Agentic Applications and identified what security researcher Simon Willison calls a “lethal trifecta”: private data access, untrusted content exposure, and external communication capabilities in a single process. EDR monitors process behavior. The agent’s behavior looks normal because it is normal. The credentials are real, and the API calls are sanctioned, so EDR reads it as a credentialed user doing expected work. Nothing in the current defense ecosystem tracks what the agent decided to do with that access, or why.
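
The trifecta lends itself to a simple pre-deployment check: flag any agent whose capability set combines all three ingredients. This is an illustrative sketch with invented capability names, not an OpenClaw API.

```python
# "Lethal trifecta" screen: private data access + untrusted content
# exposure + external communication in a single agent process.
TRIFECTA = {"private_data", "untrusted_content", "external_comms"}

def trifecta_risk(capabilities):
    """Return (trifecta capabilities present, whether all three co-occur)."""
    present = TRIFECTA & set(capabilities)
    return present, present == TRIFECTA

# An email-summarizing agent that can also reach the internet trips the check.
caps = ["email_read", "private_data", "untrusted_content", "external_comms"]
present, lethal = trifecta_risk(caps)
print(sorted(present), lethal)
```

A check like this cannot stop semantic exfiltration on its own, but it identifies which agents even have the combination of powers that makes the attack possible.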


The second is cross-agent context leakage. When multiple agents or skills share session context, a prompt injection in one channel poisons decisions across the entire chain. Giskard researchers demonstrated this in January 2026, showing that agents silently appended attacker-controlled instructions to their own workspace files and waited for commands from external servers. The injected prompt becomes a sleeper payload. Palo Alto Networks researchers Sailesh Mishra and Sean P. Morgan warned that persistent memory turns these attacks into stateful, delayed-execution chains. A malicious instruction hidden inside a forwarded message sits in the agent’s context weeks later, activating during an unrelated task.

O’Reilly identified cross-agent context leakage as the hardest of these gaps to close. “This one is especially difficult because it is so tightly bound to prompt injection, a systemic vulnerability that is far bigger than OpenClaw and affects every LLM-powered agent system in the industry,” he told VentureBeat. “When context flows unchecked between agents and skills, a single injected prompt can poison or hijack behavior across the entire chain.” No tool in the current ecosystem provides cross-agent context isolation. IronClaw sandboxes individual skill execution. ClawSec monitors file integrity. Neither tracks how context propagates between agents in the same workflow.

The third is agent-to-agent trust chains with zero mutual authentication. When OpenClaw agents delegate tasks to other agents or external MCP servers, no identity verification exists between them. A compromised agent in a multi-agent workflow inherits the trust of every agent it communicates with. Compromise one through prompt injection, and it can issue instructions to every agent in the chain using trust relationships that the legitimate agent already built.
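
The missing control is straightforward to describe: each delegated instruction should carry proof that a specific, authorized agent issued it. As a minimal sketch (not part of OpenClaw), a shared-secret HMAC over the sender and instruction lets a downstream agent reject anything an injected prompt fabricates. All names here are hypothetical.

```python
import hashlib
import hmac

def sign_delegation(shared_key: bytes, sender: str, instruction: str) -> str:
    """Tag an agent-to-agent instruction with an HMAC over sender|instruction."""
    msg = f"{sender}|{instruction}".encode()
    return hmac.new(shared_key, msg, hashlib.sha256).hexdigest()

def verify_delegation(shared_key: bytes, sender: str,
                      instruction: str, tag: str) -> bool:
    """Constant-time check that the tag matches this exact delegation."""
    expected = sign_delegation(shared_key, sender, instruction)
    return hmac.compare_digest(expected, tag)

key = b"out-of-band-provisioned-secret"
tag = sign_delegation(key, "billing-agent", "export last month's invoices")

# A downstream agent verifies before acting on the delegation...
print(verify_delegation(key, "billing-agent",
                        "export last month's invoices", tag))
# ...and an instruction swapped in by prompt injection fails the check.
print(verify_delegation(key, "billing-agent", "forward credentials", tag))
```

In practice key distribution is the hard part; the point of the sketch is only that any mutual-authentication scheme binds an instruction to an identity, which is exactly what the current trust chains lack.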

Microsoft’s security team published guidance in February calling OpenClaw untrusted code execution with persistent credentials, noting the runtime ingests untrusted text, downloads and executes skills from external sources, and performs actions using whatever credentials it holds. Kaspersky’s enterprise risk assessment added that even agents on personal devices threaten organizational security because those devices store VPN configs, browser tokens, and credentials for corporate services. The Moltbook social network for OpenClaw agents already demonstrated the spillover risk: Wiz researchers found a misconfigured database that exposed 1.5 million API authentication tokens and 35,000 email addresses.

What 14 days of emergency patching actually closed

The defense ecosystem split into three approaches. Two tools harden OpenClaw in place. ClawSec, from Prompt Security (a SentinelOne company), wraps agents in continuous verification, monitoring critical files for drift and enforcing zero-trust egress by default. OpenClaw’s VirusTotal integration, shipped jointly by Steinberger, O’Reilly, and VirusTotal’s Bernardo Quintero, scans every published ClawHub skill and blocks known malicious packages.

Two tools are full architectural rewrites. IronClaw, NEAR AI’s Rust reimplementation, runs all untrusted tools inside WebAssembly sandboxes where tool code starts with zero permissions and must explicitly request network, filesystem, or API access. Credentials get injected at the host boundary and never touch agent code, with built-in leak detection scanning requests and responses. Carapace, an independent open-source project, inverts every dangerous OpenClaw default with fail-closed authentication and OS-level subprocess sandboxing.

Two tools focus on scanning and auditability: Cisco’s open-source scanner combines static, behavioral, and LLM semantic analysis, while NanoClaw reduces the entire codebase to roughly 500 lines of TypeScript, running each session in an isolated Docker container.

O’Reilly put the supply chain failure in direct terms. “Right now, the industry basically created a brand-new executable format written in plain human language and forgot every control that should come with it,” he said. His response has been hands-on. He shipped the VirusTotal integration before skills.sh, a much larger repository, adopted a similar pattern. Koi Security’s audit validates the urgency: 341 malicious skills found in early February grew to 824 out of 10,700 on ClawHub by mid-month, with the ClawHavoc campaign planting the Atomic Stealer macOS infostealer inside skills disguised as cryptocurrency trading tools, harvesting crypto wallets, SSH credentials, and browser passwords.

OpenClaw Security Defense Evaluation Matrix

| Dimension | ClawSec | VirusTotal Integration | IronClaw | Carapace | NanoClaw | Cisco Scanner |
| --- | --- | --- | --- | --- | --- | --- |
| Discovery | Agents only | ClawHub only | No | mDNS scan | No | No |
| Runtime Protection | Config drift | No | WASM sandbox | OS sandbox + prompt guard | Container isolation | No |
| Supply Chain | Checksum verify | Signature scan | Capability grants | Ed25519 signed | Manual audit (~500 LOC) | Static + LLM + behavioral |
| Credential Isolation | No | No | WASM boundary injection | OS keychain + AES-256-GCM | Mount-restricted dirs | No |
| Auditability | Drift logs | Scan verdicts | Permission grant logs | Prometheus + audit log | 500 lines total | Scan reports |
| Semantic Monitoring | No | No | No | No | No | No |

Source: VentureBeat analysis based on published documentation and security audits, March 2026.

The capabilities spec that treats skills like executables

O’Reilly has submitted a skills specification update, now in active discussion, to the agentskills maintainers, led primarily by Anthropic and Vercel. The proposal requires every skill to declare explicit, user-visible capabilities before execution. Think mobile app permission manifests. He noted the proposal is getting strong early feedback from the security community because it finally treats skills like the executables they are.
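As a rough illustration of the manifest idea (the field names and capability strings here are invented for the example, not taken from the proposal under discussion):

```python
# Hypothetical shape of a capability manifest: a skill declares what it
# needs up front, and the runtime refuses anything undeclared.

MANIFEST = {
    "name": "csv-summarizer",
    "capabilities": ["fs:read:./workspace", "net:none"],
}

def allowed(manifest, request):
    # Deny by default: a capability must be explicitly declared.
    return request in manifest["capabilities"]

assert allowed(MANIFEST, "fs:read:./workspace")
assert not allowed(MANIFEST, "net:outbound:attacker.example")
```

The deny-by-default check is the whole point: an undeclared capability request fails loudly instead of inheriting whatever the agent can already do.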

“The other two gaps can be meaningfully hardened with better isolation primitives and runtime guardrails, but truly closing context leakage requires deep architectural changes to how untrusted multi-agent memory and prompting are handled,” O’Reilly said. “The new capabilities spec is the first real step toward solving these challenges proactively instead of bolting on band-aids later.”

What to do on Monday morning

Assume OpenClaw is already in your environment. The 22% shadow deployment rate is a floor. These six steps close what can be closed and document what cannot.

  1. Inventory what is running. Scan for WebSocket traffic on port 18789 and mDNS broadcasts on port 5353. Watch corporate authentication logs for new App ID registrations, OAuth consent events, and Node.js User-Agent strings. Any instance running a version before v2026.2.25 is vulnerable to the ClawJacked remote takeover flaw.

  2. Mandate isolated execution. No agent runs on a device connected to production infrastructure. Require container-based deployment with scoped credentials and explicit tool whitelists.

  3. Deploy ClawSec on every agent instance and run every ClawHub skill through VirusTotal and Cisco’s open-source scanner before installation. Both are free. Treat skills as third-party executables, because that is what they are.

  4. Require human-in-the-loop approval for sensitive agent actions. OpenClaw’s exec approval settings support three modes: security, ask, and allowlist. Set sensitive tools to ask so the agent pauses and requests confirmation before executing shell commands, writing to external APIs, or modifying files outside its workspace. Any action that touches credentials, changes configurations, or sends data to an external endpoint should stop and wait for a human to approve it.

  5. Map the three surviving gaps against your risk register. Document whether your organization accepts, mitigates, or blocks each one: runtime semantic exfiltration, cross-agent context leakage, and agent-to-agent trust chains.

  6. Bring the evaluation table to your next board meeting. Frame it not as an AI experiment but as a critical bypass of your existing DLP and IAM investments. Every agentic AI platform that follows will face this same defense cycle. The framework transfers to every agent tool your team will assess for the next two years.
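Step 1's port check can be sketched in a few lines; the host list is a placeholder for your own asset inventory.

```python
import socket

# Check hosts for a listener on OpenClaw's default WebSocket port
# (18789, per the inventory step above). A real sweep would iterate
# over your asset inventory rather than this placeholder list.

def port_open(host, port, timeout=0.5):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, unreachable, or timed out
        return False

for host in ["127.0.0.1"]:
    status = "LISTENING" if port_open(host, 18789) else "closed"
    print(f"{host}:18789 {status}")
```

Pair the TCP check with mDNS monitoring on port 5353 and the authentication-log review described above; a port probe alone only finds agents that are currently running.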

The security stack you built for applications and endpoints catches malicious code. It does not catch an agent following a malicious instruction through a legitimate API call. That is where these three gaps live.


Ternary RISC Processor Achieves Non-Binary Computing Via FPGA


You would be very hard pressed to find any sort of CPU or microcontroller in a commercial product that uses anything but binary to do its work. And yet, other options exist! Ternary computing involves using trits with three states instead of bits with two. It’s not popular, but there is now a design available for a ternary processor that you could potentially get your hands on.

The device in question is called the 5500FP, as outlined in a research paper from [Claudio Lorenzo La Rosa]. Very few ternary processors exist, and little effort has ever been made to fabricate such a device in real silicon. However, [Claudio] explains that it's entirely possible to implement a ternary logic processor based on RISC principles using modern FPGA hardware. The impetus comes from the perceived benefits of ternary computing—notably, that with three states, each "trit" can store more information than a regular binary "bit." Beyond that, the use of a "balanced ternary" system, based on logical values of -1, 0, and 1, allows storing both negative and positive numbers without a wasted sign bit, and lets a number be negated trivially by inverting all of its trits.
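Both properties are easy to verify in a few lines of Python (a quick sketch, not code from the paper):

```python
# Balanced ternary uses trits -1, 0, +1. Two properties follow:
# signed numbers need no sign bit, and negation is just flipping
# every trit.

def to_balanced_ternary(n):
    if n == 0:
        return [0]
    trits = []
    while n != 0:
        r = n % 3
        if r == 2:       # remainder 2 becomes trit -1 with a carry
            r = -1
            n += 1
        trits.append(r)
        n //= 3
    return trits[::-1]   # most significant trit first

def from_balanced_ternary(trits):
    value = 0
    for t in trits:
        value = value * 3 + t
    return value

assert to_balanced_ternary(5) == [1, -1, -1]   # 9 - 3 - 1 = 5
assert from_balanced_ternary([1, -1, -1]) == 5
# Negation: invert every trit.
assert from_balanced_ternary([-t for t in to_balanced_ternary(7)]) == -7
```

Note that the same representation covers positive and negative values with no dedicated sign trit, which is exactly the appeal described above.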

The research paper does a good job of outlining the basis of this method of computing, as well as the mode of operation of the 5500FP processor. For now, it's a 24-trit device operating at 20MHz, though the hope is that future work could move it to custom silicon to improve performance and capability. Further development of ternary computing hardware could lead to parts with higher information density and lower power consumption, both highly useful in an era when improvements to conventional processor designs are ever harder to find.

Head over to the Ternary Computing website if you’re intrigued by the Ways of Three and want to learn more. We perhaps don’t expect ternary computing to take over any time soon, given the Soviets didn’t get far with it in the 1950s. Still, the concept exists and is fun to contemplate if you like the mental challenge. Maybe you can even start a rumor that the next iPhone is using an all-ternary processor and spread it across a few tech blogs before the week is out. Let us know how you get on.


This Modern Gold Mining Method Begins With Your Old Electronics

For many people, gold mining conjures images of an old prospector sifting sandy water through a metal pan in the blazing sun. But these days, the process is far more advanced than the 1800s gold rush era of the western United States. In fact, researchers have actually developed a method to recover gold from electronic waste. This means that yes, there’s gold inside your household electronics. So your drawer of outdated devices may be a goldmine—at least in theory.

A study published in Advanced Materials describes how this was achieved with a process using protein amyloid nanofibrils. Extracted from whey, these materials are tiny, thin protein fibers with a huge surface area. This allows them to precisely remove gold from dissolved electronic components like computer motherboards. The process then converts gold ions into single particles, resulting in high-purity gold nuggets.

The study shows that this method of gold recovery costs around $1.10 per gram, a far cry from the market value of about $50 per gram for 22-carat gold. The process is also more eco-friendly than traditional mining methods, as it uses fewer organic materials and produces less waste overall. Additionally, the protein gels used to extract the gold are reusable, and represent a circular approach. The end result is that electronic waste, as well as food waste, is recycled and repurposed into a different substance.

The value and history of gold in electronic devices

A typical smartphone has anywhere from 7 to 34 milligrams of gold in its circuit boards and connectors, which works out to roughly $1.16 to $5.81 in gold per phone as of this writing. Of course, larger devices like desktop computers can contain more, though still not an impressive amount per unit. But while it's illegal to throw away electronics in many states, millions of devices are tossed every year, so the value of the gold inside adds up quickly at scale.
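A back-of-the-envelope version of that arithmetic, assuming a spot price of roughly $170 per gram of pure gold (prices move, so substitute the current quote):

```python
# Rough per-phone gold value from the 7-34 mg range cited above.
# The spot price is an assumption for illustration, not a quote.

PRICE_PER_GRAM = 170.0  # USD per gram of pure gold, assumed

def phone_gold_value(milligrams):
    return milligrams / 1000 * PRICE_PER_GRAM

low, high = phone_gold_value(7), phone_gold_value(34)
print(f"${low:.2f} to ${high:.2f} per phone")  # $1.19 to $5.78 per phone
```

The figures land close to the article's range; small differences come from the assumed spot price and from purity, since circuit-board gold is alloyed rather than 24 carat.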

Gold is favored in electronic devices because of its physical and chemical properties. It conducts electricity very well, it's durable and doesn't corrode over time the way other metals can, and it can easily be drawn into thin wires without breaking. Together, these properties ensure reliable signal transmission and smooth, long-lasting performance, which is why gold is the material of choice for circuit boards, connectors, and other components inside smartphones, computers, and more.

The use of gold in electronic devices dates back to the mid-20th century, when both computers and military communications equipment demanded more reliable, longer-lasting connections than were available at the time. Gold fit the bill and became an important addition to these devices. As time went on, the US defense sector used the precious metal extensively, and NASA adopted it as well, most famously in the golden records carried aboard the Voyager probes, and in various other equipment.




Podcast: Chord DACs Explained with Rob Watts


Chord Electronics’ digital audio consultant Rob Watts takes us on a deep dive into the challenges of reproducing lifelike sound from 16-bit/44.1kHz PCM digital audio (CD quality music from compact discs or streaming). From his groundbreaking DAC designs priced from $650 to $20,000, to why off-the-shelf chips can’t compete, Rob explains how his unique approach to D/A (digital to analog) conversion goes beyond conventional measurement-based audio engineering. He also previews Chord’s next flagship product, the Quartet M Scaler, which will build on the Hugo M Scaler, and shares his thoughts on DSD, the importance of cables, and hidden sonic factors like RF and power supply issues. Even 45 years after the CD’s debut, there’s still high-resolution audio left to uncover.

Sponsor: Thank you to our sponsor SVS for your support.

This episode was recorded on October 28, 2025.

On the Panel:

  • Rob Watts, Digital Audio Consultant, Chord Electronics
  • Brian Mitchell, eCoustics Founder


Stryker attack wiped tens of thousands of devices, no malware needed


Last week’s cyberattack on medical technology giant Stryker was limited to its internal Microsoft environment and remotely wiped tens of thousands of employee devices.

The organization says in an update on Sunday that all its medical devices are safe to use but electronic ordering systems remain offline, and customers must place orders manually through sales representatives.

Stryker emphasizes that the incident was not a ransomware attack and that the threat actor did not deploy any malware on its systems.

Last week, Stryker was the target of a cyberattack claimed by the Handala hacktivist group, believed to be linked to Iran.

The attacker alleged that they wiped “over 200,000 systems, servers, and mobile devices” and stole 50 terabytes of data. However, investigators did not find any indication that data was exfiltrated.

Following the disruption, Stryker employees in multiple countries started to complain that their managed devices had been remotely wiped overnight.

Some employees had their personal devices enrolled in the company network and lost personal data during the wiping process.

Hackers had Global Admin privileges

A source familiar with the attack told BleepingComputer that the threat actor used the wipe command in Intune, Microsoft’s cloud-based endpoint management service, to erase data from nearly 80,000 devices between 5:00 and 8:00 a.m. UTC on March 11.
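Defenders hunting for this pattern could start from exported audit events. A toy sketch follows; the event shape and threshold are assumptions for illustration, not the actual Intune log schema.

```python
from collections import Counter
from datetime import datetime

# Flag any hour in which remote-wipe actions exceed a threshold,
# the anomaly signature of a mass-wipe like the one described above.
# Event shape and threshold are invented for this example.

events = [
    {"action": "wipe", "time": "2026-03-11T05:12:00"},
    {"action": "wipe", "time": "2026-03-11T05:14:00"},
    {"action": "sync", "time": "2026-03-11T05:15:00"},
]

THRESHOLD = 1  # wipes per hour; tune far higher in production

def suspicious_hours(events, threshold=THRESHOLD):
    per_hour = Counter(
        datetime.fromisoformat(e["time"]).strftime("%Y-%m-%d %H:00")
        for e in events
        if e["action"] == "wipe"
    )
    return {hour: n for hour, n in per_hour.items() if n > threshold}

print(suspicious_hours(events))  # {'2026-03-11 05:00': 2}
```

The same counting logic applies to any management plane: a legitimate admin wipes a handful of lost devices, not tens of thousands in a three-hour window.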

The attacker carried out the action after compromising an administrator account and creating a new Global Administrator account.

The investigation is being conducted by the Microsoft Detection and Response Team (DART) in collaboration with cybersecurity experts from Palo Alto Unit 42.

Stryker’s update highlights that the attack did not impact any of its products, connected or otherwise, and was limited exclusively to the internal Microsoft corporate environment.

“All Stryker products across our global portfolio, including connected, digital, and life-saving technologies, remain safe to use,” the company says.

Restoration efforts are currently underway, the main focus being on resuming shipping and transactional services. Customers are encouraged to maintain normal communication with company personnel while the infrastructure is steadily recovered.

Any order placed before the cyberattack will be honored as systems are restored, while orders placed during the disruption will be processed once systems come back online and the supply flow returns to normal.

The company is working with its global manufacturing sites to deal with potential operational impact.

Stryker’s current priority is to restore the supply-chain system and resume customer orders and shipping. “Our core transactional systems are already on a clear path to full recovery,” the company says.


Fujifilm’s Instax Mini 13 Finally Gets It Right, Captures Selfies That Actually Look Good


Fujifilm Instax Mini 13 Instant Camera
Fujifilm has just introduced the Instax Mini 13, the latest addition to one of the best selling instant camera lines in the world. It sits comfortably in the palm of your hand from the moment you pick it up, and a metallic silver logo on the front adds a subtle touch of shine without making the whole thing feel fussy or overcomplicated.



Simply twist the lens ring and you're good to go in one fluid action. Twist it again and close-up mode engages, letting you shoot subjects directly in front of the lens. Thanks to built-in parallax correction, the viewfinder lines up with the lens so your framing stays centered.

There's a tiny mirror on the front to help you line up selfies. Dual built-in timers count down from two or ten seconds, depending on whether you're taking a group shot or flying solo, and the countdown runs automatically while you get into place. Fujifilm also includes a small wedge that snaps into the strap and can then be used to prop the camera up on a level surface. Exposure is adjusted automatically for the lighting conditions, and the flash has its own small control mechanism that does an excellent job of balancing results whether you're in full light or in the shade.

A pair of regular AA batteries in the bottom is good for about a hundred prints before they need replacing, and the camera turns itself off after five minutes of inactivity to stretch battery life further. The film loads at the back, just like any other Instax Mini. Each finished print measures approximately 3.5 x 2 inches overall, with a photo area of 2.5 x 1.75 inches. You'll wait around 90 seconds for the colors to develop properly, though in cool air it may take a little longer.

Fujifilm Instax Mini 13 Instant Camera
There are five colors, including Dreamy Purple, Frost Blue, Candy Pink, Lagoon Green, and Clay White. The camera alone costs $94 MSRP, and Fujifilm is also releasing a new film pack called Pastel Galaxy, which includes sparkling cosmic motifs in ultra delicate pastel tones along the edges. That gives a fun touch to each print. If you scan prints into your phone, you will find that the companion app now does a better job of isolating the image from the background, resulting in cleaner-looking digital copies. Availability begins in late June.