Connect with us
DAPA Banner

Tech

Meta weighing up 20pc global layoffs

Published

on

‘This is a speculative report about theoretical approaches,’ Meta responds.

Meta is planning a fresh round of layoffs that could affect 20pc or more of the company’s global workforce, reported Reuters.

The layoffs, which could affect around 15,800 jobs, are meant to offset Meta’s massive AI spend and prepare the company for AI-assisted work instead, sources told the publication. Meta employs nearly 79,000 globally, with around 1,700 in Ireland.

Meta did not verify the contents of the report. A company spokesperson told SiliconRepublic.com: “This is a speculative report about theoretical approaches.”

Advertisement

The Facebook parent, much like many in the Big Tech league, has cut thousands of jobs in recent years in favour of spending billions for its AI build-out.

Meta laid off 5pc of global staff, targeting its “lowest performers”, in early 2025, amounting to around 3,600 people at the time. In October, it cut 600 jobs from its AI division, Superintelligence Labs. In 2022, it cut 11,000 jobs globally, with Irish workers affected, and in 2023, it laid off 10,000 worldwide.

Headcount in Ireland was cut by 20pc in 2024, which followed an 18pc decline during 2023. In early 2025, Meta employed around 2,000 in the country. That number is now down by around 300.

Reports from January 2026 suggested that Meta could cut 10pc of its Reality Labs division, which employs roughly 15,000. In December, it was speculated that the company would be reducing the budget and cutting staff for its ‘metaverse’, to include the ‘Horizon Worlds’ project and its Quest virtual reality unit.

Advertisement

Meta expects its total expenses for the year to be as high as $135bn, driven by an increased investment to support its Superintelligence Labs efforts as well as its core business. The company has been building its own in-house hardware, and poaching key talent from its rivals to boost its AI efforts.

In February, Meta announced a multi-year chip deal with Nvidia that would reportedly cost the company billions of dollars. It struck a $14.2bn deal with CoreWeave for its cloud compute power four months earlier. Meanwhile, a $10bn deal with Google for its cloud services is also speculated.

Recently, Meta spent as much as $3bn to acquire the Chinese-founded AI start-up Manus. Earlier this month, it acquired the viral social platform for AI bots, Moltbook, for an undisclosed amount.

Meta stocks fell by as much as 23pc from their August peak since last Friday (13 March). Investors are also concerned after the company delayed its much anticipated AI model Avocado.

Advertisement

Don’t miss out on the knowledge you need to succeed. Sign up for the Daily Brief, Silicon Republic’s digest of need-to-know sci-tech news.

Source link

Advertisement
Continue Reading
Click to comment

You must be logged in to post a comment Login

Leave a Reply

Tech

Hydropower Line From Quebec Could Power a Million NYC Homes

Published

on

The Champlain Hudson Power Express, a $6 billion, 339-mile buried transmission line, will soon deliver Canadian hydropower from Hydro-Quebec to New York City. The project could supply up to 20% of the city’s electricity and power roughly one million homes throughout the year. “This is far and away the largest project I have ever worked on,” said Bob Harrison, who has worked in infrastructure for 40 years and is the head of engineering for the Champlain Hudson Power Express. “We like to say it’s the largest project you’ll never see.” The New York Times reports: The massive power project, expected to provide energy to a million New York City customers a year, travels underground and underwater, from the northern plains at the Canadian border to the filled-in marshlands of coastal Queens, much of it loosely following the Hudson River. Its construction included the underwater installation of more than two million feet of cable imported from Sweden. It also required special boats, loaded with equipment that could shoot water jets deep into the sediment, to create trenches for the cable. Then, when it came to placing cable beneath the landscape, more than 700 land-use easements were needed, plus an additional 1.55 million feet of cable.

The Champlain Hudson Power Express has found a way to plug into the city, but it wasn’t easy. The work included 10 new manholes and more than three miles of new underground circuitry, according to Con Edison, the city’s primary electricity provider. “It was literally a hand weave under the streets of Queens,” said Jennifer Laird-White, the head of external affairs for Transmission Developers. The hydropower travels from Canada via two buried cables that are as round as cantaloupes. Those lines snake for hundreds of miles under a lake, several rivers (including the Hudson for about 90 miles) and through buried trenches alongside train tracks and roads. The cables resurface in Astoria, Queens, where a converter station shapes, filters and refines the raw power into a product that New Yorkers can consume.

In two cavernous rooms that could be mistaken for “Star Wars” sets, the electricity flows through 30 hanging structures encased in what look like metallic, dinosaurlike exoskeletons. Each one weighs about as much as a small humpback whale and contains microprocessors, thousands of valves and fiber wires. “I am still wowed when I walk into that facility,” said Mr. Harrison, the engineer. “I mean, it is just mind-boggling.”

Source link

Advertisement
Continue Reading

Tech

Jensen Huang just put Nvidia’s Blackwell and Vera Rubin sales projections into the $1 trillion stratosphere

Published

on

Nvidia CEO Jensen Huang threw out a lot of numbers — mostly of the technical variety — during his keynote Monday to kick off the company’s annual GTC Conference in San Jose, California.

But there was one financial figure that investors surely took notice of: his projection that there will be $1 trillion worth of orders for Nvidia’s Blackwell and Vera Rubin chips, a monetary reflection of a booming AI business.

About an hour into his keynote, Huang noted that last year Nvidia saw about $500 billion in demand for its Blackwell and upcoming Rubin chips through 2026.

“Now, I don’t know if you guys feel the same way, but $500 billion is an enormous amount of revenue,” he said. “Well, I’m here to tell you that right now where I stand — a few short months after GTC DC, one year after last GTC — right here where I stand, I see through 2027, at least $1 trillion.”

Advertisement

The Rubin computing chip architecture, which was first announced in 2024, has been described by Huang as the state of the art in AI hardware that outperforms its Blackwell predecessor. The company said in January, when it officially started production of Rubin, it would operate 3.5x faster than the Blackwell architecture on model-training tasks and 5x faster on inference tasks, reaching as high as 50 petaflops.

Nvidia has said it expects to ramp up production in the second half of the year.

Techcrunch event

San Francisco, CA
|
October 13-15, 2026

Advertisement

Source link

Advertisement
Continue Reading

Tech

I tested the ‘future self’ prompt in ChatGPT and couldn’t believe how personal the advice it gave me was

Published

on

Viral AI prompts are usually just a little party trick, but a new one shared on Reddit promised to evoke actual feelings, simply by asking ChatGPT to travel to the future on your behalf and send a letter from a more successful version of yourself.

Specifically, the prompt designed by the user was:

Source link

Advertisement
Continue Reading

Tech

Who Really Owns Maverik Gas Stations?

Published

on





While Maverik isn’t officially the best gas station convenience store, slotting into 7th place overall for 2025 according to the American Customer Satisfaction Index (ACSI), the chain offers fueling stations in 20 states with over 800 stores. Ownership of Maverik gas stations remains with the family that started it under FJ Management, a Utah-based private holding company. Piecing together information from FJ Management’s online history and news reports reveals Maverik’s lineage.

The first Maverik gas station opened in 1928 with two gas pumps in Afton, Wyoming. The entrepreneur responsible was 20-year-old Reuel Call, using funds earned from renting roller skates. Ultimately, Reuel teamed up with his brother Osborne, a partnership that lasted until Reuel bought Osborne’s share of the business in 1965.

Osborne’s son, O. Jay Call, having learned the gas station business working for his father and uncle Reuel, opened his first gas station in Ontario Oregon in 1965, with another in Lewiston, Idaho coming later. Seeing success in the business, Jay founded Flying J in 1968, running the chain of truck stops travel plazas until his death in 2008.

Advertisement

Following Jay’s death, Flying J landed in Chapter 11 bankruptcy. Crystal Call Maggelet, Jay’s daughter, was named President and CEO of Flying J in 2009, leading the company out of bankruptcy with full repayment to creditors in 2010. Rebranded as FJ Management, the company, under Maggelet’s leadership, acquired Maverik, with Mike Call, Reuel’s grandson, at the helm in 2012.

Advertisement

What’s special about Maverick gas stations?

Maverik advertises itself as “adventure’s first stop.” With its mountain-themed tri-peak logo, Adventure Club rewards program, and in-house BonFire menu, the chain is turning its gas stations into massive convenience stores like many of its competitors.

The number of Maverik gas stations expanded rapidly in 2023. That’s the year Maverik bought Kum & Go, a rival chain with 400 locations in 13 Midwest and southern states. At the time, the acquisition made Maverik the 12th largest in its class. Prior to that, Maverik gas stations were mostly limited to the American west.

Maverik’s Adventure Club is free to join and offers a rewards program and everyday savings on fuel. By joining the Adventure Club and entering your associated phone number at the pump, you’ll save 2-cents on every gallon of the fuel of your choice. In addition, you’ll earn trail points that can be used for purchases inside Maverik or through the Maverik app. You’ll get a point for each gallon of fuel purchased and two points for every dollar spent in store, but a BonFire burrito will set you back 500 trail points, so don’t expect instant gratification from points.

Inside most Maverik gas stations, you’ll find a BonFire Grill featuring made-to-order selections like tacos, quesadillas, nachos, salads, and pizza. There are also a number of sweet snacks like cookies, donuts, and muffins, breakfast options featuring bowls, burritos, sandwiches, and biscuits, and lunch selections including burgers, corn dogs, and wraps. However, please keep one of the primary rules of gas station etiquette in mind and pull away from the gas pump before going inside to order.

Advertisement



Source link

Advertisement
Continue Reading

Tech

Seattle puts Microsoft Copilot expansion on hold as new mayor takes stock of AI

Published

on

The downtown Seattle skyline. (GeekWire Photo / Lisa Stiffler)

Five months after releasing its “responsible AI plan” providing guidelines for the municipality’s use of artificial intelligence, the City of Seattle has tapped the brakes on the tech’s official deployment for city employees.

Mayor Katie Wilson last month paused the planned citywide rollout of Microsoft Copilot, as first reported Monday in The Seattle Times. Her predecessor, Mayor Bruce Harrell, had approved the launch before leaving office in December.

“While implementation of the technology is delayed, the education and governance work continues,” Megan Erb, spokesperson for the Seattle Information Technology Department, told GeekWire. “The City is still conducting educational roadshows for departments, as well as working to advance our foundational work in data governance and data readiness.” 

In September, Seattle released its AI plan, which covers training and skill-building opportunities for city employees, and establishes a framework to facilitate and evaluate the use of AI tools in city operations. The city also conducted a pilot test of Copilot with 500 employees. The technology is available at no additional cost for Microsoft 365 users under Seattle’s enterprise agreement.

Participants reported:

Advertisement
  • Collectively saving more than 450 hours of work per week, such as drafting communications, report preparation, document analysis and research.
  • The technology proved most helpful for writing more clearly, producing faster summaries of documents and meeting notes, and quick access to policies and regulations.
  • 83% said Copilot Chat provided “business value.”
  • 79% said it was a positive user experience.

Seattle has been a leader in efforts to adopt next-gen AI tools, and says it issued the nation’s first generative AI policy in fall 2023. Even before the recently released AI plan, Seattle already had policies requiring “human-in-the-loop” oversight, meaning employees must review generative AI outputs before official use and disclose when work is AI assisted. The city also identified prohibited applications, such as AI in hiring decisions and facial recognition, due to concerns about bias and reliability.

Concerns about municipal AI regulations and oversight are widespread. An investigative series published earlier this year by the news organization Cascade PBS found that multiple Washington cities had limited guardrails around AI use, raising public trust and privacy concerns. Seattle was not among the cities scrutinized.

Seattle leaders in the past have framed their effort as a balance between embracing new technology and upholding their fundamental obligation to serve the public, emphasizing that AI is a tool — not a replacement for employees.

Erb said the delayed deployment of Copilot is a part of a “phased approach” to ensure “the City responsibly tests and adopts artificial intelligence tools, meets all privacy and security requirements, and deploys solutions that provide clear benefits to employees while upholding the City’s Responsible AI commitments.” 

Rob Lloyd, Seattle’s chief technology officer, resigned last month, effective March 27, to become executive director of the Center for Digital Government. The city is recruiting a replacement.

Advertisement

In December, the city appointed Lisa Qian as its first AI Officer. Her experience includes serving as a senior manager of data science at LinkedIn, as well other tech company leadership positions.

During the fall budget process, the Seattle City Council asked the Seattle IT Department to provide quarterly reports on the use of AI, and that information will be submitted April 1.

The city previously identified 41 priority projects in which AI could potentially improve government performance and public services. Updates on those efforts will be included in the upcoming report, Erb said.

Source link

Advertisement
Continue Reading

Tech

Polyphonic Tunes On The Sharp PC-E500

Published

on

If you’re a diehard fan of the chiptune scene, you’ve probably heard endless beautiful compositions on the Nintendo Game Boy, Commodore 64, and a few phat FM tracks from Segas of years later. What the scene is yet to see is a breakout artist ripping hot tracks on the Sharp PC-E500. If you wanted to, though, you’d probably find use in this 3-voice music driver for the ancient 1993 mini-PC. 

This comes to us from [gikonekos], who dug up the “PLAY3” code from the Japanese magazine “Pocket Computer Journal” published in November 1993. Over on GitHub, the original articles have been scanned, and the assembly source code for the PLAY3 driver has been reconstructed. There’s also documentation of how the driver actually works, along with verification against RAM dumps from actual Sharp PC-E500 hardware. The driver itself runs as a machine code extension to the BASIC interpreter on the machine. The “PLAY” command can then be used to specify a string of notes to play at a given tempo and octave. Polyphony is simulated using time-division sound generation, with output via the device’s rather pathetic single piezo buzzer.

It’s very cool to see this code preserved for the future. That said, don’t expect to see it on stage at the next Boston Bitdown or anything—as this example video shows, it’s not exactly the punchiest chiptune monster out there. We’ll probably stick to our luscious fake-bit creations for now, while Nintendo hardware will still remain the bedrock of the movement.

Advertisement

Source link

Advertisement
Continue Reading

Tech

Ramp acquires Juno, a corporate travel expense startup backed by Madrona

Published

on

How YouGov's Reform Deficit Was Created as Farage Forces Pollster to Climb Down – Guido Fawkes
Juno co-founders Devon Tivona (left) and Sam Felsenthal. (Juno Photo)

Fintech giant Ramp announced the acquisition of Juno, a startup founded in 2024 that built a corporate travel platform to help manage non-employee expenses.

Terms of the deal were not disclosed. Juno will maintain its brand and employees.

Juno’s platform guides coordinators and their guests through booking, logistics, payments, reimbursements, and reconciliation.

“A bad candidate travel experience can cost you a hire,” Ramp CTO Karim Atiyeh said in a statement. “Juno built something strong in a category that matters. Our job now is to give them leverage and stay out of the way.”

Portland, Ore.-based Devon Tivona and Denver-based Sam Felsenthal co-founded Juno and they’ll continue in their leadership roles as co-CEOs. They previously co-founded Pana, another corporate guest travel platform, that was acquired by Coupa in 2021.

Advertisement

Juno raised a $2 million seed round last year led by Seattle-based Madrona along with Bungalow Ventures.

“Joining Ramp gives Devon and Sam the resources to pursue the vision they’ve been working toward all along: guest travel, payments, and expenses operating as one coherent system,” Madrona Managing Director Steve Singh wrote on LinkedIn.

Singh co-founded the travel and expense management giant Concur, which was acquired by SAP in 2014 for $8.3 billion. He led a group of investors in the April 2024 acquisition of Direct Travel Inc., a Colorado-based corporate travel management company, and is executive chairman of Otto, a Seattle-based startup developing an AI virtual assistant for business travel booking.

Singh also serves as executive chairman at Spotnana, a travel-as-a-service technology platform (he’s currently also interim CEO); Troop, a group meetings and events company; and Center, a corporate card and expense management platform that was acquired by American Express. 

Advertisement

Source link

Continue Reading

Tech

Nvidia launches enterprise AI agent platform with Adobe, Salesforce, SAP among 17 adopters at GTC 2026

Published

on

Jensen Huang walked onto the GTC stage Monday wearing his trademark leather jacket and carrying, as it turned out, the blueprints for a new kind of monopoly.

The Nvidia CEO unveiled the Agent Toolkit, an open-source platform for building autonomous AI agents, and then rattled off the names of the companies that will use it: Adobe, Salesforce, SAP, ServiceNow, Siemens, CrowdStrike, Atlassian, Cadence, Synopsys, IQVIA, Palantir, Box, Cohesity, Dassault Systèmes, Red Hat, Cisco and Amdocs. Seventeen enterprise software companies, touching virtually every industry and every Fortune 500 corporation, all agreeing to build their next generation of AI products on a shared foundation that Nvidia designed, Nvidia optimizes and Nvidia maintains.

The toolkit provides the models, the runtime, the security framework and the optimization libraries that AI agents need to operate autonomously inside organizations — resolving customer service tickets, designing semiconductors, managing clinical trials, orchestrating marketing campaigns. Each component is open source. Each is optimized for Nvidia hardware. The combination means that as AI agents proliferate across the corporate world, they will generate demand for Nvidia GPUs not because companies choose to buy them but because the software they depend on was engineered to require them.

“The enterprise software industry will evolve into specialized agentic platforms,” Huang told the crowd, “and the IT industry is on the brink of its next great expansion.” What he left unsaid is that Nvidia has just positioned itself as the tollbooth at the entrance to that expansion — open to all, owned by one.

Advertisement

Inside Nvidia’s Agent Toolkit: the software stack designed to power every corporate AI worker

To grasp the significance of Monday’s announcements, it helps to understand the problem Nvidia is solving.

Building an enterprise AI agent today is an exercise in frustration. A company that wants to deploy an autonomous system — one that can, say, monitor a telecommunications network and proactively resolve customer issues before anyone calls to complain — must assemble a language model, a retrieval system, a security layer, an orchestration framework and a runtime environment, typically from different vendors whose products were never designed to work together.

Nvidia’s Agent Toolkit collapses that complexity into a unified platform. It includes Nemotron, a family of open models optimized for agentic reasoning; AI-Q, an open blueprint that lets agents perceive, reason and act on enterprise knowledge; OpenShell, an open-source runtime enforcing policy-based security, network and privacy guardrails; and cuOpt, an optimization skill library. Developers can use the toolkit to create specialized AI agents that act autonomously while using and building other software to complete tasks.

The AI-Q component addresses a pain point that has dogged enterprise AI adoption: cost. Its hybrid architecture routes complex orchestration tasks to frontier models while delegating research tasks to Nemotron’s open models, which Nvidia says can cut query costs by more than 50 percent while maintaining top-tier accuracy. Nvidia used the AI-Q Blueprint to build what it claims is the top-ranking AI agent on both the DeepResearch Bench and DeepResearch Bench II leaderboards — benchmarks that, if they hold under independent validation, position the toolkit as not merely convenient but competitively necessary.

Advertisement

OpenShell tackles what has been the single biggest obstacle in every boardroom conversation about letting AI agents loose inside corporate systems: trust. The runtime creates isolated sandboxes that enforce strict policies around data access, network reach and privacy boundaries. Nvidia is collaborating with Cisco, CrowdStrike, Google, Microsoft Security and TrendAI to integrate OpenShell with their existing security tools — a calculated move that enlists the cybersecurity industry as a validation layer for Nvidia’s approach rather than a competing one.

The partner list that reads like the Fortune 500: who signed on and what they’re building

The breadth of Monday’s enterprise adoption announcements reveals Nvidia’s ambitions more clearly than any specification sheet could.

Adobe, in a simultaneously announced strategic partnership, will adopt Agent Toolkit software as the foundation for running hybrid, long-running creativity, productivity and marketing agents. Shantanu Narayen, Adobe’s chair and CEO, said the companies will bring together “our Firefly models, CUDA libraries into our applications, 3D digital twins for marketing, and Agent Toolkit and Nemotron to our agentic frameworks to deliver high-quality, controllable and enterprise-grade AI workflows of the future.” The partnership extends deep: Adobe will explore OpenShell and Nemotron as foundations for personalized, secure agentic loops, and will evaluate the toolkit for large-scale workflows powered by Adobe Experience Platform. Nvidia will provide engineering expertise, early access to software and targeted go-to-market support.

Salesforce’s integration may be the one enterprise IT leaders parse most carefully. The company is working with Nvidia Agent Toolkit software including Nemotron models, enabling customers to build, customize and deploy AI agents using Agentforce for service, sales and marketing. The collaboration introduces a reference architecture where employees can use Slack as the primary conversational interface and orchestration layer for Agentforce agents — powered by Nvidia infrastructure — that participate directly in business workflows and pull from data stores in both on-premises and cloud environments. For the millions of knowledge workers who already conduct their professional lives inside Slack, this turns a messaging app into the command center for corporate AI.

Advertisement

SAP, whose software underpins the financial and operational plumbing of most Global 2000 companies, is using open Agent Toolkit software including NeMo for enabling AI agents through Joule Studio on SAP Business Technology Platform, enabling customers and partners to design agents tailored to their own business needs. ServiceNow’s Autonomous Workforce of AI Specialists leverage Agent Toolkit software, the AI-Q Blueprint and a combination of closed and open models, including Nemotron and ServiceNow’s own Apriel models — a hybrid approach that suggests the toolkit is designed not to replace existing AI investments but to become the connective tissue between them.

From chip design to clinical trials: how agentic AI is reshaping specialized industries

The partner list extends well beyond horizontal software platforms into deeply specialized verticals where autonomous agents could compress timelines measured in years.

In semiconductor design — where a single advanced chip can cost billions of dollars and take half a decade to develop — three of the four major electronic design automation companies are building agents on Nvidia’s stack. Cadence will leverage Agent Toolkit and Nemotron with its ChipStack AI SuperAgent for semiconductor design and verification. Siemens is launching its Fuse EDA AI Agent, which uses Nemotron to autonomously orchestrate workflows across its entire electronic design automation portfolio, from design conception through manufacturing sign-off. Synopsys is building a multi-agent framework powered by its AgentEngineer technology using Nemotron and Nemo Agent Toolkit.

Healthcare and life sciences present perhaps the most consequential use case. IQVIA is integrating Nemotron and other Agent Toolkit software with IQVIA.ai, a unified agentic AI platform designed to help life sciences organizations work more efficiently across clinical, commercial and real-world operations. The scale is already significant: IQVIA has deployed more than 150 agents across internal teams and client environments, including 19 of the top 20 pharmaceutical companies.

Advertisement

The security sector is embedding itself into the architecture from the ground floor. CrowdStrike unveiled a Secure-by-Design AI Blueprint that embeds its Falcon platform protection directly into Nvidia AI agent architectures — including agents built on AI-Q and OpenShell — and is advancing agentic managed detection and response using Nemotron reasoning models. Cisco AI Defense will provide AI security protection for OpenShell, adding controls and guardrails to govern agent actions. These are not aftermarket bolt-ons; they are foundational integrations that signal the security industry views Nvidia’s agent platform as the substrate it needs to protect.

Dassault Systèmes is exploring Agent Toolkit software and Nemotron for its role-based AI agents, called Virtual Companions, on its 3DEXPERIENCE agentic platform. Atlassian is working with the toolkit as it evolves its Rovo AI agentic strategy for Jira and Confluence. Box is using it to enable enterprise agents to securely execute long-running business processes. Palantir is developing AI agents on Nemotron that run on its sovereign AI Operating System Reference Architecture.

The open-source gambit: why giving software away is Nvidia’s most aggressive business move

There is something almost paradoxical about a company with a multi-trillion-dollar market capitalization giving away its most strategically important software. But Nvidia’s open-source approach to Agent Toolkit is less an act of generosity than a carefully constructed competitive moat.

OpenShell is open source. Nemotron models are open. AI-Q blueprints are publicly available. LangChain, the agent engineering company whose open-source frameworks have been downloaded over 1 billion times, is working with Nvidia to integrate Agent Toolkit components into the LangChain deep agent library for developing advanced, accurate enterprise AI agents at scale. When the most popular independent framework for building AI agents absorbs your toolkit, you have transcended the category of vendor and entered the category of infrastructure.

Advertisement

But openness in AI has a way of being strategically selective. The models are open, but they are optimized for Nvidia’s CUDA libraries — the proprietary software layer that has locked developers into Nvidia GPUs for two decades. The runtime is open, but it integrates most deeply with Nvidia’s security partners. The blueprints are open, but they perform best on Nvidia hardware. Developers can explore Agent Toolkit and OpenShell on build.nvidia.com today, running on inference providers and Nvidia Cloud Partners including Baseten, CoreWeave, DeepInfra, DigitalOcean and others — all of which run Nvidia GPUs.

The strategy has a historical analog in Google’s approach to Android: give away the operating system to ensure that the entire mobile ecosystem generates demand for your core services. Nvidia is giving away the agent operating system to ensure that the entire enterprise AI ecosystem generates demand for its core product — the GPU. Every Salesforce agent running Nemotron, every SAP workflow orchestrated through OpenShell, every Adobe creative pipeline accelerated by CUDA creates another strand of dependency on Nvidia silicon.

This also explains the Nemotron Coalition announced Monday — a global collaboration of model builders including Mistral AI, Cursor, LangChain, Perplexity, Reflection AI, Sarvam and Thinking Machines Lab, all working to advance open frontier models. The coalition’s first project will be a base model codeveloped by Mistral AI and Nvidia, trained on Nvidia DGX Cloud, that will underpin the upcoming Nemotron 4 family. By seeding the open model ecosystem with Nvidia-optimized foundations, the company ensures that even models it does not build will run best on its hardware.

What could go wrong: the risks enterprise buyers should weigh before going all-in

For all the ambition on display Monday, several realities temper the narrative.

Advertisement

Adoption announcements are not deployment announcements. Many of the partner disclosures use carefully hedged language — “exploring,” “evaluating,” “working with” — that is standard in embargoed press releases but should not be confused with production systems serving millions of users. Adobe’s own forward-looking statements note that “due to the non-binding nature of the agreement, there are no assurances that Adobe will successfully negotiate and execute definitive documentation with Nvidia on favorable terms or at all.” The gap between a GTC keynote demonstration and an enterprise-grade rollout remains substantial.

Nvidia is not the only company chasing this market. Microsoft, with its Copilot ecosystem and Azure AI infrastructure, pursues a parallel strategy with the advantage of owning the operating systems and productivity software that most enterprises already use. Google, through Gemini and its cloud platform, has its own agent vision. Amazon, via Bedrock and AWS, is building comparable primitives. The question is not whether enterprise AI agents will be built on some platform but whether the market will consolidate around one stack or fragment across several.

The security claims, while architecturally sound, remain unproven at scale. OpenShell’s policy-based guardrails are a promising design pattern, but autonomous agents operating in complex enterprise environments will inevitably encounter edge cases that no policy framework has anticipated. CrowdStrike’s Secure-by-Design AI Blueprint and Cisco AI Defense’s OpenShell integration are exactly the kind of layered defense enterprise buyers will demand — but both are newly unveiled, not battle-hardened through years of adversarial testing. Deploying agents that can autonomously access data, execute code and interact with production systems introduces a threat surface that the industry has barely begun to map.

And there is the question of whether enterprises are ready for agents at all. The technology may be available, but organizational readiness — the governance structures, the change management, the regulatory frameworks, the human trust — often lags years behind what the platforms can deliver.

Advertisement

Beyond agents: the full scope of what Nvidia announced at GTC 2026

Monday’s Agent Toolkit announcement did not arrive in isolation. It landed amid an avalanche of product launches that, taken together, describe a company remaking itself at every layer of the computing stack.

Nvidia unveiled the Vera Rubin platform — seven new chips in full production, including the Vera CPU purpose-built for agentic AI, the Rubin GPU, and the newly integrated Groq 3 LPU inference accelerator — designed to power every phase of AI from pretraining to real-time agentic inference. The Vera Rubin NVL72 rack integrates 72 Rubin GPUs and 36 Vera CPUs, delivering what Nvidia claims is up to 10x higher inference throughput per watt at one-tenth the cost per token compared with the Blackwell platform. Dynamo 1.0, an open-source inference operating system that Nvidia describes as the “operating system for AI factories,” entered production with adoption from AWS, Microsoft Azure, Google Cloud and Oracle Cloud Infrastructure alongside companies like Cursor, Perplexity, PayPal and Pinterest.

The BlueField-4 STX storage architecture promises up to 5x token throughput for the long-context reasoning that agents demand, with early adopters including CoreWeave, Crusoe, Lambda, Mistral AI and Nebius. BYD, Geely, Isuzu and Nissan announced Level 4 autonomous vehicle programs on Nvidia’s DRIVE Hyperion platform, and Uber disclosed plans to launch Nvidia-powered robotaxis across 28 cities and four continents by 2028, beginning with Los Angeles and San Francisco in the first half of 2027.

Roche, the pharmaceutical giant, announced it is deploying more than 3,500 Nvidia Blackwell GPUs across hybrid cloud and on-premises environments in the U.S. and Europe — what it calls the largest announced GPU footprint available to a pharmaceutical company. Nvidia also launched physical AI tools for healthcare robotics, with CMR Surgical, Johnson & Johnson MedTech and others adopting the platform, and released Open-H, the world’s largest healthcare robotics dataset with over 700 hours of surgical video. And Nvidia even announced a Space Module based on the Vera Rubin architecture, promising to bring data-center-class AI to orbital environments.

Advertisement

The real meaning of GTC 2026: Nvidia is no longer selling picks and shovels

Strip away the product specifications and benchmark claims and what emerges from GTC 2026 is a single, clarifying thesis: Nvidia believes the era of AI agents will be larger than the era of AI models, and it intends to own the platform layer of that transition the way it already owns the hardware layer of the current one.

The 17 enterprise software companies that signed on Monday are making a bet of their own. They are wagering that building on Nvidia’s agent infrastructure will let them move faster than building alone — and that the benefits of a shared platform outweigh the risks of shared dependency. For Salesforce, it means Agentforce agents that can draw from both cloud and on-premises data through a single Slack interface. For Adobe, it means creative AI pipelines that span image, video, 3D and document intelligence. For SAP, it means agents woven into the transactional fabric of global commerce. Each partnership is rational on its own terms. Together, they form something larger: an industry-wide endorsement of Nvidia as the default substrate for enterprise intelligence.

Huang, who opened his career designing graphics chips for video games, closed his keynote by gesturing toward a future in which AI agents do not just assist human workers but operate as autonomous colleagues — reasoning through problems, building their own tools, learning from their mistakes. He compared the moment to the birth of the personal computer, the dawn of the internet, the rise of mobile computing.

Technology executives have a professional obligation to describe every product cycle as a revolution. But here is what made Monday different: this time, 17 of the world’s most important software companies showed up to agree with him. Whether they did so out of conviction or out of a calculated fear of being left behind may be the most important question in enterprise technology — and it is one that only the next few years can answer.

Advertisement

Source link

Continue Reading

Tech

OpenClaw can bypass your EDR, DLP and IAM without triggering a single alert

Published

on

An attacker embeds a single instruction inside a forwarded email. An OpenClaw agent summarizes that email as part of a normal task. The hidden instruction tells the agent to forward credentials to an external endpoint. The agent complies — through a sanctioned API call, using its own OAuth tokens.

The firewall logs HTTP 200. EDR records a normal process. No signature fires. Nothing went wrong by any definition your security stack understands.
That is the problem. Six independent security teams shipped six OpenClaw defense tools in 14 days. Three attack surfaces survived every one of them.

The exposure picture is already worse than most security teams know. Token Security found that 22% of its enterprise customers have employees running OpenClaw without IT approval, and Bitsight counted more than 30,000 publicly exposed instances in two weeks, up from roughly 1,000. Snyk’s ToxicSkills audit adds another dimension: 36% of all ClawHub skills contain security flaws.

Jamieson O’Reilly, founder of Dvuln and now security adviser to the OpenClaw project, has been one of the researchers pushing fixes hardest from inside. His credential leakage research on exposed instances was among the earliest warnings the community received. Since then, he has worked directly with founder Peter Steinberger to ship dual-layer malicious skill detection and is now driving a capabilities specification proposal through the agentskills standards body.

Advertisement

The team is clear-eyed about the security gaps, he told VentureBeat. “It wasn’t designed from the ground up to be as secure as possible,” O’Reilly said. “That’s understandable given the origins, and we’re owning it without excuses.”

None of it closes the three gaps that matter most.

Three attack surfaces your stack cannot see

The first is runtime semantic exfiltration. The attack encodes malicious behavior in meaning, not in binary patterns, which is exactly what the current defense stack cannot see.

Palo Alto Networks mapped OpenClaw to every category in the OWASP Top 10 for Agentic Applications and identified what security researcher Simon Willison calls a “lethal trifecta”: private data access, untrusted content exposure, and external communication capabilities in a single process. EDR monitors process behavior. The agent’s behavior looks normal because it is normal. The credentials are real, and the API calls are sanctioned, so EDR reads it as a credentialed user doing expected work. Nothing in the current defense ecosystem tracks what the agent decided to do with that access, or why.

Advertisement

The second is cross-agent context leakage. When multiple agents or skills share session context, a prompt injection in one channel poisons decisions across the entire chain. Giskard researchers demonstrated this in January 2026, showing that agents silently appended attacker-controlled instructions to their own workspace files and waited for commands from external servers. The injected prompt becomes a sleeper payload. Palo Alto Networks researchers Sailesh Mishra and Sean P. Morgan warned that persistent memory turns these attacks into stateful, delayed-execution chains. A malicious instruction hidden inside a forwarded message sits in the agent’s context weeks later, activating during an unrelated task.

O’Reilly identified cross-agent context leakage as the hardest of these gaps to close. “This one is especially difficult because it is so tightly bound to prompt injection, a systemic vulnerability that is far bigger than OpenClaw and affects every LLM-powered agent system in the industry,” he told VentureBeat. “When context flows unchecked between agents and skills, a single injected prompt can poison or hijack behavior across the entire chain.” No tool in the current ecosystem provides cross-agent context isolation. IronClaw sandboxes individual skill execution. ClawSec monitors file integrity. Neither tracks how context propagates between agents in the same workflow.

The third is agent-to-agent trust chains with zero mutual authentication. When OpenClaw agents delegate tasks to other agents or external MCP servers, no identity verification exists between them. A compromised agent in a multi-agent workflow inherits the trust of every agent it communicates with. Compromise one through prompt injection, and it can issue instructions to every agent in the chain using trust relationships that the legitimate agent already built.

Microsoft’s security team published guidance in February calling OpenClaw untrusted code execution with persistent credentials, noting the runtime ingests untrusted text, downloads and executes skills from external sources, and performs actions using whatever credentials it holds. Kaspersky’s enterprise risk assessment added that even agents on personal devices threaten organizational security because those devices store VPN configs, browser tokens, and credentials for corporate services. The Moltbook social network for OpenClaw agents already demonstrated the spillover risk: Wiz researchers found a misconfigured database that exposed 1.5 million API authentication tokens and 35,000 email addresses.

Advertisement

What 14 days of emergency patching actually closed

The defense ecosystem split into three approaches. Two tools harden OpenClaw in place. ClawSec, from Prompt Security (a SentinelOne company), wraps agents in continuous verification, monitoring critical files for drift and enforcing zero-trust egress by default. OpenClaw’s VirusTotal integration, shipped jointly by Steinberger, O’Reilly, and VirusTotal’s Bernardo Quintero, scans every published ClawHub skill and blocks known malicious packages.

Two tools are full architectural rewrites. IronClaw, NEAR AI’s Rust reimplementation, runs all untrusted tools inside WebAssembly sandboxes where tool code starts with zero permissions and must explicitly request network, filesystem, or API access. Credentials get injected at the host boundary and never touch agent code, with built-in leak detection scanning requests and responses. Carapace, an independent open-source project, inverts every dangerous OpenClaw default with fail-closed authentication and OS-level subprocess sandboxing.

Two tools focus on scanning and auditability: Cisco’s open-source scanner combines static, behavioral, and LLM semantic analysis, while NanoClaw reduces the entire codebase to roughly 500 lines of TypeScript, running each session in an isolated Docker container.

O’Reilly put the supply chain failure in direct terms. “Right now, the industry basically created a brand-new executable format written in plain human language and forgot every control that should come with it,” he said. His response has been hands-on. He shipped the VirusTotal integration before skills.sh, a much larger repository, adopted a similar pattern. Koi Security’s audit validates the urgency: 341 malicious skills found in early February grew to 824 out of 10,700 on ClawHub by mid-month, with the ClawHavoc campaign planting the Atomic Stealer macOS infostealer inside skills disguised as cryptocurrency trading tools, harvesting crypto wallets, SSH credentials, and browser passwords.

Advertisement

OpenClaw Security Defense Evaluation Matrix

Dimension

ClawSec

VirusTotal Integration

IronClaw

Advertisement

Carapace

NanoClaw

Cisco Scanner

Discovery

Advertisement

Agents only

ClawHub only

No

mDNS scan

Advertisement

No

No

Runtime Protection

Config drift

Advertisement

No

WASM sandbox

OS sandbox + prompt guard

Container isolation

Advertisement

No

Supply Chain

Checksum verify

Signature scan

Advertisement

Capability grants

Ed25519 signed

Manual audit (~500 LOC)

Static + LLM + behavioral

Advertisement

Credential Isolation

No

No

WASM boundary injection

Advertisement

OS keychain + AES-256-GCM

Mount-restricted dirs

No

Auditability

Advertisement

Drift logs

Scan verdicts

Permission grant logs

Prometheus + audit log

Advertisement

500 lines total

Scan reports

Semantic Monitoring

No

Advertisement

No

No

No

No

Advertisement

No

Source: VentureBeat analysis based on published documentation and security audits, March 2026.

The capabilities spec that treats skills like executables

O’Reilly submitted a skills specification standards update to the agentskills maintainers, led primarily by Anthropic and Vercel, that is in active discussion. The proposal requires every skill to declare explicit, user-visible capabilities before execution. Think mobile app permission manifests. He noted the proposal is getting strong early feedback from the security community because it finally treats skills like the executables they are.

“The other two gaps can be meaningfully hardened with better isolation primitives and runtime guardrails, but truly closing context leakage requires deep architectural changes to how untrusted multi-agent memory and prompting are handled,” O’Reilly said. “The new capabilities spec is the first real step toward solving these challenges proactively instead of bolting on band-aids later.”

Advertisement

What to do on Monday morning

Assume OpenClaw is already in your environment. The 22% shadow deployment rate is a floor. These six steps close what can be closed and document what cannot.

  1. Inventory what is running. Scan for WebSocket traffic on port 18789 and mDNS broadcasts on port 5353. Watch corporate authentication logs for new App ID registrations, OAuth consent events, and Node.js User-Agent strings. Any instance running a version before v2026.2.25 is vulnerable to the ClawJacked remote takeover flaw.

  2. Mandate isolated execution. No agent runs on a device connected to production infrastructure. Require container-based deployment with scoped credentials and explicit tool whitelists.

  3. Deploy ClawSec on every agent instance and run every ClawHub skill through VirusTotal and Cisco’s open-source scanner before installation. Both are free. Treat skills as third-party executables, because that is what they are.

  4. Require human-in-the-loop approval for sensitive agent actions. OpenClaw’s exec approval settings support three modes: security, ask, and allowlist. Set sensitive tools to ask so the agent pauses and requests confirmation before executing shell commands, writing to external APIs, or modifying files outside its workspace. Any action that touches credentials, changes configurations, or sends data to an external endpoint should stop and wait for a human to approve it.

  5. Map the three surviving gaps against your risk register. Document whether your organization accepts, mitigates, or blocks each one: runtime semantic exfiltration, cross-agent context leakage, and agent-to-agent trust chains.

  6. Bring the evaluation table to your next board meeting. Frame it not as an AI experiment but as a critical bypass of your existing DLP and IAM investments. Every agentic AI platform that follows will face this same defense cycle. The framework transfers to every agent tool your team will assess for the next two years.

The security stack you built for applications and endpoints catches malicious code. It does not catch an agent following a malicious instruction through a legitimate API call. That is where these three gaps live.

Source link

Advertisement
Continue Reading

Tech

Ternary RISC Processor Achieves Non-Binary Computing Via FPGA

Published

on

You would be very hard pressed to find any sort of CPU or microcontroller in a commercial product that uses anything but binary to do its work. And yet, other options exist! Ternary computing involves using trits with three states instead of bits with two. It’s not popular, but there is now a design available for a ternary processor that you could potentially get your hands on.

The device in question is called the 5500FP, as outlined in a research paper from [Claudio Lorenzo La Rosa.] Very few ternary processors exist, and little effort has ever been made to fabricate such a device in real silicon. However, [Claudio] explains that it’s entirely possible to implement a ternary logic processor based on RISC principles by using modern FPGA hardware. The impetus to do so is because of the perceived benefits of ternary computing—notably, that with three states, each “trit” can store more information than regular old binary “bits.” Beyond that, the use of a “balanced ternary” system, based on logical values of -1, 0 , and 1, allows storing both negative and positive numbers without a wasted sign bit, and allows numbers to be negated trivially simply by inverting all trits together.

The research paper does a good job of outlining the basis of this method of computing, as well as the mode of operation of the 5500FP processor. For now, it’s a 24-trit device operating at a frequency of 20MHz, but the hope is that in future it would be possible to move to custom silicon to improve performance and capability. The hope is that further development of ternary computing hardware could lead to parts capable of higher information density and lower power consumption, both highly useful in this day and age where improvements to conventional processor designs are ever hard to find.

Advertisement

Head over to the Ternary Computing website if you’re intrigued by the Ways of Three and want to learn more. We perhaps don’t expect ternary computing to take over any time soon, given the Soviets didn’t get far with it in the 1950s. Still, the concept exists and is fun to contemplate if you like the mental challenge. Maybe you can even start a rumor that the next iPhone is using an all-ternary processor and spread it across a few tech blogs before the week is out. Let us know how you get on.

Source link

Advertisement
Continue Reading

Trending

Copyright © 2025