Nvidia launches enterprise AI agent platform with Adobe, Salesforce, SAP among 17 adopters at GTC 2026


Jensen Huang walked onto the GTC stage Monday wearing his trademark leather jacket and carrying, as it turned out, the blueprints for a new kind of monopoly.

The Nvidia CEO unveiled the Agent Toolkit, an open-source platform for building autonomous AI agents, and then rattled off the names of the companies that will use it: Adobe, Salesforce, SAP, ServiceNow, Siemens, CrowdStrike, Atlassian, Cadence, Synopsys, IQVIA, Palantir, Box, Cohesity, Dassault Systèmes, Red Hat, Cisco and Amdocs. Seventeen enterprise software companies, touching virtually every industry and every Fortune 500 corporation, all agreeing to build their next generation of AI products on a shared foundation that Nvidia designed, Nvidia optimizes and Nvidia maintains.

The toolkit provides the models, the runtime, the security framework and the optimization libraries that AI agents need to operate autonomously inside organizations — resolving customer service tickets, designing semiconductors, managing clinical trials, orchestrating marketing campaigns. Each component is open source. Each is optimized for Nvidia hardware. The combination means that as AI agents proliferate across the corporate world, they will generate demand for Nvidia GPUs not because companies choose to buy them but because the software they depend on was engineered to require them.

“The enterprise software industry will evolve into specialized agentic platforms,” Huang told the crowd, “and the IT industry is on the brink of its next great expansion.” What he left unsaid is that Nvidia has just positioned itself as the tollbooth at the entrance to that expansion — open to all, owned by one.

Inside Nvidia’s Agent Toolkit: the software stack designed to power every corporate AI worker

To grasp the significance of Monday’s announcements, it helps to understand the problem Nvidia is solving.

Building an enterprise AI agent today is an exercise in frustration. A company that wants to deploy an autonomous system — one that can, say, monitor a telecommunications network and proactively resolve customer issues before anyone calls to complain — must assemble a language model, a retrieval system, a security layer, an orchestration framework and a runtime environment, typically from different vendors whose products were never designed to work together.

Nvidia’s Agent Toolkit collapses that complexity into a unified platform. It includes Nemotron, a family of open models optimized for agentic reasoning; AI-Q, an open blueprint that lets agents perceive, reason and act on enterprise knowledge; OpenShell, an open-source runtime enforcing policy-based security, network and privacy guardrails; and cuOpt, an optimization skill library. Developers can use the toolkit to create specialized AI agents that act autonomously while using and building other software to complete tasks.

The AI-Q component addresses a pain point that has dogged enterprise AI adoption: cost. Its hybrid architecture routes complex orchestration tasks to frontier models while delegating research tasks to Nemotron’s open models, which Nvidia says can cut query costs by more than 50 percent while maintaining top-tier accuracy. Nvidia used the AI-Q Blueprint to build what it claims is the top-ranking AI agent on both the DeepResearch Bench and DeepResearch Bench II leaderboards — benchmarks that, if they hold under independent validation, position the toolkit as not merely convenient but competitively necessary.
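
The cost logic behind that hybrid routing is easy to illustrate. Below is a minimal, hypothetical sketch of cost-aware routing between a frontier model and a cheaper open model; the endpoint names, per-token prices and keyword classifier are invented for illustration and are not Nvidia's AI-Q API.

```python
# Hypothetical sketch of cost-aware hybrid routing, in the spirit of the
# AI-Q design described above. Model names, prices and the keyword-based
# classifier are invented for illustration; this is not Nvidia's API.
from dataclasses import dataclass


@dataclass
class ModelEndpoint:
    name: str
    usd_per_1k_tokens: float  # assumed prices, for demonstration only


FRONTIER = ModelEndpoint("frontier-orchestrator", 0.0150)
OPEN_MODEL = ModelEndpoint("open-research-model", 0.0020)


def classify(prompt: str) -> str:
    """Naive stand-in for a real task classifier: multi-step orchestration
    goes to the frontier model, plain research goes to the open model."""
    cues = ("plan", "coordinate", "delegate", "multi-step")
    return "orchestration" if any(c in prompt.lower() for c in cues) else "research"


def route(prompt: str) -> ModelEndpoint:
    return FRONTIER if classify(prompt) == "orchestration" else OPEN_MODEL


if __name__ == "__main__":
    # A workload dominated by research queries pays mostly the open-model
    # rate, which is where a >50% cost reduction could plausibly come from.
    for q in ("plan a multi-step rollout", "summarize this filing",
              "find prior art on agent runtimes", "coordinate three agents"):
        m = route(q)
        print(f"{q!r} -> {m.name} (${m.usd_per_1k_tokens}/1k tokens)")
```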

OpenShell tackles what has been the single biggest obstacle in every boardroom conversation about letting AI agents loose inside corporate systems: trust. The runtime creates isolated sandboxes that enforce strict policies around data access, network reach and privacy boundaries. Nvidia is collaborating with Cisco, CrowdStrike, Google, Microsoft Security and TrendAI to integrate OpenShell with their existing security tools — a calculated move that enlists the cybersecurity industry as a validation layer for Nvidia’s approach rather than a competing one.
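
To make the guardrail idea concrete, here is a minimal sketch of policy-based sandboxing along the lines the article describes. The policy schema, class names and checks are hypothetical assumptions, not OpenShell's actual interface.

```python
# Hypothetical sketch of policy-based agent guardrails. The schema and
# method names are invented for illustration, not OpenShell's interface.
from dataclasses import dataclass, field
from urllib.parse import urlparse


class PolicyViolation(Exception):
    pass


@dataclass
class AgentPolicy:
    allowed_hosts: set = field(default_factory=set)    # network reach
    readable_paths: set = field(default_factory=set)   # data access
    redacted_fields: set = field(default_factory=set)  # privacy boundary


class Sandbox:
    def __init__(self, policy: AgentPolicy):
        self.policy = policy

    def check_network(self, url: str) -> None:
        host = urlparse(url).hostname or ""
        if host not in self.policy.allowed_hosts:
            raise PolicyViolation(f"network access to {host!r} denied")

    def check_read(self, path: str) -> None:
        if not any(path.startswith(p) for p in self.policy.readable_paths):
            raise PolicyViolation(f"read of {path!r} denied")

    def redact(self, record: dict) -> dict:
        # Strip fields the policy marks as private before the agent sees them.
        return {k: v for k, v in record.items()
                if k not in self.policy.redacted_fields}


if __name__ == "__main__":
    box = Sandbox(AgentPolicy(
        allowed_hosts={"api.internal.example.com"},
        readable_paths={"/srv/tickets/"},
        redacted_fields={"ssn", "dob"},
    ))
    box.check_network("https://api.internal.example.com/v1/tickets")  # allowed
    print(box.redact({"name": "A. Customer", "ssn": "000-00-0000"}))
    try:
        box.check_read("/etc/passwd")  # outside policy, blocked
    except PolicyViolation as err:
        print("blocked:", err)
```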

The partner list that reads like the Fortune 500: who signed on and what they’re building

The breadth of Monday’s enterprise adoption announcements reveals Nvidia’s ambitions more clearly than any specification sheet could.

Adobe, in a simultaneously announced strategic partnership, will adopt Agent Toolkit software as the foundation for running hybrid, long-running creativity, productivity and marketing agents. Shantanu Narayen, Adobe’s chair and CEO, said the companies will bring together “our Firefly models, CUDA libraries into our applications, 3D digital twins for marketing, and Agent Toolkit and Nemotron to our agentic frameworks to deliver high-quality, controllable and enterprise-grade AI workflows of the future.” The partnership runs deep: Adobe will explore OpenShell and Nemotron as foundations for personalized, secure agentic loops, and will evaluate the toolkit for large-scale workflows powered by Adobe Experience Platform. Nvidia will provide engineering expertise, early access to software and targeted go-to-market support.

Salesforce’s integration may be the one enterprise IT leaders parse most carefully. The company is working with Nvidia Agent Toolkit software including Nemotron models, enabling customers to build, customize and deploy AI agents using Agentforce for service, sales and marketing. The collaboration introduces a reference architecture where employees can use Slack as the primary conversational interface and orchestration layer for Agentforce agents — powered by Nvidia infrastructure — that participate directly in business workflows and pull from data stores in both on-premises and cloud environments. For the millions of knowledge workers who already conduct their professional lives inside Slack, this turns a messaging app into the command center for corporate AI.

SAP, whose software underpins the financial and operational plumbing of most Global 2000 companies, is using open Agent Toolkit software, including NeMo, to enable AI agents through Joule Studio on SAP Business Technology Platform, letting customers and partners design agents tailored to their own business needs. ServiceNow’s Autonomous Workforce of AI Specialists leverages Agent Toolkit software, the AI-Q Blueprint and a combination of closed and open models, including Nemotron and ServiceNow’s own Apriel models — a hybrid approach that suggests the toolkit is designed not to replace existing AI investments but to become the connective tissue between them.

From chip design to clinical trials: how agentic AI is reshaping specialized industries

The partner list extends well beyond horizontal software platforms into deeply specialized verticals where autonomous agents could compress timelines measured in years.

In semiconductor design — where a single advanced chip can cost billions of dollars and take half a decade to develop — three of the four major electronic design automation companies are building agents on Nvidia’s stack. Cadence will leverage Agent Toolkit and Nemotron with its ChipStack AI SuperAgent for semiconductor design and verification. Siemens is launching its Fuse EDA AI Agent, which uses Nemotron to autonomously orchestrate workflows across its entire electronic design automation portfolio, from design conception through manufacturing sign-off. Synopsys is building a multi-agent framework powered by its AgentEngineer technology using Nemotron and the NeMo Agent Toolkit.

Healthcare and life sciences present perhaps the most consequential use case. IQVIA is integrating Nemotron and other Agent Toolkit software with IQVIA.ai, a unified agentic AI platform designed to help life sciences organizations work more efficiently across clinical, commercial and real-world operations. The scale is already significant: IQVIA has deployed more than 150 agents across internal teams and client environments, including 19 of the top 20 pharmaceutical companies.

The security sector is embedding itself into the architecture from the ground floor. CrowdStrike unveiled a Secure-by-Design AI Blueprint that embeds its Falcon platform protection directly into Nvidia AI agent architectures — including agents built on AI-Q and OpenShell — and is advancing agentic managed detection and response using Nemotron reasoning models. Cisco AI Defense will provide AI security protection for OpenShell, adding controls and guardrails to govern agent actions. These are not aftermarket bolt-ons; they are foundational integrations that signal the security industry views Nvidia’s agent platform as the substrate it needs to protect.

Dassault Systèmes is exploring Agent Toolkit software and Nemotron for its role-based AI agents, called Virtual Companions, on its 3DEXPERIENCE agentic platform. Atlassian is working with the toolkit as it evolves its Rovo AI agentic strategy for Jira and Confluence. Box is using it to enable enterprise agents to securely execute long-running business processes. Palantir is developing AI agents on Nemotron that run on its sovereign AI Operating System Reference Architecture.

The open-source gambit: why giving software away is Nvidia’s most aggressive business move

There is something almost paradoxical about a company with a multi-trillion-dollar market capitalization giving away its most strategically important software. But Nvidia’s open-source approach to Agent Toolkit is less an act of generosity than a carefully constructed competitive moat.

OpenShell is open source. Nemotron models are open. AI-Q blueprints are publicly available. LangChain, the agent engineering company whose open-source frameworks have been downloaded over 1 billion times, is working with Nvidia to integrate Agent Toolkit components into the LangChain deep agent library for developing advanced, accurate enterprise AI agents at scale. When the most popular independent framework for building AI agents absorbs your toolkit, you have transcended the category of vendor and entered the category of infrastructure.

But openness in AI has a way of being strategically selective. The models are open, but they are optimized for Nvidia’s CUDA libraries — the proprietary software layer that has locked developers into Nvidia GPUs for two decades. The runtime is open, but it integrates most deeply with Nvidia’s security partners. The blueprints are open, but they perform best on Nvidia hardware. Developers can explore Agent Toolkit and OpenShell on build.nvidia.com today, running on inference providers and Nvidia Cloud Partners including Baseten, CoreWeave, DeepInfra, DigitalOcean and others — all of which run Nvidia GPUs.

The strategy has a historical analog in Google’s approach to Android: give away the operating system to ensure that the entire mobile ecosystem generates demand for your core services. Nvidia is giving away the agent operating system to ensure that the entire enterprise AI ecosystem generates demand for its core product — the GPU. Every Salesforce agent running Nemotron, every SAP workflow orchestrated through OpenShell, every Adobe creative pipeline accelerated by CUDA creates another strand of dependency on Nvidia silicon.

This also explains the Nemotron Coalition announced Monday — a global collaboration of model builders including Mistral AI, Cursor, LangChain, Perplexity, Reflection AI, Sarvam and Thinking Machines Lab, all working to advance open frontier models. The coalition’s first project will be a base model codeveloped by Mistral AI and Nvidia, trained on Nvidia DGX Cloud, that will underpin the upcoming Nemotron 4 family. By seeding the open model ecosystem with Nvidia-optimized foundations, the company ensures that even models it does not build will run best on its hardware.

What could go wrong: the risks enterprise buyers should weigh before going all-in

For all the ambition on display Monday, several realities temper the narrative.

Adoption announcements are not deployment announcements. Many of the partner disclosures use carefully hedged language — “exploring,” “evaluating,” “working with” — that is standard in embargoed press releases but should not be confused with production systems serving millions of users. Adobe’s own forward-looking statements note that “due to the non-binding nature of the agreement, there are no assurances that Adobe will successfully negotiate and execute definitive documentation with Nvidia on favorable terms or at all.” The gap between a GTC keynote demonstration and an enterprise-grade rollout remains substantial.

Nvidia is not the only company chasing this market. Microsoft, with its Copilot ecosystem and Azure AI infrastructure, pursues a parallel strategy with the advantage of owning the operating systems and productivity software that most enterprises already use. Google, through Gemini and its cloud platform, has its own agent vision. Amazon, via Bedrock and AWS, is building comparable primitives. The question is not whether enterprise AI agents will be built on some platform but whether the market will consolidate around one stack or fragment across several.

The security claims, while architecturally sound, remain unproven at scale. OpenShell’s policy-based guardrails are a promising design pattern, but autonomous agents operating in complex enterprise environments will inevitably encounter edge cases that no policy framework has anticipated. CrowdStrike’s Secure-by-Design AI Blueprint and Cisco AI Defense’s OpenShell integration are exactly the kind of layered defense enterprise buyers will demand — but both are newly unveiled, not battle-hardened through years of adversarial testing. Deploying agents that can autonomously access data, execute code and interact with production systems introduces a threat surface that the industry has barely begun to map.

And there is the question of whether enterprises are ready for agents at all. The technology may be available, but organizational readiness — the governance structures, the change management, the regulatory frameworks, the human trust — often lags years behind what the platforms can deliver.

Beyond agents: the full scope of what Nvidia announced at GTC 2026

Monday’s Agent Toolkit announcement did not arrive in isolation. It landed amid an avalanche of product launches that, taken together, describe a company remaking itself at every layer of the computing stack.

Nvidia unveiled the Vera Rubin platform — seven new chips in full production, including the Vera CPU purpose-built for agentic AI, the Rubin GPU, and the newly integrated Groq 3 LPU inference accelerator — designed to power every phase of AI from pretraining to real-time agentic inference. The Vera Rubin NVL72 rack integrates 72 Rubin GPUs and 36 Vera CPUs, delivering what Nvidia claims is up to 10x higher inference throughput per watt at one-tenth the cost per token compared with the Blackwell platform. Dynamo 1.0, an open-source inference operating system that Nvidia describes as the “operating system for AI factories,” entered production with adoption from AWS, Microsoft Azure, Google Cloud and Oracle Cloud Infrastructure alongside companies like Cursor, Perplexity, PayPal and Pinterest.

The BlueField-4 STX storage architecture promises up to 5x token throughput for the long-context reasoning that agents demand, with early adopters including CoreWeave, Crusoe, Lambda, Mistral AI and Nebius. BYD, Geely, Isuzu and Nissan announced Level 4 autonomous vehicle programs on Nvidia’s DRIVE Hyperion platform, and Uber disclosed plans to launch Nvidia-powered robotaxis across 28 cities and four continents by 2028, beginning with Los Angeles and San Francisco in the first half of 2027.

Roche, the pharmaceutical giant, announced it is deploying more than 3,500 Nvidia Blackwell GPUs across hybrid cloud and on-premises environments in the U.S. and Europe — what it calls the largest announced GPU footprint available to a pharmaceutical company. Nvidia also launched physical AI tools for healthcare robotics, with CMR Surgical, Johnson & Johnson MedTech and others adopting the platform, and released Open-H, the world’s largest healthcare robotics dataset with over 700 hours of surgical video. And Nvidia even announced a Space Module based on the Vera Rubin architecture, promising to bring data-center-class AI to orbital environments.

The real meaning of GTC 2026: Nvidia is no longer selling picks and shovels

Strip away the product specifications and benchmark claims and what emerges from GTC 2026 is a single, clarifying thesis: Nvidia believes the era of AI agents will be larger than the era of AI models, and it intends to own the platform layer of that transition the way it already owns the hardware layer of the current one.

The 17 enterprise software companies that signed on Monday are making a bet of their own. They are wagering that building on Nvidia’s agent infrastructure will let them move faster than building alone — and that the benefits of a shared platform outweigh the risks of shared dependency. For Salesforce, it means Agentforce agents that can draw from both cloud and on-premises data through a single Slack interface. For Adobe, it means creative AI pipelines that span image, video, 3D and document intelligence. For SAP, it means agents woven into the transactional fabric of global commerce. Each partnership is rational on its own terms. Together, they form something larger: an industry-wide endorsement of Nvidia as the default substrate for enterprise intelligence.

Huang, who opened his career designing graphics chips for video games, closed his keynote by gesturing toward a future in which AI agents do not just assist human workers but operate as autonomous colleagues — reasoning through problems, building their own tools, learning from their mistakes. He compared the moment to the birth of the personal computer, the dawn of the internet, the rise of mobile computing.

Technology executives have a professional obligation to describe every product cycle as a revolution. But here is what made Monday different: this time, 17 of the world’s most important software companies showed up to agree with him. Whether they did so out of conviction or out of a calculated fear of being left behind may be the most important question in enterprise technology — and it is one that only the next few years can answer.

FullSpectrum Is Like HueForge For 3D Models, But Bring Your Toolchanger


Full-color 3D printing is something of a holy grail, if nothing else just because of how much it impresses the normies. We’ve seen a lot of multi-material units the past few years, and with Snapmaker’s U1 and the Prusa XL it looks like tool changers are coming back into vogue. Just in time, [Radoux] has a fork of OrcaSlicer called FullSpectrum that brings HueForge-like color mixing to tool changing printers.

The hook behind FullSpectrum is very simple: stacking thin layers of colors, preferably with semi-translucent filament, allows for a surprising degree of mixing. The towers in the image above have only three colors: red, blue, and yellow. It’s not literally full-spectrum, but you can generate surprisingly large palettes this way. You aren’t limited to single-layer mixes, either: A-A-B repeats and even arbitrary patterns of four colors are possible, assuming you have a four-head tool changing printer like the Snapmaker U1 this is being developed for.
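
The optics of that stacking reduce to ordinary alpha compositing, which the short sketch below demonstrates. The per-layer opacity values are assumptions; real filament has to be characterized before results are predictable, as discussed further down.

```python
# Minimal sketch of why stacked translucent layers mix color: simple
# alpha compositing of a bottom-to-top layer stack viewed from above.
# The per-layer opacities are assumed values, not measured filament data.

def over(top_rgb, alpha, under_rgb):
    """Composite one translucent layer over the color beneath it."""
    return tuple(alpha * t + (1 - alpha) * u
                 for t, u in zip(top_rgb, under_rgb))


def stack_color(layers, base=(1.0, 1.0, 1.0)):
    """layers: bottom-to-top list of (rgb, alpha); returns the color seen."""
    seen = base
    for rgb, alpha in layers:          # printed bottom first...
        seen = over(rgb, alpha, seen)  # ...so each new layer sits on top
    return seen


RED = ((0.9, 0.1, 0.1), 0.45)     # (rgb, assumed per-layer opacity)
YELLOW = ((0.9, 0.8, 0.1), 0.45)
BLUE = ((0.1, 0.2, 0.8), 0.45)

# Alternating thin layers approximates a mix; an A-A-B repeat biases toward A.
print(stack_color([RED, YELLOW] * 3))         # warm orange blend
print(stack_color([BLUE, BLUE, YELLOW] * 2))  # duller, cooler A-A-B blend
```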

FullSpectrum is in fact a fork of Snapmaker’s fork of OrcaSlicer, which is itself forked from Bambu Slicer, which forked off of PrusaSlicer, which originated as a fork of Slic3r. Some complain about the open-source chaos of endless forking, but you can see in that chain how much innovation it gets us — including this technique of color mixing by alternating layers.

[Wombly Wonders] shows the limits of this in his video: you really want layer heights of 0.08 mm to 0.12 mm, as the standard 0.2 mm height introduces striping, particularly with opaque filaments. Depending on the colors and the overhang, you might get away with it, but thinner layers are generally going to be a safer bet. Fully translucent filaments can blend a little too well at the edges, but the HueForge community — which we’ve covered previously — already has a good handle on characterizing translucency, and we’ll likely see a lot of that knowledge applied to FullSpectrum as time goes on.

Now, you could probably use this technique with a multi-material unit (MMU), but tool-changing printers are where it is going to shine, because they are so much faster at it. With the right tool changer, it’s actually faster to run off a model mixing colors from the cyan-yellow-magenta color space than it is to print the same model with the exact colors needed loaded on an MMU. That’s unexpected, but [Wombly] demonstrates it in his video with a chicken that’s listed as taking nineteen hours on Bambu’s MakerWorld yet prints in under seven.

Could this be the killer app that pushes tool-change printers into the spotlight? Maybe! Tool-changing printers are nothing new, after all. We’ve even seen it done with a delta, and there are lots of other DIY options if you don’t fancy buying the big Prusa. If you’ve been lusting after such a beast, though, you might finally have your excuse.

Oukitel WP61 Plus rugged phone review


Oukitel WP61 Plus: 30-second review

Unveiled at IFA 2025 in Berlin, the Oukitel WP61 Plus is the brand’s flagship all-in-one rugged smartphone, featuring a 20,000 mAh battery, an integrated 2W DMR walkie-talkie, and a high-powered camping flashlight.

Not a fan of Liquid Glass? This isn’t the news for you


If you were hoping Apple might rethink its Liquid Glass interface any time soon, the latest reports suggest that’s unlikely.

According to Bloomberg’s Mark Gurman, early internal builds of upcoming Apple software show no major design changes to the visual overhaul introduced with iOS 26.

Liquid Glass first arrived across Apple’s recent platforms, including iOS 26 and macOS Tahoe, bringing a translucent, layered look to menus, widgets and system UI elements. While the redesign sparked mixed reactions from users, it appears Apple is committed to refining the style rather than replacing it.

The report says internal versions of iOS 27 and macOS 27 largely stick with the same design direction. That’s partly because the interface has strong backing internally. Apple’s new software design chief Steve Lemay — who took over the role after Alan Dye departed for Meta — was closely involved in developing Liquid Glass and is expected to keep evolving the concept rather than replacing it outright.

That approach mirrors how Apple handled another major visual shift. When iOS 7 abandoned skeuomorphic textures for a flat design, Apple spent several years gradually refining the look rather than dramatically changing it again.

In the meantime, Apple has already started offering small tweaks for users who find the effect too strong. Updates like iOS 26.1 introduced a “Tinted” option that increases the opacity of Liquid Glass elements across the system. Additionally, iOS 26.2 added a slider to adjust the transparency of the Lock Screen clock.

Apple had reportedly explored a system-wide Liquid Glass opacity slider during the development of iOS 26, but ran into engineering challenges when trying to apply the setting consistently across the entire interface. According to Gurman, the company could revisit that idea in a future version of iOS 27.

For now, though, the direction seems clear: Liquid Glass isn’t a short-lived experiment; it’s the foundation of Apple’s next generation of software design.

Best Data Broker Removal Services (2026): Which One Really Reduces Your Online Exposure?


Like it or not, personal data is now a commodity. Our phone numbers, addresses, shopping habits, and employment history are collected, analyzed, and traded among data brokers, marketers, recruiters, insurers, and countless other buyers, not to mention fraudsters and thieves.

Trying to remove your online presence manually means tracking down every single company that holds your data (which can be hundreds), submitting legal deletion requests, and repeating the process when your data reappears or your request is ignored. This can easily become a full-time job.

That’s why data broker removal services exist: to automate, manage, and repeat those requests on your behalf.

But how do you choose the best provider? Below is a 2026 evaluation of the most recognized names in the industry.

Top Data Broker Removal Services at a Glance

Pricing (monthly, billed annually): Incogni from $7.99; Aura from $9.99; DeleteMe from $6.97; Optery from $3.25; OneRep from $8.33

Free option: Incogni, 30-day money-back guarantee; Aura, 14-day free trial plus 60-day money-back guarantee; DeleteMe, free scan; Optery, basic self-service plan plus 30-day money-back guarantee; OneRep, 5-day trial plus 30-day money-back guarantee

Automation level: Incogni high; Aura medium-high; DeleteMe medium-low; Optery medium; OneRep medium-high

Broker coverage: Incogni 420+ public and private brokers; Aura 200+ brokers, mainly private; DeleteMe up to 850+ brokers (varies by plan), mainly public; Optery 120-640+ sites (varies by plan); OneRep 310+ sites, mainly public

Verification: Incogni dashboard and Deloitte limited assurance report; Aura app alerts and screenshots; DeleteMe quarterly reports and screenshots; Optery screenshots and exposure scans; OneRep dashboard and monthly reports

Best for: Incogni for long-term, low-effort privacy; Aura for an identity + privacy bundle; DeleteMe for detailed proof and control; Optery for data exposure prediction; OneRep for public removals and families

Incogni: Best for Balanced Automation, Coverage, and Accountability

Overview and Pricing

Incogni focuses on the continuous removal of personal data from data brokers, including both public people-search sites and private commercial databases.

Incogni’s plans start at $7.99/month when billed annually, and even the basic option contains all you need for effective data removal. Higher-tier plans only change prioritization and scope. There’s no free option, but you can take advantage of its 30-day money-back guarantee to see if the service suits your needs.

Features

  • Fully-automated opt-out and deletion requests across 420+ data brokers
  • Recurring removal cycles: 60 days for public, 90 days for private brokers
  • Real-time dashboard tracking
  • Unlimited custom removal requests (plan-dependent)
  • Family plans and multiple-user accounts
  • Operational processes audited via a limited assurance report by Deloitte

Effectiveness

Supported by Deloitte’s limited assurance assessment, Incogni officially reports that it has processed 245+ million removal requests from 2022 to mid-2025, indicating sustained operations rather than one-time cleanups. As data brokers can reacquire information and their databases refresh regularly, the recurring cycle is vital if you want to protect your online footprint in the long run.

Transparency and Reputation

Apart from a limited assurance report by Deloitte, the service also holds Editors’ Choice Awards from PCMag and PCWorld, which praise its automation system and wide coverage.

On Trustpilot, Incogni has generally positive feedback, with an average rating of 4.4 based on over 2,000 reviews. Users often note actual reductions in spam and visible listings.

User Experience

Once you set up your account, you need to verify your identity. After that, Incogni will handle most data removal activity in the background without involving you directly. The clear, straightforward dashboard will show you all the brokers Incogni has contacted, confirmed removals, responses, and next scheduled cycles. You can peek into it whenever you like, but you don’t have to engage to make the process effective.

Advantages: high automation; broad coverage; Deloitte limited assurance report; 30-day money-back guarantee; industry recognition; recurring cycles and resubmitted requests; clear interface and straightforward user experience

Disadvantages: no screenshots; no free trial; basic reporting; phone support only on Unlimited plans

Aura: Best All-in-One Identity and Privacy Suite

Overview and Pricing

Aura is not a dedicated removal provider like the others on this list: it combines data removal with broader digital protection features, including credit alerts, antivirus, a VPN, device security, and identity theft monitoring.

Aura’s prices begin at $9.99/month when billed annually. What’s more, you get a 14-day free trial and a 60-day money-back guarantee for risk-free testing.

Features

  • Automated data removal across 200+ data brokers (mainly private)
  • Identity theft monitoring
  • Dark web monitoring
  • Credit score and breach alerts
  • Antivirus/anti-malware protection
  • VPN
  • Family and multi-device plans

Effectiveness

When it comes to data removal itself, this Aura functionality is automated. The platform first scans broker and people-search sites, submits deletion requests whenever it finds your information, and re-checks for reappearances. However, since data removal is not Aura's main focus, its coverage is quite narrow compared to dedicated solutions. Aura's value is strongest when combined with the whole toolkit.

Transparency and Reputation

Aura is widely and positively described in the identity protection space; you can find Aura reviews on PCMag, Forbes, and NerdWallet. On Trustpilot, it holds an average rating of 4.2 based on almost 1,000 reviews. Users appreciate the all-in-one service, but broker removal results themselves don't match those delivered by services focused exclusively on that problem.

User Experience

Aura's interface brings together everything the suite offers, showing alerts, scans, security posture, removal status, and more. This holistic view appeals to people who want central management of their online presence, but for many users it can be overwhelming.

Advantages: privacy + security bundle; insurance; 60-day money-back guarantee; 14-day free trial; comprehensive alerts

Disadvantages: narrower coverage; manual approval steps; overwhelming user experience; no third-party verification

DeleteMe: Strong for Documented Public People-Search Listing Deletion

Overview and Pricing

DeleteMe focuses on public people-search sites and background information databases. These are mentions that usually appear in search results when someone Googles your name.

The cheapest DeleteMe plan is $6.97/month when billed annually and can be used by 1 person.

Features

  • Automated scans of people-search sites (up to 850+, depending on the plan)
  • Expert manual handling
  • Quarterly detailed reports
  • Coverage for individuals, couples, and families
  • Limited custom removal requests (40-60 per year, plan-dependent)
  • DIY opt-out tutorials

Effectiveness

DeleteMe is quite effective at removing visible information from many major public listings. The company was a pioneer when it entered the industry in 2010 with its part-automated, part-human-assisted approach. The team submits requests and tracks progress, then provides scheduled, detailed reports that even include screenshots.

Transparency and Reputation

DeleteMe has been in the industry since 2010, which says a lot about its reliability. It has generally positive user reviews, especially for its detailed reporting system and its exhaustive explanations of what was removed. There have been no third-party assessments of its services, but the provider has a good reputation in the industry, as seen in PCMag's review and praise from Forbes. On Trustpilot, it holds a 4.0 rating, though based on only 180+ reviews.

User Experience

In contrast to Incogni's live, always-on progress monitoring, which you can check but don't have to, DeleteMe is more report-centric. Users receive quarterly PDF summaries that show which sites were contacted, where their information was removed, and what remains pending. Many people appreciate the human touch.

Advantages: clear, detailed reporting; long-standing service; human expertise; 30-day money-back guarantee

Disadvantages: slower cycles; less automation; narrower broker reach; mainly US coverage

Optery: Best for Exposure Visibility

Overview and Pricing

Optery’s main field of expertise is discovering where your personal data exists, providing users with insight into exposures before and during removal attempts.

Optery’s offer starts at $3.25/month when billed annually. The company also has a free, self-service version. Apart from that, you get a free scan and a 30-day money-back guarantee.

Features

  • Exposure dashboard displaying where your personal data exists
  • Automated removal from up to 630+ brokers with paid plans
  • Initial free scan across 120+ sites and free self-service plan
  • Guided removal request sending process
  • Custom removal submissions
  • Manual tracking of opt-outs and their status

Effectiveness

Optery is most effective at identifying where your personal data has been exposed. For the removal itself, it blends automated attempts with user-guided actions and manual tracking. It doesn't offer the same automated recurring cycles as, for example, Incogni, but it may be the better fit if you want to truly understand your data exposure.

Transparency and Reputation

Optery is often highlighted for its exposure insights and transparency. Users appreciate the “seeing where my data lives” model, but many note that broader coverage comes only with more expensive plans, while manual user input is still needed.

On Trustpilot, Optery has 171 reviews with an average rating of 4.1. PCMag reviewed it quite enthusiastically, though the reviewer noted that the service doesn't distinguish between removed data and never-found data. TechRadar praised it for its ease of use.

User Experience

Optery is more interactive and gives you more control over the process (which can be either an advantage or a disadvantage, depending on how much time you're willing to invest). Its dashboard clearly shows where your personal data is, and you then decide which removals matter most and what to do next. You also get before-and-after screenshots as visual proof, while reports are AI-enhanced to make them more accurate and detailed.

Advantages: free scan; free self-service plan; 30-day money-back guarantee; clear interface and control

Disadvantages: broader coverage only on more expensive plans; US-focused; slower on cheaper plans; no phone support

OneRep: Best for Public Listing Removal and Families

Overview and Pricing

OneRep automates removal requests issued to public people-search sites. Its focus is on high-risk databases like Intelius and Whitepages. The service also ensures quarterly recurring checks to combat resurfacing of your data.

However, there's significant controversy around the company (more on that below).

OneRep's prices start at $8.33/month when billed annually, and it offers a 5-day free trial. What makes it attractive and more affordable are its family plans, which cover up to 6 members.

Features

  • Automated scans and removal requests across 310+ data brokers
  • Quarterly re-scanning
  • Great family value
  • Clear and straightforward dashboard tracking

Effectiveness

OneRep is effective at reducing online visibility on many public sites, including those deemed high-risk. However, this provider doesn't focus on the private commercial brokers that are responsible for a large portion of spam, which makes its reach much narrower.

Transparency and Reputation

OneRep has a mixed reputation in the privacy protection community.

User reviews vary: some praise successful public listing removals, while others complain about slow relisting or only partial effects. Still, it holds a quite impressive average rating of 4.7 on Trustpilot, based on almost 400 reviews.

However, it's essential to know that, as Krebs on Security revealed, Mozilla decided in March 2024 to drop OneRep from its list of recommendations due to the CEO's involvement in running people-search networks, which raised serious questions about conflicts of interest in the industry. While the provider stated that OneRep operates completely independently and never sells user information, the episode is still often referenced in privacy circles.

User Experience

OneRep's dashboard is pretty simple to manage. It shows progress on targeted sites and all removal requests, though it's not a fully automated model, so it best suits users who don't mind handling parts of the process themselves.

Advantages: great family value; 5-day free trial; quarterly re-scans; public listing coverage

Disadvantages: industry controversies; highly conditional 30-day money-back guarantee; US focus; little customization; no third-party verification; narrower scope

Final Perspective for 2026

When it comes to choosing a data removal service, the main difference is usually in scope and depth. Some providers focus on visible people-search listings, while others dig deeper to find your personal information in harder-to-reach databases. They also vary in whether, and how often, they offer recurring removal cycles.

Managing your overall online visibility is vital, but if you really want to reduce the amount of your information circulating on the web, you need to focus on less visible broker networks. Or rather, choose a provider built around large-scale broker coverage. Only then will you be able to enjoy more sustained results.

In 2026, Incogni stands out among its competition, as it combines a wide broker reach, continuous removal cycles, and a streamlined, low-maintenance experience. Not to mention that it was independently assessed. While other providers are not to be altogether dismissed, Incogni’s focused, automated approach offers the most comprehensive way out.

FAQ

Why can’t I just remove my data from brokers myself?

Manual removal means identifying hundreds of brokers, submitting individual opt-out requests, repeatedly verifying your identity, and rechecking when your data reappears. For most people, that quickly becomes too time-consuming to manage consistently.

How often does my data reappear after removal?

Data brokers regularly refresh and repurchase data, which means listings can resurface even after deletion. That’s why recurring removal cycles are critical for long-term results.

What’s the difference between public and private data brokers?

Public brokers (like people-search sites) display your information in search results, while private brokers trade data behind the scenes with marketers, insurers, and other businesses. Private databases often contribute more to spam and profiling, even if you don’t see them.

Do all services provide proof that removals were completed?

No. Some providers offer screenshots or quarterly reports, while others rely on dashboards or summary updates. The level of transparency varies significantly by service.

Is a bundled identity protection service enough for data removal?

All-in-one tools can help, but their broker coverage is often narrower than services dedicated specifically to data removal. If reducing online exposure is your main goal, specialized coverage may deliver stronger results.

Recycled Plastic Compression Molding With 3D-Printed Molds


Recycling plastic at home using 3D-printed molds is relatively accessible these days, but if you do not wish to invest a lot of money in specialized equipment, what’s the most minimal setup you can get away with? A recent [future things] video explores DIY plastic recycling using only equipment that the average home is likely to have around.

Lest anyone complain: you should always wear PPE such as gloves and a suitable respirator whenever you’re dealing with hot plastic in this manner, just to avoid a trip to the emergency room. With that taken care of, there are a few ways of doing molding, with compression molding being one of the most straightforward.

With compression molding you have two halves of a mold, one of which compresses the material inside the other. This means you do not require any complex devices as with injection molding, just a toaster oven or equivalent to melt the plastic, which is LDPE in this example. The scrap plastic is placed in a silicone cup before it’s heated so that it doesn’t stick to the container.

The wad of goopy plastic is then put inside the bottom part of the mold before the top part is put in place and squeezed by hand until molten plastic comes out of the overflow opening(s). After letting it fully cool down, the mold is opened and the part released. Although the demonstrated process can be improved upon, it seems to work well enough if you are aware of the limitations. In terms of costs and parts required it’s definitely hard to come up with a cheaper way to do plastic molding.

Creative's Sound Blaster Audigy FX Pro brings discrete audio back from the grave


While the traditional discrete sound card has largely become a niche product for enthusiasts and hardware obsessives, Creative is attempting to attract new customers with a fresh model. The newly launched Sound Blaster Audigy Fx Pro can significantly upgrade the audio experience, the company says, and includes an additional layer…

Dell revives Precision laptop line with a sleeker design and serious power boost


Dell has officially refreshed its Precision lineup with four new laptops, and there’s something here for every professional. The new range includes the Precision 5 Series in 14-inch and 16-inch sizes and the more powerful Precision 7 Series in the same size configurations. All four laptops are designed for users who demand more than a standard laptop can deliver. 

All models are powered by the Series 3 Intel Core Ultra processors, with a built-in NPU that handles local AI tasks at up to 50 TOPS. You can spec the Precision 5 laptops with Intel Core Ultra 5, Ultra 7, and Ultra 9, and the Precision 7 laptops with Core Ultra 7, Ultra 9, and Ultra X7 processors. 

Both Precision series can be equipped with up to 64GB of RAM, with the only difference being that the Precision 7 series gets onboard RAM. You also get NVIDIA RTX PRO Blackwell graphics across the range, though the top-end Precision 7 pushes all the way up to the RTX PRO 3000 with 12GB of dedicated video memory.

Which one should you pick?

The Precision 5 is the friendlier entry point. The 14-inch version weighs only 3.98 lbs and offers a QHD+ display (non-touch) option, up to 2TB of Gen5 NVMe storage, and a 72Wh battery. 

The larger 16-inch model is essentially the same, save for an option for more powerful NVIDIA RTX PRO 2000 Blackwell graphics, a bigger display, and a larger 96Wh battery.

The Precision 7 is where things get serious. The 14-inch model weighs 3.51 lbs, which is impressive given what’s packed inside, including an optional QHD+ Tandem OLED display with VESA HDR TrueBlack 500 support. 

The 16-inch bumps up to a 4K Tandem OLED with a 120Hz refresh rate and HDR TrueBlack 1000. If you work with color-critical content, that display alone is worth a serious look. You can also spec both these models with up to 4TB Gen5 NVMe SSDs.

Both 7 Series models feature Thunderbolt 5 ports, a significant upgrade for anyone who regularly transfers large files or relies on high-bandwidth accessories. You also get Wi-Fi 7, Bluetooth 6.0, and an 8MP IR camera on all models.

Is it worth the upgrade?

There’s no doubt that the new Dell Precision laptops are packed with features and the latest hardware. Whether they are worth buying, however, will depend entirely on their price.

Dell hasn’t announced pricing yet, but with specs like these, expect the Precision 7 Series to carry a premium. With Apple’s M5 Pro and M5 Max laptops leaving the competition in the dust, Dell will have to price these laptops aggressively to compete.

Hydropower Line From Quebec Could Power a Million NYC Homes


The Champlain Hudson Power Express, a $6 billion, 339-mile buried transmission line, will soon deliver Canadian hydropower from Hydro-Quebec to New York City. The project could supply up to 20% of the city’s electricity and power roughly one million homes throughout the year. “This is far and away the largest project I have ever worked on,” said Bob Harrison, who has worked in infrastructure for 40 years and is the head of engineering for the Champlain Hudson Power Express. “We like to say it’s the largest project you’ll never see.” The New York Times reports: The massive power project, expected to provide energy to a million New York City customers a year, travels underground and underwater, from the northern plains at the Canadian border to the filled-in marshlands of coastal Queens, much of it loosely following the Hudson River. Its construction included the underwater installation of more than two million feet of cable imported from Sweden. It also required special boats, loaded with equipment that could shoot water jets deep into the sediment, to create trenches for the cable. Then, when it came to placing cable beneath the landscape, more than 700 land-use easements were needed, plus an additional 1.55 million feet of cable.

The Champlain Hudson Power Express has found a way to plug into the city, but it wasn’t easy. The work included 10 new manholes and more than three miles of new underground circuitry, according to Con Edison, the city’s primary electricity provider. “It was literally a hand weave under the streets of Queens,” said Jennifer Laird-White, the head of external affairs for Transmission Developers. The hydropower travels from Canada via two buried cables that are as round as cantaloupes. Those lines snake for hundreds of miles under a lake, several rivers (including the Hudson for about 90 miles) and through buried trenches alongside train tracks and roads. The cables resurface in Astoria, Queens, where a converter station shapes, filters and refines the raw power into a product that New Yorkers can consume.

In two cavernous rooms that could be mistaken for “Star Wars” sets, the electricity flows through 30 hanging structures encased in what look like metallic, dinosaurlike exoskeletons. Each one weighs about as much as a small humpback whale and contains microprocessors, thousands of valves and fiber wires. “I am still wowed when I walk into that facility,” said Mr. Harrison, the engineer. “I mean, it is just mind-boggling.”

Jensen Huang just put Nvidia’s Blackwell and Vera Rubin sales projections into the $1 trillion stratosphere


Nvidia CEO Jensen Huang threw out a lot of numbers — mostly of the technical variety — during his keynote Monday to kick off the company’s annual GTC Conference in San Jose, California.

But there was one financial figure that investors surely took notice of: his projection that there will be $1 trillion worth of orders for Nvidia’s Blackwell and Vera Rubin chips, a monetary reflection of a booming AI business.

About an hour into his keynote, Huang noted that last year Nvidia saw about $500 billion in demand for its Blackwell and upcoming Rubin chips through 2026.

“Now, I don’t know if you guys feel the same way, but $500 billion is an enormous amount of revenue,” he said. “Well, I’m here to tell you that right now where I stand — a few short months after GTC DC, one year after last GTC — right here where I stand, I see through 2027, at least $1 trillion.”

The Rubin computing chip architecture, first announced in 2024, has been described by Huang as the state of the art in AI hardware, outperforming its Blackwell predecessor. The company said in January, when it officially started production of Rubin, that the architecture would run model-training tasks 3.5x faster and inference tasks 5x faster than Blackwell, reaching as high as 50 petaflops.

Nvidia has said it expects to ramp up production in the second half of the year.

I tested the ‘future self’ prompt in ChatGPT and couldn’t believe how personal the advice it gave me was


Viral AI prompts are usually just party tricks, but a new one shared on Reddit promised to evoke actual feelings, simply by asking ChatGPT to travel to the future on your behalf and send back a letter from a more successful version of yourself.

Specifically, the prompt designed by the user was:
