Tech

This Is Considered The Best Oil Change Chain By Customer Satisfaction

Need an oil change place that isn’t going to leave you feeling frustrated (or worse, ripped off)? Based on multiple customer satisfaction surveys, Valvoline Instant Oil Change is the place to go. It ranked number one among automotive retailers on both Yelp’s “Most Loved Brands 2025” list and Forbes’ “Best Customer Service 2025” list. Anybody who’s ever had to get out of their car and wait around for their oil change to be done knows at least one reason why: Valvoline lets you chill in the driver’s seat the whole time.

Across both surveys, Valvoline Instant Oil Change stood out for the same reasons: reliable service, approachable staff, clean facilities, and an all-around positive oil change experience. As a company, you don’t get there by excelling in a single region or location, either. Valvoline’s laurels come from consistent high-quality customer service across its network of over 2,000 franchise locations nationwide. Its motor oil is made in America too, if that’s important to you.

Why Valvoline ranks the highest

The internet seems to agree, even beyond these two surveys. Take a quick look at Reddit threads in r/Cartalk or similar and you'll find plenty of people vouching for Valvoline Instant Oil Change over the other guys. The caveat, of course, is that you can always change your own oil and skip the oil change chain entirely. That "just do it yourself" recommendation is fine for people with the skills (not to mention a jack or a ramp and a paved driveway or parking spot where you can safely get under the vehicle), but for the millions who don't, Valvoline is a worthy alternative.

It makes sense why. Valvoline's business is built around the idea that preventive maintenance should be quick and transparent, and that includes letting you stay in your car while trained technicians change your oil. The approach addresses several common pain points at once: fewer long waits, fewer awkward waiting-room interactions, and usually a nice coupon or two waiting in your inbox as well. It's enough to earn the company first place in both Yelp's and Forbes's surveys, plus a recent top-40 ranking in the Franchise Times Top 400.

Methodology

To come up with a verdict, Yelp looked at a range of brand loyalty markers from customers on its site: repeat visits to brand pages, high volumes of favorable reviews, strong photo engagement, and continued search interest, to name a few. Forbes, on the other hand, relied on nationwide survey data, gathering feedback from more than 180,000 consumers who evaluated their interactions with thousands of different brands. We also looked to Reddit and other publications (like the Franchise Times, mentioned above) to back up Yelp and Forbes's praise for Valvoline.

Creative's Sound Blaster Audigy FX Pro brings discrete audio back from the grave

While the traditional discrete sound card has largely become a niche product for enthusiasts and hardware obsessives, Creative is attempting to attract new customers with a fresh model. The newly launched Sound Blaster Audigy FX Pro can significantly upgrade the audio experience, the company says, and includes an additional layer…

Dell revives Precision laptop line with a sleeker design and serious power boost

Dell has officially refreshed its Precision lineup with four new laptops, and there’s something here for every professional. The new range includes the Precision 5 Series in 14-inch and 16-inch sizes and the more powerful Precision 7 Series in the same size configurations. All four laptops are designed for users who demand more than a standard laptop can deliver. 

All models are powered by Intel Core Ultra Series 3 processors, with a built-in NPU that handles local AI tasks at up to 50 TOPS. You can spec the Precision 5 laptops with Intel Core Ultra 5, Ultra 7, and Ultra 9 chips, and the Precision 7 laptops with Core Ultra 7, Ultra 9, and Ultra X7 processors. 

Both Precision series can be equipped with up to 64GB of RAM, with the only difference being that the Precision 7 series gets onboard RAM. You also get NVIDIA RTX PRO Blackwell graphics across the range, though the top-end Precision 7 pushes all the way up to the RTX PRO 3000 with 12GB of dedicated video memory.

Which one should you pick?

The Precision 5 is the friendlier entry point. The 14-inch version weighs only 3.98 lbs and offers a QHD+ display (non-touch) option, up to 2TB of Gen5 NVMe storage, and a 72Wh battery. 

The larger 16-inch model is essentially the same, apart from a bigger display, a larger 96Wh battery, and the option of more powerful NVIDIA RTX PRO 2000 Blackwell graphics. 

The Precision 7 is where things get serious. The 14-inch model weighs 3.51 lbs, which is impressive given what’s packed inside, including an optional QHD+ Tandem OLED display with VESA HDR TrueBlack 500 support. 

The 16-inch bumps up to a 4K Tandem OLED with a 120Hz refresh rate and HDR TrueBlack 1000. If you work with color-critical content, that display alone is worth a serious look. You can also spec both these models with up to 4TB Gen5 NVMe SSDs.

Both 7 Series models feature Thunderbolt 5 ports, a significant upgrade for anyone who regularly transfers large files or relies on high-bandwidth accessories. You also get Wi-Fi 7, Bluetooth 6.0, and an 8MP IR camera on all models.

Is it worth the upgrade?

There's no doubt that the new Dell Precision laptops are packed with features and the latest hardware. Whether they're worth buying, however, will depend entirely on price.

Dell hasn't announced pricing yet, but with specs like these, expect the Precision 7 Series to carry a premium. With Apple's M5 Pro and M5 Max laptops leaving the competition in the dust, Dell will have to price these laptops aggressively.


Hydropower Line From Quebec Could Power a Million NYC Homes

The Champlain Hudson Power Express, a $6 billion, 339-mile buried transmission line, will soon deliver Canadian hydropower from Hydro-Quebec to New York City. The project could supply up to 20% of the city's electricity and power roughly one million homes throughout the year. "This is far and away the largest project I have ever worked on," said Bob Harrison, who has worked in infrastructure for 40 years and is the head of engineering for the Champlain Hudson Power Express. "We like to say it's the largest project you'll never see."

The New York Times reports: The massive power project, expected to provide energy to a million New York City customers a year, travels underground and underwater, from the northern plains at the Canadian border to the filled-in marshlands of coastal Queens, much of it loosely following the Hudson River. Its construction included the underwater installation of more than two million feet of cable imported from Sweden. It also required special boats, loaded with equipment that could shoot water jets deep into the sediment, to create trenches for the cable. Then, when it came to placing cable beneath the landscape, more than 700 land-use easements were needed, plus an additional 1.55 million feet of cable.

The Champlain Hudson Power Express has found a way to plug into the city, but it wasn’t easy. The work included 10 new manholes and more than three miles of new underground circuitry, according to Con Edison, the city’s primary electricity provider. “It was literally a hand weave under the streets of Queens,” said Jennifer Laird-White, the head of external affairs for Transmission Developers. The hydropower travels from Canada via two buried cables that are as round as cantaloupes. Those lines snake for hundreds of miles under a lake, several rivers (including the Hudson for about 90 miles) and through buried trenches alongside train tracks and roads. The cables resurface in Astoria, Queens, where a converter station shapes, filters and refines the raw power into a product that New Yorkers can consume.

In two cavernous rooms that could be mistaken for “Star Wars” sets, the electricity flows through 30 hanging structures encased in what look like metallic, dinosaurlike exoskeletons. Each one weighs about as much as a small humpback whale and contains microprocessors, thousands of valves and fiber wires. “I am still wowed when I walk into that facility,” said Mr. Harrison, the engineer. “I mean, it is just mind-boggling.”


Jensen Huang just put Nvidia’s Blackwell and Vera Rubin sales projections into the $1 trillion stratosphere

Nvidia CEO Jensen Huang threw out a lot of numbers — mostly of the technical variety — during his keynote Monday to kick off the company’s annual GTC Conference in San Jose, California.

But there was one financial figure that investors surely took notice of: his projection that there will be $1 trillion worth of orders for Nvidia’s Blackwell and Vera Rubin chips, a monetary reflection of a booming AI business.

About an hour into his keynote, Huang noted that last year Nvidia saw about $500 billion in demand for its Blackwell and upcoming Rubin chips through 2026.

“Now, I don’t know if you guys feel the same way, but $500 billion is an enormous amount of revenue,” he said. “Well, I’m here to tell you that right now where I stand — a few short months after GTC DC, one year after last GTC — right here where I stand, I see through 2027, at least $1 trillion.”

The Rubin computing chip architecture, first announced in 2024, has been described by Huang as the state of the art in AI hardware, outperforming its Blackwell predecessor. The company said in January, when it officially started production of Rubin, that it would operate 3.5x faster than the Blackwell architecture on model-training tasks and 5x faster on inference tasks, reaching as high as 50 petaflops.

Nvidia has said it expects to ramp up production in the second half of the year.


I tested the ‘future self’ prompt in ChatGPT and couldn’t believe how personal the advice it gave me was

Viral AI prompts are usually just little party tricks, but a new one shared on Reddit promised to evoke actual feelings, simply by asking ChatGPT to travel to the future on your behalf and send a letter from a more successful version of yourself.

Specifically, the prompt designed by the user was:


Who Really Owns Maverik Gas Stations?

While Maverik isn't officially the best gas station convenience store (it placed 7th overall for 2025 in the American Customer Satisfaction Index, or ACSI), the chain operates over 800 stores across 20 states. Ownership of Maverik gas stations remains with the family that started it, under FJ Management, a Utah-based private holding company. Piecing together information from FJ Management's online history and news reports reveals Maverik's lineage.

The first Maverik gas station opened in 1928 with two gas pumps in Afton, Wyoming. The entrepreneur responsible was 20-year-old Reuel Call, using funds earned from renting roller skates. Ultimately, Reuel teamed up with his brother Osborne, a partnership that lasted until Reuel bought Osborne’s share of the business in 1965.

Osborne's son, O. Jay Call, having learned the gas station business working for his father and his uncle Reuel, opened his first gas station in Ontario, Oregon, in 1965, with another in Lewiston, Idaho, coming later. Seeing success in the business, Jay founded Flying J in 1968, running the chain of truck stop travel plazas until his death in 2008.

Following Jay’s death, Flying J landed in Chapter 11 bankruptcy. Crystal Call Maggelet, Jay’s daughter, was named President and CEO of Flying J in 2009, leading the company out of bankruptcy with full repayment to creditors in 2010. Rebranded as FJ Management, the company, under Maggelet’s leadership, acquired Maverik, with Mike Call, Reuel’s grandson, at the helm in 2012.

What's special about Maverik gas stations?

Maverik advertises itself as “adventure’s first stop.” With its mountain-themed tri-peak logo, Adventure Club rewards program, and in-house BonFire menu, the chain is turning its gas stations into massive convenience stores like many of its competitors.

The number of Maverik gas stations expanded rapidly in 2023. That’s the year Maverik bought Kum & Go, a rival chain with 400 locations in 13 Midwest and southern states. At the time, the acquisition made Maverik the 12th largest in its class. Prior to that, Maverik gas stations were mostly limited to the American west.

Maverik's Adventure Club is free to join and offers a rewards program and everyday savings on fuel. By joining the Adventure Club and entering your associated phone number at the pump, you'll save 2 cents on every gallon of the fuel of your choice. In addition, you'll earn trail points that can be used for purchases inside Maverik or through the Maverik app. You'll get one point for each gallon of fuel purchased and two points for every dollar spent in store, but a BonFire burrito will set you back 500 trail points; at one point per gallon, that's roughly 500 gallons of fuel, so don't expect instant gratification from points.

Inside most Maverik gas stations, you’ll find a BonFire Grill featuring made-to-order selections like tacos, quesadillas, nachos, salads, and pizza. There are also a number of sweet snacks like cookies, donuts, and muffins, breakfast options featuring bowls, burritos, sandwiches, and biscuits, and lunch selections including burgers, corn dogs, and wraps. However, please keep one of the primary rules of gas station etiquette in mind and pull away from the gas pump before going inside to order.


Seattle puts Microsoft Copilot expansion on hold as new mayor takes stock of AI

The downtown Seattle skyline. (GeekWire Photo / Lisa Stiffler)

Five months after releasing its “responsible AI plan” providing guidelines for the municipality’s use of artificial intelligence, the City of Seattle has tapped the brakes on the tech’s official deployment for city employees.

Mayor Katie Wilson last month paused the planned citywide rollout of Microsoft Copilot, as first reported Monday in The Seattle Times. Her predecessor, Mayor Bruce Harrell, had approved the launch before leaving office in December.

“While implementation of the technology is delayed, the education and governance work continues,” Megan Erb, spokesperson for the Seattle Information Technology Department, told GeekWire. “The City is still conducting educational roadshows for departments, as well as working to advance our foundational work in data governance and data readiness.” 

In September, Seattle released its AI plan, which covers training and skill-building opportunities for city employees, and establishes a framework to facilitate and evaluate the use of AI tools in city operations. The city also conducted a pilot test of Copilot with 500 employees. The technology is available at no additional cost for Microsoft 365 users under Seattle’s enterprise agreement.

Participants reported:

  • Collectively saving more than 450 hours of work per week on tasks such as drafting communications, preparing reports, analyzing documents, and conducting research.
  • The technology proved most helpful for writing more clearly, producing faster summaries of documents and meeting notes, and quick access to policies and regulations.
  • 83% said Copilot Chat provided “business value.”
  • 79% said it was a positive user experience.

Seattle has been a leader in efforts to adopt next-gen AI tools, and says it issued the nation’s first generative AI policy in fall 2023. Even before the recently released AI plan, Seattle already had policies requiring “human-in-the-loop” oversight, meaning employees must review generative AI outputs before official use and disclose when work is AI assisted. The city also identified prohibited applications, such as AI in hiring decisions and facial recognition, due to concerns about bias and reliability.

Concerns about municipal AI regulations and oversight are widespread. An investigative series published earlier this year by the news organization Cascade PBS found that multiple Washington cities had limited guardrails around AI use, raising public trust and privacy concerns. Seattle was not among the cities scrutinized.

Seattle leaders in the past have framed their effort as a balance between embracing new technology and upholding their fundamental obligation to serve the public, emphasizing that AI is a tool — not a replacement for employees.

Erb said the delayed deployment of Copilot is a part of a “phased approach” to ensure “the City responsibly tests and adopts artificial intelligence tools, meets all privacy and security requirements, and deploys solutions that provide clear benefits to employees while upholding the City’s Responsible AI commitments.” 

Rob Lloyd, Seattle’s chief technology officer, resigned last month, effective March 27, to become executive director of the Center for Digital Government. The city is recruiting a replacement.

In December, the city appointed Lisa Qian as its first AI Officer. Her experience includes serving as a senior manager of data science at LinkedIn, as well as other tech company leadership positions.

During the fall budget process, the Seattle City Council asked the Seattle IT Department to provide quarterly reports on the use of AI, and that information will be submitted April 1.

The city previously identified 41 priority projects in which AI could potentially improve government performance and public services. Updates on those efforts will be included in the upcoming report, Erb said.


Polyphonic Tunes On The Sharp PC-E500

If you're a diehard fan of the chiptune scene, you've probably heard endless beautiful compositions on the Nintendo Game Boy and Commodore 64, and a few phat FM tracks from the Segas of years later. What the scene has yet to see is a breakout artist ripping hot tracks on the Sharp PC-E500. If you wanted to, though, you'd probably find use in this 3-voice music driver for the ancient mini-PC. 

This comes to us from [gikonekos], who dug up the “PLAY3” code from the Japanese magazine “Pocket Computer Journal” published in November 1993. Over on GitHub, the original articles have been scanned, and the assembly source code for the PLAY3 driver has been reconstructed. There’s also documentation of how the driver actually works, along with verification against RAM dumps from actual Sharp PC-E500 hardware. The driver itself runs as a machine code extension to the BASIC interpreter on the machine. The “PLAY” command can then be used to specify a string of notes to play at a given tempo and octave. Polyphony is simulated using time-division sound generation, with output via the device’s rather pathetic single piezo buzzer.
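
The time-division trick can be sketched in a few lines. This is an illustrative model of the idea, not the actual PLAY3 assembly: a single-channel buzzer can only sound one frequency at a time, so the driver rotates through the active voices in slices short enough that the ear fuses them into a chord. The function name, slice length, and note frequencies here are all invented for the example.

```python
# Hypothetical sketch of time-division polyphony on a one-channel piezo
# buzzer, in the spirit of the PLAY3 driver (names are illustrative,
# not taken from the reconstructed assembly source).

def time_division_schedule(voices, total_ms, slice_ms=5):
    """Interleave several voice frequencies into one (freq_hz, ms) stream.

    The buzzer plays each voice in turn for slice_ms at a time; played
    back fast enough, the alternation sounds like simultaneous notes.
    """
    schedule = []
    n = max(1, len(voices))
    t = 0
    i = 0
    while t < total_ms:
        # Clamp the last slice so the schedule fills total_ms exactly.
        schedule.append((voices[i % n], min(slice_ms, total_ms - t)))
        t += slice_ms
        i += 1
    return schedule

# A 30 ms C-major chord (C4, E4, G4) becomes six 5 ms slices,
# cycling 262 -> 330 -> 392 Hz twice:
chord = time_division_schedule([262, 330, 392], total_ms=30)
print(chord)
```

The same rotation idea scales to any voice count, at the cost of each voice getting a thinner share of buzzer time.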

It's very cool to see this code preserved for the future. That said, don't expect to see it on stage at the next Boston Bitdown or anything: as this example video shows, it's not exactly the punchiest chiptune monster out there. We'll probably stick to our luscious fake-bit creations for now, while Nintendo hardware remains the bedrock of the movement.


Ramp acquires Juno, a corporate travel expense startup backed by Madrona

Juno co-founders Devon Tivona (left) and Sam Felsenthal. (Juno Photo)

Fintech giant Ramp announced the acquisition of Juno, a startup founded in 2024 that built a corporate travel platform to help manage non-employee expenses.

Terms of the deal were not disclosed. Juno will maintain its brand and employees.

Juno’s platform guides coordinators and their guests through booking, logistics, payments, reimbursements, and reconciliation.

“A bad candidate travel experience can cost you a hire,” Ramp CTO Karim Atiyeh said in a statement. “Juno built something strong in a category that matters. Our job now is to give them leverage and stay out of the way.”

Portland, Ore.-based Devon Tivona and Denver-based Sam Felsenthal co-founded Juno, and they'll continue in their leadership roles as co-CEOs. They previously co-founded Pana, another corporate guest travel platform, which was acquired by Coupa in 2021.

Juno raised a $2 million seed round last year led by Seattle-based Madrona along with Bungalow Ventures.

“Joining Ramp gives Devon and Sam the resources to pursue the vision they’ve been working toward all along: guest travel, payments, and expenses operating as one coherent system,” Madrona Managing Director Steve Singh wrote on LinkedIn.

Singh co-founded the travel and expense management giant Concur, which was acquired by SAP in 2014 for $8.3 billion. He led a group of investors in the April 2024 acquisition of Direct Travel Inc., a Colorado-based corporate travel management company, and is executive chairman of Otto, a Seattle-based startup developing an AI virtual assistant for business travel booking.

Singh also serves as executive chairman at Spotnana, a travel-as-a-service technology platform (he’s currently also interim CEO); Troop, a group meetings and events company; and Center, a corporate card and expense management platform that was acquired by American Express. 


Nvidia launches enterprise AI agent platform with Adobe, Salesforce, SAP among 17 adopters at GTC 2026

Jensen Huang walked onto the GTC stage Monday wearing his trademark leather jacket and carrying, as it turned out, the blueprints for a new kind of monopoly.

The Nvidia CEO unveiled the Agent Toolkit, an open-source platform for building autonomous AI agents, and then rattled off the names of the companies that will use it: Adobe, Salesforce, SAP, ServiceNow, Siemens, CrowdStrike, Atlassian, Cadence, Synopsys, IQVIA, Palantir, Box, Cohesity, Dassault Systèmes, Red Hat, Cisco and Amdocs. Seventeen enterprise software companies, touching virtually every industry and every Fortune 500 corporation, all agreeing to build their next generation of AI products on a shared foundation that Nvidia designed, Nvidia optimizes and Nvidia maintains.

The toolkit provides the models, the runtime, the security framework and the optimization libraries that AI agents need to operate autonomously inside organizations — resolving customer service tickets, designing semiconductors, managing clinical trials, orchestrating marketing campaigns. Each component is open source. Each is optimized for Nvidia hardware. The combination means that as AI agents proliferate across the corporate world, they will generate demand for Nvidia GPUs not because companies choose to buy them but because the software they depend on was engineered to require them.

“The enterprise software industry will evolve into specialized agentic platforms,” Huang told the crowd, “and the IT industry is on the brink of its next great expansion.” What he left unsaid is that Nvidia has just positioned itself as the tollbooth at the entrance to that expansion — open to all, owned by one.

Inside Nvidia’s Agent Toolkit: the software stack designed to power every corporate AI worker

To grasp the significance of Monday’s announcements, it helps to understand the problem Nvidia is solving.

Building an enterprise AI agent today is an exercise in frustration. A company that wants to deploy an autonomous system — one that can, say, monitor a telecommunications network and proactively resolve customer issues before anyone calls to complain — must assemble a language model, a retrieval system, a security layer, an orchestration framework and a runtime environment, typically from different vendors whose products were never designed to work together.

Nvidia’s Agent Toolkit collapses that complexity into a unified platform. It includes Nemotron, a family of open models optimized for agentic reasoning; AI-Q, an open blueprint that lets agents perceive, reason and act on enterprise knowledge; OpenShell, an open-source runtime enforcing policy-based security, network and privacy guardrails; and cuOpt, an optimization skill library. Developers can use the toolkit to create specialized AI agents that act autonomously while using and building other software to complete tasks.

The AI-Q component addresses a pain point that has dogged enterprise AI adoption: cost. Its hybrid architecture routes complex orchestration tasks to frontier models while delegating research tasks to Nemotron’s open models, which Nvidia says can cut query costs by more than 50 percent while maintaining top-tier accuracy. Nvidia used the AI-Q Blueprint to build what it claims is the top-ranking AI agent on both the DeepResearch Bench and DeepResearch Bench II leaderboards — benchmarks that, if they hold under independent validation, position the toolkit as not merely convenient but competitively necessary.
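
The cost argument behind that hybrid routing can be made concrete with a toy model. Nothing below comes from Nvidia's implementation: the model names, per-query costs, and the rule that classifies tasks are all assumptions chosen purely to illustrate why routing most steps to a cheaper open model can more than halve the bill.

```python
# Toy sketch of the hybrid-routing pattern AI-Q is described as using:
# expensive orchestration steps go to a frontier model, cheaper research
# steps go to an open model. All names and prices are invented.

FRONTIER_MODEL = "frontier-large"   # hypothetical, high cost per query
OPEN_MODEL = "nemotron-open"        # hypothetical, low cost per query

COST_PER_QUERY = {FRONTIER_MODEL: 1.00, OPEN_MODEL: 0.10}

def route(task_kind):
    """Pick a model tier from a coarse task classification."""
    return FRONTIER_MODEL if task_kind == "orchestration" else OPEN_MODEL

def total_cost(tasks):
    """Sum the per-query cost of a workload under the routing rule."""
    return sum(COST_PER_QUERY[route(kind)] for kind in tasks)

# A workload with one orchestration step and nine research steps:
tasks = ["orchestration"] + ["research"] * 9
hybrid = total_cost(tasks)                                   # 1.00 + 0.90
all_frontier = len(tasks) * COST_PER_QUERY[FRONTIER_MODEL]   # 10.00
print(hybrid, all_frontier)
```

Under these made-up numbers the hybrid workload costs 1.90 versus 10.00 for sending everything to the frontier model, which is the shape of the ">50 percent" savings claim, assuming the open model's answers hold up on the delegated tasks.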

OpenShell tackles what has been the single biggest obstacle in every boardroom conversation about letting AI agents loose inside corporate systems: trust. The runtime creates isolated sandboxes that enforce strict policies around data access, network reach and privacy boundaries. Nvidia is collaborating with Cisco, CrowdStrike, Google, Microsoft Security and TrendAI to integrate OpenShell with their existing security tools — a calculated move that enlists the cybersecurity industry as a validation layer for Nvidia’s approach rather than a competing one.
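
The policy-enforcement idea is easiest to see as a default-deny check in front of every agent action. The sketch below is not OpenShell's API; the policy shape, action fields, and host names are invented to show the pattern the article describes, where anything outside the sandbox's allowlists is refused.

```python
# Minimal illustration of a policy-based guardrail: each proposed agent
# action is checked against an allowlist before it executes. The policy
# format and action names are hypothetical, not OpenShell's real schema.

POLICY = {
    "allowed_hosts": {"internal.example.com"},      # network reach
    "allowed_data": {"tickets", "kb_articles"},     # data access
}

def permit(action):
    """Return True only if the action stays inside the sandbox policy."""
    if action["type"] == "network":
        return action["host"] in POLICY["allowed_hosts"]
    if action["type"] == "read":
        return action["dataset"] in POLICY["allowed_data"]
    return False  # default-deny: unknown action types never run

print(permit({"type": "network", "host": "internal.example.com"}))  # True
print(permit({"type": "network", "host": "evil.example.net"}))      # False
print(permit({"type": "read", "dataset": "payroll"}))               # False
```

The default-deny branch is the important design choice: an agent that invents a new kind of action gets blocked until a human extends the policy, which is exactly the trust property boardrooms are asking for.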

The partner list that reads like the Fortune 500: who signed on and what they’re building

The breadth of Monday’s enterprise adoption announcements reveals Nvidia’s ambitions more clearly than any specification sheet could.

Adobe, in a simultaneously announced strategic partnership, will adopt Agent Toolkit software as the foundation for running hybrid, long-running creativity, productivity and marketing agents. Shantanu Narayen, Adobe’s chair and CEO, said the companies will bring together “our Firefly models, CUDA libraries into our applications, 3D digital twins for marketing, and Agent Toolkit and Nemotron to our agentic frameworks to deliver high-quality, controllable and enterprise-grade AI workflows of the future.” The partnership extends deep: Adobe will explore OpenShell and Nemotron as foundations for personalized, secure agentic loops, and will evaluate the toolkit for large-scale workflows powered by Adobe Experience Platform. Nvidia will provide engineering expertise, early access to software and targeted go-to-market support.

Salesforce’s integration may be the one enterprise IT leaders parse most carefully. The company is working with Nvidia Agent Toolkit software including Nemotron models, enabling customers to build, customize and deploy AI agents using Agentforce for service, sales and marketing. The collaboration introduces a reference architecture where employees can use Slack as the primary conversational interface and orchestration layer for Agentforce agents — powered by Nvidia infrastructure — that participate directly in business workflows and pull from data stores in both on-premises and cloud environments. For the millions of knowledge workers who already conduct their professional lives inside Slack, this turns a messaging app into the command center for corporate AI.

SAP, whose software underpins the financial and operational plumbing of most Global 2000 companies, is using open Agent Toolkit software including NeMo for enabling AI agents through Joule Studio on SAP Business Technology Platform, enabling customers and partners to design agents tailored to their own business needs. ServiceNow’s Autonomous Workforce of AI Specialists leverage Agent Toolkit software, the AI-Q Blueprint and a combination of closed and open models, including Nemotron and ServiceNow’s own Apriel models — a hybrid approach that suggests the toolkit is designed not to replace existing AI investments but to become the connective tissue between them.

From chip design to clinical trials: how agentic AI is reshaping specialized industries

The partner list extends well beyond horizontal software platforms into deeply specialized verticals where autonomous agents could compress timelines measured in years.

In semiconductor design — where a single advanced chip can cost billions of dollars and take half a decade to develop — three of the four major electronic design automation companies are building agents on Nvidia’s stack. Cadence will leverage Agent Toolkit and Nemotron with its ChipStack AI SuperAgent for semiconductor design and verification. Siemens is launching its Fuse EDA AI Agent, which uses Nemotron to autonomously orchestrate workflows across its entire electronic design automation portfolio, from design conception through manufacturing sign-off. Synopsys is building a multi-agent framework powered by its AgentEngineer technology using Nemotron and Nemo Agent Toolkit.

Healthcare and life sciences present perhaps the most consequential use case. IQVIA is integrating Nemotron and other Agent Toolkit software with IQVIA.ai, a unified agentic AI platform designed to help life sciences organizations work more efficiently across clinical, commercial and real-world operations. The scale is already significant: IQVIA has deployed more than 150 agents across internal teams and client environments, including 19 of the top 20 pharmaceutical companies.

The security sector is embedding itself into the architecture from the ground floor. CrowdStrike unveiled a Secure-by-Design AI Blueprint that embeds its Falcon platform protection directly into Nvidia AI agent architectures — including agents built on AI-Q and OpenShell — and is advancing agentic managed detection and response using Nemotron reasoning models. Cisco AI Defense will provide AI security protection for OpenShell, adding controls and guardrails to govern agent actions. These are not aftermarket bolt-ons; they are foundational integrations that signal the security industry views Nvidia’s agent platform as the substrate it needs to protect.

Dassault Systèmes is exploring Agent Toolkit software and Nemotron for its role-based AI agents, called Virtual Companions, on its 3DEXPERIENCE agentic platform. Atlassian is working with the toolkit as it evolves its Rovo AI agentic strategy for Jira and Confluence. Box is using it to enable enterprise agents to securely execute long-running business processes. Palantir is developing AI agents on Nemotron that run on its sovereign AI Operating System Reference Architecture.

The open-source gambit: why giving software away is Nvidia’s most aggressive business move

There is something almost paradoxical about a company with a multi-trillion-dollar market capitalization giving away its most strategically important software. But Nvidia’s open-source approach to Agent Toolkit is less an act of generosity than a carefully constructed competitive moat.

OpenShell is open source. Nemotron models are open. AI-Q blueprints are publicly available. LangChain, the agent engineering company whose open-source frameworks have been downloaded over 1 billion times, is working with Nvidia to integrate Agent Toolkit components into the LangChain deep agent library for developing advanced, accurate enterprise AI agents at scale. When the most popular independent framework for building AI agents absorbs your toolkit, you have transcended the category of vendor and entered the category of infrastructure.

But openness in AI has a way of being strategically selective. The models are open, but they are optimized for Nvidia’s CUDA libraries — the proprietary software layer that has locked developers into Nvidia GPUs for two decades. The runtime is open, but it integrates most deeply with Nvidia’s security partners. The blueprints are open, but they perform best on Nvidia hardware. Developers can explore Agent Toolkit and OpenShell on build.nvidia.com today, running on inference providers and Nvidia Cloud Partners including Baseten, CoreWeave, DeepInfra, DigitalOcean and others — all of which run Nvidia GPUs.

The strategy has a historical analog in Google’s approach to Android: give away the operating system to ensure that the entire mobile ecosystem generates demand for your core services. Nvidia is giving away the agent operating system to ensure that the entire enterprise AI ecosystem generates demand for its core product — the GPU. Every Salesforce agent running Nemotron, every SAP workflow orchestrated through OpenShell, every Adobe creative pipeline accelerated by CUDA creates another strand of dependency on Nvidia silicon.

This also explains the Nemotron Coalition announced Monday — a global collaboration of model builders including Mistral AI, Cursor, LangChain, Perplexity, Reflection AI, Sarvam and Thinking Machines Lab, all working to advance open frontier models. The coalition’s first project will be a base model co-developed by Mistral AI and Nvidia, trained on Nvidia DGX Cloud, that will underpin the upcoming Nemotron 4 family. By seeding the open model ecosystem with Nvidia-optimized foundations, the company ensures that even models it does not build will run best on its hardware.

What could go wrong: the risks enterprise buyers should weigh before going all-in

For all the ambition on display Monday, several realities temper the narrative.

Adoption announcements are not deployment announcements. Many of the partner disclosures use carefully hedged language — “exploring,” “evaluating,” “working with” — that is standard in embargoed press releases but should not be confused with production systems serving millions of users. Adobe’s own forward-looking statements note that “due to the non-binding nature of the agreement, there are no assurances that Adobe will successfully negotiate and execute definitive documentation with Nvidia on favorable terms or at all.” The gap between a GTC keynote demonstration and an enterprise-grade rollout remains substantial.

Nvidia is not the only company chasing this market. Microsoft, with its Copilot ecosystem and Azure AI infrastructure, pursues a parallel strategy with the advantage of owning the operating systems and productivity software that most enterprises already use. Google, through Gemini and its cloud platform, has its own agent vision. Amazon, via Bedrock and AWS, is building comparable primitives. The question is not whether enterprise AI agents will be built on some platform but whether the market will consolidate around one stack or fragment across several.

The security claims, while architecturally sound, remain unproven at scale. OpenShell’s policy-based guardrails are a promising design pattern, but autonomous agents operating in complex enterprise environments will inevitably encounter edge cases that no policy framework has anticipated. CrowdStrike’s Secure-by-Design AI Blueprint and Cisco AI Defense’s OpenShell integration are exactly the kind of layered defense enterprise buyers will demand — but both are newly unveiled, not battle-hardened through years of adversarial testing. Deploying agents that can autonomously access data, execute code and interact with production systems introduces a threat surface that the industry has barely begun to map.
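To make the design pattern concrete: a policy-based guardrail layer sits between an agent and the systems it acts on, intercepting each proposed action and allowing, blocking, or escalating it to a human. The sketch below is purely illustrative — the class names, policy rules, and default-deny behavior are assumptions for the example, not OpenShell’s actual API:

```python
# Illustrative sketch of policy-based agent guardrails.
# All names and rules here are hypothetical, not OpenShell's API.
from dataclasses import dataclass

@dataclass
class AgentAction:
    tool: str      # e.g. "sql.query", "shell.exec", "http.post"
    target: str    # resource the action touches
    payload: str   # arguments the agent supplied

@dataclass
class Verdict:
    decision: str  # "allow", "deny", or "escalate"
    reason: str

def evaluate(action: AgentAction, policies: list) -> Verdict:
    """Run an action through ordered policies; first match wins."""
    for policy in policies:
        verdict = policy(action)
        if verdict is not None:
            return verdict
    # Default-deny: anything no policy explicitly allows is blocked.
    return Verdict("deny", "no policy matched")

# Example policies: forbid shell execution, route writes that touch
# production systems to a human approver, allow read-only queries.
def no_shell(action):
    if action.tool == "shell.exec":
        return Verdict("deny", "shell execution is prohibited")

def human_in_loop_for_prod(action):
    if action.target.startswith("prod/"):
        return Verdict("escalate", "production access needs approval")

def allow_reads(action):
    if action.tool == "sql.query" and action.payload.lstrip().upper().startswith("SELECT"):
        return Verdict("allow", "read-only query")

policies = [no_shell, human_in_loop_for_prod, allow_reads]

print(evaluate(AgentAction("shell.exec", "host-1", "rm -rf /"), policies).decision)      # deny
print(evaluate(AgentAction("sql.query", "prod/orders", "SELECT *"), policies).decision)  # escalate
print(evaluate(AgentAction("sql.query", "dev/orders", "SELECT 1"), policies).decision)   # allow
```

The default-deny fallback is the crux of the skepticism in the paragraph above: any action the policy authors did not anticipate is blocked outright, which is safe but means real deployments live or die on how completely the policy set maps the agent’s action space.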

And there is the question of whether enterprises are ready for agents at all. The technology may be available, but organizational readiness — the governance structures, the change management, the regulatory frameworks, the human trust — often lags years behind what the platforms can deliver.

Beyond agents: the full scope of what Nvidia announced at GTC 2026

Monday’s Agent Toolkit announcement did not arrive in isolation. It landed amid an avalanche of product launches that, taken together, describe a company remaking itself at every layer of the computing stack.

Nvidia unveiled the Vera Rubin platform — seven new chips in full production, including the Vera CPU purpose-built for agentic AI, the Rubin GPU, and the newly integrated Groq 3 LPU inference accelerator — designed to power every phase of AI from pretraining to real-time agentic inference. The Vera Rubin NVL72 rack integrates 72 Rubin GPUs and 36 Vera CPUs, delivering what Nvidia claims is up to 10x higher inference throughput per watt at one-tenth the cost per token compared with the Blackwell platform. Dynamo 1.0, an open-source inference framework that Nvidia describes as the “operating system for AI factories,” entered production with adoption from AWS, Microsoft Azure, Google Cloud and Oracle Cloud Infrastructure alongside companies like Cursor, Perplexity, PayPal and Pinterest.

The BlueField-4 STX storage architecture promises up to 5x token throughput for the long-context reasoning that agents demand, with early adopters including CoreWeave, Crusoe, Lambda, Mistral AI and Nebius. BYD, Geely, Isuzu and Nissan announced Level 4 autonomous vehicle programs on Nvidia’s DRIVE Hyperion platform, and Uber disclosed plans to launch Nvidia-powered robotaxis in 28 cities across four continents by 2028, beginning with Los Angeles and San Francisco in the first half of 2027.

Roche, the pharmaceutical giant, announced it is deploying more than 3,500 Nvidia Blackwell GPUs across hybrid cloud and on-premises environments in the U.S. and Europe — what it calls the largest announced GPU footprint available to a pharmaceutical company. Nvidia also launched physical AI tools for healthcare robotics, with CMR Surgical, Johnson & Johnson MedTech and others adopting the platform, and released Open-H, the world’s largest healthcare robotics dataset with over 700 hours of surgical video. And Nvidia even announced a Space Module based on the Vera Rubin architecture, promising to bring data-center-class AI to orbital environments.

The real meaning of GTC 2026: Nvidia is no longer selling picks and shovels

Strip away the product specifications and benchmark claims and what emerges from GTC 2026 is a single, clarifying thesis: Nvidia believes the era of AI agents will be larger than the era of AI models, and it intends to own the platform layer of that transition the way it already owns the hardware layer of the current one.

The 17 enterprise software companies that signed on Monday are making a bet of their own. They are wagering that building on Nvidia’s agent infrastructure will let them move faster than building alone — and that the benefits of a shared platform outweigh the risks of shared dependency. For Salesforce, it means Agentforce agents that can draw from both cloud and on-premises data through a single Slack interface. For Adobe, it means creative AI pipelines that span image, video, 3D and document intelligence. For SAP, it means agents woven into the transactional fabric of global commerce. Each partnership is rational on its own terms. Together, they form something larger: an industry-wide endorsement of Nvidia as the default substrate for enterprise intelligence.

Huang, who began his career designing graphics chips for video games, closed his keynote by gesturing toward a future in which AI agents do not just assist human workers but operate as autonomous colleagues — reasoning through problems, building their own tools, learning from their mistakes. He compared the moment to the birth of the personal computer, the dawn of the internet, the rise of mobile computing.

Technology executives have a professional obligation to describe every product cycle as a revolution. But here is what made Monday different: this time, 17 of the world’s most important software companies showed up to agree with him. Whether they did so out of conviction or out of a calculated fear of being left behind may be the most important question in enterprise technology — and it is one that only the next few years can answer.
