
Nvidia launches enterprise AI agent platform with Adobe, Salesforce, SAP among 17 adopters at GTC 2026

Jensen Huang walked onto the GTC stage Monday wearing his trademark leather jacket and carrying, as it turned out, the blueprints for a new kind of monopoly.

The Nvidia CEO unveiled the Agent Toolkit, an open-source platform for building autonomous AI agents, and then rattled off the names of the companies that will use it: Adobe, Salesforce, SAP, ServiceNow, Siemens, CrowdStrike, Atlassian, Cadence, Synopsys, IQVIA, Palantir, Box, Cohesity, Dassault Systèmes, Red Hat, Cisco and Amdocs. Seventeen enterprise software companies, touching virtually every industry and every Fortune 500 corporation, all agreeing to build their next generation of AI products on a shared foundation that Nvidia designed, Nvidia optimizes and Nvidia maintains.

The toolkit provides the models, the runtime, the security framework and the optimization libraries that AI agents need to operate autonomously inside organizations — resolving customer service tickets, designing semiconductors, managing clinical trials, orchestrating marketing campaigns. Each component is open source. Each is optimized for Nvidia hardware. The combination means that as AI agents proliferate across the corporate world, they will generate demand for Nvidia GPUs not because companies choose to buy them but because the software they depend on was engineered to require them.

“The enterprise software industry will evolve into specialized agentic platforms,” Huang told the crowd, “and the IT industry is on the brink of its next great expansion.” What he left unsaid is that Nvidia has just positioned itself as the tollbooth at the entrance to that expansion — open to all, owned by one.


Inside Nvidia’s Agent Toolkit: the software stack designed to power every corporate AI worker

To grasp the significance of Monday’s announcements, it helps to understand the problem Nvidia is solving.

Building an enterprise AI agent today is an exercise in frustration. A company that wants to deploy an autonomous system — one that can, say, monitor a telecommunications network and proactively resolve customer issues before anyone calls to complain — must assemble a language model, a retrieval system, a security layer, an orchestration framework and a runtime environment, typically from different vendors whose products were never designed to work together.

Nvidia’s Agent Toolkit collapses that complexity into a unified platform. It includes Nemotron, a family of open models optimized for agentic reasoning; AI-Q, an open blueprint that lets agents perceive, reason and act on enterprise knowledge; OpenShell, an open-source runtime enforcing policy-based security, network and privacy guardrails; and cuOpt, an optimization skill library. Developers can use the toolkit to create specialized AI agents that act autonomously while using and building other software to complete tasks.

The AI-Q component addresses a pain point that has dogged enterprise AI adoption: cost. Its hybrid architecture routes complex orchestration tasks to frontier models while delegating research tasks to Nemotron’s open models, which Nvidia says can cut query costs by more than 50 percent while maintaining top-tier accuracy. Nvidia used the AI-Q Blueprint to build what it claims is the top-ranking AI agent on both the DeepResearch Bench and DeepResearch Bench II leaderboards. If those claims hold under independent validation, they position the toolkit as not merely convenient but competitively necessary.
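Nvidia has not published the routing logic behind this hybrid, but the cost arithmetic is easy to sketch. The snippet below is a hypothetical illustration: the model names, per-token prices and the toy routing rule are all assumptions for the sake of example, not anything from the AI-Q Blueprint.

```python
# Hypothetical sketch of hybrid model routing, not Nvidia's AI-Q code.
# Model names, per-token prices and the routing rule are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class Model:
    name: str
    cost_per_1k_tokens: float  # dollars, illustrative pricing

FRONTIER = Model("frontier-orchestrator", 0.030)
OPEN_MODEL = Model("open-reasoner", 0.003)

def route(task_type: str) -> Model:
    # Toy rule: only orchestration goes to the expensive frontier model;
    # research-style tasks are delegated to the cheaper open model.
    return FRONTIER if task_type == "orchestration" else OPEN_MODEL

def query_cost(task_type: str, tokens: int) -> float:
    # Cost of one task at the routed model's illustrative price.
    return tokens / 1000 * route(task_type).cost_per_1k_tokens
```

Under these made-up prices, a workload dominated by research tokens costs a fraction of an all-frontier deployment, which is the shape of the greater-than-50-percent saving Nvidia claims.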


OpenShell tackles what has been the single biggest obstacle in every boardroom conversation about letting AI agents loose inside corporate systems: trust. The runtime creates isolated sandboxes that enforce strict policies around data access, network reach and privacy boundaries. Nvidia is collaborating with Cisco, CrowdStrike, Google, Microsoft Security and TrendAI to integrate OpenShell with their existing security tools — a calculated move that enlists the cybersecurity industry as a validation layer for Nvidia’s approach rather than a competing one.

The partner list that reads like the Fortune 500: who signed on and what they’re building

The breadth of Monday’s enterprise adoption announcements reveals Nvidia’s ambitions more clearly than any specification sheet could.

Adobe, in a simultaneously announced strategic partnership, will adopt Agent Toolkit software as the foundation for running hybrid, long-running creativity, productivity and marketing agents. Shantanu Narayen, Adobe’s chair and CEO, said the companies will bring together “our Firefly models, CUDA libraries into our applications, 3D digital twins for marketing, and Agent Toolkit and Nemotron to our agentic frameworks to deliver high-quality, controllable and enterprise-grade AI workflows of the future.” The partnership extends deep: Adobe will explore OpenShell and Nemotron as foundations for personalized, secure agentic loops, and will evaluate the toolkit for large-scale workflows powered by Adobe Experience Platform. Nvidia will provide engineering expertise, early access to software and targeted go-to-market support.

Salesforce’s integration may be the one enterprise IT leaders parse most carefully. The company is working with Nvidia Agent Toolkit software including Nemotron models, enabling customers to build, customize and deploy AI agents using Agentforce for service, sales and marketing. The collaboration introduces a reference architecture where employees can use Slack as the primary conversational interface and orchestration layer for Agentforce agents — powered by Nvidia infrastructure — that participate directly in business workflows and pull from data stores in both on-premises and cloud environments. For the millions of knowledge workers who already conduct their professional lives inside Slack, this turns a messaging app into the command center for corporate AI.


SAP, whose software underpins the financial and operational plumbing of most Global 2000 companies, is using open Agent Toolkit software, including NeMo, to enable AI agents through Joule Studio on SAP Business Technology Platform, letting customers and partners design agents tailored to their own business needs. ServiceNow’s Autonomous Workforce of AI Specialists leverages Agent Toolkit software, the AI-Q Blueprint and a combination of closed and open models, including Nemotron and ServiceNow’s own Apriel models — a hybrid approach that suggests the toolkit is designed not to replace existing AI investments but to become the connective tissue between them.

From chip design to clinical trials: how agentic AI is reshaping specialized industries

The partner list extends well beyond horizontal software platforms into deeply specialized verticals where autonomous agents could compress timelines measured in years.

In semiconductor design — where a single advanced chip can cost billions of dollars and take half a decade to develop — three of the four major electronic design automation companies are building agents on Nvidia’s stack. Cadence will leverage Agent Toolkit and Nemotron with its ChipStack AI SuperAgent for semiconductor design and verification. Siemens is launching its Fuse EDA AI Agent, which uses Nemotron to autonomously orchestrate workflows across its entire electronic design automation portfolio, from design conception through manufacturing sign-off. Synopsys is building a multi-agent framework powered by its AgentEngineer technology using Nemotron and the NeMo Agent Toolkit.

Healthcare and life sciences present perhaps the most consequential use case. IQVIA is integrating Nemotron and other Agent Toolkit software with IQVIA.ai, a unified agentic AI platform designed to help life sciences organizations work more efficiently across clinical, commercial and real-world operations. The scale is already significant: IQVIA has deployed more than 150 agents across internal teams and client environments, including 19 of the top 20 pharmaceutical companies.


The security sector is embedding itself into the architecture from the ground floor. CrowdStrike unveiled a Secure-by-Design AI Blueprint that embeds its Falcon platform protection directly into Nvidia AI agent architectures — including agents built on AI-Q and OpenShell — and is advancing agentic managed detection and response using Nemotron reasoning models. Cisco AI Defense will provide AI security protection for OpenShell, adding controls and guardrails to govern agent actions. These are not aftermarket bolt-ons; they are foundational integrations that signal the security industry views Nvidia’s agent platform as the substrate it needs to protect.

Dassault Systèmes is exploring Agent Toolkit software and Nemotron for its role-based AI agents, called Virtual Companions, on its 3DEXPERIENCE agentic platform. Atlassian is working with the toolkit as it evolves its Rovo AI agentic strategy for Jira and Confluence. Box is using it to enable enterprise agents to securely execute long-running business processes. Palantir is developing AI agents on Nemotron that run on its sovereign AI Operating System Reference Architecture.

The open-source gambit: why giving software away is Nvidia’s most aggressive business move

There is something almost paradoxical about a company with a multi-trillion-dollar market capitalization giving away its most strategically important software. But Nvidia’s open-source approach to Agent Toolkit is less an act of generosity than a carefully constructed competitive moat.

OpenShell is open source. Nemotron models are open. AI-Q blueprints are publicly available. LangChain, the agent engineering company whose open-source frameworks have been downloaded over 1 billion times, is working with Nvidia to integrate Agent Toolkit components into the LangChain deep agent library for developing advanced, accurate enterprise AI agents at scale. When the most popular independent framework for building AI agents absorbs your toolkit, you have transcended the category of vendor and entered the category of infrastructure.


But openness in AI has a way of being strategically selective. The models are open, but they are optimized for Nvidia’s CUDA libraries — the proprietary software layer that has locked developers into Nvidia GPUs for two decades. The runtime is open, but it integrates most deeply with Nvidia’s security partners. The blueprints are open, but they perform best on Nvidia hardware. Developers can explore Agent Toolkit and OpenShell on build.nvidia.com today, running on inference providers and Nvidia Cloud Partners including Baseten, CoreWeave, DeepInfra, DigitalOcean and others — all of which run Nvidia GPUs.

The strategy has a historical analog in Google’s approach to Android: give away the operating system to ensure that the entire mobile ecosystem generates demand for your core services. Nvidia is giving away the agent operating system to ensure that the entire enterprise AI ecosystem generates demand for its core product — the GPU. Every Salesforce agent running Nemotron, every SAP workflow orchestrated through OpenShell, every Adobe creative pipeline accelerated by CUDA creates another strand of dependency on Nvidia silicon.

This also explains the Nemotron Coalition announced Monday — a global collaboration of model builders including Mistral AI, Cursor, LangChain, Perplexity, Reflection AI, Sarvam and Thinking Machines Lab, all working to advance open frontier models. The coalition’s first project will be a base model codeveloped by Mistral AI and Nvidia, trained on Nvidia DGX Cloud, that will underpin the upcoming Nemotron 4 family. By seeding the open model ecosystem with Nvidia-optimized foundations, the company ensures that even models it does not build will run best on its hardware.

What could go wrong: the risks enterprise buyers should weigh before going all-in

For all the ambition on display Monday, several realities temper the narrative.


Adoption announcements are not deployment announcements. Many of the partner disclosures use carefully hedged language — “exploring,” “evaluating,” “working with” — that is standard in embargoed press releases but should not be confused with production systems serving millions of users. Adobe’s own forward-looking statements note that “due to the non-binding nature of the agreement, there are no assurances that Adobe will successfully negotiate and execute definitive documentation with Nvidia on favorable terms or at all.” The gap between a GTC keynote demonstration and an enterprise-grade rollout remains substantial.

Nvidia is not the only company chasing this market. Microsoft, with its Copilot ecosystem and Azure AI infrastructure, pursues a parallel strategy with the advantage of owning the operating systems and productivity software that most enterprises already use. Google, through Gemini and its cloud platform, has its own agent vision. Amazon, via Bedrock and AWS, is building comparable primitives. The question is not whether enterprise AI agents will be built on some platform but whether the market will consolidate around one stack or fragment across several.

The security claims, while architecturally sound, remain unproven at scale. OpenShell’s policy-based guardrails are a promising design pattern, but autonomous agents operating in complex enterprise environments will inevitably encounter edge cases that no policy framework has anticipated. CrowdStrike’s Secure-by-Design AI Blueprint and Cisco AI Defense’s OpenShell integration are exactly the kind of layered defense enterprise buyers will demand — but both are newly unveiled, not battle-hardened through years of adversarial testing. Deploying agents that can autonomously access data, execute code and interact with production systems introduces a threat surface that the industry has barely begun to map.

And there is the question of whether enterprises are ready for agents at all. The technology may be available, but organizational readiness — the governance structures, the change management, the regulatory frameworks, the human trust — often lags years behind what the platforms can deliver.


Beyond agents: the full scope of what Nvidia announced at GTC 2026

Monday’s Agent Toolkit announcement did not arrive in isolation. It landed amid an avalanche of product launches that, taken together, describe a company remaking itself at every layer of the computing stack.

Nvidia unveiled the Vera Rubin platform — seven new chips in full production, including the Vera CPU purpose-built for agentic AI, the Rubin GPU, and the newly integrated Groq 3 LPU inference accelerator — designed to power every phase of AI from pretraining to real-time agentic inference. The Vera Rubin NVL72 rack integrates 72 Rubin GPUs and 36 Vera CPUs, delivering what Nvidia claims is up to 10x higher inference throughput per watt at one-tenth the cost per token compared with the Blackwell platform. Dynamo 1.0, an open-source inference operating system that Nvidia describes as the “operating system for AI factories,” entered production with adoption from AWS, Microsoft Azure, Google Cloud and Oracle Cloud Infrastructure alongside companies like Cursor, Perplexity, PayPal and Pinterest.

The BlueField-4 STX storage architecture promises up to 5x token throughput for the long-context reasoning that agents demand, with early adopters including CoreWeave, Crusoe, Lambda, Mistral AI and Nebius. BYD, Geely, Isuzu and Nissan announced Level 4 autonomous vehicle programs on Nvidia’s DRIVE Hyperion platform, and Uber disclosed plans to launch Nvidia-powered robotaxis across 28 cities and four continents by 2028, beginning with Los Angeles and San Francisco in the first half of 2027.

Roche, the pharmaceutical giant, announced it is deploying more than 3,500 Nvidia Blackwell GPUs across hybrid cloud and on-premises environments in the U.S. and Europe — what it calls the largest announced GPU footprint available to a pharmaceutical company. Nvidia also launched physical AI tools for healthcare robotics, with CMR Surgical, Johnson & Johnson MedTech and others adopting the platform, and released Open-H, the world’s largest healthcare robotics dataset with over 700 hours of surgical video. And Nvidia even announced a Space Module based on the Vera Rubin architecture, promising to bring data-center-class AI to orbital environments.


The real meaning of GTC 2026: Nvidia is no longer selling picks and shovels

Strip away the product specifications and benchmark claims and what emerges from GTC 2026 is a single, clarifying thesis: Nvidia believes the era of AI agents will be larger than the era of AI models, and it intends to own the platform layer of that transition the way it already owns the hardware layer of the current one.

The 17 enterprise software companies that signed on Monday are making a bet of their own. They are wagering that building on Nvidia’s agent infrastructure will let them move faster than building alone — and that the benefits of a shared platform outweigh the risks of shared dependency. For Salesforce, it means Agentforce agents that can draw from both cloud and on-premises data through a single Slack interface. For Adobe, it means creative AI pipelines that span image, video, 3D and document intelligence. For SAP, it means agents woven into the transactional fabric of global commerce. Each partnership is rational on its own terms. Together, they form something larger: an industry-wide endorsement of Nvidia as the default substrate for enterprise intelligence.

Huang, who opened his career designing graphics chips for video games, closed his keynote by gesturing toward a future in which AI agents do not just assist human workers but operate as autonomous colleagues — reasoning through problems, building their own tools, learning from their mistakes. He compared the moment to the birth of the personal computer, the dawn of the internet, the rise of mobile computing.

Technology executives have a professional obligation to describe every product cycle as a revolution. But here is what made Monday different: this time, 17 of the world’s most important software companies showed up to agree with him. Whether they did so out of conviction or out of a calculated fear of being left behind may be the most important question in enterprise technology — and it is one that only the next few years can answer.



What 2025 taught us about the importance of resilience in retail


When it rains, it pours.

That phrase defined retail cybersecurity in 2025. What began as isolated incidents quickly became prolonged, intense disruptions, exposing just how interconnected — and fragile — modern retail operations really are.

Nadir Izrael

CTO and Co-Founder at Armis.


14K+ jobs cut, with PMETs hit hard


Singapore recorded a notable rise in retrenchments in 2025, with overall job cuts climbing to 14,490 for the year—an increase from 12,930 retrenchments in 2024.

On Mar 20, the Ministry of Manpower (MOM) released its latest quarterly Labour Market Report, revealing updated figures on retrenchments and broader employment trends.

The data showed that the incidence of retrenchment rose to 6.3 per 1,000 employees, up from 5.9 per 1,000 the year before.

And within this broader trend, white-collar workers have experienced disproportionate pressure.


PMETs are increasingly on the chopping block

Professional, managerial, executive, and technician (PMET) retrenchments have shown a steeper incline compared to the overall workforce.

In 2025, the incidence of retrenchment for this group rose to 10.1 per 1,000 resident PMETs—above the pre-recessionary average—from 8.6 per 1,000 in 2024.

The layoffs have been largely concentrated in three sectors:

  • Financial Services: Banking and insurance firms have cut headcount as market conditions tighten
  • Information and Communications: Tech and telecom companies are restructuring in response to changing demands
  • Professional Services: Consulting, legal, and accounting firms have undergone notable workforce adjustments

For this specific labour market report, MOM examined trends in PMET roles to assess concerns around AI-driven job disruptions.

While the evidence does not point conclusively to broad-based displacement, there are signs of restructuring that warrant continued monitoring.


Total employment continued to grow

If you’re working in a PMET role, these trends may naturally raise concerns. However, the broader data suggest that this is not necessarily a contraction in demand for these jobs.

The same sectors that saw the highest PMET layoffs also had relatively high PMET job vacancies in Dec 2025, with a combined total of 14,600, up from 13,900 in the year-ago period.

Data on the number of job vacancies are rounded to the nearest 100.

According to MOM, the overlap between higher retrenchments and higher PMET vacancies in these sectors suggests ongoing restructuring and skills transition, where some jobs are being displaced as firms restructure, while hiring continues for others.

For the full year of 2025, total employment grew by 55,500, up from 44,500 in 2024. Of this, resident employment grew by 11,600, driven largely by financial services as well as health and social services.

In 2026, resident employment is expected to grow at a similar or slightly slower pace, said MOM.



Critical flaw in wolfSSL library enables forged certificate use


A critical vulnerability in the wolfSSL SSL/TLS library can weaken security via improper verification of the hash algorithm or its size when checking Elliptic Curve Digital Signature Algorithm (ECDSA) signatures.

Researchers warn that an attacker could exploit the issue to force a target device or application to accept forged certificates for malicious servers or connections.

wolfSSL is a lightweight TLS/SSL implementation written in C, designed for embedded systems, IoT devices, industrial control systems, routers, appliances, sensors, automotive systems, and even aerospace or military equipment.


According to the project’s website, wolfSSL is used in more than 5 billion applications and devices worldwide.

The vulnerability, discovered by Nicholas Carlini of Anthropic and tracked as CVE-2026-5194, is a cryptographic validation flaw that affects multiple signature algorithms in wolfSSL, allowing improperly weak digests to be accepted during certificate verification.


The issue impacts multiple algorithms, including ECDSA/ECC, DSA, ML-DSA, Ed25519, and Ed448. For builds that have both ECC and EdDSA or ML-DSA active, it is recommended to upgrade to the latest wolfSSL release.

CVE-2026-5194 was addressed in wolfSSL version 5.9.1, released on April 8.

“Missing hash/digest size and OID checks allow digests smaller than allowed when verifying ECDSA certificates, or smaller than is appropriate for the relevant key type, to be accepted by signature verification functions,” reads the security advisory.

“This could lead to reduced security of ECDSA certificate-based authentication if the public CA [certificate authority] key used is also known.”


According to Lukasz Olejnik, independent security researcher and consultant, exploiting CVE-2026-5194 could trick applications or devices using a vulnerable wolfSSL version to “accept a forged digital identity as genuine, trusting a malicious server, file, or connection it should have rejected.”

An attacker can exploit this weakness by supplying a forged certificate with a smaller digest than cryptographically appropriate, so the system accepts a signature that is easier to falsify or reproduce.
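The fix, per the advisory, restores the missing hash/digest size and OID checks. As a rough illustration of that class of check — written in Python for brevity, though wolfSSL itself is C, and this is not its actual code — a strict verifier rejects any digest whose length does not match the hash algorithm the certificate declares before ECDSA verification proceeds:

```python
# Illustrative sketch only, not wolfSSL's implementation. It shows the
# class of check the CVE-2026-5194 fix restores: before verifying an
# ECDSA signature, confirm the digest length matches the declared hash
# algorithm, so a truncated (easier-to-forge) digest is rejected.

EXPECTED_DIGEST_SIZES = {  # standard output sizes, in bytes
    "sha256": 32,
    "sha384": 48,
    "sha512": 64,
}

def digest_size_is_valid(declared_hash: str, digest: bytes) -> bool:
    """Fail closed: unknown algorithms and wrong-length digests are rejected."""
    expected = EXPECTED_DIGEST_SIZES.get(declared_hash)
    return expected is not None and len(digest) == expected
```

A lax verifier that skips this comparison would accept, say, a 16-byte digest presented as SHA-256, shrinking the work an attacker needs to produce a matching signature, which is the weakness the researchers describe.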

While the vulnerability impacts the core signature verification routine, prerequisites and deployment-specific conditions may limit exploitation.

System administrators managing environments that do not use upstream wolfSSL releases but instead rely on Linux distribution packages, vendor firmware, and embedded SDKs should seek downstream vendor advisories for better clarity.


For example, Red Hat’s advisory, which assigns the flaw a maximum severity rating, states that MariaDB is not affected because it uses OpenSSL rather than wolfSSL for cryptographic operations.

Organizations using wolfSSL are advised to review their deployments and apply the security updates promptly to ensure certificate validation remains secure.



Microsoft is officially killing its Outlook Lite app next month


Microsoft is shutting down Outlook Lite on May 26, the company confirmed to TechCrunch on Monday. Launched in 2022, Outlook Lite is a lightweight version of the regular Outlook app, designed for Android phones with limited storage and regions with slower internet connections. 

The app had already been scheduled for retirement — Microsoft announced last year that the app would be removed from the Google Play Store in October 2025. Now the company has confirmed that the app will lose functionality for existing users next month.

The news was first reported by Neowin.

“To continue enjoying a secure and feature-rich email experience, we recommend switching to Outlook Mobile,” Microsoft says on an Outlook Lite support page.


Outlook Lite users will be able to access their existing email, calendar items, and attachments by signing into Outlook Mobile. Users will also be directed to the Google Play Store to download the standard Outlook app.


5 Of The Best-Looking Mid-Engined Sports Cars We’ve Ever Seen

If you talk to sports car fans and enthusiasts, you’ll probably hear differing opinions about which is better: front-engined or mid-engined sports cars. Both have their own pros and cons and unique driving characteristics, and the layouts will also impart a certain look, most evident in the radical exterior change the Chevrolet Corvette underwent after its switch from a front-engine to a mid-engine platform.

With their engines mounted behind the cabin, mid-engined sports cars have a distinct profile that often brings to mind high-end, exotic European supercars. It’s a look that survives even when scaled down to less expensive, more mainstream-oriented cars, resulting in some beautiful vehicles. So with that in mind, we’ve rounded up five of what we think are the best-looking, most handsomely designed mid-engined sports cars of the modern era.

Now, there are countless beautiful (and incredibly expensive) mid-engined exotics that we could include on a list like this, but we’ve left some of the obvious choices out to keep things interesting. Thus, no exotic Ferraris and Lamborghinis here. Even so, we have a diverse mix of machinery that includes mid-engined offerings from Japan, the United States, and Europe, with engines ranging from modest, low-power four-cylinders to fire-breathing V8s.


Toyota MR2 (second generation)

Along with the front-engined Mazda MX-5 Miata, the Toyota MR2 is one of the most popular lightweight sports cars to come out of Japan. The MR2 debuted in the early 1980s and was built over three distinct generations before being discontinued in the mid-2000s. Each generation of the MR2 has its own personality and following, but from a design and performance standpoint, it’s the second generation that represents the MR2 at its peak. 

The second-generation MR2 debuted in Japan in 1989 and went on sale around the world shortly after. With its wider profile, flip-up headlights, and distinct side vents, the second-generation car had a more aggressive look that, to some eyes, resembles a scaled-down Ferrari 348. The second-gen MR2 also had the performance to back up its looks. Thanks to the 3S-GTE engine under the hood, Car and Driver got the MR2 Turbo to 60 mph in just under six seconds — very impressive by early ’90s standards.


To do all this at a relatively affordable price — $20,000 or so for the Turbo in 1990 — shows just how powerful Toyota was during this time. Today, along with the Supra it shared showrooms with, the second-gen MR2 is considered one of the most desirable Toyotas of its time, and especially in turbocharged form, one of the most desirable Japanese sports cars of the ’90s.


2004-2006 Ford GT

When Ford designers started working on the automaker’s mid-2000s Ford GT revival, they had a pretty big head start in creating a beautiful car. That’s because the design of the Ford GT was heavily inspired by the attractive and legendary Ford GT40 race car of the 1960s. Still, retro design isn’t always as easy as it looks, and it doesn’t take much for retro cars to veer into the tacky, but the GT’s designers absolutely aced their mission.

The modern road-going Ford GT is a much larger car than the GT40 it’s based on, but the lines are so good that you don’t realize that until you actually see the two cars side by side. The GT’s attractiveness carries over to the interior as well, with a wonderfully executed modern interpretation of 1960s design. Of course, it also doesn’t hurt that it’s got a mid-mounted supercharged 5.4-liter V8 mated to a manual transmission. 

Because the initial design was executed so well, the 2000s Ford GT has never felt dated in the way other cars from its era might. Design-wise, it almost feels like a remastered car from the ’60s rather than a product of the 2000s. All of these are reasons why, despite only being a little over 20 years old, the value of the mid-2000s Ford GT has climbed tremendously, with the car now becoming a highly desirable modern classic in its own right.


Lotus Elise

A sports car’s appealing design need not be tied to its physical size or amount of horsepower. Case in point: the Lotus Elise. The Elise is considered one of the purest sports cars of the modern era, with a platform and design that stretches back to the mid-1990s. While some could argue that the Elise isn’t a traditionally beautiful sports car, much of the Elise’s beauty comes from its focus on simplicity. The Elise evolved significantly between its mid-’90s debut and the end of its production run in 2021, but the car never strayed from its mission of delivering lightness and response over all else. 

The later variants of the Elise sold in North America use modestly powered Toyota four-cylinder engines, with the Elise’s light weight meaning it didn’t need massive amounts of horsepower to offer a fast and highly enjoyable sports car experience — part of why drivers love this car. Design-wise, the Elise is all about compact minimalism, and its svelte body lines and distinct round tail lights helped give the Elise its signature look.

Its attractive looks and go-kart-like handling are just a couple of the reasons why both the Elise and its closely-related counterpart, the Lotus Exige, have emerged as genuine modern classics. With its focus only on the essentials, the Elise is the antidote to the high-horsepower, overweight, and often overstyled modern performance car.


Alpine A110

Like the retro-styled Ford GT, the Alpine A110 is a modern, mid-engined sports car that might technically be cheating with its good looks. That’s because, like the Ford, the A110 is a modern reinterpretation of an iconic 1960s design — and one that happens to be done very well. 

The modern Alpine A110 (which is built by Renault) debuted in the late 2010s to wide acclaim as a rival to the Porsche Cayman. Boasting a mid-mounted turbocharged four-cylinder engine and a low curb weight, the A110 took its design inspiration from the original, rear-engined Alpine A110 of the ’60s and ’70s. Among the styling traits that carried over to the new A110 are the original’s quad front headlights and wrap-around rear window.


The biggest problem with the A110 is that, like other French models, it’s not offered in North America. In fact, it might just be the coolest modern performance car not currently sold here. There have been rumors and serious speculation that the A110 will eventually make its way to the United States, although we don’t yet know whether it will arrive as a gasoline model or as a next-generation electric Alpine sports car.


Honda/Acura NSX

Sometimes a sports car is a hit from the moment it debuts; other times, it ages nicely and becomes a favorite for a new generation of enthusiasts. In the case of the singular Honda (or Acura) NSX, it’s both. When the NSX first debuted in 1989, the car was a game-changer. It wasn’t just an impressive Japanese sports car; it was a bona fide, homegrown Japanese exotic laced with Honda’s racing DNA.

Thanks to design choices like an all-aluminum construction and a mid-mounted, naturally aspirated VTEC V6 engine, the NSX had the performance and feel of a Ferrari — but in a more affordable and more reliable package that could be serviced at your local Honda or Acura dealer. In comparison tests, it edged out its more established performance car competitors. Design-wise, the original NSX was somewhat restrained, but its clean lines have aged extremely well, making it a favorite even among those born too late to experience its original run. 

When new, the NSX had a relatively affordable price tag for what it delivered, but values have climbed substantially in recent years, with certain examples crossing the $300,000 mark at auction. While many subsequent Japanese sports cars have eclipsed the original NSX’s performance benchmarks, its aura is still unmatched.


Tech

Meta is building an AI version of Mark Zuckerberg


The photorealistic digital character is trained on Zuckerberg’s mannerisms, tone, and his own thinking on company strategy. He is personally involved in testing it. The effort, described by four people familiar with the matter, is separate from a ‘CEO agent’ that handles tasks for Zuckerberg directly.


Meta is building a photorealistic, AI-powered version of Mark Zuckerberg that can interact with employees in his place, the Financial Times reported on Monday, citing four people familiar with the matter.

The character is being developed by Meta’s Superintelligence Labs and is trained on Zuckerberg’s mannerisms, tone, and publicly available statements, as well as his own thinking on company strategy, so that employees, in the words of one person familiar with the project, ‘might feel more connected to the founder through interactions with it.’

Zuckerberg is personally involved in training and testing the animated version of himself.


The effort is at an early stage and is separate from a different project, first reported by the Wall Street Journal, in which Meta is building a ‘CEO agent’ designed to help Zuckerberg himself retrieve information faster, a tool that assists him rather than stands in for him.


The AI character project is part of a broader push within Meta’s Superintelligence Labs to develop lifelike, AI-driven digital figures capable of real-time conversation. The technical challenge is substantial: achieving realism and preventing perceptible delays in conversation requires enormous computing power.

The project reflects a significant escalation of Zuckerberg’s own involvement in Meta’s AI work. According to people familiar with the matter, he has been spending five to ten hours a week writing code on various AI projects and attending technical engineering review sessions, an unusual level of hands-on engagement for a CEO running a $1.6 trillion company.

He has committed publicly to developing what he calls ‘personal superintelligence’ as Meta works to close the gap with OpenAI and Google. On a January earnings call, he said Meta was ‘elevating individual contributors and flattening teams’ through AI-native tooling.

Meta has a history with AI characters. In September 2023 it launched a range of celebrity-based chatbots, among them personas modelled on Snoop Dogg, Tom Brady, Kendall Jenner, and Naomi Osaka, all of whom licensed their likenesses, but these were discontinued in the summer of 2024 after failing to gain meaningful traction.


Meta then opened an AI Studio allowing users and creators to build their own AI characters, but ran into controversy when users began generating sexually explicit personas. Since January, Meta has restricted teenager access to AI characters. Zuckerberg’s interest in the format was reportedly sharpened by the success of AI companion startup Character.AI, particularly with younger users.

Meta is not the only company exploring AI versions of its leadership. Uber CEO Dara Khosrowshahi said during a podcast interview earlier this year that his employees had built an AI clone of him.

But the Zuckerberg project has a different scale and institutional purpose: it is being designed as a mechanism for a $1.6 trillion company’s 79,000 employees to feel a sense of connection to a founder who is, by any measure, difficult to reach.


Tech

Bremont Is Sending a Watch to the Moon’s Surface


A multifaceted decahedral black ceramic bezel and sandwich-style three-piece case—a reworking of Bremont’s signature Trip-Tick construction—house a chronometer-rated automatic chronograph movement made by Sellita, with a 62-hour power reserve.

The watch will be a passenger aboard the FLIP rover, due to launch as part of Astrobotic’s Griffin Mission One (Griffin-1), expected to land at the lunar south pole at some point in the second half of this year.

It’s a one-way mission: The rover will remain permanently on the lunar surface, with the watch ticking away as it roams the landscape. FLIP’s objectives include reaching elevated positions on the lunar terrain, gathering data on lunar dust accumulation, testing dust-mitigation coatings, and surviving a two-week lunar night in hibernation (which would be a first for a US rover).

In terms of serious timekeeping data for Bremont, the mission is frankly symbolic. The watch will be positioned vertically in a specially designed housing within the FLIP’s chassis, between its front wheels. Only the watch head, weighing 107 grams, is included, glued in place using a specialist composite, its face visible to FLIP’s HD cameras. But the hibernatory periods will mean the watch (whose mechanical movement is driven in normal circumstances by the motion of the wearer’s arm) will stop running once its 62-hour power reserve runs down.


When the FLIP is on the move again, its motion should—in theory—jolt the mechanism into action once more. Despite the gravitational pull that’s a sixth of the Earth’s, the acceleration, pitches, and tilts of the rover should swing the winding rotor, if with less torque and efficiency than on Earth.

“My guess is that the watch will function from time to time, but for short periods,” Cerrato says. “We will learn along the way. But that’s what is exciting—it projects us into a thinking process that is absolutely out of the box. Just the fact of having it there is inspiring.” However, there is little doubt that Bremont will, just like other brands with any ties to the cosmos, mine its new space connection for all it is worth.

FLIP itself, which weighs just 1,058 pounds and carries a mix of commercial and government payloads, four HD cameras, and a deployable solar array, is fundamentally a technology demonstrator for Flexible Logistics and Exploration (FLEX), Astrolab’s much larger SUV-sized rover destined to support NASA’s Artemis program. The firm developed the FLIP from scratch after NASA’s equivalent vehicle for which the Griffin-1 mission was contracted, the VIPER, was put on pause in 2024. This left Astrobotic seeking a stand-in in short order. Astrolab, which signed the contract within a month of hearing about the opportunity in the fall of 2024, took the FLIP from blank sheet to finished rover in roughly a year.

Its standout feature is its hyper-deformable wheels, minutely structured from silicone, composite, and stainless steel, which create a soft, enlarged contact surface with the terrain. “It’s like if you’re off-roading in a Jeep or Land Rover where you let some air out of the tires to go softer and spread the load over a larger area,” explains Astrolab’s founder, Jaret Matthews. While the moon’s nighttime temperatures of around -200 degrees Celsius (around -328 Fahrenheit) would cause conventional rubber tires to become glass-like and shatter, Astrolab’s solution is intended to keep the rover from sinking into the unconsolidated lunar dust—or regolith—that covers the environment.


Tech

The Most WIRED Watches at Watches and Wonders 2026


The case is white zirconium oxide ceramic with a Ceratanium bezel and back, rated to handle temperature swings from 100 to -100 degrees Celsius (212 to -148 Fahrenheit). Indeed, the whole piece has been shaken to 10 g’s at Vast’s Long Beach facility, exceeding forces astronauts experience during ascent, and came out the other side running just fine. Price is still up in the air.


TAG Heuer Monaco Evergraph (From $25,000)

Watch brands love finding ever more recherché areas to reinvent, and the precise “snick” of a chronograph’s stop/start/reset buttons is the latest micro-battlefield in which R&D teams are duking it out. Last year, Audemars Piguet took the feel of an iPhone button as the inspiration for its Royal Oak RD#5; now TAG Heuer has its own take on push-button ergonomics.

Normally, chronograph buttons involve a cluster of levers, springs, and cams that click into place with varying degrees of precision. TAG Heuer has thrown most of that out with the Calibre TH80-00, five years in development between its TAG Heuer LAB innovation department and movement maker Vaucher Manufacture Fleurier. It replaces the traditional architecture with two flexible bistable components—essentially shape-shifting parts that snap between positions—produced via high-precision LIGA fabrication, a micro-manufacturing technique that includes lithography, electroforming, and molding.

The result? Crisper actuation that, crucially, doesn’t degrade. According to TAG, the 10,000th press feels identical to the first. Paired with TAG’s incredibly high-tech TH-Carbonspring oscillator (magnetism-resistant, 5-Hz, 70-hour reserve, COSC-certified), it’s housed in a reworked 40-mm titanium Monaco with the crown back on the left where Steve McQueen’s 1969 original had it. You get two versions: brushed titanium with blue accents or black Diamond-Like Carbon (DLC) with red. The dial is transparent acrylic, so you can watch the compliant mechanism do its thing.


Vacheron Constantin Overseas Dual Time Cardinal Points (Price on Request)

Vacheron Constantin’s Overseas line, among the most celebrated examples of Switzerland’s dominant “sports-luxe” genre, leans heavily into the sports side with a full-titanium GMT treatment across four references. Each dial is color-mapped to a compass point: white for north, brown for south, green for west, blue for east, contrasting with a bright orange, Rolex-style GMT hand for home time.

The lineage traces to a 2019 prototype built for explorer Cory Richards to wear up Everest—probably the most luxurious timepiece that has been to such places. The 41-mm case, integrated bracelet, and folding clasp are all in titanium with a matte anthracite finish on the bezel and crown. Inside is the in-house Calibre 5110 DT/3, a self-winding GMT with home-time am/pm indicator, local-time date pusher, and 60-hour reserve. Classic sports watch attributes, but here certified with the Geneva Hallmark, the highest official benchmark of fine watchmaking and hand-finishing.


Tech

iOS 26 boarding passes now available for American Airlines flights


American Airlines has now become the latest company to take advantage of the revamped boarding pass system in the iOS 26 update.

Two smartphones displaying colorful airline boarding passes and live flight tracking apps against a bright blue gradient background, emphasizing digital travel details like departure, arrival, gate, group, and QR code.
American Airlines now supports the revamped iOS 26 boarding pass system.

At WWDC 2025, Apple revealed that upgraded boarding passes, with support for Live Activities and real-time flight information, would make their way to the Apple Wallet app with iOS 26. Improved support for tracking luggage with AirTags and Find My, along with maps data for airports, was touted as well.
Since then, United Airlines and Southwest Airlines have rolled out support for the iOS 26 boarding pass system, and now American Airlines has followed suit. The American Airlines iOS app was recently updated, and its release notes detail the upgraded boarding pass experience.

Tech

Nevada Court Latest To Say Mandatory Detention Of Migrants Is Illegal


from the can’t-pretend-rights-just-don’t-exist dept

More of the same for the Trump administration — one that seems incapable of achieving its goals without breaking the law or disregarding the Constitution.

Hundreds of judges handling thousands of cases have already told the administration it can’t do the things it thinks it can when it comes to satisfying its anti-migrant bloodlust/Stephen Miller’s 3,000-arrests-per-day quota (they’re the same thing!). And, outside of the Fifth Circuit, where the majority seems to believe Trump should get whatever he wants, this steady stream of judicial rejections continues.

Yet another class-action suit alleging the wholesale violation of Constitutional rights has resulted in a ruling siding with the Constitution. This case is one of several being handled by the ACLU. This particular one originates in Nevada, which at least keeps it out of the hands of the Fifth Circuit. (Unfortunately, the administration knows who’s buttering its bread, which is why detainees are often shipped immediately to detention centers in Texas and Louisiana.)

The administration has only a single argument to present in its defense of its unconstitutional mandatory detention activities. It involves selectively quoting two related (yet distinct!) immigration statutes and pretending that 1+1=whatever the fuck we say it does.


One of the most concise explanations of the administration’s deliberate misreading of these statutes was delivered by Judge Dale Ho of the Southern District of New York last year. The government wants to pretend people who encounter immigration agents while crossing the border are indistinct from migrants who have already been in this country for weeks, months, or years. They’re not the same thing, but the administration insists they are, despite having only convinced the Fifth Circuit that the laws don’t actually say the things they say.

Given that detention under § 1225(b)(2) is essentially mandatory and that detention under § 1226(a) is largely discretionary, it follows that whichever statute Mr. Lopez Benitez is subject to is potentially dispositive here. That is, if Mr. Lopez Benitez was detained as a noncitizen “seeking admission” to the country under § 1225(b)(2) (as Respondents argue), his detention would be mandatory. If, instead, he was detained as a noncitizen “already in the country” under § 1226(a), then his detention is discretionary and he would be, at a minimum, entitled to an appeal before an immigration judge.

To be sure, the line between when a person is “seeking admission” as opposed to being “already in the country” is not necessarily obvious. For instance, someone who has just crossed the border may technically be “in” the country but is still treated as “an alien seeking initial entry.” Thuraissigiam, 591 U.S. at 114, 139 (holding that a noncitizen detained “within 25 yards of the border” is treated as if stopped at the border). But there is no dispute that the provisions at issue here are mutually exclusive—a noncitizen cannot be subject to both mandatory detention under § 1225 and discretionary detention under § 1226, a point that Respondents conceded.

These are not the same thing. Section 1226 deals with people already in the country, who are given Constitutional protections. Section 1225 deals with people crossing the border who are met immediately by immigration agents; those people don’t have access to the same due process rights.

As the court points out in this case, the language of the statutes makes it clear Section 1225 is “temporally and geographically limited to the border” by other language contained in the Immigration and Nationality Act (INA). The government, however, wants to pretend it’s indistinct from Section 1226, which deals with people who are already in the country and have been there for a significant amount of time.


The only way the government can present its defense of indefinite detention of migrants without bond hearings is to twist the wording of both statutes. The Nevada court [PDF] isn’t going to let that happen. It calls out Trump’s DOJ for its cut-and-paste antics.

The government contends that the plain language of § 1225(b)(2) requires DHS to detain all noncitizens like Plaintiffs, who are present in the U.S. without admission or parole and subject to removal proceedings, regardless of how long they have been in the country or how far from the border they are apprehended. But this Court finds that the government reads § 1225(b)(2)(A) as a fragment of statutory text in isolation.

Context matters. The government knows this, which is why its arguments remove the parts of the law it wants to use from the context that indicates its actions are illegal.

The Court finds the government’s reading of the statutory text inapposite for several reasons. First, the government distorts the statutory text, including terms of art specially defined by Congress. Second, the government isolates and abstracts the phrases it favors in § 1225(b)(2)(A) from their context within § 1225 and the statutory scheme, while rendering language it finds inconvenient within § 1225(b)(2)(A) both contrary to ordinary meaning and needless surplusage. Finally, the government’s interpretation unnecessarily renders provisions of § 1226(c) superfluous in all but the rarest cases, unjustifiably construes Congress’ addition of § 1226(c)(1)(E) through the 2025 Laken Riley Act to be utterly ineffectual, and creates unnecessary tension between the relevant provisions, §§ 1225 and 1226.

This is what it looks like when you know you can’t win on the merits. This is the government pretending the law says what it wants it to say and hoping to slip it past a judge and under the skirts of Lady Liberty.

Courts aren’t as dumb as the Trump administration hopes. Let’s look at the statutes, the court says, but the whole thing rather than just the things the government thinks might be usable.


The Court cannot accept such a fraught interpretation when a reading devoid of such conflict, which gives each statutory phrase and section independent meaning and force, is far more plausible.

What follows is a few dozen pages making everything summarized above granular and specific. And if Trump doesn’t like it, he can always ask the legislators he treats as extraneous to rewrite the law in his favor. Take it up with Congress if you don’t like the way the law is actually written, the court says without actually saying it:

[E]ven with regards to removal proceedings as opposed to custody determinations, Congress explicitly reflected its understanding of longstanding due process precedent that recognizes the more substantial due process rights of noncitizens already present and residing in the U.S. compared to the minimal rights of noncitizens seeking to enter.

Even a Congress loaded with MAGA bitchboys isn’t going to be able to erase Constitutional protections for migrants no one really seemed to have a problem with until white Christian nationalists took over the West Wing (on two non-consecutive occasions). The current Congress is merely an afterthought in service to Federalist Society theories of unitary executive power — something that surely won’t come back to haunt them when America decides it’s time to hand the reins to the opposition party.

And that’s not all of the bad news for Trump and his enablers. The due process thing is already a known issue and one that has resulted in hundreds of losses for the administration’s lawyers. This court also points out the Fourth Amendment implications of its actions. While this doesn’t necessarily create the sort of precedent that would shut down the DHS’s extremely creative interpretation of the Constitution, it will provide plenty of citation pull-quotes for litigants challenging ICE’s warrantless arrests and home entries.

[N]o administrative warrant requirements exist in the text of § 1225(b)(2)(A) or its implementing regulations. The government’s interpretation of that provision as geographically unlimited is thus in tension with the application of the Fourth Amendment within the country’s interior, which “requires that immigration stops must be based on reasonable suspicion of illegal presence, stops must be brief, arrests must be based on probable cause, and officers must not employ excessive force.”

I’m sure this quotation of Justice Kavanaugh’s concurrence in Trump v. Illinois is deliberate. The guy behind “Kavanaugh stops” (TL;DR: looking foreign is probable cause when it comes to immigration enforcement) is being directly quoted to reject the government’s reliance on administrative warrants to bypass the Constitution. [Chef’s kiss gesture.]


Great stuff. But, as always, tempered by the realization that this administration will not stop doing illegal things just because a court has directly told them these actions are illegal. The old equation — asking forgiveness > asking permission — doesn’t really apply. This administration will do neither. It will simply DO until it becomes impossible to continue.

Don’t let that discourage you, though. Even if the co-equal branches don’t seem to be living up to the “checks and balances” hype, we’re a nation of millions spread across a considerable number of square miles. They can’t take us all at once.

Filed Under: 14th amendment, bigotry, dhs, due process, ice, mass deportation, nevada, trump administration

Companies: aclu
