Every few years, a piece of open-source software arrives that rewires how the industry thinks about computing. Linux did it for servers. Docker did it for deployment. OpenClaw — the autonomous AI agent platform that went from niche curiosity to the fastest-growing open-source project in history in a matter of weeks — may be doing it for software itself.
Nvidia CEO and co-founder Jensen Huang made his position plain at GTC 2026 this week: “OpenClaw is the operating system for personal AI. This is the moment the industry has been waiting for — the beginning of a new renaissance in software.” And Nvidia wants to be the company that makes it enterprise-ready.
At its annual GTC 2026 conference in San Jose, Nvidia unveiled NemoClaw, a software stack that integrates directly with OpenClaw and installs in a single command. Along with it came Nvidia OpenShell, an open-source security runtime designed to give autonomous AI agents — or “claws”, as the industry is increasingly calling them — the guardrails they need to operate inside real enterprise environments. Alongside both, the company announced an expanded Nvidia Agent Toolkit, a full-stack platform for building and running production-grade agentic workflows.
The message from Jensen Huang was unambiguous. “Claude Code and OpenClaw have sparked the agent inflection point — extending AI beyond generation and reasoning into action,” the Nvidia CEO said ahead of the conference. “Employees will be supercharged by teams of frontier, specialized and custom-built agents they deploy and manage.”
Why ‘claws’ — and why it matters that Nvidia is using the word
The terminology shift happening inside enterprise AI circles is subtle but significant. Internally, teams building with OpenClaw and similar platforms have taken to calling individual autonomous agents claws — a nod to the platform name, but also a useful shorthand for a new class of software that differs fundamentally from the chatbots and copilots of the last two years.
As Kari Briski, Nvidia’s VP of generative AI software, put it during a Sunday briefing: “Claws are autonomous agents that can plan, act, and execute tasks on their own — they’ve gone from just thinking and executing on tasks to achieving entire missions.”
That framing matters for IT decision-makers. Claws are not just assistants. They are persistent, tool-using programs that can write code, browse the web, manipulate files, call APIs, and chain actions together over hours or days without human input. The productivity upside is substantial. So is the attack surface. Which is precisely the problem Nvidia is positioning NemoClaw to solve.
The enterprise demand is not hypothetical. Harrison Chase, founder of LangChain — whose open-source agent frameworks have been downloaded more than a billion times — put it bluntly in a recent episode of VentureBeat’s Beyond the Pilot podcast: “I guarantee that every enterprise developer out there wants to put a safe version of OpenClaw onto their computer or expose it to their users.” The bottleneck, he made clear, has never been interest. It has been the absence of a credible security and governance layer underneath it. NemoClaw is Nvidia’s answer to that gap — and notably, LangChain is one of the launch partners for the Agent Toolkit and OpenShell integration.
What NemoClaw actually does — and what it doesn’t replace
NemoClaw is not a competitor to OpenClaw (or the now many alternatives). It is best understood as an enterprise wrapper around it — a distribution that ships with the components a security-conscious organization actually needs before letting an autonomous agent near production systems.
The stack has two core components. The first is Nvidia Nemotron, Nvidia’s family of open models, which can run locally on dedicated hardware rather than routing queries through external APIs. Nemotron-3-Super scored the highest of all open models on PinchBench, a benchmark that tests the kinds of task and tool calls OpenClaw requires.
The second is OpenShell, the new open-source security runtime that runs each claw inside an isolated sandbox — effectively a Docker container with configurable policy controls written in YAML. Administrators can define precisely which files an agent can access, which network connections it can make, and which cloud services it can call. Everything outside those bounds is blocked.
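Nvidia has not published OpenShell’s policy schema, so the specifics remain to be seen. But the default-deny model the company describes — an allow-list of files, hosts, and services, with everything else blocked — is a well-understood pattern. Purely as an illustration, here is a minimal sketch in Python; every field name and path here is invented, not drawn from OpenShell:

```python
from fnmatch import fnmatch

# Hypothetical policy: allow-lists for files, hosts, and services.
# Anything that matches no entry is denied (default-deny).
POLICY = {
    "files": ["/workspace/*", "/tmp/agent/*"],
    "hosts": ["api.internal.example.com"],
    "services": ["s3", "vault"],
}

def is_allowed(kind: str, target: str, policy: dict = POLICY) -> bool:
    """Return True only if the target matches an allow-list pattern."""
    return any(fnmatch(target, pattern) for pattern in policy.get(kind, []))

def guarded_call(kind: str, target: str, action):
    """Run the agent's action only if policy permits it; block everything else."""
    if not is_allowed(kind, target):
        raise PermissionError(f"blocked: {kind} access to {target}")
    return action()
```

A real sandbox would enforce these rules at the container and network layer rather than in application code; the sketch only shows the default-deny semantics Nvidia describes.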
Nvidia describes OpenShell as providing the missing infrastructure layer beneath claws — giving them the access they need to be productive while enforcing policy-based security, network, and privacy guardrails.
For organizations that have been watching OpenClaw’s rise with a mixture of excitement and dread, this is a meaningful development. OpenClaw’s early iterations were, by general consensus, a security liability — powerful and fast-moving, but essentially unconstrained. NemoClaw is the first attempt by a major hardware vendor to make that power manageable at enterprise scale.
The hardware angle: always-on agents need dedicated compute
One aspect of NemoClaw that deserves more attention than it has received is the hardware strategy underneath it. Claws, by design, are always-on — they do not wait for a human to open a browser tab. They run continuously, monitoring inboxes, executing tasks, building tools, and completing multi-step workflows around the clock.
That requires dedicated compute that does not compete with the rest of the organization’s workloads. Nvidia has a clear interest in pointing enterprises toward its own hardware for this purpose.
NemoClaw is designed to run on Nvidia GeForce RTX PCs and laptops, RTX PRO workstations, and the company’s DGX Spark and DGX Station AI supercomputers. The hybrid architecture allows agents to use locally-running Nemotron models for sensitive workloads, with a privacy router directing queries to frontier cloud models when higher capability is needed — without exposing private data to those external endpoints.
It is an elegant solution to a real problem: many enterprises are not yet ready to send customer data, internal documents, or proprietary code to cloud AI providers, but they still need model capability that exceeds what runs locally. NemoClaw’s privacy router architecture threads that needle, at least in principle.
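Nvidia has not detailed how the privacy router classifies requests, but the core invariant it promises — private data never leaves the local model, regardless of how capable the cloud model is — can be sketched in a few lines. The sensitivity flags and endpoint names below are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Request:
    prompt: str
    contains_private_data: bool      # set by an upstream classifier
    needs_frontier_capability: bool  # task is beyond the local model

def route(req: Request) -> str:
    """Pick a model endpoint. Privacy wins over capability."""
    if req.contains_private_data:
        return "local-nemotron"   # sensitive data stays on-prem, always
    if req.needs_frontier_capability:
        return "cloud-frontier"   # no private data, so cloud is permitted
    return "local-nemotron"       # default to the cheaper local path
```

The design choice worth noting is the ordering: the privacy check comes first, so a request that is both sensitive and demanding is served locally even at some cost in quality.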
What claws actually look like in the enterprise
Before evaluating the platform, it helps to understand what a claw doing real work looks like in practice. Two partner integrations announced alongside NemoClaw offer the clearest window into where this is heading.
Box is perhaps the most illustrative case for organizations that manage large volumes of unstructured enterprise content.
Box is integrating Nvidia Agent Toolkit to enable claws that use the Box file system as their primary working environment, with pre-built skills for Invoice Extraction, Contract Lifecycle Management, RFP sourcing, and GTM workflows.
The architecture supports hierarchical agent management: a parent claw — such as a Client Onboarding Agent — can spin up specialized sub-agents to handle discrete tasks, all governed by the same OpenShell Policy Engine.
Critically, an agent’s access to files in Box follows the exact same permissions model that governs human employees — enforced through OpenShell’s gateway layer before any data is exchanged. Every action is logged and attributable; no shadow copies accumulate in agent memory. As Box puts it in their announcement blog, “organizations need to know which agent touched which file, when, and why — and they need the ability to revoke access instantly if something goes wrong.”
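Neither Box nor Nvidia has published how a sub-agent’s permissions are derived from its parent’s. One common design for hierarchical delegation, sketched here with invented names, grants a sub-agent the intersection of what it requests and what the parent holds, so a child can never exceed its parent:

```python
def spawn_subagent_scope(parent_scope: set[str], requested: set[str]) -> set[str]:
    """A sub-agent's effective scope is capped by its parent's scope."""
    return parent_scope & requested

# A parent onboarding agent holds these permissions...
parent = {"read:contracts", "read:invoices", "write:reports"}

# ...and a sub-agent asking for more than the parent has gets the overlap only.
child = spawn_subagent_scope(parent, {"read:invoices", "write:invoices"})
# "write:invoices" is dropped because the parent never had it
```

Whether OpenShell’s Policy Engine works this way is an open question; the point of the sketch is the invariant, not the mechanism.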
Cisco’s integration offers perhaps the most visceral illustration of what OpenShell guardrails enable in practice. The Cisco security team has published a scenario in which a zero-day vulnerability advisory drops on a Friday evening.
Rather than triggering a weekend-long manual scramble — pulling asset lists, pinging on-call engineers, mapping blast radius — a claw running inside OpenShell autonomously queries the configuration database, maps impacted devices against the network topology, generates a prioritized remediation plan, and produces an audit-grade trace of every decision it made.
Cisco AI Defense verifies every tool call against approved policy in real time. The entire response completes in roughly an hour, with a complete record that satisfies compliance requirements.
“We are not trusting the model to do the right thing,” the Cisco team noted in their technical writeup. “We are constraining it so that the right thing is the only thing it can do.”
An ecosystem play: the partners behind the stack
Nvidia is not building this alone. The Agent Toolkit and OpenShell announcements came with a significant roster of enterprise partners — Box, Cisco, Atlassian, Salesforce, SAP, Adobe, CrowdStrike, Cohesity, IQVIA, ServiceNow, and more than a dozen others — whose integration depth signals how seriously the broader software industry is treating the agentic shift.
On the infrastructure side, OpenShell is available today on build.nvidia.com, supported by cloud inference providers including CoreWeave, Together AI, Fireworks, and DigitalOcean, and deployable on-premises on servers from Cisco, Dell, HPE, Lenovo, and Supermicro. Agents built within OpenShell can also continuously acquire new skills using coding agents including Claude Code, Codex, and Cursor — with every newly acquired capability subject to the same policy controls as the original deployment.
Separately, Nvidia announced the Nemotron Coalition — a collaborative initiative bringing together Mistral AI, Perplexity, Cursor, and LangChain to co-develop open frontier models. The coalition’s first project is a base model co-developed with Mistral that will underpin the upcoming Nemotron 4 family, aimed specifically at agentic use cases.
What enterprise leaders should be watching
The NemoClaw announcement marks a turning point in how enterprise AI is likely to be discussed in boardrooms and procurement meetings over the next twelve months. The question is no longer whether organizations will deploy autonomous agents. The industry has clearly moved past that debate. The question is now how — with what controls, on what hardware, using which models, and with what audit trail.
Nvidia’s answer is a vertically integrated stack that spans silicon, runtime, model, and security policy. For IT leaders evaluating their agentic roadmap, NemoClaw represents a significant attempt to provide all four layers from a single vendor, with meaningful third-party security integrations already in place.
The risks are not trivial. OpenShell’s YAML-based policy model will require operational maturity that most organizations are still building. Claws that can self-evolve and acquire new skills — as Nvidia’s architecture explicitly enables — raise governance questions that no sandbox can fully resolve. And the concentration of agentic infrastructure in a single vendor’s stack carries familiar platform risks.
That said, the direction is clear. Claws are coming to the enterprise. Nvidia just made its bet on being the platform they run on — and the guardrails that keep them in bounds.
Passengers board a ferry that feels more like a luxury executive lounge than a boat. The Candela P-12 Business delivers a ride so smooth and silent that conversations flow easily, a full cup of coffee stays steady in your hand, and the journey itself becomes the highlight of the day. Candela devised a set of computer-controlled underwater wings for this 12-meter electric ferry, which lift the hull out of the water once the boat reaches speed. Drag plummets, and waves simply pass beneath the hull instead of pounding against the sides.
Two electric motors provide propulsion, each delivering 110 kilowatts of continuous power with 160 kilowatts in reserve. The motors draw on a 378-kilowatt-hour battery pack, of which approximately 336 kilowatt-hours are usable. At a steady cruise of 25 knots, the ferry can travel up to 40 nautical miles on a single charge, comfortably covering typical daily routes between islands, coastal towns, and city harbors. During testing, top speed exceeded 30 knots. Recharging uses the same high-power DC stations built for heavy trucks and fast-charging electric vehicles.
Noise levels within the cabin at cruising speed are roughly those of a normal conversation in a quiet room (63 to 64 dB). Traditional speedboats, by contrast, often reach 85 to 95 decibels, and even modern diesel ships operate at 65 to 75 decibels. To put that in perspective, a 10 decibel drop sounds about half as loud to human hearing. The only sound you hear is a mild hum from the motors: no roaring engines, no rhythmic thud of water against the hull. Candela has added extra sound insulation and some very thick carpeting to keep the space as quiet as possible.
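The “half as loud per 10 dB” rule of thumb makes the gap easy to quantify. Using the common loudness approximation (perceived ratio ≈ 2^(ΔdB/10)), a quick calculation shows just how large the difference is:

```python
def perceived_loudness_ratio(db_a: float, db_b: float) -> float:
    """Approximate how many times louder level A sounds than level B,
    using the rule of thumb that +10 dB doubles perceived loudness."""
    return 2 ** ((db_a - db_b) / 10)

# A 90 dB speedboat versus the P-12's 63 dB cabin:
ratio = perceived_loudness_ratio(90, 63)
# 2 ** 2.7 ≈ 6.5, so the speedboat sounds roughly six to seven times louder
```

This is a psychoacoustic approximation, not an exact law, but it explains why a 25-to-30 dB gap feels transformative rather than incremental.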
Inside, the layout prioritizes comfort for up to 20 passengers (though 16 is typical). The seats are comfortable, with plenty of legroom, and each has a built-in USB-C port to keep your electronics charged. A coffee bar keeps everyone refreshed, air conditioning keeps the temperature consistent, and there is storage room in the back for passengers’ gear. At night, a star-ceiling lighting system casts a calm, shifting glow overhead. Large panoramic windows let you take in the views from all directions, which is a huge plus. A wide, robust ramp at the boarding point is extremely useful, especially for anyone who needs additional assistance, such as passengers with strollers or wheelchairs.
The P-12 Business’s operators gain significantly from its design. When the ferry is on its foils, energy consumption drops by up to 80% compared to a typical vessel of the same size. That means lower running costs, since electricity is far cheaper than marine diesel. Because the hull rides above the water, the boat also causes significantly less wake disturbance to shorelines and other vessels, and it minimizes underwater noise, which benefits marine life. All of these qualities make the P-12 Business a clear winner on routes where emissions and noise laws are tightening. [Source]
Steve Gustavson, Microsoft’s corporate vice president for design and research. (Microsoft Photo)
[Editor’s Note: Agents of Transformation is an independent GeekWire series, underwritten by Accenture, exploring the adoption and impact of AI and agents. See coverage of our related event.]
Using an AI model still comes with an unspoken asterisk: verify before you act. Fact-check it. Google it. Ask a colleague. The burden of accuracy has always landed on the human. But Microsoft thinks it has a way to shift that burden — have two AIs keep tabs on each other.
In an era when workforce tasks are increasingly being handled by AI agents, this multi-model strategy now reaches into something human workers assumed was theirs alone: the judgment call. The human-in-the-loop had long been the one non-negotiable in AI workflows. Microsoft’s approach doesn’t eliminate it, but it does raise the question of how much of that role we’re willing to hand over.
‘Two heads are better than one’
Microsoft isn’t alone in this bet. Amazon Web Services, Google, and others are building platforms that give enterprises access to multiple models through a single interface.
AWS Bedrock offers access to foundation models from multiple providers, while Google’s Gemini Enterprise presents a single front door for workplace AI. Microsoft’s distinction is that it’s embedding multi-model review directly into a productivity tool used by millions of workers.
We saw the first implementation of this plan last week with new upgrades to Microsoft 365 Copilot. Its Researcher agent can now use OpenAI’s GPT to draft a response, then have Anthropic’s Claude review it for accuracy, completeness, and citation quality before finalizing it.
“We intentionally want a diversity of opinions,” Steve Gustavson, Microsoft’s corporate vice president for design and research, told GeekWire in an interview. “Two heads are better than one when they come together.”
That’s not a trivial concern. Research has already shown that AI users tend to outsource critical thinking to models they perceive as authoritative. If we’re already surrendering judgment to a single model, can having a second one push back on the first be the check that’s been missing?
It’s a question Microsoft has been wrestling with in designing Critique and Council, the two new features within its Researcher agent.
“Our research consistently shows that workers continue to crave both deeper trust in AI and quality content,” Gustavson said. “People are either over-trusting AI — accepting claims they shouldn’t — or under-trusting it and not getting the full value. Both are design and technical opportunities.”
Take Microsoft’s Critique feature, for example. Gustavson said Microsoft designed it around a deliberate handoff: GPT leads the generation, and Claude steps in as the reviewer.
“The separation matters because evaluation is a different cognitive mode than generation,” he said. “When one model does both, you get the same blind spots twice. When a second model’s job is to validate the first, you get something structurally different.”
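Microsoft has not published Critique’s internals, but the generator/reviewer handoff Gustavson describes reduces, in its simplest form, to a two-step pipeline: one model drafts, a structurally separate model evaluates. The sketch below uses stubbed placeholder functions, not real GPT or Claude APIs:

```python
from typing import Callable

def critique_pipeline(
    generate: Callable[[str], str],      # e.g. a GPT-backed drafting call
    review: Callable[[str, str], str],   # e.g. a Claude-backed review call
    task: str,
) -> dict:
    """Draft with one model, then have a second model evaluate the draft
    against the original task before anything is finalized."""
    draft = generate(task)
    feedback = review(task, draft)
    return {"draft": draft, "feedback": feedback}

# Stub models for illustration; real calls would hit separate providers.
draft_model = lambda task: f"DRAFT[{task}]"
review_model = lambda task, draft: f"REVIEW[{draft}]"

result = critique_pipeline(draft_model, review_model, "summarize Q3")
```

The point of the structure is the one Gustavson makes: the reviewer never generates, so its blind spots are not the generator’s blind spots.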
This creates a “powerful feedback loop that delivers higher-quality results across factual accuracy, analytical breadth, and presentation,” Gaurav Anand, Microsoft’s corporate vice president for engineering, wrote in a technical blog post about M365’s Critique feature.
Multi-model isn’t just a proof of concept — it’s live, and it’s already the default experience inside Researcher. But Gustavson is quick to point out that most workers won’t care which models are running under the hood. The models, in his view, should be invisible.
“The average user wants phenomenal outputs. They want to be able to trust them,” he said. “Do they need to know it’s 5.2 versus whatever? I don’t think so.”
Gustavson disputes that this is a case of the “blind leading the blind,” stressing that tuning the models is how to avoid hallucinations. With Researcher, “Claude has proven to be a fantastic synthesizer and sort of check on what the GPT models might be doing.”
However, Gustavson said Microsoft is continuously evaluating the performance of single models versus double models, as well as putting “an LLM judge in between the two” to see the trade-offs.
Gustavson said Microsoft plans to move away from promoting specific model names altogether, shifting the focus to what a worker is trying to accomplish. For example, he said, workers could specify that they’re in finance, and Copilot would route work to whichever models best handle Excel, data synthesis, and analysis — no model-picking required.
The enterprise AI pendulum
For Microsoft, multi-model is less of a feature than the inevitable direction of enterprise AI. Gustavson calls it a natural progression, noting that Copilot started out with a single model.
Since then, he said, the industry has been swinging between what models can do, what the product experience should be, and where the competitive moat exists.
“I think this is just a natural evolution,” he said. “Two models are better than one.”
With models leapfrogging each other every few months, Microsoft isn’t betting on any single one, but rather trying to build something that outlasts them all.
As organizations move from experimenting with AI to depending on it for consequential decisions, the single-model approach starts to show its limits. The question may be less whether enterprises should adopt multi-model than whether they’re ready to accept a system where checks are automated, models are invisible, and AI reviews AI before a human ever sees the output.
Beyond the initial integration into the Researcher agent, Gustavson said Microsoft plans to extend the multi-model approach to its other AI tools. He hopes the approach becomes standard across the industry. In his view, building multi-model review into agentic workflows is both good governance and good design.
For those building agentic experiences, Gustavson’s advice is simple: treat agents like any process with meaningful consequences. The key question: “Who checks the work?”
Minutes after President Donald Trump announced that he would not wipe out “a whole civilization” on Tuesday evening, a team of self-described young Iranian activists jumped into action.
Members of the group known as Explosive Media were putting the finishing touches on their latest AI-generated, Lego-inspired Trump video. The video features a Trump mini-figure colluding with leaders from Gulf states, Iranian officials pressing a big red button labeled “back to the stone age,” and Trump throwing a chair at US generals.
This was the latest of more than a dozen videos the pro-Iran group has released since the beginning of the war in February, many of which have racked up millions of views on mainstream platforms. While Iranian government accounts have posted Lego-style videos in the past, Explosive Media’s content is more sophisticated and scripted. And it’s produced by a team of young pro-Iranian creators who appear deeply knowledgeable about the internet and American culture. Already some critics have alleged the group has ties to the Iranian government.
“We were almost certain Trump would back down; it was clear to us,” a member of the Explosive Media team, who did not want to publicly identify themselves, tells WIRED. “We were prepared for this scenario and had content ready in advance. We just made a few adjustments and released it.”
The team even added mention of the 10-point plan Iran proposed as part of its recent ceasefire agreement. As the video concludes, a Lego Trump sits next to the document, sobbing while holding a white flag and eating a taco—a knowing reference to the acronym for “Trump always chickens out.”
Within hours of Trump’s announcement, the video was published on Explosive Media’s X account and Telegram channel, where it had the caption: “IRAN WON! The way to crush imperialism has been shown to the world. Trump Surrendered. TACO will always remain TACO.”
While the Trump administration has been posting memes that intercut war footage with movie clips that appeal to a narrow audience of loyal followers, Explosive Media’s Lego videos have reached a much broader audience in the US—some of whom clearly liked what they saw.
“We’ve committed ourselves to learning more every day about American people and culture,” the Explosive Media team member tells WIRED. “In this process, Americans themselves have been helping us—and that support and guidance continues. They share impactful tips and ideas with us.”
Explosive Media began in 2025 as a YouTube channel featuring political commentary delivered by a young Iranian man. The content never gained traction, with most videos racking up only a couple hundred views.
But all that changed in February, when the group began posting Lego-inspired videos, with the team scripting, producing, and editing each video using AI tools. (The group would not reveal which AI tools it was using.)
The videos quickly took hold on platforms like TikTok, X, and Instagram.
“People are disengaging from some of the real conflict content and looking for something that can distill what’s happening quickly and in a language and tone that they understand and that’s what those Lego videos are doing,” Moustafa Ayad, a researcher with the Institute of Strategic Dialogue who has closely tracked the online content being shared by Iranian groups during the war, tells WIRED. “They’re making it easily accessible to understand the conflict from Iran’s point of view, and it’s hitting on points of disaffection in the United States at the same time. It’s working on two fronts.”
Iran has previously used Lego-style videos in war propaganda. Back in 2024, according to Ayad, the Islamic Revolutionary Guard Corps shared links to a Lego video, and during the Twelve-Day War in 2025, Iranian state media proclaimed victory over Israel in another Lego video.
In the recently released JD Power 2026 U.S. Customer Service Index (CSI) Study, one mass-market SUV brand topped all others. And even though JD Power combined SUVs and minivans into one category, the first-place brand does not sell any minivans. That brand is Subaru, whose lineup, with a few exceptions, consists mostly of SUVs. Subaru also did very well in the 2025 edition of the JD Power CSI Study, in which owners named it the mass-market car brand they trust most for service.
The JD Power 2026 U.S. Customer Service Index Study, in which Subaru ranked highest in the mass-market SUVs/minivans category, gave the brand a top-rated score of 887 points out of a possible 1,000. Following Subaru in the JD Power CSI rankings in this category were Nissan in second place with 885 points, and Buick in third with 882 points. Then came Honda with 880 points, Ford with 879 points, GMC with 878 points, Chevrolet with 876 points, Dodge with 872 points, and Mazda with 871 points, which was also the average score in the mass-market SUVs/minivans category. Those brands that fell below the average score, in descending order, were Mitsubishi and Toyota, tied at 870 points; Hyundai, 854 points; Kia, 851 points; Jeep, 850 points; and Volkswagen, last at 846 points.
The JD Power 2026 U.S. CSI Study covered 51,228 survey responses from lessees and registered owners of vehicles between one and three years old. The survey period ran from January through December 2025.
What else should you know about the JD Power U.S. Customer Service Index Study?
As JD Power states on its website, the 2026 U.S. Customer Service Index Study, “…continues to be the auto industry benchmark for measuring customer satisfaction with maintenance and repair service at new-vehicle dealerships, based on survey responses from owners of 1 to 3-year old vehicles.” Study subscribers can now receive monthly updates that keep them current with newly supplied data, allowing manufacturers to monitor their dealers’ customer service ratings in near-real time. This is just one of the many studies done by JD Power, one of which reveals the most dependable cars you can buy.
A wide variety of vehicle categories are covered in the JD Power CSI study. These include premium brands, mass-market brands, mass-market cars, mass-market SUVs/minivans (the topic of this article), premium cars, premium SUVs, and trucks. In each of these categories, the brands are evaluated using the same criteria.
The study’s methodology surveys owners of vehicles that are one to three years old. It asks about their level of customer satisfaction during their latest dealer service episode, which can pertain to either work paid for by the customer or work done under the new car’s warranty. Five areas of the customer’s experience with the service department are then analyzed. These include the start of the service experience, the pick-up of the vehicle, impressions of the facility where the car was serviced, the quality of the service itself, and perceptions of the service advisor who interfaced with the customer.
What else should you know about the Subaru brand?
Aside from its three non-SUV vehicles, the Impreza hatchback, the WRX sedan, and the BRZ sports coupe, the current Subaru lineup consists primarily of SUVs. These include the three-row Ascent, the Crosstrek, the Crosstrek Hybrid, the Forester, the Forester Hybrid, the Outback, the Solterra EV, the Trailseeker EV, and the Uncharted EV. That gives consumers four pure ICE SUVs, two hybrid SUVs, and three EV SUVs to choose from.
Subaru’s pricing range starts with the least expensive model, the 2026 Impreza Sport, at $27,790, including destination and delivery. Our review of the Impreza praised it as a sensible, simple, and affordable hatchback. If you are in search of Subaru’s cheapest compact SUV, the 2026 Crosstrek Base is priced at $28,415. Next comes the 2026 Forester Base at $31,445, followed by the 2026 Outback Premium at $36,445. At the top of the range sits the leather-clad 2026 Ascent Onyx Edition Touring 7-Passenger, priced from $53,445. Our review of the Subaru Ascent found it to be a pretty well-rounded three-row SUV.
Then there are Subaru’s EVs, all of which are SUVs. The range starts with the 2026 Subaru Uncharted Premium FWD, priced from $36,445, continues with the Solterra Premium from $39,945, and tops out with the 2026 Subaru Trailseeker, which will arrive at dealers sometime in early 2026 at an MSRP of $39,995; Subaru has not yet revealed its destination charges.
On Wednesday, January 7, federal immigration enforcement and deportation officer Jonathan Ross shot and killed Renee Good at approximately 9:37 am local time. That same day, an official from the Minnesota Bureau of Criminal Apprehension (BCA) texted a Federal Bureau of Investigation counterpart, repeatedly requesting access to the crime scene evidence.
But according to records WIRED obtained through a public records request, the FBI did not respond for at least two days.
The texts appear to have been sent shortly before the FBI, according to the BCA, told the agency that the investigation into Good’s death would “be led solely by the FBI” and that the BCA “would no longer have access to the case materials, scene evidence or investigative interviews necessary to complete a thorough and independent investigation.”
The texts provide new insight into a breakdown in communication between the two agencies that eventually contributed to the BCA, Hennepin County Attorney, and the state of Minnesota filing a lawsuit against the Department of Homeland Security and the Department of Justice, which includes the FBI. The lawsuit, filed on March 24, demands that federal authorities give state and local law enforcement access to investigative material relevant to the shootings of Good; Alex Pretti, a nurse shot and killed by Border Patrol agents on January 24; and Julio Sosa-Celis, a Venezuelan Minneapolis resident shot and injured by a federal immigration agent on January 14.
“The longstanding practice of cooperation and evidence-sharing between federal and Minnesota law enforcement authorities broke down during DHS’s Operation Metro Surge,” the lawsuit claims, adding that this partnership “abruptly ended once federal leadership became involved.”
In response to WIRED’s request for all emails, text messages, and digital communications the agency exchanged with the FBI on January 7 and January 8 (the day the public records request was filed), the agency provided an image showing texts exchanged between a top BCA official and the FBI. (The agency added that “no emails were discovered.”)
The image obtained by WIRED, which was seemingly captured between January 9 and 13, shows text messages that appear to have been sent from an iOS device. The BCA says the texts were sent on January 7 by Drew Evans, the agency’s superintendent, to an individual whose name is redacted but who is identified on Evans’ device as an “FBI ASAC,” or assistant special agent in charge. The FBI’s Minneapolis branch currently has three people with that title, according to its website.
The only text the FBI agent sent was delivered at 11:17 am local time. The message was mostly redacted by the BCA, but it begins with “ERO”—an apparent reference to Enforcement and Removal Operations, the ICE branch that oversees arrests, detainments, and deportations.
At 12:56 pm, Evans sent three messages to the FBI agent in quick succession.
“Can you be sure with your folks to include us on interviews,” Evans began. “It sounds like they have tried to do some and keep us out of them. I know this is a little challenging, but it really helps us to just have one set of interviews/interactions so we have a common understanding of the facts and information.”
“We are going to cancel crime scene – sounds like a lot of federal agents showed up to confront the crow[d] and it’s getting very contentious now,” Evans wrote in the second text. “We are in a lot of these in that city and our [special agent in charge] is working with your folks to clear – really unfortunate we did not get this done.”
The beginning of Evans’ next message was redacted, but likely includes the name of the FBI agent. “Do you think once they get [things] a little under control today our management teams and team leaders should connect today yet?” Evans wrote in the third text. “We could do it at your office at a time that makes sense once they can breathe a bit?”
Protesters began gathering near the site of Good’s killing shortly after news of her death began circulating. The lawsuit eventually co-filed by BCA claims that on January 7, its investigators had “trusted that important evidence gathered by federal investigators”—including Good’s car, the ICE agent’s gun, and the shell casings at the scene—would be available to them.
Update (8:45 AM PT): Spotify has now officially begun rolling out the feature globally, confirming that you can disable all video content across music and podcasts. The new controls are being added to settings across mobile, desktop, web, and TV. The company will also allow Premium and Basic users across Individual, Duo, Family, and Student plans, along with free users, to control how video content appears in the app.
If you find Spotify’s music videos annoying, you will soon be able to turn them off. Spotify is adding new video controls that will let you turn off any and all video content inside the app. The update was shared by Rowland Manthorpe on X.
“Just got an email: Spotify is introducing controls which let users turn off video for music or podcasts, both for themselves and family plan members. I think the enshittification theory says this is impossible? Or is it actually a secret plot to make the service worse,” Manthorpe wrote.
How to turn off videos for music and podcasts on Spotify
The new controls are not available in my region yet. According to The Verge, the new controls to turn off videos in Spotify will appear under the “Content and display” section in your settings on mobile, or under the “Display” section if you are on desktop.
There will be three separate toggles to work with. The first is an existing toggle that disables Canvas clips, which are the short, looping, autoplay videos that play in the background while a track runs.
The second will be a brand new toggle that specifically turns off access to music videos. The third, also new, will disable all other video content on the platform, including podcast videos and vertical video. Together, these three controls will give you granular options to pick and choose exactly how much video you want in your Spotify experience.
How do Spotify’s new video controls work for Family Plan subscribers?
If you manage a Spotify Family Plan, you will be able to apply these video controls to each individual member on your subscription, similar to how managed account controls already work.
Once you disable video at the plan level for a specific member, that person will no longer have the option to switch to the video version of a song or podcast on their own.
At the time of writing, Spotify hasn’t made any official announcement about the new video controls. The availability may also vary depending on your region and account. If you haven’t seen them appear yet, try updating your app and checking your settings over the next few days.
Candace Owens spent years building a pro-MAGA audience by supporting President Donald Trump. Now, she’s calling for his removal from office.
Over the past few months, right-wing media figures like Owens have broken with Trump on a number of issues, including the Epstein files and the administration’s intervention in Venezuela. But the fracturing among the MAGA media coalition appears to have reached the point of no return after the president’s threats to annihilate “a whole civilization” in Iran this week.
“The 25th amendment needs to be invoked,” Owens wrote Tuesday on X. “He is a genocidal lunatic. Our Congress and military need to intervene. We are beyond madness.”
Owens is one of several right-wing media figures calling for Trump’s removal. Former congressperson Marjorie Taylor Greene also called for invoking the 25th Amendment, referring to Trump’s actions in Iran as “evil and madness.” Alex Jones urged Trump’s ouster on his InfoWars program on Tuesday, asking a guest “how do we 25th amendment his ass?” On an episode of Joe Rogan’s podcast last week, comedian Theo Von, who hosted Trump on his own show in 2024, called the US and Israel “fucking terrorists.” “It is vile on every level,” former Fox News pundit Tucker Carlson said during his show on Monday, referring to Trump’s recent Truth Social posts about Iran. The red-pill streamer Sneako wrote, “I miss Joe Biden” on X last week.
This pushback from major right-wing figures has fractured the MAGA media coalition even further; seemingly in response, a handful of pro-Trump stalwarts have called on the Justice Department to investigate American influencers for taking foreign money without disclosing it. The conservative activist Laura Loomer called posts from Owens “the most obvious foreign influence operation ever” before urging a DOJ investigation on Tuesday.
“The DOJ can investigate me all they want, Larry—they won’t find a thing,” Candace Owens posted in reply to Loomer on Wednesday.
Jack Posobiec, a prominent Pizzagate conspiracy theory promoter, echoed Loomer’s calls for an investigation. Benny Johnson, a former Turning Point USA contributor, wrote on X that he would “welcome” an investigation. (In 2024, the Justice Department alleged that Tenet Media, an online media company that produced shows for Johnson and other high-profile influencers, was largely funded by Russian state-backed news network RT. Johnson, whom the US government did not accuse of wrongdoing, issued a statement at the time denying awareness of the alleged Russian influence scheme and portraying himself as a victim.)
Throughout Trump’s second term in office, the administration has frequently worked with creators to push its messaging online. Last fall, the Pentagon revoked press credentials from mainstream outlets, replacing them with creators like Loomer and Cam Higby. While many of these creators have attended recent Pentagon press briefings, the White House seemingly hasn’t been in touch on messaging about the war in Iran.
“There is/was none,” one source familiar with the Republican influencer pipeline tells WIRED about the administration not reaching out to creators about Iran. “The online right wasn’t supportive, and there wasn’t anything that was going to change that. The best they could hope for is silence.”
Experts find credit card skimmer hidden in 1×1 SVG image
Fake “Secure Checkout” overlay stole card data
Likely exploited Magento PolyShell flaw, affecting many stores
Security researchers recently found a credit card skimmer hiding in a tiny image on almost a hundred compromised ecommerce websites.
Experts from Sansec reported finding 1×1-pixel Scalable Vector Graphics (SVG) elements with an ‘onload’ handler inside many e-commerce websites’ HTML.
“The onload handler contains the entire skimmer payload, base64-encoded inside an atob() call and executed via setTimeout,” the researchers said. They explained that with this technique, the attackers did not have to create external script references that usually get picked up by security scanners. “The entire malware lives inline, encoded as a single string attribute.”
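Based on the indicators Sansec describes (a 1×1 SVG element whose inline onload handler decodes a base64 payload via atob()), a site operator could scan rendered HTML for the pattern. The following is an illustrative Python sketch, not Sansec’s actual tooling; the function name and the specific attribute checks are assumptions:

```python
import re

# Match every opening <svg> tag in a page's HTML.
SVG_RE = re.compile(r"<svg\b[^>]*>", re.IGNORECASE)

def suspicious_svgs(html: str) -> list[str]:
    """Return SVG opening tags that look like inline skimmer carriers:
    1x1 dimensions, an onload handler, and an atob() call in the tag."""
    hits = []
    for tag in SVG_RE.findall(html):
        tiny = re.search(r'width="1"', tag) and re.search(r'height="1"', tag)
        onload = re.search(r"onload\s*=", tag, re.IGNORECASE)
        decodes = "atob(" in tag
        if tiny and onload and decodes:
            hits.append(tag)
    return hits
```

A real deployment would also want to inspect dynamically injected DOM rather than static HTML, since the researchers note the payload lives entirely inline as a single attribute string.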
Leveraging PolyShell
Shoppers who tried to buy something from these websites would, during checkout, be presented with a fake “Secure Checkout” overlay that included card detail fields and a billing form.
Everything they submitted this way would then be validated in real time using the Luhn algorithm, and then sent to an attacker-controlled server as XOR-encrypted, base64-obfuscated JSON.
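The Luhn check the skimmer reportedly runs client-side is the standard checksum used to weed out mistyped card numbers. A minimal sketch of the algorithm:

```python
def luhn_valid(number: str) -> bool:
    """Luhn checksum: from the right, double every second digit;
    if a doubled digit exceeds 9, subtract 9; the total must end in 0."""
    digits = [int(c) for c in number if c.isdigit()]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0
```

Passing Luhn only means the digits are plausibly a card number, not that the account exists, which is why a skimmer would use it purely as a filter to avoid exfiltrating junk input.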
The researchers found a total of six domains used for data exfiltration, all of which were hosted in the Netherlands. Each was getting data from up to 15 confirmed victims.
Discussing how the websites may have been compromised, Sansec said it was possible that the attackers leveraged PolyShell, a vulnerability affecting stable version 2 installations of Magento Open Source and Adobe Commerce that was discovered in mid-March this year. Sansec, which also discovered PolyShell, warned about ongoing attacks at the time.
“Mass exploitation of PolyShell started on March 19th, and Sansec has now found PolyShell attacks on 56.7% of all vulnerable stores,” Sansec said, without giving a raw number of targeted sites.
Adobe patched it, but the fix was only available in the second alpha release for version 2.4.9, meaning production versions remained vulnerable.
This remains the case today, and Sansec recommends users hunt for hidden SVG tags, as well as monitor and block traffic to the attackers’ servers.
“5G” is an umbrella term that encompasses the current fifth-generation cellular wireless network technologies. All the major carriers and phones support 5G connections, which can offer faster data speeds than older technologies such as 4G LTE or 3G.
Essentially, there are three types of 5G: millimeter-wave (mmWave), which can be fast but has limited range; low-band 5G, which has slower speeds but much broader coverage; and midband, a balance between the two that’s faster than low band and covers a larger area than mmWave. Midband also incorporates C-band, a batch of spectrum auctioned off by the Federal Communications Commission in 2021.
Your phone’s 5G connection depends on which type blankets the area you’re in, as well as other factors such as population density and infrastructure. For instance, mmWave is super fast, but its signals can be blocked by buildings, glass, leaves or simply by being inside a structure.
When your device is connected to a 5G network, it can show up as several variations such as 5G, 5G Plus, 5G UW or others, depending on the carrier. Here’s a list of icons you see at the top of your phone for the major services:
AT&T: 5GE (which isn’t actually 5G, but rather a sly marketing name for 4G LTE), 5G (low band), 5G Plus (mmWave, midband)
Verizon: 5G (low band, also called “Nationwide 5G”), 5G UW/5G UWB (midband and mmWave, also called “5G Ultra Wideband”)
T-Mobile: 5G (low band), 5G UC (midband and mmWave, also called “Ultra Capacity 5G”)
There’s also 5G Reduced Capacity (5G RedCap), which is a lower-power, smaller-capacity branch of 5G used by devices such as smartwatches and portable health devices; the Apple Watch Ultra 3, for example, connects via 5G RedCap.
Just around the corner is 5G Advanced, which promises much faster speeds through carrier aggregation, or combining multiple bands of spectrum.
Daniel Riley set out to answer a question that had been nagging him: what happens if you build a drone with propeller blades far larger than regular models? His design features rotors measuring 41 inches from tip to tip that spin at a steady 350 to 500 revolutions per minute, a far cry from the tiny, high-speed propellers seen on nearly every commercial drone on the market.
Riley paired these oversized blades with a sophisticated variable pitch mechanism that lets each rotor change the angle at which it bites into the air while the motor speed stays constant. This combination enables the drone to generate lift and remain airborne with significantly less energy than you might expect, especially considering the blades’ high inertia.
Riley designed a variable pitch system that lets servos change the angle of each blade while the motors continue to run at a constant speed. This is a creative solution to the problem of high rotational inertia, which would normally make it difficult to spin the big rotors up and down quickly enough to control the drone. He mounted high-torque servos at the base of each arm and ran pushrods from them to the blade roots. As a result, the drone gains precise control over lift and attitude without constantly adjusting motor speed.
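A variable-pitch quad steers by mixing stick inputs into per-rotor blade angles rather than motor speeds. The sketch below shows such a mixer for a hypothetical X-configuration quad; the sign conventions and limits are illustrative assumptions, not the actual control code from Riley’s build:

```python
def pitch_mix(throttle: float, roll: float, pitch: float, yaw: float) -> list[float]:
    """Map stick inputs (roll/pitch/yaw in -1..1, throttle in 0..1) to
    normalized blade-pitch commands for rotors
    [front-left, front-right, rear-left, rear-right]."""
    # Assumed convention: FL/RR spin one way and FR/RL the other,
    # so yaw is applied with alternating sign.
    mix = [
        throttle + roll + pitch + yaw,  # front-left
        throttle - roll + pitch - yaw,  # front-right
        throttle + roll - pitch - yaw,  # rear-left
        throttle - roll - pitch + yaw,  # rear-right
    ]
    # Clamp to the servo's usable pitch range (normalized here).
    return [max(-1.0, min(1.0, m)) for m in mix]
```

Because the rotors keep spinning at constant rpm, lift changes take effect as fast as the servos can move, sidestepping the inertia of the big blades entirely.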
The drone’s chassis was built from carbon fiber tubes joined to 3D-printed polycarbonate parts, which gave just enough strength to keep things from breaking without adding too much weight. The propellers were made from PETG plastic reinforced with carbon fiber rods. Four pancake-style 5010 360KV motors drive the blades via a belt reduction system, which lowers the speed and increases torque. Riley even removed the motor controllers’ heat sinks and sealed them in epoxy to save a few grams. Every little decision added up to keep overall power consumption low, allowing the huge rotors to support the airframe with minimal effort.
Ground tests produced some impressive results. Hovering in place, the drone generated a remarkable 18.1 grams of thrust for every watt of electricity used, roughly twice the efficiency of a well-optimized conventional quadcopter. When the power was cut, the spinning rotors let the drone settle slowly toward the ground; it only crashed at the end because it lacked a stabilization system to keep it level on the way down.
Engineers have long recognized that the key to efficient rotorcraft is keeping disk loading low: spreading the weight over a larger rotor area lets you stay aloft with less power, which is exactly what Riley did here. He applied a principle that most commercial drone manufacturers avoid because of the added mechanical complexity, and it paid off handsomely. His approach demonstrates that scaling up to larger blades and adding pitch control can deliver significant gains in flight time without simply strapping on heavier batteries. [Source]
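The efficiency gain follows directly from momentum theory: ideal hover power scales as thrust^1.5 divided by the square root of rotor disk area, so quadrupling the area doubles grams-per-watt. A sketch with illustrative numbers (the masses and radii below are assumptions, not Riley’s exact specs):

```python
import math

RHO = 1.225  # air density at sea level, kg/m^3

def hover_efficiency_g_per_w(mass_kg: float, rotor_radius_m: float, rotors: int = 4) -> float:
    """Ideal (momentum-theory) grams of thrust per watt for a
    multirotor hovering at the given all-up mass."""
    thrust_per_rotor = mass_kg * 9.81 / rotors            # newtons
    disk_area = math.pi * rotor_radius_m ** 2             # m^2
    # Ideal induced power per rotor: P = T**1.5 / sqrt(2 * rho * A)
    power = thrust_per_rotor ** 1.5 / math.sqrt(2 * RHO * disk_area)
    return (mass_kg * 1000) / (power * rotors)            # g/W overall

# Doubling rotor radius quadruples disk area and, all else equal,
# doubles the ideal hover efficiency.
small = hover_efficiency_g_per_w(2.0, 0.12)
large = hover_efficiency_g_per_w(2.0, 0.24)
```

Real figures come in below the ideal because of blade profile drag and motor losses, which is why clever mechanical design, not just bigger rotors, is needed to approach numbers like 18 g/W.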