Endor Labs launches free tool AURI after study finds only 10% of AI-generated code is secure

Endor Labs, the application security startup backed by more than $208 million in venture funding, today launched AURI, a platform that embeds real-time security intelligence directly into the AI coding tools that are reshaping how software gets built. The product is available free to individual developers and integrates natively with popular AI coding assistants including Cursor, Claude, and Augment through the Model Context Protocol (MCP).

The announcement arrives against a sobering backdrop. While 90% of development teams now use AI coding assistants, research published in December by Carnegie Mellon University, Columbia University, and Johns Hopkins University found that leading models produce functionally correct code only about 61% of the time — and just 10% of that output is both functional and secure.

“Even though AI can now produce functionally correct code 61% of the time, only 10% of that output is both functional and secure,” Endor Labs CEO Varun Badhwar told VentureBeat in an exclusive interview. “These coding agents were trained on open source code from across the internet, so they’ve learned best practices — but they’ve also learned to replicate a lot of the same security problems of the past.”

That gap between code that works and code that is safe defines the market AURI is designed to capture — and the urgency behind its launch.

The security crisis hiding inside the AI coding revolution

To understand why Endor Labs built AURI, it helps to understand the structural problem at the heart of AI-assisted software development. AI coding models are trained on vast repositories of open-source code scraped from across the internet — code that includes not only best practices but also well-documented vulnerabilities, insecure patterns, and flaws that may not be discovered for years after the code was originally written.

Badhwar, a repeat cybersecurity entrepreneur who previously built RedLock (acquired by Palo Alto Networks), founded Endor Labs four years ago with Dimitri Stiliadis. The original thesis was straightforward: developers were becoming “software assemblers,” writing less original code and importing most components from open source repositories. Then came the explosion of AI-powered coding tools, which Badhwar described as “the once in a generation opportunity of how to rewrite software development life cycle powered by AI.”

The productivity gains are real — more efficiency, faster time to market, and the democratization of software creation beyond trained engineers. But the security consequences are potentially devastating. New vulnerabilities are discovered every day in code that may have been written a decade ago, and that constantly evolving threat intelligence is not easily available to the AI models generating new code.

“Every day, every hour, new vulnerabilities are found in software that might have been written 5, 10, 12 years ago — and that information isn’t easily available to the models,” Badhwar explained. “If you started filtering out anything that ever had a vulnerability, you’d have no code left to train on.”

The result is a feedback loop: AI tools generate code at unprecedented speed, much of it modeled on insecure patterns, and security teams scramble to keep up. Traditional scanning tools, designed for a world where humans wrote and reviewed code at human speed, are increasingly overmatched.

How AURI traces vulnerabilities through every layer of an application

AURI’s core technical differentiator is what Endor Labs calls its “code context graph” — a deep, function-level map of how an application’s first-party code, open source dependencies, container layers, and AI models interconnect. Where competitors like Snyk and GitHub’s Dependabot examine what libraries an application imports and cross-reference them against known vulnerability databases, Endor Labs traces exactly how and where those components are actually used, down to the individual line of code.

“We have this code intelligence graph that understands not just what libraries and dependencies you use, but pinpoints exactly how, where, and in what context they’re used — down to the specific line of code where you’re calling a piece of functionality that has a vulnerability,” Badhwar said.

He illustrated the difference with a concrete example. A developer might import a large library like an AWS SDK but only call two services comprising 10 lines of code. The remaining 99,000 lines in that open source library are unreachable by the application. Traditional tools flag every known vulnerability across the entire library. AURI’s full-stack reachability analysis trims those irrelevant findings away.
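Badhwar’s AWS SDK example maps naturally onto a graph-reachability problem. The sketch below is purely illustrative — it is not Endor Labs code, and every function name and CVE in it is hypothetical — but it shows the core idea: walk a function-level call graph from the application’s entry points, then discard any finding that lives in code the application can never reach.

```python
from collections import deque


def reachable_functions(call_graph, entry_points):
    """Breadth-first walk over a function-level call graph: everything
    the application can actually reach from its entry points."""
    seen = set(entry_points)
    queue = deque(entry_points)
    while queue:
        fn = queue.popleft()
        for callee in call_graph.get(fn, []):
            if callee not in seen:
                seen.add(callee)
                queue.append(callee)
    return seen


def filter_findings(findings, call_graph, entry_points):
    """Keep only vulnerability findings whose affected function is reachable."""
    reachable = reachable_functions(call_graph, entry_points)
    return [f for f in findings if f["function"] in reachable]


# Toy scenario: the app calls only two SDK functions out of a huge library.
call_graph = {
    "app.main": ["sdk.s3.upload", "sdk.sqs.send"],
    "sdk.s3.upload": ["sdk.internal.sign"],
    # sdk.legacy.parse_xml exists in the library but is never called.
}
findings = [
    {"cve": "CVE-A", "function": "sdk.internal.sign"},    # reachable -> kept
    {"cve": "CVE-B", "function": "sdk.legacy.parse_xml"}, # unreachable -> dropped
]
print(filter_findings(findings, call_graph, ["app.main"]))
```

A real system must also handle dynamic dispatch, reflection, and dataflow rather than simple edges, which is where the hard program-analysis work lies — but the payoff is the same: findings in unreachable code never land on a developer’s desk.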

Building that capability required significant investment. Endor Labs hired 13 PhDs specializing in program analysis, many of whom previously built similar technology internally at companies like Meta, GitHub, and Microsoft. The company has indexed billions of functions across millions of open source packages and created over half a billion embeddings to identify the provenance of copied code, even when function names or structures have been changed.
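The provenance-matching idea — recognizing copied code even after functions are renamed or restructured — amounts to comparing embedding vectors rather than raw text. A toy sketch follows; the four-dimensional vectors and the comparison are invented for illustration (real code embeddings have hundreds of dimensions and come from a trained model):

```python
import math


def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors: 1.0 means
    identical direction, near 0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


# Hypothetical embeddings. A renamed copy of a function lands close to
# the original in embedding space; unrelated code does not.
original     = [0.90, 0.10, 0.40, 0.05]
renamed_copy = [0.88, 0.12, 0.41, 0.06]
unrelated    = [0.05, 0.85, 0.02, 0.50]

sim_copy = cosine_similarity(original, renamed_copy)
sim_other = cosine_similarity(original, unrelated)
print(sim_copy > sim_other)  # the copy scores far higher than unrelated code
```

Index half a billion of these vectors, and a nearest-neighbor lookup can flag that a “new” function in a codebase is really a vulnerable open source function with the serial numbers filed off.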

The platform combines this deterministic analysis with agentic AI reasoning. Specialized agents work together to detect, triage, and remediate vulnerabilities automatically, while multi-file call graphs and dataflow analysis detect complex business logic flaws that span multiple components. The result, according to Endor Labs, is an average 80% to 95% reduction in security findings for enterprise customers — trimming away what Badhwar called “tens of millions of dollars a year in developer productivity” lost to investigating false positives.

A free tier for developers, a paid platform for the enterprise

In a strategic move aimed at rapid adoption, Endor Labs is offering AURI’s core functionality free to individual developers through an MCP server that integrates directly with popular IDEs including VS Code, Cursor, and Windsurf. The free tier requires no credit card, no sign-up process, and no complex registration.

“The idea is that there’s no policy, no administration, no customization. It just helps your code generation tools stop creating more vulnerabilities,” Badhwar said.

Privacy-conscious developers will note a key architectural choice: the free product runs entirely on the developer’s machine. Only non-proprietary vulnerability intelligence is pulled from Endor Labs’ servers. “All of your code stays local and is scanned locally. It never gets copied into AURI or Endor Labs or anything else,” Badhwar explained.

The enterprise version adds the features large organizations need: full customization, policy configuration, role-based access control for teams of thousands of developers, and integration across CI/CD pipelines. Enterprise pricing is based on the number of developers and the volume of scans. Deployment options include local scanning, ephemeral cloud containers, and on-premises Kubernetes clusters with full tenant isolation — flexibility Badhwar said is “the most any vendor offers in this space.”

The freemium approach mirrors the playbook that worked for developer tools companies like GitHub and Atlassian: win individual developers first, then expand into their organizations. But it also reflects a practical reality. In a world where AI coding agents are proliferating across every team, Endor Labs needs to be wherever code is being written — not waiting behind a procurement process.

“Over 97% of vulnerabilities flagged by our previous tool weren’t reachable in our application,” said Travis McPeak, Security at Cursor, in a statement sent to VentureBeat. “AURI by Endor Labs shows the few vulnerabilities that are impactful, so we patch quickly, focusing on what matters.”

Why Endor Labs says independence from AI coding tools is essential

The application security market is increasingly crowded. Snyk, GitHub Advanced Security, and a growing number of startups all compete for developer attention. Even the AI model providers themselves are entering the fray: Anthropic recently announced a code security product built into Claude, a move that sent ripples through the market.

Badhwar, however, framed Anthropic’s announcement as validation rather than threat. “That’s one of the biggest validations of what we do, because it says code security is one of the hottest problems in the market,” he told VentureBeat. The deeper question, he argued, is whether enterprises want to trust the same tool generating code to also review it.

“Claude is not going to be the only tool you use for agentic coding. Are you going to use a separate security product for Cursor, a separate one for Claude, a separate one for Augment, and another for Gemini Code Assist?” Badhwar said. “Do you want to trust the same tool that’s creating the software to also review it? There’s a reason we’ve always had reviewers who are different from the developers.”

He outlined three principles he believes will define effective security in the agentic era: independence (security review must be separate from the tool that generated the code), reproducibility (findings must be consistent, not probabilistic), and verifiability (every finding must be backed by evidence). It is a direct challenge to purely LLM-based approaches, which Badhwar characterized as “completely non-deterministic tools that you have no control over in terms of having verifiability of findings, consistency.”

AURI’s approach combines LLMs for what they do best — reasoning, explanation, and contextualization — with deterministic tools that provide the consistency enterprises require. Beyond detection, the platform simulates upgrade paths and tells developers which remediation route will work without introducing breaking changes, a step beyond what most competitors offer. Developers can then execute those fixes themselves or route them to AI coding agents with confidence that the changes have been deterministically validated.

Real-world results show AURI can already find zero-day vulnerabilities

Endor Labs has already demonstrated AURI’s capabilities in high-profile scenarios. In February 2026, the company announced that AURI had identified and validated seven security vulnerabilities in OpenClaw, the popular agentic AI assistant, which were later acknowledged by the OpenClaw development team. As reported by Infosecurity Magazine, OpenClaw subsequently patched six of the vulnerabilities, which ranged from high-severity server-side request forgery bugs to path traversal and authentication bypass flaws.

“These are zero days. They’ve never been found, but AURI did an incredible job of finding those,” Badhwar said. The company has also been detecting active malware campaigns in ecosystems like NPM, including tracking campaigns like Shai-Hulud for several months.

The company is well-capitalized to sustain its push. Endor Labs closed an oversubscribed $93 million Series B round in April 2025 led by DFJ Growth, with participation from Salesforce Ventures, Lightspeed Venture Partners, Coatue, Dell Technologies Capital, Section 32, and Citi Ventures. The company reported 30x annual recurring revenue growth and 166% net revenue retention since its Series A just 18 months earlier. Its platform now protects more than 5 million applications and runs over 1 million scans each week for customers including OpenAI, Cursor, Dropbox, Atlassian, Snowflake, and Robinhood.

Several dozen enterprise customers already use Endor Labs to accelerate compliance with frameworks including FedRAMP, NIST standards, and the European Cyber Resilience Act — a growing priority as regulators increasingly treat software supply chain security as a matter of national security.

The bet that security can keep pace with autonomous software agents

The broader question hanging over AURI’s launch — and over the application security industry as a whole — is whether security tooling can evolve fast enough to match the pace of AI-driven development. Critics of agentic security warn that the industry is moving too quickly, granting AI agents permissions across critical systems without fully understanding the risks. Badhwar acknowledged the concern but argued that resistance is futile.

“I’ve seen this play out when I was building cloud security products, and people were fearful of moving to AWS,” he said. “There was a perception of control when it was in your data center. Yet, guess what? That was the biggest movement of its time, and we as an industry built the right technology and security tooling and visibility around it to make ourselves comfortable.”

For Badhwar, the most exciting implication of agentic development is not the new risks it creates but the old problems it can finally solve. Security teams have spent decades struggling to get developers to prioritize fixing vulnerabilities over building features. AI agents, he argued, do not have that problem — if you give them the right instructions and the right intelligence, they simply execute.

“Security has always struggled for lack of a developer’s attention,” Badhwar said. “But we think you can get an AI agent that’s writing software’s attention by giving them the right context, integrating into the right workflows, and just having them do the right thing for you, so you don’t take an automation opportunity and make it a human’s problem.”

It is a characteristically optimistic framing from a founder who has built his career at the intersection of tectonic technology shifts and the security gaps they leave behind. Whether AURI can deliver on that vision at the scale the AI coding revolution demands remains to be seen. But in a world where machines are writing code faster than humans can review it, the alternative — hoping the models get security right on their own — is a bet few enterprises can afford to make.

What It’s Like to Have a Brain Implant for 5 Years

Published

on

Initially, Gorham used his brain-computer interface for single clicks, Oxley says. Then he moved on to multi-clicks and eventually sliding control, which is akin to turning up a volume knob. Now he can move a computer cursor, an example of 2D control—horizontal and vertical movements within a two-dimensional plane.

Over the years, Gorham has gotten to try out different devices using his implant. Zafar Faraz, a field clinical engineer for Synchron, says Gorham directly contributed to the development of Switch Control, a new accessibility feature Apple announced last year that allows brain-computer interface users the ability to control iPhones, iPads, and the Vision Pro with their thoughts.

In a video demonstration shown at an Nvidia conference last year in San Jose, California, Gorham demonstrates using his implant to play music from a smart speaker, turn on a fan, adjust his lights, activate an automatic pet feeder, and run a robotic vacuum in his home in Melbourne, Australia.

“Rodney has been pushing the boundaries of what is possible,” Faraz says.

As a field clinical engineer, Faraz visits Gorham in his home twice a week to lead sessions on his brain-computer interface. It’s Faraz’s job to monitor the performance of the device, troubleshoot problems, and also learn the range of things that Gorham can and can’t do with it. Synchron relies on this data to improve the reliability and user-friendliness of its system.

In the years he’s been working with Gorham, the two have done a lot of experimenting to see what’s possible with the implant. Once, Faraz says, he had Gorham using two iPads side by side, switching between playing a game on one and listening to music on the other. Another time, Gorham played a computer game in which he had to grab blocks on a shelf. The game was tied to an actual robotic arm at the University of Melbourne, about six miles from Gorham’s home, that remotely moved real blocks in a lab.

Gorham, who was an IBM software salesman before he was diagnosed with ALS in 2016, has relished being such a key part of the development of the technology, his wife Caroline says.

“It fits Rodney’s set of life skills,” she says. “He spent 30 years in IT, talking to customers, finding out what they needed from their software, and then going back to the techos to actually develop what the customer needed. Now it’s sort of flipped around the other way.” After a session with Faraz, Gorham will often be smiling ear to ear.

Through field visits, the Synchron team realized it needed to change the setup of its system. Currently, a wire cable with a paddle on one end needs to sit on top of the user’s chest. The paddle collects the brain signals that are beamed through the chest and transmits them via the wire to an external unit that translates those signals into commands. In its second generation system, Synchron is removing that wire.

“If you have a wearable component where there’s a delicate communication layer, we learned that that’s a problem,” Oxley says. “With a paralyzed population, you have to depend on someone to come and modify the wearable components and make sure the link is working. That was a huge learning piece for us.”

You can preorder the Apple iPad Air (M4) from $249 at Best Buy when you trade in your old tablet

Apple recently announced a new iPad Air and there’s already a great way to save on preorders. From 9:15am ET on March 4, you will be able to preorder the Apple iPad Air (M4) at Best Buy from as little as $249 when trading in.

To grab the best discount, you need to trade in a fairly new device. For instance, if you trade in an iPad Air (M3) 11-inch model or an iPad Pro 12.9 (4th Gen), you get $350 in trade-in credit towards the iPad Air (M4). If you trade in an iPad mini (6th Gen), you get $200 off.

It’s worth playing around with the trade-in system to see how you can save while getting rid of unwanted older tech. Bear in mind that how cheap that makes the Apple iPad Air (M4) depends on which model you’re aiming for.

The Apple iPad Air (M4) retains the same design as previous models, but it now uses Apple’s much more powerful M4 chip. There’s also Wi-Fi 7 support, plus LTE or 5G on cellular models. Think of it as a small but important upgrade if you want the fastest iPad Air around. It’s sleek enough to carry around easily, too.

In our review, we gave the iPad Air (M3) a highly respectable 4.5 stars out of five. We appreciated the power that the M3 chip offers, its “vibrant screen,” and “strong battery life and audio.”

That seems almost certain to carry over to this M4 model, so we’re counting on it riding high in our look at the best iPads soon.

For Apple enthusiasts, it’ll be one of the best tablets to upgrade to, even if it’s a relatively subtle improvement over the previous model. If you need something more powerful than your phone but more portable than your laptop, this is a great way to bridge the gap.

If you can’t wait for the latest model or you want to buy something a little cheaper, take a look at the other iPad deals currently going on. There are some good tablet deals around for every budget and need.

Viral S$45M spa hit by hygiene complaints & staff mistreatment allegations

Amid the backlash, the business—touted as S’pore’s largest 24-hour spa—closes its pools

When House+ Bubble announced its arrival in Singapore, it quickly became one of the most talked-about spa openings here.

Touted as Singapore’s largest 24-hour spa—it will span nearly 100,000 sqft once completed—the new S$45 million wellness destination in Jurong East promised an all-in-one experience: soaking pools, therapy rooms, a cinema, an e-sports room, and round-the-clock access.

But that hype appears to have been short-lived.

Just a week into its soft opening, during which guests could access the spa, massage services, pools, and dining areas for S$49, House+ Bubble closed its bathing pools in both the male and female sections indefinitely, citing “internal facility adjustments.”

A statement from House+ Bubble./ Image Credit: House+ Bubble

This move comes amid mounting complaints online. Google Reviews and visitor feedback have flagged hygiene concerns, inconsistent pool temperatures, and other operational issues, raising questions about whether the spa can live up to its lofty promises.

A slew of negative reviews

When Vulcan Post combed through House+ Bubble’s Google Reviews, bathrooms and toilets were described as “dirty,” and shared amenities raised concerns: combs reportedly had visible dandruff, while communal skincare bottles contained stray hairs.

A Google Review with accompanying photos shows wet floors and towels left on the ground. The user also claimed that toilet bowls were clogged, urinals were broken with “water running non-stop,” and there was no toilet paper or paper towels./ Image Credit: Google Maps

Others pointed out that, despite being marketed as a 24-hour spa, not all facilities actually operate around the clock: the on-site restaurant closes at 12:30 AM, while massage services end at 10:30 PM.

In response to one reviewer, the management of House+ Bubble said that it “is taking action to address these issues” and would elevate its cleaning standards.

Some visitors also highlighted hidden costs and misleading advertising. Despite claims of “unlimited massages,” the S$49 soft-opening fee only covered the massage chairs. A proper massage would reportedly cost between S$150 and S$250 per hour.

Alleged staffing issues

Allegedly, staffing issues may be compounding the spa’s operational problems.

A Reddit post claims that several employees left after short stints due to “poor management” and “poor staff treatment.”

Staff reportedly received only a 30-minute unpaid meal break for a nine-hour shift, despite being told they would get an hour. The post adds that the spa is now facing manpower shortages as a result.

Vulcan Post has reached out to House+ Bubble for comment on these claims but has yet to receive a response.

A S$45 million spa ambition

House+ Bubble is a S$45 million project.

Some of the facilities shown on the House+ Bubble website include private pools and even an esports room./ Image Credit: House+ Bubble

Its first opening phase, spanning approximately 49,000 sqft, was slated for an official launch on Mar 14, though it remains unclear if this will proceed as planned.

The second phase will look to add about 50,000 sqft, and is targeted for completion at the end of the year, subject to regulatory approvals.

Currently, visitors can still access House+ Bubble, but following the closure of the bathing pools, the trial operating fee has been reduced from S$49 to S$39 for three hours, excluding pool access.

  • Read other articles we’ve written on Singaporean businesses here

Featured Image Credit: House+ Bubble/ Screengrab from Google Reviews

Neither Android Nor IOS: DIY Smartphone Runs On ESP32!

You may or may not be reading this on a smartphone, but odds are that even if you aren’t, you own one. Well, possess one, anyway — it’s debatable whether the locked-down, one-way relationships we have with our addiction slabs count as ownership. [LuckyBor], aka [Breezy], on the other hand, fully owns his 4G smartphone, because he made it himself.

OK, sure, it’s only rocking a 4G modem, not 5G. But with an ESP32-S3 for a brain, that’s probably plenty of bandwidth. It does what you expect from a phone: thanks to its SIMCom A7682E modem, it can call and text. The OV2640 Arducam module lets it take pictures, and yes, it surfs the web. It even has features certain flagship phones lack, like a 3.5 mm audio jack and, with its 3.5″ touchscreen, the ability to fit in your pocket. Well, once it gets a case, anyway.

It talks, it texts, it… does not julienne fry, but that’s arguably a good thing.

This is just an alpha version, a brick of layered modules. [LuckyBor] plans on fitting everything into a slimmer form factor with a four-layer PCB that will also include an SD-card adapter, and will open-source the design at that time, both hardware and software. Since [LuckyBor] has also promised the world documentation, we don’t mind waiting a few months.

It’s always good to see another open-source option, and this one has us especially chuffed. Sure, we’ve written about postmarketOS and other Linux options like Nix, and someone even put the Rust-based Redox OS on a phone, but those still run on the same potentially backdoored commercial hardware. That’s why this project is so great, even if its performance is decidedly weak compared to flagship phones that have as much horsepower as some of our laptops.

We very much hope [LuckyBor] carries through with the aforementioned promise to open source the design.

4 Bluetooth Gadgets You Can Connect To Your Fire TV Stick

We may receive a commission on purchases made from links.

With streaming being such an integral part of modern entertainment, it’s no wonder we’re all looking for ways to optimize our experience. Beyond owning smart TVs, this also means investing in additional devices, such as the Amazon Fire TV Stick, to enhance the viewing experience. In fact, it includes numerous useful remote shortcuts that Amazon doesn’t advertise, letting you do everything from switching display resolutions to enabling accessibility features.

Unlike our smart TVs, which usually stay firmly in our homes, you can also easily travel with your Fire TV Stick and enjoy your streaming content as long as you have access to a compatible TV and Wi-Fi. Not to mention, you can play games on television screens without lugging around huge gaming laptops or bringing extra handheld consoles.

Owning an Amazon Fire TV Stick also opens up many connectivity options, especially with Bluetooth-enabled devices. But take note: while there are a ton of devices you can connect to your Fire TV Stick, compatibility varies by model, so you first need to find your model number by referencing the receipt, the box it came in, or the device itself. If you’ve forgotten which model of Fire TV Stick you own, you can launch it, open the Settings menu, and select “My Fire TV.”

To pair any compatible Bluetooth device, launch your Amazon Fire TV Stick and navigate to Settings. Afterward, select Controllers & Bluetooth Devices, choose the device category you want to pair, and follow the pairing instructions on the screen. 

Here are the gadgets you can connect to the Fire TV Stick via Bluetooth.

1. Speakers

Many modern smart television sets will probably already let you hook up your speakers directly via Bluetooth. However, there are reasons why you might still want to do it through the Fire TV Stick. For example, you can easily adjust the volume with the Fire TV Stick remote, so you have fewer things to fiddle with. If you tend to use your television only with your Fire TV Stick, this can also streamline audio processing and reduce the risk of audio issues when streaming your favorite shows or movies. These days, there’s no shortage of Bluetooth speakers worth buying that can work with your Fire TV Stick, such as the Anker Soundcore 2, Marshall Stanmore III, and Sonos Move 2. With this, you can get better sound than your TV speakers, and you can also move your speakers to your preferred location.

For those who are already invested in the Amazon smart home ecosystem, you can hook up the Fire TV Stick to your Alexa-powered Echo speakers. With this alone, it introduces a ton of additional possibilities for your integrated smart home experience. Apart from voice control options, it can be used as a component in creating automated scenes that work with other Alexa-compatible devices, such as light bulbs, scent machines, and smart switches. For example, some Alexa automations compatible with your Fire TV Stick can optimize your bedtime routine or turn everything off after movie nights.

2. Headphones and earbuds

While some people are lucky enough to live in places where they can turn up the loudspeakers freely while enjoying their favorite content, others need to be more mindful of their viewing habits. Thankfully, just because you’re watching on a TV doesn’t mean the whole neighborhood has to watch with you. Whether you want some privacy or just to avoid an angry neighbor knocking on your door, you can pair your Bluetooth headphones with your Amazon Fire TV devices. There is no shortage of multipoint Bluetooth headphones and earbuds that work with the Fire TV Stick, and Apple users will be relieved to know that AirPods, AirPods Pro, and AirPods Max all work with it.

But take note: the same issues other devices have with Bluetooth headphones and earbuds apply here, such as audio latency, which you’ll need to resolve using AV Sync Tuning. Apart from commercial headphones and earbuds, some Amazon Fire TV devices also work with hearing aids, including several of its TV offerings and the Fire TV Cube (2nd- and 3rd-generation models). Among compatible hearing aids, Amazon lists Starkey, Widex, and Cochlear devices, though you may need to check compatibility with your specific model. In 2025, Amazon released a few new features that make its Fire TV devices more accessible, such as the Dual Audio option, which allows hearing aid users and others to listen to audio at adjusted loudness levels simultaneously.

3. Bluetooth game controllers

Even though many smart TVs can perform the same functions, the Amazon Fire TV Stick still does a lot of things better, such as navigation, software experiences, and cloud gaming. In 2020, Amazon launched Luna Cloud Gaming, which lets people run its library of games on Amazon’s remote servers. Depending on your preferences, you can choose a subscription model that suits the kinds of games you play most often.

According to Amazon, certified Luna-compatible controllers include the official Luna Controller, PlayStation 4 DualShock 4 Wireless Controller, Xbox One Controller, and the Google Stadia Controller. Additionally, owners of the PS5’s DualSense Controllers have been able to use them effectively. Although some people may claim that their 3rd-party controllers from other manufacturers work with their Fire TV Stick, it’s important to note that you will not have the same protection, assurance, or expected longevity as with official ones.

Regardless of which model you choose, you’ll still want to make sure you have the right network and device settings to enjoy your Bluetooth controllers. Apart from having a fast enough connection, you’ll also want to turn on Game Mode when possible. Note that compatibility isn’t guaranteed across the board and still depends on the specific game you are playing. In a pinch, you can opt to use the Luna Controller app on your mobile phone instead.

4. Bluetooth mice and keyboards

Devices like the Fire TV Stick solve many problems, but they also introduce new ones. One of the most annoying, yet somewhat universal, experiences for anyone who has used a streaming device is finding it difficult to navigate with the remote. In fact, while the Amazon Fire TV Stick lets you browse the internet with your TV using Amazon Silk, it can be a nightmare to type all the website names and click all the right buttons.

If you want a sleek-looking wireless keyboard, something like the Logitech K380 Multi-Device Bluetooth Keyboard lets you pair with up to three devices, so you don’t have to unpair it from your computer to use it with your Fire TV Stick. But if you’re looking for something more ergonomic, there are even Bluetooth mice with side-scrolling, like the Logitech MX Master 3S, Keychron M6 Wireless Mouse, and Razer Basilisk V3 Pro.

If you don’t own a Bluetooth keyboard or mouse, all hope is not lost. As we’ve mentioned before, you can use a micro-USB OTG splitter to plug a wired keyboard or mouse into your Fire TV Stick. So, if you prefer using a wired peripheral or have already maxed out the number of devices you can connect to your Fire TV Stick, this is a workable alternative.


Tech

Leaked government-grade iPhone hacking tools now used to steal crypto and data from users

According to new technical analyses from Google and mobile security firm iVerify, Coruna’s technical core comprises five complete exploit chains and 23 distinct iOS vulnerabilities that bypass most of the major software defenses Apple has shipped in versions 13 through 17.2.1, effectively turning a web page into a silent infection…

Tech

These Official ChromeOS Flex USB Sticks Can Give Your Old Mac or Windows PC a Second Life

“People want something that lasts them a long time, that is quality, that is useful,” says Google senior director Alexander Kuscher. “Eventually, when it breaks or when you lose it, you get a new one because you feel taken care of. So I think that builds trust, and the trust is important.”

Flex started as an enterprise service for businesses; Google offered companies worried about security vulnerabilities on aging hardware a way to easily update to a more secure operating system. Or, at least, one that still received updates. After a while, other users started to get ahold of the software, downloading and installing it on their own USB sticks for their personal machines. “We didn’t make it particularly easy at the time,” Kuscher says. “But people did it.”

What led to the more consumer-oriented push of ChromeOS Flex—like this partnership with Back Market—was the end of software support for Microsoft’s Windows 10 operating system last fall. While the OS still technically works, it stopped receiving security updates, and Microsoft has encouraged users to update to Windows 11. But Windows 11 has specific hardware requirements, and it may not be a simple upgrade on certain machines. Google saw this as a moment to provide a cheaper alternative to the “Windows 10 cliff,” as Kuscher puts it. Back Market agreed.

“Ultimately, [Microsoft is] saying that people need to throw away their existing laptop to buy another one,” Hug de Larauze says. “And we say politely, no.”

If you’re tech-savvy, you can forgo Back Market’s $3 stick and download ChromeOS Flex onto a USB drive you have lying around right now.

Buying Refurb

Back Market has done very well for itself despite economic turmoil: as devices become more expensive, people turn to cheaper, refurbished options. Hug de Larauze compares the device market to the auto industry.

“Ninety percent of cars are being sold pre-owned,” Hug de Larauze says. “The new normal is to purchase them pre-owned because it’s almost dumb to buy a new one.”

When US president Donald Trump announced sweeping tariffs last year, Hug de Larauze says, Back Market's sales tripled. Even after the dust settled a little and it became clear that tariffs would not directly affect smartphones or computers, sales stayed around twice what they'd been before. Back Market made $3.8 billion in 2025, turning profitable for the first time. While Hug de Larauze says these kinds of economic fluctuations may send more people to Back Market, he hopes they will also shift buyer mindsets toward refurbished tech writ large.

“We have one planet, and resources are limited,” Hug de Larauze says. “We need to do more with what we already have in every sector. Fashion is the same, transportation is the same, energy is the same, it’s the same for everything.”

Tech

Apple’s new Studio Display XDR monitor has limited functionality on older Silicon Macs

If you’re looking to pre-order Apple’s new Studio Display XDR monitor today but have an older Mac, beware of some potential issues. According to the compatibility list spotted by Apple Insider, the new display will only run at 60Hz rather than its full 120Hz refresh rate on some older and less powerful Apple Silicon models. Moreover, support for older Intel Macs isn’t mentioned at all for either the Studio Display XDR or the cheaper Studio Display.

All Apple Silicon Macs will work with both monitors, including those with the oldest M1 chips, according to the support pages. However, the compatibility list for the Studio Display XDR includes this nugget: “Mac models with M1, M1 Pro, M1 Max, M1 Ultra, M2, and M3 support Studio Display XDR at up to 60Hz. All other Studio Display XDR features are supported.” So even if you have a hotrod M1 Ultra-based Mac, the Studio Display XDR’s refresh rate is capped at 60Hz — despite the fact that the chip can drive third-party monitors at 120Hz.

Similarly, only the iPad Pro M5 supports the Studio Display XDR at 120Hz, with all other compatible models (in the iPad Pro and iPad Air family) limited to 60Hz.

Intel Mac support isn’t mentioned at all in the compatibility list for either display, though they may function in some limited manner when connected. Intel Macs just received their last new OS update with macOS Tahoe (and only three more years of security updates), but it’s still surprising that they’re not compatible with Apple’s latest monitors.

Tech

Military Drone Insights for Safer Self-Driving Cars

Self-driving cars often struggle with situations that are commonplace for human drivers. When confronted with construction zones, school buses, power outages, or misbehaving pedestrians, these vehicles often behave unpredictably, crashing or freezing in place, significantly disrupting local traffic and possibly blocking first responders from doing their jobs. Because self-driving cars cannot reliably handle such routine problems, self-driving companies use human babysitters to remotely supervise them and intervene when necessary.

This idea—humans supervising autonomous vehicles from a distance—is not new. The U.S. military has been doing it since the 1980s with unmanned aerial vehicles (UAVs). In those early years, the military experienced numerous accidents due to poorly designed control stations, lack of training, and communication delays.

As a Navy fighter pilot in the 1990s, I was one of the first researchers to examine how to improve the UAV remote supervision interfaces. The thousands of hours I and others have spent working on and observing these systems generated a deep body of knowledge about how to safely manage remote operations. With recent revelations that U.S. commercial self-driving car remote operations are handled by operators in the Philippines, it is clear that self-driving companies have not learned the hard-earned military lessons that would promote safer use of self-driving cars today.

While stationed in the Western Pacific during the Gulf War, I spent a significant amount of time in air operations centers, learning how military strikes were planned, implemented, and then replanned when the original plan inevitably fell apart. After obtaining my PhD, I leveraged this experience to begin research on the remote control of UAVs for all three branches of the U.S. military. Sitting shoulder-to-shoulder in tiny trailers with operators flying UAVs in local exercises or from 4,000 miles away, my job was to learn about the pain points for the remote operators and identify possible improvements as they executed supervisory control over UAVs that might be flying halfway around the world.

Supervisory control refers to situations where humans monitor and support autonomous systems, stepping in when needed. For self-driving cars, this oversight can take several forms. The first is teleoperation, where a human remotely controls the car’s speed and steering from afar. Operators sit at a console with a steering wheel and pedals, similar to a racing simulator. Because this method relies on real-time control, it is extremely sensitive to communication delays.

The second form of supervisory control is remote assistance. Instead of driving the car in real time, a human gives higher-level guidance. For example, an operator might click a path on a map (called laying “breadcrumbs”) to show the car where to go, or interpret information the AI cannot understand, such as hand signals from a construction worker. This method tolerates more delay than teleoperation but is still time-sensitive.
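As an illustration of why even remote assistance remains time-sensitive, the "breadcrumbs" idea can be sketched as a guidance message with a built-in expiry. The field names and the five-second time-to-live below are hypothetical, not any vendor's actual protocol: the point is that the vehicle's onboard planner stays responsible for real-time control and should discard stale guidance rather than execute it.

```python
# Hedged sketch of a breadcrumb-style remote-assistance message.
# All names and values here are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List
import time

@dataclass
class Breadcrumb:
    lat: float
    lon: float

@dataclass
class AssistanceCommand:
    vehicle_id: str
    breadcrumbs: List[Breadcrumb]
    issued_at: float = field(default_factory=time.time)
    ttl_s: float = 5.0  # guidance older than this must not be executed

    def is_stale(self, now=None):
        # Because latency can make operator guidance outdated (as in the
        # yellow-light incident described later), the vehicle checks the
        # command's age before acting on it.
        now = time.time() if now is None else now
        return now - self.issued_at > self.ttl_s

cmd = AssistanceCommand("car-42", [Breadcrumb(37.770, -122.410),
                                   Breadcrumb(37.771, -122.412)])
print(cmd.is_stale())                         # just issued: still valid
print(cmd.is_stale(now=cmd.issued_at + 10))   # 10 s later: discard it
```

The design choice worth noting is the expiry check on the vehicle side: the sender cannot know how long the network will hold a message, so the receiver must enforce freshness.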

Five Lessons From Military Drone Operations

Over 35 years of UAV operations, the military consistently encountered five major challenges, each of which provides valuable lessons for self-driving cars.

Latency

Latency—delays in sending and receiving information due to distance or poor network quality—is the single most important challenge for remote vehicle control. Humans also have their own built-in delay: neuromuscular lag. Even under perfect conditions, people cannot reliably respond to new information in less than 200–500 milliseconds. In remote operations, where communication lag already exists, this makes real-time control even more difficult.
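The interplay between network delay and neuromuscular lag can be made concrete with a rough control-loop budget. The network figures below are illustrative assumptions (the satellite numbers echo the roughly two-second round trip described in the next paragraph), while the 350 ms human reaction time sits inside the 200–500 ms range cited above.

```python
# Back-of-the-envelope control-loop delay for remote teleoperation.
# Network and processing values are illustrative assumptions.

def control_loop_delay_ms(uplink_ms, downlink_ms, human_reaction_ms,
                          processing_ms=50):
    """Video travels down, the human reacts, the command travels up,
    plus some fixed encode/decode processing overhead."""
    return downlink_ms + human_reaction_ms + uplink_ms + processing_ms

# Satellite-relayed drone link, as in the early UAV programs
satellite = control_loop_delay_ms(uplink_ms=1000, downlink_ms=1000,
                                  human_reaction_ms=350)
# Urban cellular link to a nearby operations center
cellular = control_loop_delay_ms(uplink_ms=80, downlink_ms=120,
                                 human_reaction_ms=350)

for name, delay in [("satellite", satellite), ("cellular", cellular)]:
    # A car at 50 km/h covers ~13.9 m for every second of loop delay
    meters = 50 / 3.6 * delay / 1000
    print(f"{name}: {delay} ms loop delay, "
          f"~{meters:.1f} m traveled blind at 50 km/h")
```

Even the optimistic cellular case leaves the vehicle traveling several meters between an event appearing on camera and a corrective command arriving, which is why teleoperation degrades so sharply with distance.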

In early drone operations, U.S. Air Force pilots in Las Vegas (the primary U.S. UAV operations center) attempted to take off and land drones in the Middle East using teleoperation. With at least a two-second delay between command and response, the accident rate was 16 times that of fighter jets conducting the same missions. The military switched to local line-of-sight operators and eventually to fully automated takeoffs and landings. When I interviewed the pilots of these UAVs, they all stressed how difficult it was to control the aircraft with significant time lag.

Self-driving car companies typically rely on cellphone networks to deliver commands. These networks are unreliable in cities and prone to delays. This is one reason many companies prefer remote assistance instead of full teleoperation. But even remote assistance can go wrong. In one incident, a Waymo operator instructed a car to turn left when a traffic light appeared yellow in the remote video feed—but the network latency meant that the light had already turned red in the real world. After moving its remote operations center from the U.S. to the Philippines, Waymo’s latency increased even further. It is imperative that control not be so remote, both to reduce latency and to strengthen oversight against security vulnerabilities.

Workstation Design

Poor interface design has caused many drone accidents. The military learned the hard way that confusing controls, difficult-to-read displays, and unclear autonomy modes can have disastrous consequences. Depending on the specific UAV platform, the FAA attributed between 20% and 100% of the Army and Air Force UAV crashes caused by human error through 2004 to poor interface design.

UAV crashes (1986-2004) caused by human factors problems, including poor interface and procedure design. These two categories do not sum to 100% because both factors could be present in an accident.

Platform                 Human Factors   Interface Design   Procedure Design
Army Hunter                   47%              20%                20%
Army Shadow                   21%              80%                40%
Air Force Predator            67%              38%                75%
Air Force Global Hawk         33%             100%                 0%

Many UAV crashes have been caused by poorly designed control systems. In one case, buttons were placed on the controller such that it was relatively easy to hit the engine-kill switch when intending to fire a missile—and remote operators did exactly that, inadvertently shutting the engine down mid-flight instead of launching a missile.

The self-driving industry reveals hints of comparable issues. Some autonomous shuttles use off-the-shelf gaming controllers, which—while inexpensive—were never designed for vehicle control. The off-label use of such controllers can lead to mode confusion, which was a factor in a recent shuttle crash. Significant human-in-the-loop testing is needed to avoid such problems, not only prior to system deployment, but also after major software upgrades.

Operator Workload

Drone missions typically include long periods of surveillance and information gathering, occasionally ending with a missile strike. These missions can sometimes last for days, for example while the military waits for a person of interest to emerge from a building. As a result, remote operators experience extreme swings in workload: sometimes overwhelming intensity, sometimes crushing boredom. Both conditions can lead to errors.

When operators teleoperate drones, workload is high and fatigue can quickly set in. But when onboard autonomy handles most of the work, operators can become bored, complacent, and less alert. This pattern is well documented in UAV research.

Self-driving car operators are likely experiencing similar issues for tasks ranging from interpreting confusing signs to helping cars escape dead ends. In simple scenarios, operators may be bored; in emergencies—like driving into a flood zone or responding during a citywide power outage—they can become quickly overwhelmed.

The military has tried for years to have one person supervise many drones at once, because it is far more cost-effective. However, cognitive switching costs (the effort of regaining awareness of a situation after switching control between drones) cause workload spikes and high stress. Those costs, coupled with increasingly complex interfaces and communication delays, have made single-operator supervision of multiple drones extremely difficult.

Self-driving car companies likely face the same roadblocks. They will need to model operator workload and reliably predict both staffing levels and how many vehicles a single person can effectively supervise, especially during emergency operations. If every self-driving car turns out to need a dedicated human paying close attention, such operations would no longer be cost-effective.
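A back-of-the-envelope version of such a workload model might cap each operator's expected utilization, leaving slack for surprises. The request rates, handling times, and 50% utilization cap below are made-up parameters for illustration, not measured industry figures; a real model would also account for the cognitive switching costs described above.

```python
# Hedged staffing sketch: how many vehicles can one operator supervise?
# All parameter values are illustrative assumptions.
import math

def max_vehicles_per_operator(requests_per_hour, minutes_per_request,
                              max_utilization=0.5):
    """Largest fleet one operator can cover before expected workload
    exceeds the utilization cap (slack is reserved for emergencies)."""
    # Fraction of an operator-hour consumed by a single vehicle
    load_per_vehicle = requests_per_hour * minutes_per_request / 60.0
    return math.floor(max_utilization / load_per_vehicle)

# Routine conditions: 2 assistance requests/hour, 1.5 minutes each
print(max_vehicles_per_operator(2, 1.5))    # -> 10 vehicles
# Storm or outage: 12 requests/hour, 3 minutes each
print(max_vehicles_per_operator(12, 3.0))   # -> 0: not even one-to-one
```

The emergency case is the instructive one: a staffing plan sized for routine days collapses exactly when supervision matters most, which is the essay's point about workload spikes.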

Training

Early drone programs lacked formal training requirements, with training programs designed by pilots, for pilots. Unfortunately, supervising a drone is more akin to air traffic control than to actually flying an aircraft, so the military often placed drone operators in critical roles with inadequate preparation. This caused many accidents. Only years later did the military conduct a proper analysis of the knowledge, skills, and abilities needed for safe remote operations and overhaul its training programs.

Self-driving companies do not publicly share their training standards, and no regulations currently govern the qualifications of remote operators. On-road safety depends heavily on these operators, yet very little is known about how they are selected or trained. Commercial aviation dispatchers, whose role closely resembles that of self-driving remote operators, are required to complete formal training overseen by the FAA; commercial self-driving companies should be held to similar standards.

Contingency Planning

Aviation has strong protocols for emergencies including predefined procedures for lost communication, backup ground control stations, and highly reliable onboard behaviors when autonomy fails. In the military, drones may fly themselves to safe areas or land autonomously if contact is lost. Systems are designed with cybersecurity threats—like GPS spoofing—in mind.

Self-driving cars appear far less prepared. The 2025 San Francisco power outage left Waymo vehicles frozen in traffic lanes, blocking first responders and creating hazards. These vehicles are supposed to perform “minimum-risk maneuvers” such as pulling to the side—but many of them didn’t. This suggests gaps in contingency planning and basic fail-safe design.
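In the spirit of aviation's lost-link procedures, a minimum-risk fallback can be sketched as an escalating policy keyed to how long the vehicle has been out of contact. The states and thresholds here are illustrative assumptions, not any manufacturer's actual logic; the contrast is with vehicles that simply freeze in a traffic lane.

```python
# Hedged sketch of an escalating lost-communication fallback policy.
# State names and time thresholds are illustrative assumptions.

def fallback_action(seconds_since_last_contact, can_pull_over):
    """Pick an increasingly conservative behavior as silence grows,
    mirroring aviation's predefined lost-link procedures."""
    if seconds_since_last_contact < 2:
        return "continue"              # normal operation, link healthy
    if seconds_since_last_contact < 10:
        return "slow_and_hazards"      # reduce speed, signal intent
    # Prolonged loss: execute a minimum-risk maneuver instead of freezing
    return "pull_over" if can_pull_over else "stop_in_lane_with_hazards"

print(fallback_action(1, True))     # continue
print(fallback_action(5, True))     # slow_and_hazards
print(fallback_action(30, True))    # pull_over
print(fallback_action(30, False))   # stop_in_lane_with_hazards
```

The key property is that every branch terminates in a predefined safe behavior, so the vehicle never depends on a human whose link has already failed.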

The history of military drone operations offers crucial lessons for the self-driving car industry. Decades of experience show that remote supervision demands extremely low latency, carefully designed control stations, manageable operator workload, rigorous, well-designed training programs, and strong contingency planning.

Self-driving companies appear to be repeating many of the early mistakes made in drone programs. Remote operations are treated as a support feature rather than a mission-critical safety system. But as long as AI struggles with uncertainty, which will be the case for the foreseeable future, remote human supervision will remain essential. The military learned these lessons through painful trial and error, yet the self-driving community appears to be ignoring them. The self-driving industry has the chance—and the responsibility—to learn from our mistakes in combat settings before it harms road users everywhere.

Tech

Anthropic sees major Claude outage after ‘unprecedented demand’

As the US administration proceeds to drop Anthropic as a supplier, many are rallying around the AI company’s relatively ethical stance, creating ‘unprecedented demand’ for Claude.

Anthropic’s Claude has fast become the darling of AI enthusiasts for development, research and enterprise work. Now it faces the might of the US administration, which is threatening to drop it entirely as a supplier after a falling-out with the Pentagon over so-called “red lines” the company would not cross.

With many in Silicon Valley supporting its relatively principled stand, and general users sending it to the top of the US Apple charts for free downloads in recent days, beating OpenAI’s ChatGPT for the first time, its flagship Claude.ai and Claude Code apps went down for around three hours on Monday (2 March), causing many to bemoan its absence. There are already reports of further outages as we write, although the latest status update says “a fix has been implemented and we are monitoring the results”.

In a nostalgic post on LinkedIn yesterday, regular contributor to Silicon Republic, AI aficionado Jonathan McCrea wrote: “I now feel the same way about Claude being down as I used to about Twitter being down.”

De facto boycott

Last night, treasury secretary Scott Bessent added his voice to the de facto US administration boycott of Anthropic, saying in a post on X that his department would terminate its use of the company’s products.

It follows a directive from president Donald Trump ordering US agencies to “phase out” their use of the AI company’s products, and his defence department labelling Anthropic a “supply-chain risk”, a designation normally reserved for foreign suppliers from non-friendly states. Anthropic has been quick to call this a “legally unsound” designation and is expected to challenge the move in the courts.

Reuters is also reporting that it has seen memos to employees at the Department of Health and Human Services, asking them to switch to other AI platforms such as ChatGPT and Gemini, and at the State Department saying it was switching the model powering its in-house chatbot – StateChat – to OpenAI from Anthropic.

Financially, the boycott will surely deal a serious blow to Anthropic in the short term, but some commentators argue it could be a pivotal moment for the company, which may now be seen by many as the relatively ethical choice among the AI giants.

The recent Grok scandal has put a major question mark over xAI’s credentials, and OpenAI’s Sam Altman clearly sees the reputational risk, as he has been quick to claim that his company is ensuring some guardrails in its contract with the Pentagon.

On X yesterday Altman claimed that these guardrails would ensure OpenAI would not be “intentionally used for domestic surveillance of US persons and nationals”.

The backstory

If you haven’t been following, Anthropic drew the ire of the US administration after a standoff with the Pentagon, where Anthropic refused to change its safeguards related to using its AI for fully autonomous weapons, or for mass surveillance of US citizens.

On Thursday (February 27), Anthropic’s Dario Amodei released an official statement saying that in “a narrow set of cases, we believe AI can undermine, rather than defend, democratic values”.

“Some uses are also simply outside the bounds of what today’s technology can safely and reliably do,” he said. “Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included.

“We support the use of AI for lawful foreign intelligence and counterintelligence missions. But using these systems for mass domestic surveillance is incompatible with democratic values.”

Amodei went on to say that partially autonomous weapons, like those used today in Ukraine, are vital to the defense of democracy. “But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons. We will not knowingly provide a product that puts America’s warfighters and civilians at risk.”

It’s a debacle that is likely to roll on in coming days, and it remains to be seen whether Anthropic can withstand the unprecedented onslaught from its own government and rely on the support of users for its principled stand. In the short term, its challenge appears to be to meet the current demand on its systems.
