From the street, the only indication that I’ve found Physical Intelligence’s headquarters in San Francisco is a pi symbol that’s a slightly different color than the rest of the door. When I walk in, I’m immediately confronted with activity. There’s no reception desk, no gleaming logo in fluorescent lights.
Inside, the space is a giant concrete box made slightly less austere by a haphazard sprawl of long blonde-wood tables. Some are clearly meant for lunch, dotted with Girl Scout cookie boxes, jars of Vegemite (someone here is Australian), and small wire baskets stuffed with one too many condiments. The rest of the tables tell a different story entirely. Many more of them are laden with monitors, spare robotics parts, tangles of black wire, and fully assembled robotic arms in various states of attempting to master the mundane.
During my visit, one arm is folding a pair of black pants, or trying to. It’s not going well. Another is attempting to turn a shirt inside out with the kind of determination that suggests it will eventually succeed, just not today. A third — this one seems to have found its calling — is quickly peeling a zucchini, after which it is supposed to deposit the shavings into a separate container. The shavings are going well, at least.
“Think of it like ChatGPT, but for robots,” Sergey Levine tells me, gesturing toward the motorized ballet unfolding across the room. Levine, an associate professor at UC Berkeley and one of Physical Intelligence’s co-founders, has the amiable, bespectacled demeanor of someone who has spent considerable time explaining complex concepts to people who don’t immediately grasp them.
Image Credits: Connie Loizos for TechCrunch
What I’m watching, he explains, is the testing phase of a continuous loop: data gets collected on robot stations here and at other locations — warehouses, homes, wherever the team can set up shop — and that data trains general-purpose robotic foundation models. When researchers train a new model, it comes back to stations like these for evaluation. The pants-folder is someone’s experiment. So is the shirt-turner. The zucchini-peeler might be testing whether the model can generalize across different vegetables, learning the fundamental motions of peeling well enough to handle an apple or a potato it’s never encountered.
The company also operates a test kitchen in this building and elsewhere using off-the-shelf hardware to expose the robots to different environments and challenges. There’s a sophisticated espresso machine nearby, and I assume it’s for the staff until Levine clarifies that no, it’s there for the robots to learn. Any foamed lattes are data, not a perk for the dozens of engineers on the scene who are mostly peering into their computers or hovering over their mechanized experiments.
The hardware itself is deliberately unglamorous. These arms sell for about $3,500, and that’s with what Levine describes as “an enormous markup” from the vendor. If they manufactured them in-house, the material cost would drop below $1,000. A few years ago, he says, a roboticist would have been shocked these things could do anything at all. But that’s the point — good intelligence compensates for bad hardware.
As Levine excuses himself, I’m approached by Lachy Groom, moving through the space with the purposefulness of someone who has half a dozen things happening at once. At 31, Groom still has the fresh-faced quality of Silicon Valley’s boy wonder, a designation he earned early, having sold his first company nine months after starting it at age 13 in his native Australia (this explains the Vegemite).
When I first approached him earlier, as he welcomed a small gaggle of sweatshirt-wearing visitors into the building, his response to my request for time with him was immediate: “Absolutely not, I’ve got meetings.” Now he has 10 minutes, maybe.
Groom found what he was looking for when he started following the academic work coming out of the labs of Levine and Chelsea Finn, a former Berkeley PhD student of Levine’s who now runs her own lab at Stanford focused on robotic learning. Their names kept appearing in everything interesting happening in robotics. When he heard rumors they might be starting something, he tracked down Karol Hausman, a Google DeepMind researcher who also taught at Stanford and who Groom had learned was involved. “It was just one of those meetings where you walk out and it’s like, This is it.”
Groom never intended to become a full-time investor, he tells me, even though some might wonder why not given his track record. After leaving Stripe, where he was an early employee, he spent roughly five years as an angel investor, making early bets on companies like Figma, Notion, Ramp, and Lattice while searching for the right company to start or join himself. His first robotics investment, Standard Bots, came in 2021 and reintroduced him to a field he’d loved as a kid building Lego Mindstorms. As he jokes, he was “on vacation much more as an investor.” But investing was just a way to stay active and meet people, not the endgame. “I was looking for five years for the company to go start post-Stripe,” he says. “Good ideas at a good time with a good team — [that’s] extremely rare. It’s all execution, but you can execute like hell on a bad idea, and it’s still a bad idea.”
Image Credits: Connie Loizos for TechCrunch
The two-year-old company has now raised over $1 billion, and when I ask about its runway, he’s quick to clarify it doesn’t actually burn that much. Most of its spending goes toward compute. A moment later, he acknowledges that under the right terms, with the right partners, he’d raise more. “There’s no limit to how much money we can really put to work,” he says. “There’s always more compute you can throw at the problem.”
What makes this arrangement particularly unusual is what Groom doesn’t give his backers: a timeline for turning Physical Intelligence into a money-making endeavor. “I don’t give investors answers on commercialization,” he says of backers that include Khosla Ventures, Sequoia Capital, and Thrive Capital among others that have valued the company at $5.6 billion. “That’s sort of a weird thing, that people tolerate that.” But tolerate it they do, and they may not always, which is why it behooves the company to be well-capitalized now.
So what’s the strategy, if not commercialization? Quan Vuong, another co-founder who came from Google DeepMind, explains that it revolves around cross-embodiment learning and diverse data sources. If someone builds a new hardware platform tomorrow, they won’t need to start data collection from scratch — they can transfer all the knowledge the model already has. “The marginal cost of onboarding autonomy to a new robot platform, whatever that platform might be, it’s just a lot lower,” he says.
The company is already working with a small number of companies in different verticals — logistics, grocery, a chocolate maker across the street — to test whether their systems are good enough for real-world automation. Vuong claims that in some cases, they already are. With their “any platform, any task” approach, the surface area for success is large enough to start checking off tasks that are ready for automation today.
Physical Intelligence isn’t alone in chasing this vision. The race to build general-purpose robotic intelligence — the foundation on which more specialized applications can be built, much like the LLMs that captivated the world three years ago — is heating up. Pittsburgh-based Skild AI, founded in 2023, just this month raised $1.4 billion at a $14 billion valuation and is taking a notably different approach. While Physical Intelligence remains focused on pure research, Skild AI has already deployed its “omni-bodied” Skild Brain commercially, saying it generated $30 million in revenue in just a few months last year across security, warehouses, and manufacturing.
Image Credits: Connie Loizos for TechCrunch
Skild has even taken public shots at competitors, arguing on its blog that most “robotics foundation models” are just vision-language models “in disguise” that lack “true physical common sense” because they rely too heavily on internet-scale pretraining rather than physics-based simulation and real robotics data.
It’s a pretty sharp philosophical divide. Skild AI is betting that commercial deployment creates a data flywheel that improves the model with each real-world use case. Physical Intelligence is betting that resisting the pull of near-term commercialization will enable it to produce superior general intelligence. Who’s “more right” will take years to resolve.
In the meantime, Physical Intelligence operates with what Groom describes as unusual clarity. “It’s such a pure company. A researcher has a need, we go and collect data to support that need — or new hardware or whatever it is — and then we do it. It’s not externally driven.” The company had a 5- to 10-year roadmap of what the team thought would be possible. By month 18, they’d blown through it, he says.
The company has about 80 employees and plans to grow, though Groom says hopefully “as slowly as possible.” What’s most challenging, he says, is hardware. “Hardware is just really hard. Everything we do is so much harder than a software company.” Hardware breaks. It arrives slowly, delaying tests. Safety considerations complicate everything.
As Groom springs up to rush to his next commitment, I’m left watching the robots continue their practice. The pants are still not quite folded. The shirt remains stubbornly right-side-out. The zucchini shavings are piling up nicely.
There are obvious questions, including my own, about whether anyone actually wants a robot in their kitchen peeling vegetables, about safety, about dogs going crazy at mechanical intruders in their homes, about whether all of the time and money being invested here solves big enough problems or creates new ones. Meanwhile, outsiders question the company’s progress, whether its vision is achievable, and if betting on general intelligence rather than specific applications makes sense.
If Groom has any doubts, he doesn’t show them. He’s working with people who’ve been working on this problem for decades and who believe the timing is finally right, which is all he needs to know.
Besides, Silicon Valley has been backing people like Groom and giving them a lot of rope since the beginning of the industry, knowing there’s a good chance that even without a clear path to commercialization, even without a timeline, even without certainty about what the market will look like when they get there, they’ll figure it out. It doesn’t always work out. But when it does, it tends to justify a lot of the times it didn’t.
To grab the best discount, you need to trade in a fairly new device. For instance, if you trade in the iPad Air (M3) 11-inch model or an iPad Pro 12.9 (4th Gen), you get $350 in trade-in credit toward the iPad Air (M4). If you trade in the iPad mini (6th Gen), you get $200 off.
It’s worth playing around with the trade-in system to see how you can save while getting rid of unwanted older tech. Bear in mind that how cheap that makes the Apple iPad Air (M4) depends on which model you’re aiming for.
The Apple iPad Air (M4) keeps the same design as previous models, but it now uses Apple’s much more powerful M4 chip. There’s also Wi-Fi 7 support, and LTE or 5G for relevant models. Think of it all as a small but important upgrade if you want the fastest iPad Air around. It’s sleek enough to carry around easily, too.
iPad Air preorder deal at Best Buy
As the Apple iPad Air (M4) has only just been announced, we haven’t had any hands-on time with it yet. However, we gave the iPad Air (M3) a highly respectable 4.5 stars out of five. We appreciated the power that the M3 chip offers, its “vibrant screen,” and “strong battery life and audio.”
That seems almost certain to carry over to this M4 model, so we’re counting on it riding high in our look at the best iPads soon.
For Apple enthusiasts, it’ll be one of the best tablets to upgrade to, even if it’s a relatively subtle improvement over the previous model. If you need something more powerful than your phone but more portable than your laptop, this is a great way to bridge the gap.
If you can’t wait for the latest model or you want to buy something a little cheaper, take a look at the other iPad deals currently going on. There are some good tablet deals around for every budget and need.
Amid the backlash, the business—touted as S’pore’s largest 24-hour spa—closes its pools
When House+ Bubble announced its arrival in Singapore, it quickly became one of the most talked-about spa openings here.
Touted as Singapore’s largest 24-hour spa—it will span nearly 100,000 sqft once completed—the new S$45 million wellness destination in Jurong East promised an all-in-one experience: soaking pools, therapy rooms, a cinema, an e-sports room, and round-the-clock access.
But that hype appears to have been short-lived.
Just a week into its soft opening, during which guests could access the spa, massage services, pools, and dining areas for S$49, House+ Bubble closed its bathing pools in both the male and female sections indefinitely, citing “internal facility adjustments.”
A statement from House+ Bubble. / Image Credit: House+ Bubble
This move comes amid mounting complaints online. Google Reviews and visitor feedback have flagged hygiene concerns, inconsistent pool temperatures, and other operational issues, raising questions about whether the spa can live up to its lofty promises.
A slew of negative reviews
When Vulcan Post combed through House+ Bubble’s Google Reviews, bathrooms and toilets were described as “dirty,” and shared amenities raised concerns: combs reportedly had visible dandruff, while communal skincare bottles contained stray hairs.
A Google Review with accompanying photos shows wet floors and towels left on the ground. The user also claimed that toilet bowls were clogged, urinals were broken with “water running non-stop,” and there was a lack of toilet paper or paper towels. / Image Credit: Google Maps
Others pointed out that, despite being marketed as a 24-hour spa, not all facilities actually operate around the clock. The on-site restaurant closes at 12:30 AM, while massage services end at 10:30 PM.
In response to one reviewer, the management of House+ Bubble said that it “is taking action to address these issues” and would elevate its cleaning standards.
Some visitors also highlighted hidden costs and misleading advertising. Despite claims of “unlimited massages,” the S$49 soft-opening fee only covered the massage chairs. A proper massage would reportedly cost between S$150 and S$250 per hour.
Alleged staffing issues
Allegedly, staffing issues may be compounding the spa’s operational problems.
A Reddit post claims that several employees left after short stints due to “poor management” and “poor staff treatment.”
Staff reportedly received only a 30-minute unpaid meal break for a nine-hour shift, despite being told they would get an hour. The post adds that the spa is now facing manpower shortages as a result.
Vulcan Post has reached out to House+ Bubble for comment on these claims but has yet to receive a response.
A S$45 million spa ambition
House+ Bubble is a S$45 million project.
Some of the facilities shown on the House+ Bubble website include private pools and even an esports room. / Image Credit: House+ Bubble
Its first opening phase, spanning approximately 49,000 sqft, was slated for an official launch on Mar 14, though it remains unclear if this will proceed as planned.
The second phase will look to add about 50,000 sqft, and is targeted for completion at the end of the year, subject to regulatory approvals.
Currently, visitors can still access House+ Bubble, but following the closure of the bathing pools, the trial operating fee has been reduced from S$49 to S$39 for three hours, excluding pool access.
Read other articles we’ve written on Singaporean businesses here.
Featured Image Credit: House+ Bubble / Screengrab from Google Reviews
You may or may not be reading this on a smartphone, but odds are that even if you aren’t, you own one. Well, possess one, anyway — it’s debatable whether the locked-down, one-way relationships we have with our addiction slabs count as ownership. [LuckyBor], aka [Breezy], on the other hand — fully owns his 4G smartphone, because he made it himself.
OK, sure, it’s only rocking a 4G modem, not 5G. But with an ESP32-S3 for a brain, that’s probably going to provide plenty of bandwidth. It does what you expect from a phone: thanks to its SIMCom A7682E modem, it can call and text. The OV2640 Arducam module allows it to take pictures, and yes, it surfs the web. It even has features certain flagship phones lack, like a 3.5 mm audio jack, and with its 3.5″ touchscreen, the ability to fit in your pocket. Well, once it gets a case, anyway.
It talks, it texts, it… does not julienne fry, but that’s arguably a good thing.
This is just an alpha version, a brick of layered modules. [LuckyBor] plans on fitting everything into a slimmer form factor with a four-layer PCB that will also include an SD-card adapter, and will open-source the design at that time, both hardware and software. Since [LuckyBor] has also promised the world documentation, we don’t mind waiting a few months.
It’s always good to see another open-source option, and this one has us especially chuffed. Sure, we’ve written about postmarketOS and other Linux options like Nix, and someone even put the Rust-based Redox OS on a phone, but those are still on the same potentially-backdoored commercial hardware. That’s why this project is so great, even if its performance is decidedly weak compared to flagship phones that have as much horsepower as some of our laptops.
We very much hope [LuckyBor] carries through with the aforementioned promise to open source the design.
We may receive a commission on purchases made from links.
With streaming being such an integral part of modern entertainment, it’s no wonder we’re all looking for ways to optimize our experience. Beyond owning smart TVs, this also means investing in additional devices, such as the Amazon Fire TV Stick, to enhance the viewing experience. In fact, it includes numerous useful remote shortcuts that Amazon doesn’t advertise, letting you do everything from switching display resolutions to enabling accessibility features.
Unlike our smart TVs, which usually stay firmly in our homes, you can also easily travel with your Fire TV Stick and enjoy your streaming content as long as you have access to a compatible TV and Wi-Fi. Not to mention, you can play games on television screens without lugging around huge gaming laptops or bringing extra handheld consoles.
Owning an Amazon Fire TV Stick also opens many connectivity options, especially with Bluetooth-enabled devices. But take note: while there are a ton of devices you can connect to your Fire TV Stick, to see your exact options you first need to find your model number by referencing the receipt, the box it came in, or the device itself. If you’ve forgotten which model of Fire TV Stick you own, you can launch it, open the Settings menu, and select “My Fire TV.”
To pair any compatible Bluetooth device, launch your Amazon Fire TV Stick and navigate to Settings. Afterward, select Controllers & Bluetooth Devices, choose the device category you want to pair, and follow the pairing instructions on the screen.
Here are the gadgets you can connect to the Fire TV Stick via Bluetooth.
1. Speakers
Valeriia Sakhno/Shutterstock
Many modern smart television sets will probably already let you hook up your speakers directly via Bluetooth. However, there are reasons why you might still want to do it through the Fire TV Stick. For example, you can easily adjust the volume with the Fire TV Stick remote, so you have fewer things to fiddle with. If you tend to use your television only with your Fire TV Stick, this can also streamline audio processing and reduce the risk of audio issues when streaming your favorite shows or movies. These days, there’s no shortage of Bluetooth speakers worth buying that can work with your Fire TV Stick, such as the Anker Soundcore 2, Marshall Stanmore III, and Sonos Move 2. With this, you can get better sound than your TV speakers, and you can also move your speakers to your preferred location.
For those who are already invested in the Amazon smart home ecosystem, you can hook up the Fire TV Stick to your Alexa-powered Echo speakers. With this alone, it introduces a ton of additional possibilities for your integrated smart home experience. Apart from voice control options, it can be used as a component in creating automated scenes that work with other Alexa-compatible devices, such as light bulbs, scent machines, and smart switches. For example, some Alexa automations compatible with your Fire TV Stick can optimize your bedtime routine or turn everything off after movie nights.
2. Headphones and earbuds
Yuganov Konstantin/Shutterstock
While some people are lucky enough to live in places where they can turn on the loudspeakers freely while accessing their favorite content, others need to be more mindful of their viewing habits. Thankfully, just because you’re watching from a TV doesn’t mean the whole neighborhood has to watch with you. Whether you want some privacy or just to avoid an angry neighbor knocking on your door, you can pair your Bluetooth headphones with your Amazon Fire TV devices. There’s no shortage of multipoint Bluetooth headphones and earbuds that can work with your Amazon Fire TV Stick. For example, Apple users will be relieved to know that AirPods, AirPods Pro, and AirPods Max all work with it.
But take note: the same issues other devices have with Bluetooth headphones and earphones apply, such as audio latency, which you’ll need to resolve using AV Sync Tuning. Not to mention, apart from commercial headphones and earbuds, some Amazon Fire TV devices also work with hearing aids, including several of its TV offerings and the Fire TV Cube (2nd- and 3rd-generation models). Among compatible hearing aids, it lists Starkey, Widex, and Cochlear devices, though you may need to check compatibility with your specific model. In 2025, Amazon released a few new features that make its Fire TV devices more accessible, such as the Dual Audio option, which allows hearing aid users and others to listen to audio at adjusted loudness levels simultaneously.
3. Bluetooth game controllers
Even though many smart TVs can perform the same functions, the Amazon Fire TV Stick still does a lot of things better, such as navigation, software experiences, and cloud gaming. In 2020, Amazon launched Luna Cloud Gaming, which lets people run its library of games on Amazon’s remote servers. Depending on your preferences, you can choose a subscription model that suits the kinds of games you play most often.
According to Amazon, certified Luna-compatible controllers include the official Luna Controller, PlayStation 4 DualShock 4 Wireless Controller, Xbox One Controller, and the Google Stadia Controller. Additionally, owners of the PS5’s DualSense Controllers have been able to use them effectively. Although some people may claim that their third-party controllers from other manufacturers work with their Fire TV Stick, it’s important to note that you will not have the same protection, assurance, or expected longevity as with official ones.
Regardless of which model you choose, you’ll still want to make sure you have the right network and device settings to enjoy your Bluetooth controllers. Apart from having a fast enough connection, you’ll also want to turn on Game Mode when possible. Not to mention, compatibility isn’t entirely guaranteed for everything and still depends on the specific game you are playing. In a pinch, you can opt to use the Luna Controller app on your mobile phone instead.
4. Bluetooth mice and keyboards
Devices like the Fire TV Stick solve many problems, but they also introduce new ones. One of the most annoying, yet somewhat universal, experiences for anyone who has used a streaming device is finding it difficult to navigate with the remote. In fact, while the Amazon Fire TV Stick lets you browse the internet with your TV using Amazon Silk, it can be a nightmare to type all the website names and click all the right buttons.
If you want a sleek-looking wireless keyboard, something like the Logitech K380 Multi-Device Bluetooth Keyboard lets you pair with up to three devices, so you don’t have to unpair it from your computer to use it with your Fire TV Stick. But if you’re looking for something more ergonomic, there are even Bluetooth mice with side-scrolling, like the Logitech MX Master 3S, Keychron M6 Wireless Mouse, and Razer Basilisk V3 Pro.
If you don’t own a Bluetooth keyboard or mouse, all hope is not lost. As we’ve mentioned before, you can use a micro-USB OTG splitter to plug in a wired keyboard or mouse to your Fire TV Stick. So, if you still prefer using a wired peripheral or have already maxed out the number of devices you can connect to your Fire TV stick, this is a possible alternative.
According to new technical analyses from Google and mobile security firm iVerify, Coruna’s technical core comprises five complete exploit chains and 23 distinct iOS vulnerabilities that bypass most of the major software defenses Apple has shipped in versions 13 through 17.2.1, effectively turning a web page into a silent infection…
“People want something that lasts them a long time, that is quality, that is useful,” says Google senior director Alexander Kuscher. “Eventually, when it breaks or when you lose it, you get a new one because you feel taken care of. So I think that builds trust, and the trust is important.”
Flex started as an enterprise service for businesses; Google offered companies worried about security vulnerabilities on aging hardware a way to easily update to a more secure operating system. Or, at least, one that still received updates. After a while, other users started to get ahold of the software, downloading and installing it on their own USB sticks for their personal machines. “We didn’t make it particularly easy at the time,” Kuscher says. “But people did it.”
What led to the more consumer-oriented push of ChromeOS Flex—like this partnership with Back Market—was the end of software support for Microsoft’s Windows 10 operating system last fall. While the OS still technically works, it stopped receiving security updates, and Microsoft has encouraged users to update to Windows 11. But Windows 11 has specific hardware requirements, and it may not be a simple upgrade on certain machines. Google saw this as a moment to provide a cheaper alternative to the “Windows 10 cliff,” as Kuscher puts it. Back Market agreed.
“Ultimately, [Microsoft is] saying that people need to throw away their existing laptop to buy another one,” Hug de Larauze says. “And we say politely, no.”
If you’re tech-savvy, you can forgo Back Market’s $3 stick and download ChromeOS Flex onto a USB drive you have lying around right now.
Buying Refurb
Back Market has done very well for itself amid economic turmoil. As devices become more expensive, people turn to cheaper, refurbished options. Hug de Larauze compares the device market to the auto industry.
“Ninety percent of cars are being sold pre-owned,” Hug de Larauze says. “The new normal is to purchase them pre-owned because it’s almost dumb to buy a new one.”
When US president Donald Trump announced sweeping tariffs last year, Back Market’s sales tripled, Hug de Larauze says. Even after the dust settled a little and it became clear that tariffs would not directly affect smartphones or computers, sales stayed at around twice what they’d been before, he says. Back Market took in $3.8 billion in 2025, turning a profit for the first time. While Hug de Larauze says these kinds of economic fluctuations may be good for sending more people to Back Market, he hopes they will shift buyer mindsets toward refurbished tech writ large.
“We have one planet, and resources are limited,” Hug de Larauze says. “We need to do more with what we already have in every sector. Fashion is the same, transportation is the same, energy is the same, it’s the same for everything.”
If you’re looking to pre-order Apple’s new Studio Display XDR monitor today but have an older Mac, beware of some potential issues. According to the compatibility list spotted by AppleInsider, the new display will only work at 60Hz and not at its full 120Hz refresh rate on some older and less powerful Silicon models. Moreover, support for older Intel Macs isn’t mentioned at all for either the Studio Display XDR or cheaper Studio Display.
All Apple Silicon Macs will work with both monitors, including those with the oldest M1 chips, according to the support pages. However, the compatibility list for the Studio Display XDR includes this nugget: “Mac models with M1, M1 Pro, M1 Max, M1 Ultra, M2, and M3 support Studio Display XDR at up to 60Hz. All other Studio Display XDR features are supported.” So even if you have a hotrod M1 Ultra-based Mac, the Studio Display XDR’s refresh rate is capped at 60Hz — despite the fact that the chip can drive third-party monitors at 120Hz.
Similarly, only the iPad Pro M5 supports the Studio Display XDR at 120Hz, with all other compatible models (in the iPad Pro and iPad Air family) limited to 60Hz.
Intel Mac support isn’t mentioned at all in the compatibility list for either display, though they may function in some limited manner when connected. Intel Macs just received their last new OS update with macOS Tahoe (and only three more years of security updates), but it’s still surprising that they’re not compatible with Apple’s latest monitors.
Self-driving cars often struggle with situations that are commonplace for human drivers. When confronted with construction zones, school buses, power outages, or misbehaving pedestrians, these vehicles often behave unpredictably, leading to crashes or freezing events, causing significant disruption to local traffic and possibly blocking first responders from doing their jobs. Because self-driving cars cannot successfully handle such routine problems, self-driving companies use human babysitters to remotely supervise them and intervene when necessary.
This idea—humans supervising autonomous vehicles from a distance—is not new. The U.S. military has been doing it since the 1980s with unmanned aerial vehicles (UAVs). In those early years, the military experienced numerous accidents due to poorly designed control stations, lack of training, and communication delays.
As a Navy fighter pilot in the 1990s, I was one of the first researchers to examine how to improve the UAV remote supervision interfaces. The thousands of hours I and others have spent working on and observing these systems generated a deep body of knowledge about how to safely manage remote operations. With recent revelations that U.S. commercial self-driving car remote operations are handled by operators in the Philippines, it is clear that self-driving companies have not learned the hard-earned military lessons that would promote safer use of self-driving cars today.
While stationed in the Western Pacific during the Gulf War, I spent a significant amount of time in air operations centers, learning how military strikes were planned, implemented, and then replanned when the original plan inevitably fell apart. After obtaining my PhD, I leveraged this experience to begin research on the remote control of UAVs for all three branches of the U.S. military. Sitting shoulder-to-shoulder in tiny trailers with operators flying UAVs in local exercises or from 4,000 miles away, my job was to learn about the pain points for the remote operators as well as identify possible improvements as they executed supervisory control over UAVs that might be flying halfway around the world.
Supervisory control refers to situations where humans monitor and support autonomous systems, stepping in when needed. For self-driving cars, this oversight can take several forms. The first is teleoperation, where a human controls the car's speed and steering from afar. Operators sit at a console with a steering wheel and pedals, similar to a racing simulator. Because this method relies on real-time control, it is extremely sensitive to communication delays.
The second form of supervisory control is remote assistance. Instead of driving the car in real time, a human gives higher-level guidance. For example, an operator might click a path on a map (called laying “breadcrumbs”) to show the car where to go, or interpret information the AI cannot understand, such as hand signals from a construction worker. This method tolerates more delay than teleoperation but is still time-sensitive.
Five Lessons From Military Drone Operations
Over 35 years of UAV operations, the military consistently encountered five major challenges that provide valuable lessons for self-driving cars.
Latency
Latency—delays in sending and receiving information due to distance or poor network quality—is the single most important challenge for remote vehicle control. Humans also have their own built-in delay: neuromuscular lag. Even under perfect conditions, people cannot reliably respond to new information in less than 200–500 milliseconds. In remote operations, where communication lag already exists, this makes real-time control even more difficult.
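To make this arithmetic concrete, here is a minimal sketch of how far a vehicle travels before a remote operator's command can take effect. The speed and delay figures are illustrative assumptions for the purposes of this example, not measurements from any specific deployment:

```python
# Rough latency budget for remote vehicle control.
# All numeric inputs below are illustrative assumptions.

MPH_TO_MPS = 0.44704  # meters per second per one mile per hour

def blind_distance_m(speed_mph: float, network_delay_s: float,
                     human_lag_s: float = 0.5) -> float:
    """Distance the vehicle covers between an event occurring and a
    remote operator's corrective command taking effect, combining
    network delay with human neuromuscular lag."""
    total_delay_s = network_delay_s + human_lag_s
    return speed_mph * MPH_TO_MPS * total_delay_s

# A car at 30 mph, with a two-second round-trip delay (as in early
# drone teleoperation) plus ~500 ms of human reaction time:
print(round(blind_distance_m(30, 2.0), 1))  # ~33.5 meters
```

In other words, under those assumed conditions the car travels the length of several car lengths before the operator's correction even reaches it, which is why real-time teleoperation degrades so badly under lag.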
In early drone operations, U.S. Air Force pilots in Las Vegas (the primary U.S. UAV operations center) attempted to take off and land drones in the Middle East using teleoperation. With at least a two-second delay between command and response, the accident rate was 16 times that of fighter jets conducting the same missions. The military switched to local line-of-sight operators and eventually to fully automated takeoffs and landings. When I interviewed the pilots of these UAVs, they all stressed how difficult it was to control the aircraft with significant time lag.
Self-driving car companies typically rely on cellphone networks to deliver commands. These networks are unreliable in cities and prone to delays. This is one reason many companies prefer remote assistance instead of full teleoperation. But even remote assistance can go wrong. In one incident, a Waymo operator instructed a car to turn left when a traffic light appeared yellow in the remote video feed—but the network latency meant that the light had already turned red in the real world. After moving its remote operations center from the U.S. to the Philippines, Waymo's latency increased even further. It is imperative that control not be so remote, both to reduce latency and to strengthen oversight against security vulnerabilities.
Workstation Design
Poor interface design has caused many drone accidents. The military learned the hard way that confusing controls, difficult-to-read displays, and unclear autonomy modes can have disastrous consequences. Depending on the specific UAV platform, the FAA attributed between 20% and 100% of the Army and Air Force UAV crashes caused by human error through 2004 to poor interface design.
UAV crashes (1986-2004) caused by human factors problems, including poor interface and procedure design. These two categories do not sum to 100% because both factors could be present in an accident.
The self-driving industry reveals hints of comparable issues. Some autonomous shuttles use off-the-shelf gaming controllers, which—while inexpensive—were never designed for vehicle control. The off-label use of such controllers can lead to mode confusion, which was a factor in a recent shuttle crash. Significant human-in-the-loop testing is needed to avoid such problems, not only prior to system deployment, but also after major software upgrades.
Operator Workload
Drone missions typically include long periods of surveillance and information gathering, occasionally ending with a missile strike. These missions can sometimes last for days, for example while the military waits for a person of interest to emerge from a building. As a result, the remote operators experience extreme swings in workload: sometimes overwhelming intensity, sometimes crushing boredom. Both conditions can lead to errors.
When operators teleoperate drones, workload is high and fatigue can quickly set in. But when onboard autonomy handles most of the work, operators can become bored, complacent, and less alert. This pattern is well documented in UAV research.
Self-driving car operators are likely experiencing similar issues for tasks ranging from interpreting confusing signs to helping cars escape dead ends. In simple scenarios, operators may be bored; in emergencies—like driving into a flood zone or responding during a citywide power outage—they can become quickly overwhelmed.
The military has tried for years to have one person supervise many drones at once, because it is far more cost effective. However, cognitive switching costs (regaining awareness of a situation after switching control between drones) result in workload spikes and high stress. Those costs, coupled with increasingly complex interfaces and communication delays, have made this extremely difficult.
Self-driving car companies likely face the same roadblocks. They will need to model operator workloads and reliably predict staffing levels and how many vehicles a single person can effectively supervise, especially during emergency operations. If every self-driving car turns out to need a dedicated human paying close attention, such operations would no longer be cost-effective.
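One textbook starting point for this kind of staffing model is an M/M/c queue: interventions arrive at some rate, each takes some time to resolve, and the Erlang C formula gives the probability that a new intervention must wait because every operator is busy. This is a generic queueing sketch with made-up numbers, not a model any self-driving company is known to use:

```python
import math

def erlang_c(arrival_rate: float, mean_handle_time: float,
             operators: int) -> float:
    """Probability that an incoming intervention request must wait
    because all operators are busy (M/M/c queue, Erlang C formula).
    arrival_rate and mean_handle_time must use the same time unit."""
    a = arrival_rate * mean_handle_time  # offered load in erlangs
    if a >= operators:
        return 1.0  # unstable: requests arrive faster than they clear
    below = sum(a**k / math.factorial(k) for k in range(operators))
    tail = (a**operators / math.factorial(operators)) * (
        operators / (operators - a))
    return tail / (below + tail)

# Illustrative only: a fleet generating 2 interventions per minute,
# each taking 1 minute to resolve, with 4 operators on shift:
print(round(erlang_c(2.0, 1.0, 4), 3))  # ~0.174
```

Even this toy model captures the op-ed's warning: the wait probability is driven by the ratio of intervention load to operators, so a citywide emergency that spikes the arrival rate can saturate a staffing plan that looked comfortable in routine operation.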
Training
Early drone programs lacked formal training requirements, with training programs designed by pilots, for pilots. Unfortunately, supervising a drone is more akin to air traffic control than to actually flying an aircraft, so the military often placed drone operators in critical roles with inadequate preparation. This caused many accidents. Only years later did the military conduct a proper analysis of the knowledge, skills, and abilities needed for safe remote operations, and change its training programs accordingly.
Self-driving companies do not publicly share their training standards, and no regulations currently govern the qualifications for remote operators. On-road safety depends heavily on these operators, yet very little is known about how they are selected or taught. Commercial aviation dispatchers, whose role is very similar to that of self-driving remote operators, are required to complete formal training overseen by the FAA; we should hold commercial self-driving companies to similar standards.
Contingency Planning
Aviation has strong protocols for emergencies including predefined procedures for lost communication, backup ground control stations, and highly reliable onboard behaviors when autonomy fails. In the military, drones may fly themselves to safe areas or land autonomously if contact is lost. Systems are designed with cybersecurity threats—like GPS spoofing—in mind.
Self-driving cars appear far less prepared. The 2025 San Francisco power outage left Waymo vehicles frozen in traffic lanes, blocking first responders and creating hazards. These vehicles are supposed to perform “minimum-risk maneuvers” such as pulling to the side—but many of them didn’t. This suggests gaps in contingency planning and basic fail-safe design.
The history of military drone operations offers crucial lessons for the self-driving car industry. Decades of experience show that remote supervision demands extremely low latency, carefully designed control stations, manageable operator workload, rigorous, well-designed training programs, and strong contingency planning.
Self-driving companies appear to be repeating many of the early mistakes made in drone programs. Remote operations are treated as a support feature rather than a mission-critical safety system. But as long as AI struggles with uncertainty, which will be the case for the foreseeable future, remote human supervision will remain essential. The military learned these lessons through painful trial and error, yet the self-driving community appears to be ignoring them. The self-driving industry has the chance—and the responsibility—to learn from our mistakes in combat settings before it harms road users everywhere.
As the US administration proceeds to drop Anthropic as a supplier, many are rallying around the AI company’s relatively ethical stance, creating ‘unprecedented demand’ for Claude.
Anthropic’s Claude has fast become the darling of AI enthusiasts for development, research and enterprise work. Now it is facing the might of the US administration, which is threatening to drop it entirely as a supplier after a falling out with the Pentagon over so-called “red lines” it would not cross.
Many in Silicon Valley have supported its relatively principled stand, and general users have sent it to the top of the US Apple charts for free downloads in recent days, beating OpenAI’s ChatGPT for the first time. So when its flagship Claude.ai and Claude Code apps went down for around three hours on Monday (2 March), many bemoaned its absence. There are already reports of further outages as we write, although its latest status update says “a fix has been implemented and we are monitoring the results”.
In a nostalgic post on LinkedIn yesterday, Jonathan McCrea, AI aficionado and regular contributor to Silicon Republic, wrote: “I now feel the same way about Claude being down as I used to about Twitter being down.”
De facto boycott
Last night, treasury secretary Scott Bessent added his voice to the de facto US administration boycott of Anthropic, saying in a post on X that his department would terminate its use of the company’s products.
It follows a directive from president Donald Trump ordering US agencies to “phase out” their use of the AI company’s products, and his defence department labelling Anthropic a “supply-chain risk”, a designation normally reserved for suppliers from non-friendly foreign states. Anthropic has been quick to call the designation “legally unsound” and is expected to challenge the move in the courts.
Reuters is also reporting that it has seen memos to employees at the Department of Health and Human Services asking them to switch to other AI platforms such as ChatGPT and Gemini, and at the State Department saying it was switching the model powering its in-house chatbot, StateChat, from Anthropic to OpenAI.
Financially, the move will surely deal a serious blow to Anthropic in the short term, but some commentators argue it could be a pivotal moment for the company, which may now be seen by many as the relatively ethical choice among the AI giants.
The recent Grok scandal has put a major question mark over xAI’s credentials, and OpenAI’s Sam Altman clearly sees the reputational risk: he has been quick to stress that OpenAI is ensuring some guardrails in its contract with the Pentagon.
On X yesterday Altman claimed that these guardrails would ensure OpenAI would not be “intentionally used for domestic surveillance of US persons and nationals”.
The backstory
If you haven’t been following, Anthropic drew the ire of the US administration after a standoff with the Pentagon, where Anthropic refused to change its safeguards related to using its AI for fully autonomous weapons, or for mass surveillance of US citizens.
On Thursday (February 27), Anthropic’s Dario Amodei released an official statement saying that in “a narrow set of cases, we believe AI can undermine, rather than defend, democratic values”.
“Some uses are also simply outside the bounds of what today’s technology can safely and reliably do,” he said. “Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included.
“We support the use of AI for lawful foreign intelligence and counterintelligence missions. But using these systems for mass domestic surveillance is incompatible with democratic values.”
Amodei went on to say that partially autonomous weapons, like those used today in Ukraine, are vital to the defense of democracy. “But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons. We will not knowingly provide a product that puts America’s warfighters and civilians at risk.”
It’s a debacle that is likely to roll on in coming days, and it remains to be seen whether Anthropic can withstand the unprecedented onslaught from its own government and rely on the support of users for its principled stand. In the short term, its challenge appears to be to meet the current demand on its systems.
Don’t miss out on the knowledge you need to succeed. Sign up for the Daily Brief, Silicon Republic’s digest of need-to-know sci-tech news.
If your iPhone is running an outdated version of iOS, you may be exposed to 23 vulnerabilities that can be exploited by a highly sophisticated toolkit being sold to bad actors.
Update to iOS 26 to avoid a sophisticated hacking toolkit
It is well known that law enforcement agencies and government entities rely on hardware like GrayKey to attempt a bypass of iPhone security. It now seems that the United States government may have created a monstrous exploit tool that is being sold and spread to bad actors. A Wired report details data shared by Google’s Threat Intelligence Group and iVerify. Google explains how the exploit toolkit, named “Coruna,” spread, while iVerify shared its findings tying its origins to the US government.