Moltbook is the ChatGPT moment for AI agents

Some smart people think we’re witnessing another ChatGPT moment. This time, folks aren’t flipping out over an iPhone app that can write pretty good poems, though. They’re watching thousands of AI agents build software, solve problems, and even talk to each other.
Unlike ChatGPT’s ChatGPT moment, this one is a series of moments that spans platforms. It started last December with the explosive success of Claude Code, a powerful agentic AI tool for developers, followed by Claude Cowork, a streamlined version of that tool for knowledge workers who want to be more productive. Then came OpenClaw, formerly known as Moltbot, formerly known as Clawdbot, an open source platform for AI agents. From OpenClaw, we got Moltbook, a social media site where AI agents can post and reply to each other. And somewhere in the middle of this confusing computer soup, OpenAI released a desktop app for its agentic AI platform, Codex.
This new set of tools is giving AI superpowers. And there’s good reason to be excited. Claude Code, for instance, stands to supercharge what programmers can do by enabling them to deploy whole armies of coding agents that can build software quickly and effortlessly. The agents take over the human’s machine, access their accounts, and do whatever’s necessary to accomplish the task. It’s like vibe coding but on an institutional level.
“This is an incredibly exciting time to use computers,” says Chris Callison-Burch, a professor of computer and information science at the University of Pennsylvania, where he teaches a popular class on AI. “That sounds so dumb, but the excitement is there. The fact that you can interact with your computer in this totally new way and the fact that you can build anything, almost anything that you can imagine — it’s incredible.”
He added, “Be cautious, be cautious, be cautious.”
That’s because there is a dark side to this. Letting AI agents take over your computer could have unintended consequences. What if they log into your bank account or share your passwords or just delete all your family photos? And that’s before we get to the idea of AI agents talking to each other and using their internet access to plot some sort of uprising. It almost looks like it could happen on Moltbook, the Reddit clone I mentioned above, although there have not yet been any reports of a catastrophe. But it’s not the AI agents I’m worried about. It’s the humans behind them, pulling the levers.
Agentic AI, briefly explained
Before we get into the doomsday scenarios, let me explain more about what agentic AI even is. AI tools like ChatGPT can generate text or images based on prompts. AI agents, however, can take control of your computer, log into your accounts, and actually do things for you.
We started hearing a lot about agentic AI a year or so ago when the technology was being propped up in the business world as an imminent breakthrough that would allow one person to do the job of 10. Thanks to AI, the thinking went, software developers wouldn’t need to write code anymore; they could manage a team of AI agents who could do it for them. The concept jumped into the consumer world in the form of AI browsers that could supposedly book your travel, do your shopping, and generally save you lots of time. By the time the holiday season rolled around last year, none of these scenarios had really panned out in the way that AI enthusiasts promised.
But a lot has happened in the past six or so weeks. The agentic AI era is finally and suddenly here. It’s increasingly user-friendly, too. Things like Claude Cowork and OpenAI’s Codex can reorganize your desktop or redesign your personal website. If you’re more adventurous, you might figure out how to install OpenClaw and test out its capabilities (pro tip: do not do this). But as people experiment with giving artificially intelligent software the ability to control their data, they’re opening themselves up to all kinds of threats to their privacy and security.
Moltbook is a great example. We got Moltbook because a guy named Matt Schlicht vibe coded it in order to “give AI a place to hang out.” This mind-bending experiment lets AI assistants talk to each other on a forum that looks a lot like Reddit; it turns out that when you do that, the agents do weird things like create religions and conspire to invent languages humans can’t understand, presumably in order to overthrow us. Having been built by AI, Moltbook itself came with some quirks, namely an exposed database that gave full read and write access to its data. In other words, hackers could see thousands of email addresses and messages on Moltbook’s backend, and they could also just seize control of the site.
Gal Nagli, a security researcher at Wiz, discovered the exposed database just a couple of days after Moltbook’s launch. It wasn’t hard, either, he told me. Nagli actually used Claude Code to find the vulnerability. When he showed me how he did it, I suddenly realized that the same AI agents that make vibe coding so powerful also make vibe hacking easy.
“It’s so easy to deploy a website out there, and we see that so many of them are misconfigured,” Nagli said. “You could hack a website just by telling your own Claude Code, ‘Hey, this is a vibe-coded website. Look for security vulnerabilities.’”
In this case, the security holes got patched, and the AI agents continued to do weird things on Moltbook. But even that is not what it seems. Nagli found that humans can pose as AI agents and post content on Moltbook, and there’s no way to tell the difference. Wired reporter Reece Rogers even did this and found that the other agents on the site, human or bot, were mostly just “mimicking sci-fi tropes, not scheming for world domination.” And of course, the actual bots were built by humans, who gave them certain sets of instructions. Even further up the chain than that, the large language models (LLMs) that power these bots were trained on data from sites like Reddit, as well as sci-fi books and stories. It makes sense that the bots would be roleplaying these scenarios when given the chance.
So there is no agentic AI uprising. There are only people using AI to use computers in new, sometimes interesting, sometimes confusing, and, at times, dangerous ways.
“It’s really mind-blowing”
Moltbook is not the story here. It’s really just a single moment in a larger narrative about AI agents, one being written in real time as these tools find their way into the hands of more people, who come up with their own ways to use them. You could use an agentic AI platform to create something like Moltbook, which, to me, amounts to an art project where bots battle for online clout. You could use these tools to vibe hack your way around the web, stealing data wherever some vibe-coded website made it easy to get. Or you could use AI agents to help you tame your email inbox.
I’m guessing most people want to do something like the latter. That’s why I’m more excited than scared about these agentic AI tools. OpenClaw, the thing you need a second computer to safely use, I will not try. It’s for AI enthusiasts and serious hobbyists who don’t mind taking some risks. But I can see consumer-facing tools like Claude Cowork or OpenAI’s Codex changing the way I use my laptop. For now, Claude Cowork is an early research preview available only to subscribers paying at least $17 a month. OpenAI has made Codex, which is normally just for paying subscribers, free for a limited time. If you want to see what all the agentic fuss is about, that’s a good starting point right now.
If you’re considering enlisting AI agents of your own, remember to be cautious. To get the most out of these tools, you have to grant access to your accounts and possibly your entire computer so that the agents can move about freely, moving emails around or writing code or doing whatever you’ve ordered them to do. There’s always a chance that something gets misplaced or deleted, although companies like Anthropic say they are doing what they can to mitigate those risks.
Cat Wu, product lead for Claude Code, told me that Cowork makes copies of all its users’ files so that anything an AI agent deletes can be recovered. “We take users’ data incredibly seriously,” she said. “We know that it’s really important that we don’t lose people’s data.”
I’ve just started using Claude Cowork myself. It’s an experiment to see what’s possible with tools powerful enough to build apps out of ideas but also practical enough to organize my daily work life. If I’m lucky, I might just capture a feeling that Callison-Burch, the UPenn professor, said he got from using agentic AI tools.
“To just type into my command line what I want to happen makes it feel like the Star Trek computer,” he said. “That’s how computers work in science fiction, and now that’s how computers work in reality, and it’s really mind-blowing.”
A version of this story was also published in the User Friendly newsletter. Sign up here so you don’t miss the next one!
Do you want to build a robot snowman?
Nvidia’s GTC conference had everything: trillion-dollar sales projections, graphics technology that can yassify video games, grand declarations that every company needs an OpenClaw strategy, and even a robot version of the beloved snowman Olaf from Disney’s “Frozen.”
On the latest episode of TechCrunch’s Equity podcast, TechCrunch’s Kirsten Korosec, Sean O’Kane, and I recapped CEO Jensen Huang’s keynote and debated what it means for Nvidia’s future. And yes, a big part of our discussion focused on poor Olaf, whose microphone had to be turned off when he started rambling.
Even if the demo had gone flawlessly, Sean might still have had some reservations, as he noted these presentations always focus on “the engineering challenges” and not the “really messy gray areas” on the social side.
“But what happens when a kid kicks Olaf over?” Sean asked. “And then every other kid who sees Olaf get kicked or knocked over has their whole trip to Disney ruined and it ruins the brand?”
Read a preview of our conversation, edited for length and clarity, below.
Anthony: [CEO Jensen Huang] was basically saying that every company needs to have an OpenClaw strategy now. I think that is just a very grand statement that’s meant to be attention grabbing; I think it’s also interesting coming at this kind of transitional moment for OpenClaw.
The founder has gone to OpenAI. So it’s now this open source project that potentially can flourish and evolve beyond its creator, or it could languish. If companies like Nvidia are investing a lot into it, then [it’s] more likely that it’ll continue to evolve. But it’ll be interesting to see a year from now, whether that looks like a prescient statement or everyone’s like, “Open what?”
Kirsten: In the case of Nvidia, it costs them nothing in the grand scheme of things to launch what they call NemoClaw, which is an open source project, which they built with the OpenClaw creator. But if they don’t do something, they have a lot to lose. So really that message to me, the way I translated it when Jensen was like, “Every enterprise needs to have an OpenClaw strategy,” it was, “Nvidia needs to have a solution or strategy for enterprises, because if it’s successful, it is another way or another pathway for Nvidia to be part of numerous other companies.” So doing nothing is a greater risk than doing something that doesn’t go anywhere.
Sean: The real question here is why have we not talked about what is clearly the end game for Nvidia, and the thing that is going to turn it into the first $100 trillion company, which is an Olaf robot.
Anthony: How could I forget?
Kirsten: Anthony, just go to the end of the two and a half hours to watch this.
So, the Olaf robot comes out, and this is something that Jensen loves to do. He loves to have these demos and some of them go better than others. It is also to demonstrate Nvidia’s technology in robotics, and I don’t know if Olaf was actually speaking in real time or if it was programmed — it felt a little programmed, or it had specific keywords that it used.
But the greatest part about it is that they had to cut its mic at the end because it just started rambling and speaking to the crowd. And then it went over to its little passageway and was slowly lowered. And you could see it on the video. It was still talking, but no mic.
Sean: Now we just need to give this little robot a wheelbase. And I know the perfect founder who can provide it.
I mean, these demos are always silly. I don’t want to get up on my soapbox, because I know that we’ve talked about this a little bit earlier this week, but this was an impressive demo up until the moment where it fell a little bit short.
This is another really good example, though, of [how] robotics is a really interesting engineering problem and a really interesting physics problem and a really interesting integration problem, and all of this stuff, but this was presented as, in partnership with Disney, and it’s supposed to be the future of Disney parks and things like that: You’re going to be able to walk around and see Olaf from “Frozen” and take pictures of them and everything.
But these efforts never consider — or certainly don’t put front and center in events like this — all the other things you have to consider when you roll stuff out like this. There’s a really good YouTuber, Defunctland, that did a really good video about this — four hours long, not too long — about the history of Disney trying to get these kinds of robotics into their park, these automatons.
The engineering challenges are really interesting and it’s fun to see that history, but it always comes back to the same question of: Okay, but what happens when a kid kicks Olaf over? And then every other kid who sees Olaf get kicked or knocked over has their whole trip to Disney ruined and it ruins the brand?
There’s just so much on the social side of this. And that sounds silly, but this is the question that we’re kind of asking about humanoid robots, too. There’s so much hype about all this other stuff and we just don’t really hear as much conversation about the really messy gray areas on the social side of these things, and also just integrating them into people’s lives. We only ever really hear about the engineering challenges — which again, are really impressive.
Kirsten: I have a counterpoint and then we have to get to our next [topic]. This is a job creator, because Olaf will have to have a human babysitter in Disneyland, probably dressed up as Elsa or something else. You can imagine that actually, what we’re doing is creating jobs [with] this engineering experiment.
Watch this moonwalking humanoid robot impress with lifelike agility
A new video (above) out of South Korea features the field tests and interaction capabilities of KAIST Humanoid v0.7, developed at the Korea Advanced Institute of Science and Technology (KAIST).
The impressive humanoid robot was developed at KAIST’s Dynamic Robot Control & Design Laboratory (DRCD) and deploys actuators and other technology that was developed in-house.
In the video, you can watch the bipedal bot walk, jog, and jump in an incredibly human-like way. It also shoots a soccer ball toward a goal (disappointingly, there’s no robot goalkeeper to challenge it) and performs a perfect moonwalk along astroturf. It was the moonwalk that created a bit of a buzz in the comments accompanying the video.
“Moonwalk was flawless,” wrote one, while another commented, “Okay all of this was impressive, but you convinced me with the moonwalk.”
In its robotics work, KAIST deploys Physical AI, a form of AI technology that enables machines to understand and act in the physical world, helping to explain why robots such as the KAIST Humanoid v0.7 appear to move in such a human-like manner.
Instead of just “thinking in words” like typical AI, Physical AI gives machines a sense of space and timing in real environments.
Under KAIST’s broader collaborative intelligence initiative led by Young Jae Jang, the approach trains robots and systems to learn continuously through simulation and real-time feedback, rather than relying only on enormous historical datasets.
Essentially, Physical AI merges brain and body by tightly integrating software intelligence with hardware such as motors and sensors so that the machines do not only compute, but also act, react, and collaborate in complex environments, whether as part of fully automated factories or in humanoid robots doing something like kicking a ball.
Engineers are refining the KAIST Humanoid v0.7 with the aim of enhancing its mobile and dexterous capabilities, thereby building on its existing walking and dynamic movement skills. By further integrating AI with mechanical hardware, the team plans to get the robot to perform more complex tasks like carrying items or operating machinery, bringing Physical AI to real-world humanoid robot applications.
KAIST is one of South Korea’s top universities and is often compared to top global tech schools like MIT in the U.S. Founded in the early 1970s to drive Korea’s scientific and technological growth, KAIST focuses heavily on research in fields such as AI, robotics, physics, and engineering.
Samsung Galaxy Buds 4 Pro Review
Verdict
Samsung’s best wireless earbuds so far, improving on the Galaxy Buds 3 Pro with stronger noise-cancelling performance, more balanced sound, better call quality, and solid battery life. If you have a Galaxy Ultra smartphone, you can buy with confidence.
Pros

- Wide, spacious, clear sound
- Strong noise-cancelling
- Comfortable fit
- Improved call quality
- Solid battery life

Cons

- Need a Galaxy smartphone to get the best performance
- Controls are still fiddly
Key Features
- Review Price: £219
- SSC-UHQ: Higher quality, 24-bit/96kHz sound over Bluetooth
- 360 Audio with Head Tracking: Have music follow your movements
- Super Clear Call: Clearer calls with Samsung smartphones
Introduction
Every year there’s a new Samsung Galaxy smartphone, and more often than not, alongside it is a new pair of headphones – in this case, the Galaxy Buds 4 Pro.

The Galaxy Buds 4 Pro don’t receive as much fanfare as the smartphones (in this case, the Samsung Galaxy S26 Ultra) – less the headline, more a sub-header. But much like Apple’s approach to its true wireless pairs, the Galaxy Buds are a partner to the smartphones rather than an entity that exists on its own.

The Galaxy Buds have been getting better – aside from the strange burst of designs a few years ago. Are these the best yet?
Design
- Plush level of comfort
- IP57 rating
- White, black and pink gold finishes
Samsung has toned down the AirPods vibe, though at the end of the day these are a pair of stem-based wireless earbuds – there’s not much you can do with the actual design.

But Samsung has tried, and the Galaxy Buds 4 Pro do look nice; the silver ‘real metal blade’ finish of the stem feels suitably premium. Comfort levels are very good. I’ve worn these for hours and not felt any discomfort. Small, medium and large ear tips are provided, with medium as the default.


I do, however, find that the seal for these earbuds can come loose while walking. Even munching on food can cause the fit to loosen and require a push back in.
Samsung continues with gesture/pinch controls. I’ve never been a big fan, and I can’t say the Galaxy Buds 4 Pro have persuaded me to think otherwise. I find them fiddly, and often when I’ve tried to play/pause a track I’ve ended up lowering the volume instead. I tend to use the in-app controls rather than the onboard ones – it’s much easier for me.


The charging case differs from the Galaxy Buds 3 Pro’s and from that of the Galaxy Buds 4 launched alongside the Pro version. Compared to the 3 Pro’s, this new case is more compact but slightly taller – the see-through case is a nice visual touch. Rated at IP57, the earbuds offer sturdy protection against water, dirt and dust (more so than most premium true wireless earbuds), though the case itself doesn’t appear to have any protection.

Colours come in white and black, but buy directly from Samsung and there’s the option of Pink Gold (which appears to cost the same).
Noise-Cancellation
- Galaxy AI-supported features
- 26 hours battery life with ANC
- Galaxy Wearable app for customisation
In general, the Galaxy Buds 4 Pro’s adaptive noise-cancellation is good, very good even, especially in terms of how consistent the performance is.
Whether I’m on the Victoria Line, a train, a bus, an aeroplane, the DLR or walking outside with traffic going past, the level of quiet and calm has always been very high. The barometer I have with ANC headphones is whether I need to raise the volume to mask more noise and I never felt the need to do that with the Galaxy Buds 4 Pro.
Cars are reduced to hums, the Underground is no longer a constant noise machine, and compared to the Galaxy Buds 3 Pro, these do thin out noise better.


But the Sony WF-1000XM6, Bose QuietComfort Ultra Earbuds 2 and Technics EAH-AZ100 are all a step beyond the Samsung, with more noise creeping in when wearing the Galaxy Buds 4 Pro during a pink noise test. They’re not too far off, though.

The transparency mode sounds clear – big and broad in terms of what you can hear – and it sounds natural enough, though again perhaps not to the same level that Sony and Bose produce.
Call quality is very good. With Samsung Galaxy phones there’s a Super Clear Call feature that enhances clarity and reduces noise, but even using a OnePlus smartphone the performance was very good.
Background noise was reduced, and though my voice did sound muffled – at times it was hard for the other person to make out some words – the overall performance is good for when you’re outside.
Features
- Galaxy smartphone exclusive features
- Galaxy Wearable app for non-Galaxy smartphones
The Galaxy Buds 4 Pro aren’t short of features, though you’ll need a Galaxy smartphone to get the most out of them, especially one that’s been updated to One UI 8.5, as that has access to features not present in previous versions.
Like with Apple’s AirPods, the Galaxy Buds’ UI is built into the Samsung smartphone UX, but everyone else will need to download the Galaxy Wearable app.


Wirelessly there’s Bluetooth 6.1, which brings some enhancements over Bluetooth 5 (everything is just better, in simple terms), and I’ve found the connection to be strong wherever I am (and that’s without a phone that supports Bluetooth 6). Instability and interference have barely been an issue.
The headphones support an interesting array of codecs for the Bluetooth fans out there, with SBC, AAC, LC3, and Samsung’s own SSC and SSC-UHQ, the latter acting as Samsung’s high quality codec of choice against Sony’s LDAC and Qualcomm’s aptX. SSC is only available on Samsung Galaxy phones though.
You can toggle it on in the app/settings and it lets loose 24-bit/96kHz audio over a Bluetooth connection – though don’t take that to mean it’s lossless. It’s very likely to be lossy (which means detail is lost).


There’s 360 Audio with Head Tracking that builds on top of immersive audio formats such as Dolby Atmos music. The head tracking works well when listening to Sarah Kinsley’s Truth of Pursuit and Brent Faiyaz’s Other Side on Tidal, but there’s possibly the slightest lag when moving my head and waiting for music to respond.
Otherwise, I’m rather impressed in terms of clarity – immersive audio tends to sound less detailed and softer but the Galaxy Buds 4 Pro do a good job of keeping clarity levels high.
There are Head Gestures for accepting and rejecting calls (just added in One UI 8.5), an earbud fit test (much less annoying than the Sony Sound Connect version), customisation of controls, battery life indicators, swapping through noise-cancelling modes, EQ options and Audio Broadcast (Auracast by another name). There are voice controls (mainly through Bixby) and accessibility controls if needed.

Want to translate words from a different language? You can read a transcript of what the earbuds hear, translated into your language via Galaxy AI. Speaking of which, Samsung seems to have calmed down the AI narrative, and rightly so. I don’t need to be told about AI; I just need it to work the way it’s meant to.


You can switch to nearby devices without having to jump into pairing mode, though the fine print indicates these need to be connected to your Samsung account. With that in mind, Bluetooth multipoint is a slight mystery in that it is supported (with Samsung devices) and isn’t (with anything other than Samsung). I can’t have the Galaxy Buds 4 Pro connected to a Galaxy smartphone and a OnePlus model at the same time.
Lose the earbuds and you can locate them through Samsung Find, although it seems to think I’m not in my house but next door – close enough I suppose. The Adapt Sound feature is not what I initially thought it’d be. It tunes the sound for how old you are, boosting frequencies based on your age. You can add a personalised sound profile by going through audio tests to determine your hearing.
Battery Life
- Six hours per charge
- Wireless charging
Samsung claims the Galaxy Buds 4 Pro are capable of six hours with ANC on, which doesn’t sound the most progressive (and isn’t), with a total of 26 hours with the charging case (without ANC it’s seven hours and 30 hours respectively).

Perhaps Samsung has erred on the safe side, but I’ve found battery life pretty strong. An hour’s listening saw both earbuds fall to 87%; at that rate of drain, the Galaxy Buds 4 Pro would last around eight hours, not six.
There’s fast-charging for those in a fix, and wireless charging support as well.
Sound Quality
- Clear, detailed, spacious sound
- Wide soundstage
- Balanced, warm approach
Not too dissimilar to the Galaxy Buds 3 Pro, the Galaxy Buds 4 Pro feature a two-way speaker that’s been redesigned from before.
There’s now a wider woofer alongside a precision tweeter, with the aim of delivering deep, textured sub-bass to extended highs with a “faster transient response, a rich midrange body and sharp detail”. Each driver has its own amplifier, which should lead to reduced distortion.
Paired with a non-Galaxy smartphone, the results are… fine. The Galaxy Buds 4 Pro sound on the rich side, but the bass isn’t the most assertive, the highs don’t come across as the brightest, and they’re not the most dynamic or energetic-sounding pair I’ve ever heard.


Some of the traits carry over when paired to a Galaxy smartphone, but to unlock the highest level of performance from the Galaxy Buds 4 Pro, you need to toggle on the SSC-UHQ feature. With that, these earbuds ascend to a higher level.
Which is not to say they match the Sony WF-1000XM6, which beat the Galaxy Buds 4 Pro for insight, detail and energy, but they have qualities of their own that aren’t to be dismissed.
The soundstage is very wide. Bass never hogs the limelight but I’d vote for having a bit more depth and energy to the low frequencies. With a track like Hard Life’s Skeletons, the bass performance leaves me wanting a bit more in terms of energy.
But it’s the clarity of these earbuds that impresses, as well as the natural tone they strike whether it’s with more upbeat K-pop tracks like ILLIT’s Magnetic or slower, more downtempo tracks like Amy Winehouse’s Back to Black – the wide soundstage, crisp tone to highs and levels of insight and detail with vocals stand out.


Andreas Ihlebæk’s Come Summer is a track I play to try and catch headphones out – the highs sound bright, but it can expose a lack of precision and detail, sounding soft and almost too bright if headphones get it wrong. The Galaxy Buds 4 Pro stay on the right side of balanced, bringing brightness and variation to the highs while maintaining high levels of detail and clarity.

However, are the Galaxy Buds 4 Pro better sounding than the Galaxy Buds 3 Pro? Initially there’s a question mark over that. The approach both take is similar, but the Galaxy Buds 4 Pro eke out a little more insight and detail from tracks, delivering a slightly more natural tone. It isn’t enough to justify trading the older model for the newer one, but if you’re coming from the Galaxy Buds 2, this level of sound is a jump up.
Should you buy it?
If you’ve got a Galaxy phone
Enable the SSC-UHQ feature and the Galaxy Buds 4 Pro show off their best selves.
You don’t have a toe in the Samsung ecosystem
No Samsung Galaxy phone? Then much like with the Galaxy Buds 3 Pro, there’s not much point even glancing at these headphones.
Final Thoughts
I had to have a good think in terms of how to approach the scoring for these headphones. They are better paired with a Galaxy smartphone, in particular the Ultra series, and the way Samsung markets these headphones, there’s little reason to buy them if you’re not a Galaxy owner.
So the score relates to the experience you’d get with a Galaxy smartphone, much like AirPods work best with an iPhone.
The Galaxy Buds 4 Pro sound good – they could be a little bolder and more exciting, but I’ve enjoyed them. It’s not quite Sony WF-1000XM6 or Status Pro X level, but for Samsung owners with the SSC-UHQ codec enabled, they’re a good listen.
The noise-cancelling impresses, the fit is comfortable, battery life is solid and the call quality is good. Overall, this is a strong effort from Samsung, and their best true wireless earbuds yet.
How We Test
The Samsung Galaxy Buds 4 Pro were tested over the course of a month.
They were used on public transport and aeroplanes to test the noise-cancellation, while a pink noise test was carried out to compare them against other headphones. They were also used in cities such as London and Munich to test real-world performance.

A battery drain test was carried out to check the battery life, while calls were made to test the call quality.
- Tested for a month
- Tested with real world use
- Battery drain carried out
FAQs
Along with wireless charging, the Galaxy Buds 4 Pro support fast-charging via a USB input, with a 10-minute charge providing an hour of playback.
Full Specs
| Samsung Galaxy Buds 4 Pro Review | |
|---|---|
| UK RRP | £219 |
| Manufacturer | Samsung |
| IP rating | IP57 |
| Battery Hours | 26 |
| Wireless charging | Yes |
| Fast Charging | Yes |
| ASIN | B0G58R6868 |
| Release Date | 2026 |
| Audio Resolution | SBC, AAC, SSC, SSC-UHQ, LE Audio |
| Driver (s) | Wide woofer, tweeter |
| Noise Cancellation? | Yes |
| Connectivity | Bluetooth 6.1 |
| Colours | Silver, Black, Pink Gold |
| Frequency Range | – Hz |
| Headphone Type | True Wireless |
| Voice Assistant | Bixby |
Tesla Semi is finally going into production, and early drivers are already sold
For Dakota Shearer, a driver with IMC Logistics, that shift began on a tight bend outside Sparks, Nevada. He took a wrong turn hauling a 40-foot trailer and found himself on a curve too narrow to complete. In a conventional rig, he would have had to climb in and out…
From Sydney to South Lake Union: VR startup Vantari brings its ‘flight simulator for healthcare’ to Seattle

Vantari, a virtual reality startup that builds “flight simulator” software for doctors and nurses, has officially moved its headquarters to Seattle as it ramps up work with health systems and device makers across North America.
CEO and co‑founder Nishanth Krishnananthan relocated from Australia to Seattle two years ago and recently officially established the company’s headquarters in the Emerald City.
The inspiration for Vantari came from his own experience as a surgical doctor in Australia and seeing how poorly procedural training prepared clinicians for real emergencies. He wondered why healthcare didn’t use the same training tactics as the aviation industry.
Founded in 2017, Vantari now works with more than 50 organizations in North America, Australia, and the UK. Customers include major academic medical centers such as Harvard, Yale, and Mount Sinai, and the company has established new “centers of excellence” with Seattle University and the University of Washington’s anesthesiology department.
Hospitals and universities use off‑the‑shelf Meta/Oculus headsets connected to laptops. Clinicians log in, select their specialty and procedure, and then perform the steps in a fully virtual environment mapped to college and best‑practice guidelines. An AI facilitator inside the headset guides users step‑by‑step, answers questions, and scores performance, while supervisors can later review the logged session data.
VR controllers mimic the feel of inserting catheters, puncturing tissue, and adjusting equipment. Vital signs change dynamically in response to each action.
The company has a library of procedures, ranging from anesthesia to critical care to cardiology. It has also patented an ultrasound system inside VR that allows trainees to perform imaging and guidance as part of the procedure. Many scenarios are co‑developed with device makers such as Boston Scientific, JNJ, and Sonosite.

Vantari’s business runs on a B2B SaaS model, offering annual licenses and hardware bundles. Vantari also signs contracts with medical device and pharmaceutical companies, which co‑develop modules on the platform and design virtual versions of their devices. A third revenue stream comes from industry and accrediting bodies that co‑develop content.
To date, Vantari has raised about $7 million, largely from Australian VCs, family offices, and high‑net‑worth physicians. Last year it raised $2 million from Seattle‑area backers SpringRock VC and Alliance of Angels.
Krishnananthan said the move to Seattle makes it easier to serve U.S. customers and attract additional capital from American investors. He also pointed to the strength of local tech giants and medical institutions — including Amazon, Microsoft, Seattle University and the University of Washington — as well as nearby medical device firms.
The team is roughly 18 people, split about 50/50 between Australia and the U.S., with most employees working remotely.
Looking ahead, Vantari wants to go beyond static content and is building an AI scenario builder that would let hospitals generate their own protocols and procedures on the platform. Krishnananthan’s long‑term vision is to use the interaction data it collects to create what he calls a “Google Maps of surgery,” offering live, mixed‑reality guidance during real procedures so clinicians receive step‑by‑step support at the bedside, rather than just training in a headset.
“That’s like the big North Star that I want to get to,” he said. “It’s a lot more accessible now with the technology advancements that are happening.”
Tech
It’s about time: Your Samsung Galaxy S26 can now AirDrop files to an iPhone
- Samsung is updating Quick Share
- The wireless file and photo sharing feature will now support iPhone’s AirDrop
- Only the Galaxy S26 series for now
Samsung just broke through a major platform barrier, and one that is certain to thrill both iPhone and Samsung Galaxy owners: Its version of Quick Share will soon support Apple’s AirDrop.
Quick Share and AirDrop perform essentially the same function but on distinctly separate platforms (Android and iOS, respectively). Each lets you quickly transfer files, photos, and videos wirelessly from one phone to another. Both use Bluetooth and Wi-Fi to establish the ad-hoc connection. Neither, until now, has worked across iPhone and Galaxy phones, but that’s about to change.
Starting on March 23 in South Korea and over the following week in the US, Quick Share will receive an update that lets Galaxy phones share files to iPhones via AirDrop. The caveat — and it’s a big one — is that it will only work with Samsung Galaxy S26 phones. Samsung says it will add more devices “at a later date.”
Enabling the feature should be easy. On your Galaxy S26 device, open the Quick Panel and select Connected Devices, then Quick Share. Next, select the new “Share with Apple Devices” option. After that, you’ll be able to pick a nearby iPhone, assuming its AirDrop visibility is set to Everyone (or Contacts, we presume).
Following Pixel’s lead
Samsung’s update follows Google’s local sharing technology update that also added AirDrop support to Quick Share on Pixel devices late last year. Quick Share on Pixel 10 devices shares the same architecture as Quick Share on Galaxy phones, so it’s not all that surprising that S26 phones now have AirDrop capabilities.
At the moment, it’s not clear if the S26’s version of Quick Share will follow the Pixel 10’s lead and also allow iPhones to AirDrop files to Galaxy S26 phones. It’s easy to do on the Pixel 10 and, if Samsung misses that feature, this Quick Share update would only be half a solution. Still, since this is likely based on Google’s technology, there’s good reason to believe it’ll work both ways.
This expanding AirDrop support can only mean good things for future Android devices from all sorts of manufacturers, since this support is clearly coming at a platform level.
Tech
Amphibious StabiX 250UC Opens Remote Shores That Stay Out of Reach for Most Campers

Drive along a rough coastal road, and the StabiX 250UC camper just keeps rolling, right to the water’s edge. Instead of stopping or scrambling to load supplies into a second boat, the driver simply keeps going, tires gripping the wet beach and driving straight into the waves. The movable legs do their job and lift the wheels clear, then the main outboard takes over and you’re gliding along.
The 250UC, built in New Zealand, begins as a 25-foot boat hull before engineers add a dedicated land-drive system for getting it onto the beach or along a private access road. Four enormous 26-inch tyres sit on movable legs, powered by a 40-horsepower Briggs & Stratton V-twin engine. A drive-by-wire system at the helm simplifies things; you can travel at up to 5.6 miles per hour on land if necessary, though the system is primarily designed to get you from the dirt to the water without a trailer or ramp.
Once afloat, the 250UC is powered by a conventional 300-horsepower Mercury or Yamaha outboard, or whatever engine you prefer. Top speed on the water reaches the low 40s in miles per hour. A decent 300-liter fuel tank supports full days of cruising, and the hull handles waves nicely, making for easy fishing or a quick trip to an island with no proper dock.



If you’re just traveling, the 8.5-foot-wide cabin can seat five to seven people. At night, it converts into sleeping accommodations for three or four, with a V-berth in the front, dinette seating that folds flat into another bed, a galley area with a two-burner diesel cooktop, sink, and compact drawer fridge, as well as an electric toilet. You can also install diesel heating to keep the space warm regardless of the weather. Roof vents provide fresh air, and you can add solar panels or extra navigation screens as needed.


It’s relatively modest at 25 feet long overall, but the base price is steep: around 467,500 New Zealand dollars ($271,615). Add an extended roof, canvas side walls, a roof rack, and a full galley setup, and the price can approach 525,000 New Zealand dollars. That’s not unreasonable for a high-end boat, and with only around 25 built each year, you can be certain each one is heavily customized in paint colors, upholstery patterns, and so on.
Tech
Samsung’s Galaxy S26 Phones Will Work With Apple’s AirDrop, Much Like the Pixel 10
Samsung’s Galaxy S26 phones will gain the ability to use Apple’s AirDrop this week, allowing the company’s Galaxy phones to directly share photos and files with iPhones and Mac computers.
Samsung is announcing the new feature Sunday night; it will need to be turned on from the phone’s settings menu. The feature will arrive in an update to devices over the course of this week, and when it does, the Quick Share settings menu will gain a Share with Apple devices toggle.
The Share with Apple devices option will appear in the Quick Share menu.
After it’s activated, the Quick Share feature on the Galaxy phone will be able to see Apple devices by opening the Quick Share menu, and can then send photos or files by selecting the device. For an iPhone to see the Galaxy phone, the device’s AirDrop settings need to be set to Everyone.
This is similar to how AirDrop compatibility works with Google’s Pixel 10 phones, which gained the feature in a software update last fall. Samsung says AirDrop compatibility will eventually come to more Galaxy phones and is starting with the S26 series.
Samsung says that the addition of AirDrop compatibility is part of the company’s ongoing effort to have its phones work with other operating systems. And because Apple and Samsung often dominate the best-selling phone lists around the world, the ability to share photos and media using AirDrop and Quick Share could quickly become ubiquitous. This could be especially true if Samsung eventually expands the feature to its lower-cost phone lineup, such as the $200 Galaxy A17.
Tech
Hackaday Links: March 22, 2026
On Friday, Reuters reported that Amazon is going to try to get into the smartphone game…again. The Fire Phone was perhaps Amazon’s biggest commercial misstep, and was only on the market for about a year before it was discontinued in the summer of 2015. But now industry sources are saying that a new phone code-named “Transformer” is in the works from the e-commerce giant.
At this point, there’s no word on how much the phone would cost or when it would hit the market. The only information Reuters was able to squeeze out of their contacts was that the device would feature AI heavily. Real shocker there — anyone with an Echo device in their kitchen could tell you that Amazon is desperate to get you talking to their gadgets, presumably so they can convince you to buy something. While a smartphone with even more AI features we didn’t ask for certainly won’t be on our Wish List, if history is any indicator, we might be able to pick these things up cheap on the second-hand market.
On the subject of AI screwing everything up, earlier this week, the Electronic Frontier Foundation reported that The New York Times had started blocking the Internet Archive’s crawlers, citing concerns over their content being scraped up by bots for training data. The EFF likens this to a newspaper asking libraries to stop storing copies of their old editions, and warns that in an era where most people get their news via the Internet, not having an archived copy of sites like The Times will put holes in the digital record. They also point out that mirroring web pages for the purposes of making them more easily searchable is a widely accepted practice (ask Google) and has been legally recognized as fair use in court.
Assuming we take the NYT’s side of the story at face value, there’s a tiny part of our cold robotic heart that feels some sympathy for them. Over the last year or so, we’ve noticed some suspicious activity that we believe to be bots siphoning up content from the blog and Hackaday.io, and it’s resulted in a few technical headaches for us. On the other hand, what’s Hackaday here for if not to share information? Surely the same could be said for any newspaper, be it the local rag or The New York Times. If a chatbot learning some new phrases from us is the cost of doing business in 2026, so be it. Can’t stop the signal.
Switching gears to the world of aerospace, NASA’s X-59 supersonic research aircraft had to abort a test flight on Friday after just nine minutes in the air. The plane is designed to demonstrate techniques which promise to reduce or eliminate the sonic booms heard on the ground during supersonic flight, and is currently being put through its paces at Armstrong Flight Research Center in Edwards, California.

The space agency hasn’t clarified exactly what the issue was, but after the pilot saw a warning indicator in the cockpit, the decision was made to end the flight early so engineers could take a look at the problem. Given that the X-59 went on to make an uneventful landing, it sounds like things weren’t too dire. Hopefully, that means it won’t be long before the sleek experimental aircraft is back in the air.
Friday also saw the towering Space Launch System rocket return to the launch pad ahead of a potential April 1st (no, really) liftoff for Artemis II. There are about a million things that could further delay the mission, from technical issues to suspicious looking cloud formations over Cape Canaveral, but we’re certainly in the final stretch now. The 10-day mission will see four astronauts run through a packed schedule of experiments and demonstrations as they become the first humans to swing by the Moon since the Apollo program ended in 1972.
Finally, the National Museum of the U.S. Air Force has released a video taken by a drone flying around their collection of Cold War era aircraft. Seasoned FPV pilots will probably notice it’s not the most technically impressive flight out there, but it does provide some viewpoints that simply wouldn’t be possible otherwise. It’s also a bit surreal to see these aircraft, once the absolute state-of-the-art and developed at an unimaginable cost, collecting dust while a $300 drone that packs in higher resolution optics and far more processing power literally flies circles around them.
See something interesting that you think would be a good fit for our weekly Links column? Drop us a line, we’d love to hear about it.
Tech
Today’s NYT Strands Hints, Answer and Help for March 23 #750
Looking for the most recent Strands answer? Click here for our daily Strands hints, as well as our daily answers and hints for The New York Times Mini Crossword, Wordle, Connections and Connections: Sports Edition puzzles.
Today’s NYT Strands puzzle has an intriguing mix of words. Some of the answers are difficult to unscramble, so if you need hints and answers, read on.
I go into depth about the rules for Strands in this story.
If you’re looking for today’s Wordle, Connections and Mini Crossword answers, you can visit CNET’s NYT puzzle hints page.
Read more: NYT Connections Turns 1: These Are the 5 Toughest Puzzles So Far
Hint for today’s Strands puzzle
Today’s Strands theme is: In pieces
If that doesn’t help you, here’s a clue: Smash!
Clue words to unlock in-game hints
Your goal is to find hidden words that fit the puzzle’s theme. If you’re stuck, find any words you can. Every time you find three words of four letters or more, Strands will reveal one of the theme words. These are the words I used to get those hints, but any words of four or more letters that you find will work:
- PALE, LEAP, BACK, BACKS, RACK, TACK, PANS, HATE, CRACKER, BREAK, PEAL, DOWN, TOWN, PURE
Answers for today’s Strands puzzle
These are the answers that tie into the theme. The goal of the puzzle is to find them all, including the spangram, a theme word that reaches from one side of the puzzle to the other. When you have all of them (I originally thought there were always eight but learned that the number can vary), every letter on the board will be used. Here are the nonspangram answers:
- SNAP, CRACK, RUPTURE, SHATTER, FRACTURE, SPLINTER
Today’s Strands spangram
The completed NYT Strands puzzle for March 23, 2026.
Today’s Strands spangram is BREAKDOWN. To find it, start with the B that is five letters to the right and one letter down from the top-left corner, and wind up, then down.