
Tech

Amphibious StabiX 250UC Opens Remote Shores That Stay Out of Reach for Most Campers

StabiX 250UC Amphibious Camper
Drive along a rough coastal road and the StabiX 250UC camper just keeps rolling, right to the water’s edge. Instead of stopping and scrambling to load supplies into a second boat, the driver simply keeps going, tires gripping the wet beach as the camper drives straight into the waves. The movable legs do their job and lift the wheels clear, then the main outboard takes over and you’re gliding along.



The 250UC, built in New Zealand, begins as a 25-foot boat hull before engineers add a dedicated land-drive system for getting it onto the beach or along a private access road: four enormous 26-inch tires on movable legs, powered by a 40-horsepower Briggs & Stratton V-twin engine. A drive-by-wire system at the helm simplifies things; you can travel at up to 5.6 miles per hour on land if necessary, though the system is primarily designed to get you from the dirt to the water without the need for a trailer or ramp.



StabiX 250UC Amphibious Camper
Once afloat, the 250UC is powered by a conventional 300-horsepower Mercury or Yamaha outboard, or whatever engine you prefer. Water speed tops out in the low 40s in miles per hour. A decent 300-liter fuel tank keeps you out for full days of cruising, and the hull handles the waves nicely, making for easy fishing or a quick journey to an island with no proper docks.

StabiX 250UC Amphibious Camper Interior
If you’re just traveling, the 8.5-foot-wide cabin can accommodate five to seven people. At night, it all converts into sleeping accommodations for three or four, with a V-berth in the front, dinette seating that folds flat into another bed, a galley area with a two-burner diesel cooktop, sink, and compact drawer fridge, as well as an electric toilet. You can also install diesel heating to keep the space warm regardless of the weather. Roof vents provide fresh air, and you can add solar panels or extra navigation screens as needed.

StabiX 250UC Amphibious Camper
It’s relatively modest at 25 feet long overall, but the base price is on the high side, at around 467,500 New Zealand dollars ($271,615). Add an expanded roof, canvas side walls, a roof rack, and a full galley setup, and the price can reach about 525,000 New Zealand dollars. That’s not out of line for a high-end boat, and with only around 25 built each year, you can be certain that each one is heavily customized in terms of paint colors, upholstery patterns, and so on.


Do you want to build a robot snowman?


Nvidia’s GTC conference had everything: trillion dollar sales projections, graphics technology that can yassify video games, grand declarations that every company needs an OpenClaw strategy, and even a robot version of the beloved snowman Olaf from Disney’s “Frozen.”

On the latest episode of TechCrunch’s Equity podcast, TechCrunch’s Kirsten Korosec, Sean O’Kane, and I recapped CEO Jensen Huang’s keynote and debated what it means for Nvidia’s future. And yes, a big part of our discussion focused on poor Olaf, whose microphone had to be turned off when he started rambling.

Even if the demo had gone flawlessly, Sean might still have had some reservations, as he noted these presentations always focus on “the engineering challenges” and not the “really messy gray areas” on the social side.

“But what happens when a kid kicks Olaf over?” Sean asked. “And then every other kid who sees Olaf get kicked or knocked over has their whole trip to Disney ruined and it ruins the brand?”


Read a preview of our conversation, edited for length and clarity, below.

Anthony: [CEO Jensen Huang] was basically saying that every company needs to have an OpenClaw strategy now. I think that is just a very grand statement that’s meant to be attention grabbing; I think it’s also interesting coming at this kind of transitional moment for OpenClaw. 

The founder has gone to OpenAI. So it’s now this open source project that potentially can flourish and evolve beyond its creator, or it could languish. If companies like Nvidia are investing a lot into it, then [it’s] more likely that it’ll continue to evolve. But it’ll be interesting to see a year from now, whether that looks like a prescient statement or everyone’s like, “Open what?”


Kirsten: In the case of Nvidia, it costs them nothing in the grand scheme of things to launch what they call NemoClaw, which is an open source project, which they built with the OpenClaw creator. But if they don’t do something, they have a lot to lose. So really that message to me, the way I translated it when Jensen was like, “Every enterprise needs to have an OpenClaw strategy,” it was, “Nvidia needs to have a solution or strategy for enterprises, because if it’s successful, it is another way or another pathway for Nvidia to be part of numerous other companies.” So doing nothing is a greater risk than doing something that doesn’t go anywhere.


Sean: The real question here is why have we not talked about what is clearly the end game for Nvidia, and the thing that is going to turn it into the first $100 trillion company, which is an Olaf robot.

Anthony: How could I forget?

Kirsten: Anthony, just go to the end of the two and a half hours to watch this.

So, the Olaf robot comes out, and this is something that Jensen loves to do. He loves to have these demos and some of them go better than others. It is also to demonstrate Nvidia’s technology in robotics, and I don’t know if Olaf was actually speaking in real time or if it was programmed — it felt a little programmed, or it had specific keywords that it used.


But the greatest part about it is that they had to cut its mic at the end because it just started rambling and speaking to the crowd. And then it went over to its little passageway and was slowly lowered. And you could see it on the video. It was still talking, but no mic.

Sean: Now we just need to give this little robot a wheelbase. And I know the perfect founder who can provide it. 

I mean, these demos are always silly. I don’t want to get up on my soapbox, because I know that we’ve talked about this a little bit earlier this week, but this was an impressive demo up until the moment where it fell a little bit short.

This is another really good example, though, of [how] robotics is a really interesting engineering problem and a really interesting physics problem and a really interesting integration problem, and all of this stuff, but this was presented as, in partnership with Disney, and it’s supposed to be the future of Disney parks and things like that: You’re going to be able to walk around and see Olaf from “Frozen” and take pictures of them and everything.


But these efforts never consider — or certainly don’t put front and center in events like this — all the other things you have to consider when you roll stuff out like this. There’s a really good YouTuber, Defunctland, that did a really good video about this — four hours long, not too long — about the history of Disney trying to get these kinds of robotics into their park, these automatons.

The engineering challenges are really interesting and it’s fun to see that history, but it always comes back to the same question of: Okay, but what happens when a kid kicks Olaf over? And then every other kid who sees Olaf get kicked or knocked over has their whole trip to Disney ruined and it ruins the brand?

There’s just so much on the social side of this. And that sounds silly, but this is the question that we’re kind of asking about humanoid robots, too. There’s so much hype about all this other stuff and we just don’t really hear as much conversation about the really messy gray areas on the social side of these things, and also just integrating them into people’s lives. We only ever really hear about the engineering challenges — which again, are really impressive.

Kirsten: I have a counterpoint and then we have to get to our next [topic]. This is a job creator, because Olaf will have to have a human babysitter in Disneyland, probably dressed up as Elsa or something else. You can imagine that actually, what we’re doing is creating jobs [with] this engineering experiment.


Watch this moonwalking humanoid robot impress with lifelike agility


A new video (above) out of South Korea features the field tests and interaction capabilities of KAIST Humanoid v0.7, developed at the Korea Advanced Institute of Science and Technology (KAIST).

The impressive humanoid robot was developed at KAIST’s Dynamic Robot Control & Design Laboratory (DRCD) and deploys actuators and other technology that was developed in-house.

In the video, you can watch the bipedal bot walk, jog, and jump in an incredibly human-like way. It also shoots a soccer ball toward a goal (disappointingly, there’s no robot goalkeeper to challenge it) and performs a perfect moonwalk on astroturf. It was the moonwalk that created a bit of a buzz in the comments accompanying the video.

“Moonwalk was flawless,” wrote one, while another commented, “Okay all of this was impressive, but you convinced me with the moonwalk.”


In its robotics work, KAIST deploys Physical AI, a form of AI technology that enables machines to understand and act in the physical world, helping to explain why robots such as the KAIST Humanoid v0.7 appear to move in such a human-like manner.

Instead of just “thinking in words” like typical AI, Physical AI gives machines a sense of space and timing in real environments.

Under KAIST’s broader collaborative intelligence initiative led by Young Jae Jang, the approach trains robots and systems to learn continuously through simulation and real time feedback, rather than relying only on enormous historical datasets.

Essentially, Physical AI merges brain and body by tightly integrating software intelligence with hardware such as motors and sensors so that the machines do not only compute, but also act, react, and collaborate in complex environments, whether as part of fully automated factories or in humanoid robots doing something like kicking a ball.
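The “act, react, and collaborate” loop described above can be pictured as a closed feedback cycle between sensing and actuation. As a deliberately toy sketch (this is not KAIST’s software, just the control-loop shape in miniature):

```python
# Toy sense-act feedback loop: a proportional controller nudges a
# one-dimensional "body" toward a target using continuous feedback,
# the way Physical AI couples sensing and actuation. Entirely
# illustrative; no relation to any real robot stack.
def run_controller(position: float, target: float,
                   gain: float = 0.5, steps: int = 20) -> float:
    for _ in range(steps):
        error = target - position   # "sense": measure where we are
        command = gain * error      # "think": compute a correction
        position += command         # "act": move the body
    return position

final = run_controller(position=0.0, target=1.0)
print(f"{final:.4f}")  # converges close to 1.0
```

The point of the sketch is that the controller corrects continuously against live feedback at every step, rather than replaying a plan computed once from historical data.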


Engineers are refining the KAIST Humanoid v0.7 with the aim of enhancing its mobility and dexterity, building on its existing walking and dynamic movement skills. By further integrating AI with mechanical hardware, the team plans to get the robot performing more complex tasks such as carrying items or operating machinery, bringing Physical AI to real-world humanoid robot applications.

KAIST is one of South Korea’s top universities and is often compared to top global tech schools like MIT in the U.S. Founded in the early 1970s to drive Korea’s scientific and technological growth, KAIST focuses heavily on research in fields such as AI, robotics, physics, and engineering.


Samsung Galaxy Buds 4 Pro Review


Verdict

Samsung’s best wireless earbuds so far, improving on the Galaxy Buds 3 Pro with stronger noise-cancelling, more balanced sound, better call quality, and solid battery life. If you have a Galaxy Ultra smartphone, you can buy with confidence.

Pros:

  • Wide, spacious, clear sound
  • Strong noise-cancelling
  • Comfortable fit
  • Improved call quality
  • Solid battery life

Cons:

  • Need a Galaxy smartphone to get the best performance
  • Controls are still fiddly

Key Features

  • Review Price: £219
  • SSC-UHQ: higher quality, 24-bit/96kHz sound over Bluetooth
  • 360 Audio with Head Tracking: have music follow your movements
  • Super Clear Call: clearer calls with Samsung smartphones

Introduction

Every year there’s a new Samsung Galaxy smartphone, and more often than not a new pair of headphones arrives alongside it – in this case, the Galaxy Buds 4 Pro.

The Galaxy Buds 4 Pro don’t receive as much fanfare as the smartphones (here, the Samsung Galaxy S26 Ultra) – less the headline, more a sub-header. But similarly to how Apple approaches its true wireless pairs, the Galaxy Buds are a partner to the smartphones rather than an entity that exists on its own.


The Galaxy Buds have been getting better – aside from the strange burst of designs a few years ago – so are these the best yet?


Design

  • Plush level of comfort
  • IP57 rating
  • White, black and pink gold finishes

Samsung has toned down the AirPods vibe, though at the end of the day these are a pair of stem-based wireless earbuds – there’s not much you can do with the actual design.

But Samsung has tried, and the Galaxy Buds 4 Pro do look nice; the silver ‘real metal blade’ finish of the stem feels suitably premium. Comfort levels are very good – I’ve worn these for hours and not felt any discomfort. Small, medium and large ear tips are provided, with medium as the default.

Samsung Galaxy Buds 4 Pro stem design
Image Credit (Trusted Reviews)

I do, however, find that the seal for these earbuds can come loose while walking. Even munching on food can cause the fit to loosen and require a push back in.


Samsung continues with gesture/pinch controls. I’ve never been a big fan, and I can’t say the Galaxy Buds 4 Pro have persuaded me otherwise. I find them fiddly, and often when I’ve tried to play/pause a track I’ve ended up lowering the volume. I tend to just use the in-app controls rather than the onboard ones – it’s much easier for me.

Samsung Galaxy Buds 4 Pro and Galaxy Buds 4
Image Credit (Trusted Reviews)

The charging case differs from the Galaxy Buds 3 Pro’s, and from that of the Galaxy Buds 4 launched alongside the Pro version. Compared to the 3 Pro’s, this new case is more compact but slightly taller – the see-through case is a nice visual touch. Rated at IP57, the earbuds offer sturdy resistance to water, dirt and dust (more so than most premium true wireless pairs), though the case doesn’t appear to carry any protection rating.

Colours come in white and black, but buy directly from Samsung and there’s the option of Pink Gold (which appears to cost the same).

Noise-Cancellation

  • Galaxy AI-supported features
  • 26 hours battery life with ANC
  • Galaxy Wearable app for customisation

In general, the Galaxy Buds 4 Pro’s adaptive noise-cancellation is good, very good even, especially in terms of how consistent the performance is.


Whether I’m on the Victoria Line, a train, a bus, an aeroplane, the DLR or walking outside with traffic going past, the level of quiet and calm has always been very high. The barometer I have with ANC headphones is whether I need to raise the volume to mask more noise and I never felt the need to do that with the Galaxy Buds 4 Pro.


Cars are reduced to hums, the Underground is no longer a constant noise machine, and compared to the Galaxy Buds 3 Pro, these do thin out noise better.

Samsung Galaxy Buds 4 Pro in front of case
Image Credit (Trusted Reviews)

But against the Sony WF-1000XM6, Bose QuietComfort Ultra Earbuds 2 and Technics EAH-AZ100, the Samsung falls just a step behind, with more noise creeping in when wearing the Galaxy Buds 4 Pro during a pink noise test. They’re not too far off, though.

The transparency mode sounds clear – big and broad in terms of what you can hear – and natural enough, though again perhaps not to the same level as Sony and Bose produce.

Call quality is very good. With Samsung Galaxy phones there’s a Super Clear Call feature that enhances clarity and reduces noise, but even using a OnePlus smartphone the performance was very good.


Background noise was reduced, and though my voice did sound muffled – at times it was hard for the other person to hear some words – the overall performance is good for use outside.

Features

  • Galaxy smartphone exclusive features
  • Wear app for non-Galaxy smartphones

The Galaxy Buds 4 Pro aren’t short of features, though you’ll need a Galaxy smartphone to get the most out of them, especially one updated to One UI 8.5, which has access to features not present in previous versions.

Like with Apple’s AirPods, the Galaxy Buds’ UI is built into the Samsung smartphone UX; everyone else will need to download the Galaxy Wearable app.

Samsung Galaxy Buds 4 Pro app
Image Credit (Trusted Reviews)

Wirelessly there’s Bluetooth 6.1, which brings some enhancements over Bluetooth 5 (everything is just better, in simple terms), and I’ve found the connection to be strong wherever I am (and that’s without a phone that supports Bluetooth 6). Instability and interference have barely been an issue.


The headphones support an interesting array of codecs for the Bluetooth fans out there, with SBC, AAC, LC3, and Samsung’s own SSC and SSC-UHQ, the latter acting as Samsung’s high quality codec of choice against Sony’s LDAC and Qualcomm’s aptX. SSC is only available on Samsung Galaxy phones though.


You can toggle it on in the app/settings and it lets loose 24-bit/96kHz audio over a Bluetooth connection – though don’t take that to mean it’s lossless. It’s very likely to be lossy (which means detail is lost).
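Some quick arithmetic backs that up: an uncompressed 24-bit/96kHz stereo stream is several times wider than what a Bluetooth audio link can realistically sustain, so some detail has to be discarded. A minimal sketch (the ~1 Mbps link figure below is an illustrative assumption, not a spec value):

```python
# Back-of-envelope check on why 24-bit/96kHz over Bluetooth is almost
# certainly lossy. The Bluetooth throughput figure is an assumed,
# illustrative ceiling, not taken from any codec specification.
SAMPLE_RATE_HZ = 96_000
BIT_DEPTH = 24
CHANNELS = 2

uncompressed_bps = SAMPLE_RATE_HZ * BIT_DEPTH * CHANNELS
print(f"Uncompressed stream: {uncompressed_bps / 1e6:.3f} Mbps")  # 4.608 Mbps

ASSUMED_BT_AUDIO_BPS = 1_000_000  # hypothetical sustained link rate
ratio = uncompressed_bps / ASSUMED_BT_AUDIO_BPS
print(f"Compression needed: roughly {ratio:.1f}x")
```

Whatever the real sustained bitrate turns out to be, it is well below 4.6 Mbps, which is why the stream must be compressed and why “24-bit/96kHz” should not be read as lossless.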

Samsung Galaxy Buds 4 Pro audio
Image Credit (Trusted Reviews)

There’s 360 Audio with Head Tracking that builds on top of immersive audio formats such as Dolby Atmos music. The head tracking works well when listening to Sarah Kinsley’s Truth of Pursuit and Brent Faiyaz’s Other Side on Tidal, but there’s possibly the slightest lag when moving my head and waiting for music to respond.

Otherwise, I’m rather impressed in terms of clarity – immersive audio tends to sound less detailed and softer but the Galaxy Buds 4 Pro do a good job of keeping clarity levels high.

There are Head Gestures for accepting and rejecting calls (just added in One UI 8.5), an earbud fit test (much less annoying than the Sony Sound Connect version), customisation of controls, battery life indicators, swapping between noise-cancelling modes, EQ options, and Audio Broadcast (Auracast by another name). There are also voice controls (mainly through Bixby) and accessibility controls if needed.


Want to translate words from a different language? You can read a transcript of what the earbuds hear, translated into your language via Galaxy AI. Speaking of which, Samsung seems to have calmed down the AI narrative, and rightly so: I don’t need to be told about AI, I just need it to work the way it’s meant to.

Samsung Galaxy Buds 4 Pro exclusive
Image Credit (Trusted Reviews)

You can switch to nearby devices without having to jump into pairing mode, though the fine print indicates these need to be connected to your Samsung account. With that in mind, Bluetooth multipoint is a slight mystery, in that it is supported (with Samsung devices) and isn’t (with anything else). I can’t have the Galaxy Buds 4 Pro connected to a Galaxy smartphone and a OnePlus model at the same time.

Lose the earbuds and you can locate them through Samsung Find, although it seems to think I’m not in my house but next door – close enough, I suppose. The Adapt Sound feature is not what I initially thought it’d be: it tunes the sound to your age, boosting frequencies accordingly. You can also add a personalised sound profile by going through audio tests to determine your hearing.

Battery Life

  • Six hours per charge
  • Wireless charging

Samsung claims the Galaxy Buds 4 Pro are capable of six hours with ANC on, which doesn’t sound the most progressive (and isn’t), with a total of 26 hours including the charging case (without ANC it’s 7 and 30 hours respectively).


Perhaps Samsung has erred on the safe side, but I’ve found battery life pretty strong. An hour’s listening saw both earbuds fall to 87%, which would put the Galaxy Buds 4 Pro at around eight hours, not six.
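That figure is straightforward drain extrapolation from the observed hour of listening, sketched below (it assumes the drain rate stays linear, which real batteries only approximate):

```python
# Extrapolating runtime from the observed drain: one hour of listening
# took the earbuds from 100% to 87%. Assumes a linear drain rate,
# which real batteries only approximate.
start_pct, after_one_hour_pct = 100, 87
drain_per_hour = start_pct - after_one_hour_pct  # 13 points per hour
estimated_hours = start_pct / drain_per_hour
print(f"Estimated runtime: {estimated_hours:.1f} hours")  # ≈ 7.7 hours
```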


There’s fast-charging for those in a fix, and wireless charging support as well.

Sound Quality

  • Clear, detailed, spacious sound
  • Wide soundstage
  • Balanced, warm approach

Not too dissimilar to the Galaxy Buds 3 Pro, the Galaxy Buds 4 Pro feature a two-way speaker that’s been redesigned from before.

There’s now a wider woofer alongside a precision tweeter, with the aim of delivering deep, textured sub-bass to extended highs with a “faster transient response, a rich midrange body and sharp detail”. Each driver has its own amplifier, which should lead to reduced distortion.

Paired with a non-Galaxy smartphone, the results are… fine. The Galaxy Buds 4 Pro sound on the rich side, but the bass isn’t the most assertive, the highs aren’t the brightest, and they’re not the most dynamic or energetic-sounding pair I’ve heard.

Samsung Galaxy Buds 4 Pro charging case
Image Credit (Trusted Reviews)

Some of the traits carry over when paired to a Galaxy smartphone, but to unlock the highest level of performance from the Galaxy Buds 4 Pro, you need to toggle on the SSC-UHQ feature. With that, these earbuds ascend to a higher level.

Which is not to say they match the Sony WF-1000XM6, which beat the Galaxy Buds 4 Pro for insight, detail and energy, but they have qualities of their own that aren’t to be dismissed.

The soundstage is very wide. Bass never hogs the limelight but I’d vote for having a bit more depth and energy to the low frequencies. With a track like Hard Life’s Skeletons, the bass performance leaves me wanting a bit more in terms of energy.

But it’s the clarity of these earbuds that impresses, as well as the natural tone they strike whether it’s with more upbeat K-pop tracks like ILLIT’s Magnetic or slower, more downtempo tracks like Amy Winehouse’s Back to Black – the wide soundstage, crisp tone to highs and levels of insight and detail with vocals stand out.

Samsung Galaxy Buds 4 Pro design
Image Credit (Trusted Reviews)


Andreas Ihlebæk’s Come Summer is a track I play to try to catch headphones out – the highs sound bright, and it can expose a lack of precision and detail, sounding soft and almost too bright if headphones get it wrong. The Galaxy Buds 4 Pro stay on the right side of balanced, bringing brightness and variation to the highs while maintaining high levels of detail and clarity.


However, are the Galaxy Buds 4 Pro better sounding than the Galaxy Buds 3 Pro? Initially there’s a question mark around that. The approach both take is similar, but the Galaxy Buds 4 Pro eke out a little more insight and detail from tracks, delivering a slightly more natural tone. It isn’t enough to necessarily trade the older model for the newer one, but if you’re coming from the Galaxy Buds 2, this level of sound is a jump up.

Should you buy it?

If you’ve got a Galaxy phone

Enable the SSC-UHQ feature and the Galaxy Buds 4 Pro show off their best selves.


You don’t have a toe in the Samsung ecosystem


No Samsung Galaxy phone? Then much like with the Galaxy Buds 3 Pro, there’s not much point even glancing at these headphones.

Final Thoughts

I had to have a good think about how to approach the scoring for these headphones. They are better paired with a Galaxy smartphone, in particular the Ultra series, and given the way Samsung markets them, there’s little reason to buy them if you’re not a Galaxy owner.
 
So the score relates to the experience you’d get with a Galaxy smartphone, much like AirPods work best with an iPhone.
 
The Galaxy Buds 4 Pro sound good; they could be a little bolder and more exciting, but I’ve enjoyed them. It’s not quite Sony WF-1000XM6 or Status Pro X level, but for Samsung owners with the SSC-UHQ codec enabled, they’re a good listen.
 
The noise-cancelling impresses, the fit is comfortable, battery life is solid and call quality is good. Overall, this is a strong effort from Samsung, and its best true wireless earbuds yet.


How We Test

The Samsung Galaxy Buds 4 Pro were tested over the course of a month.

They were used on public transport and aeroplanes to test the noise-cancellation, and a pink noise test was carried out to compare against other headphones. They were also used in cities such as London and Munich to test real-world performance.

A battery drain test was carried out to assess battery life, and calls were made to test call quality.

  • Tested for a month
  • Tested with real world use
  • Battery drain carried out


FAQs

Do the Samsung Galaxy Buds 4 Pro support fast charging?

Along with wireless charging, the Galaxy Buds 4 Pro support fast charging via a USB input, with a 10-minute charge providing an hour of playback.

Full Specs

  Samsung Galaxy Buds 4 Pro Review
  • UK RRP: £219
  • Manufacturer: Samsung
  • IP rating: IP57
  • Battery Hours: 26
  • Wireless charging: Yes
  • Fast Charging: Yes
  • ASIN: B0G58R6868
  • Release Date: 2026
  • Audio Resolution: SBC, AAC, SSC, SSC-UHQ, LE Audio
  • Driver(s): Wide woofer, tweeter
  • Noise Cancellation: Yes
  • Connectivity: Bluetooth 6.1
  • Colours: Silver, Black, Pink Gold
  • Frequency Range: – Hz
  • Headphone Type: True Wireless
  • Voice Assistant: Bixby


Tesla Semi is finally going into production, and early drivers are already sold

For Dakota Shearer, a driver with IMC Logistics, that shift began on a tight bend outside Sparks, Nevada. He took a wrong turn hauling a 40-foot trailer and found himself on a curve too narrow to complete. In a conventional rig, he would have had to climb in and out…

From Sydney to South Lake Union: VR startup Vantari brings its ‘flight simulator for healthcare’ to Seattle


Vantari co-CEO Vijay Paul, COO Jagrup Kahlon, and co-CEO Nishant Krishnanathan. (Vantari Photos)

Vantari, a virtual reality startup that builds “flight simulator” software for doctors and nurses, has officially moved its headquarters to Seattle as it ramps up work with health systems and device makers across North America.

CEO and co‑founder Nishanth Krishnananthan relocated from Australia to Seattle two years ago and recently officially established the company’s headquarters in the Emerald City.

The inspiration for Vantari came from his own experience as a surgical doctor in Australia and seeing how poorly procedural training prepared clinicians for real emergencies. He wondered why healthcare didn’t use the same training tactics as the aviation industry.

Founded in 2017, Vantari now works with more than 50 organizations in North America, Australia, and the UK. Customers include major academic medical centers such as Harvard, Yale, and Mount Sinai, and the company has established new “centers of excellence” with Seattle University and the University of Washington’s anesthesiology department.

Hospitals and universities use off‑the‑shelf Meta/Oculus headsets connected to laptops. Clinicians log in, select their specialty and procedure, and then perform the steps in a fully virtual environment mapped to college and best‑practice guidelines. An AI facilitator inside the headset guides users step‑by‑step, answers questions, and scores performance, while supervisors can later review the logged session data.
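To make the logging side concrete, a session record of the kind described might look roughly like the sketch below. Every name and field here is a hypothetical illustration, not Vantari’s actual data model:

```python
# Hypothetical sketch of a logged VR training session: the trainee works
# through procedure steps, the in-headset facilitator scores each one,
# and a supervisor can review the aggregate afterwards. All class and
# field names are invented for illustration; not Vantari's real schema.
from dataclasses import dataclass, field

@dataclass
class StepResult:
    name: str
    completed: bool
    score: float  # 0.0-1.0, as judged by the facilitator

@dataclass
class TrainingSession:
    clinician_id: str
    specialty: str
    procedure: str
    steps: list[StepResult] = field(default_factory=list)

    def overall_score(self) -> float:
        # Mean of the per-step scores; 0.0 if nothing was logged.
        if not self.steps:
            return 0.0
        return sum(s.score for s in self.steps) / len(self.steps)

session = TrainingSession("c-001", "anesthesiology", "central line insertion")
session.steps.append(StepResult("sterilise site", True, 1.0))
session.steps.append(StepResult("ultrasound guidance", True, 0.8))
print(f"{session.procedure}: {session.overall_score():.2f}")  # 0.90
```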


VR controllers mimic the feel of inserting catheters, puncturing tissue, and adjusting equipment. Vital signs change dynamically in response to each action.

The company has a library of procedures, ranging from anesthesia to critical care to cardiology. It has also patented an ultrasound system inside VR that allows trainees to perform imaging and guidance as part of the procedure. Many scenarios are co‑developed with device makers such as Boston Scientific, JNJ, and Sonosite.

Vantari’s VR software includes an ultrasound system that allows users to perform imaging and guidance during a mock procedure.

Vantari’s business runs on a B2B SaaS model, offering annual licenses and hardware bundles. Vantari also signs contracts with medical device and pharmaceutical companies, which co‑develop modules on the platform and design virtual versions of their devices. A third revenue stream comes from industry and accrediting bodies that co‑develop content.

To date, Vantari has raised about $7 million, largely from Australian VCs, family offices and high‑net‑worth physicians. Last year it raised $2 million from Seattle‑area backers SpringRock VC and Alliance of Angels.

Krishnananthan said the move to Seattle makes it easier to serve U.S. customers and attract additional capital from American investors. He also pointed to the strength of local tech giants and medical institutions — including Amazon, Microsoft, Seattle University and the University of Washington — as well as nearby medical device firms.


The team is roughly 18 people, split about 50/50 between Australia and the U.S., with most employees working remotely.

Looking ahead, Vantari wants to go beyond static content and is building an AI scenario builder that would let hospitals generate their own protocols and procedures on the platform. Krishnananthan’s long‑term vision is to use the interaction data it collects to create what he calls a “Google Maps of surgery,” offering live, mixed‑reality guidance during real procedures so clinicians receive step‑by‑step support at the bedside, rather than just training in a headset.

“That’s like the big North Star that I want to get to,” he said. “It’s a lot more accessible now with the technology advancements that are happening.”


It’s about time: Your Samsung Galaxy S26 can now AirDrop files to an iPhone

Published

on


  • Samsung is updating Quick Share
  • The wireless file and photo sharing feature will now support iPhone’s AirDrop
  • Only the Galaxy S26 series for now

Samsung just broke through a major platform barrier, and one that is certain to thrill both iPhone and Samsung Galaxy owners: Its version of Quick Share will soon support Apple‘s AirDrop.

Quick Share and AirDrop perform essentially the same function but on distinctly separate platforms (Android and iOS, respectively). Each lets you quickly transfer files, photos, and videos wirelessly from one phone to another. Both use Bluetooth and Wi-Fi to establish the ad-hoc connection. Neither, until now, has worked across iPhone and Galaxy phones, but that’s about to change.

Starting on March 23 in South Korea and over the following week in the US, Quick Share will receive an update that lets Galaxy phones share files to iPhones via AirDrop. The caveat — and it’s a big one — is that it will only work with Samsung Galaxy S26 phones. Samsung says it will add more devices “at a later date.”


Samsung Galaxy S26 AirDrop support

(Image credit: Samsung)

Enabling the feature should be easy. On your Galaxy S26 device, open the Quick Panel, select Connected Devices and then Quick Share, and turn on the new “Share with Apple Devices” option. After that, you’ll be able to select a nearby iPhone, assuming its AirDrop visibility is set to Everyone (or, presumably, Contacts).


Tech

Samsung’s Galaxy S26 Phones Will Work With Apple’s AirDrop, Much Like the Pixel 10


Samsung’s Galaxy S26 phones will gain the ability to use Apple’s AirDrop this week, allowing the company’s Galaxy phones to share photos and files directly with iPhones and Mac computers.

Samsung announced the new feature Sunday night; it will need to be turned on from the phone’s settings menu. The feature will arrive in an update to devices over the course of this week, and when it does, the Quick Share settings menu will gain a Share with Apple devices toggle.

The Share with Apple devices option will appear in the Quick Share menu.

Samsung

Once it’s activated, opening the Quick Share menu on the Galaxy phone will show nearby Apple devices, and selecting one will send the photos or files. For an iPhone to see the Galaxy phone, the iPhone’s AirDrop setting needs to be set to Everyone.

This is similar to how AirDrop compatibility works with Google’s Pixel 10 phones, which gained the feature in a software update last fall. Samsung says AirDrop compatibility will eventually come to more Galaxy phones and is starting with the S26 series.


Samsung says the addition of AirDrop compatibility is part of the company’s ongoing effort to have its phones work with other operating systems. And because Apple and Samsung often dominate the best-selling phone lists around the world, the ability to share photos and media between AirDrop and Quick Share could quickly become ubiquitous, especially if Samsung eventually expands the feature to its lower-cost phone lineup, such as the $200 Galaxy A17.


Tech

Hackaday Links: March 22, 2026


On Friday, Reuters reported that Amazon is going to try to get into the smartphone game…again. The Fire Phone was perhaps Amazon’s biggest commercial misstep, and was only on the market for about a year before it was discontinued in the summer of 2015. But now industry sources are saying that a new phone code-named “Transformer” is in the works from the e-commerce giant.

At this point, there’s no word on how much the phone would cost or when it would hit the market. The only information Reuters was able to squeeze out of their contacts was that the device would feature AI heavily. Real shocker there — anyone with an Echo device in their kitchen could tell you that Amazon is desperate to get you talking to their gadgets, presumably so they can convince you to buy something. While a smartphone with even more AI features we didn’t ask for certainly won’t be on our Wish List, if history is any indicator, we might be able to pick these things up cheap on the second-hand market.

On the subject of AI screwing everything up, earlier this week, the Electronic Frontier Foundation reported that The New York Times had started blocking the Internet Archive’s crawlers, citing concerns over their content being scraped up by bots for training data. The EFF likens this to a newspaper asking libraries to stop storing copies of their old editions, and warns that in an era where most people get their news via the Internet, not having an archived copy of sites like The Times will put holes in the digital record. They also point out that mirroring web pages for the purposes of making them more easily searchable is a widely accepted practice (ask Google) and has been legally recognized as fair use in court.

Assuming we take the NYT’s side of the story at face value, there’s a tiny part of our cold robotic heart that feels some sympathy for them. Over the last year or so, we’ve noticed some suspicious activity that we believe to be bots siphoning up content from the blog and Hackaday.io, and it’s resulted in a few technical headaches for us. On the other hand, what’s Hackaday here for if not to share information? Surely the same could be said for any newspaper, be it the local rag or The New York Times. If a chatbot learning some new phrases from us is the cost of doing business in 2026, so be it. Can’t stop the signal.


Switching gears to the world of aerospace, NASA’s X-59 supersonic research aircraft had to abort a test flight on Friday after just nine minutes in the air. The plane is designed to demonstrate techniques which promise to reduce or eliminate the sonic booms heard on the ground during supersonic flight, and is currently being put through its paces at Armstrong Flight Research Center in Edwards, California.

NASA’s very pointy X-59 aims to make supersonic flight more commercially viable.

The space agency hasn’t clarified exactly what the issue was, but after the pilot saw a warning indicator in the cockpit, the decision was made to end the flight early so engineers could take a look at the problem. Given that the X-59 went on to make an uneventful landing, it sounds like things weren’t too dire. Hopefully, that means it won’t be long before the sleek experimental aircraft is back in the air.

Friday also saw the towering Space Launch System rocket return to the launch pad ahead of a potential April 1st (no, really) liftoff for Artemis II. There are about a million things that could further delay the mission, from technical issues to suspicious-looking cloud formations over Cape Canaveral, but we’re certainly in the final stretch now. The 10-day mission will see four astronauts run through a packed schedule of experiments and demonstrations as they become the first humans to swing by the Moon since the Apollo program ended in 1972.

Finally, the National Museum of the U.S. Air Force has released a video taken by a drone flying around their collection of Cold War era aircraft. Seasoned FPV pilots will probably notice it’s not the most technically impressive flight out there, but it does provide some viewpoints that simply wouldn’t be possible otherwise. It’s also a bit surreal to see these aircraft, once the absolute state-of-the-art and developed at an unimaginable cost, collecting dust while a $300 drone that packs in higher resolution optics and far more processing power literally flies circles around them.


See something interesting that you think would be a good fit for our weekly Links column? Drop us a line, we’d love to hear about it.


Tech

Today’s NYT Strands Hints, Answer and Help for March 23, #750


Looking for the most recent Strands answer? Click here for our daily Strands hints, as well as our daily answers and hints for The New York Times Mini Crossword, Wordle, Connections and Connections: Sports Edition puzzles.


Today’s NYT Strands puzzle has an intriguing mix of words. Some of the answers are difficult to unscramble, so if you need hints and answers, read on.

I go into depth about the rules for Strands in this story.


If you’re looking for today’s Wordle, Connections and Mini Crossword answers, you can visit CNET’s NYT puzzle hints page.

Read more: NYT Connections Turns 1: These Are the 5 Toughest Puzzles So Far

Hint for today’s Strands puzzle

Today’s Strands theme is: In pieces


If that doesn’t help you, here’s a clue: Smash!

Clue words to unlock in-game hints

Your goal is to find hidden words that fit the puzzle’s theme. If you’re stuck, find any words you can. Every time you find three words of four letters or more, Strands will reveal one of the theme words. These are the words I used to get those hints but any words of four or more letters that you find will work:

  • PALE, LEAP, BACK, BACKS, RACK, TACK, PANS, HATE, CRACKER, BREAK, PEAL, DOWN, TOWN, PURE
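The hint-earning rule described above (every three valid words of four or more letters unlocks one theme-word reveal) can be sketched as a tiny Python function. This is a toy model of the rule as stated here, not the NYT's actual implementation; the function name `hints_earned` is my own.

```python
def hints_earned(found_words):
    """Toy model of the Strands hint rule: every three valid
    non-theme words of four or more letters banked earns one hint."""
    valid = [w for w in found_words if len(w) >= 4]
    return len(valid) // 3

# With the first seven clue words listed above, you'd have banked
# two hints (7 valid words // 3):
words = ["PALE", "LEAP", "BACK", "BACKS", "RACK", "TACK", "PANS"]
print(hints_earned(words))  # 2
```

Shorter words contribute nothing, which is why the clue-word lists in these columns stick to four letters and up.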

Answers for today’s Strands puzzle

These are the answers that tie into the theme. The goal of the puzzle is to find them all, including the spangram, a theme word that reaches from one side of the puzzle to the other. When you have all of them (I originally thought there were always eight but learned that the number can vary), every letter on the board will be used. Here are the nonspangram answers:

  • SNAP, CRACK, RUPTURE, SHATTER, FRACTURE, SPLINTER

Today’s Strands spangram

The completed NYT Strands puzzle for March 23, 2026.

NYT/Screenshot by CNET

Today’s Strands spangram is BREAKDOWN. To find it, start with the B that is five letters to the right and one letter down from the top-left corner, and wind up, then down.



Tech

Videos: Tennis Playing Humanoid Robot, Horse Quadruped


Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICRA 2026: 1–5 June 2026, VIENNA
Summer School on Multi-Robot Systems: 29 July–4 August 2026, PRAGUE

Enjoy today’s videos!

Human athletes demonstrate versatile and highly dynamic tennis skills to successfully conduct competitive rallies with a high-speed tennis ball. However, reproducing such behaviors on humanoid robots is difficult, partially due to the lack of perfect humanoid action data or human kinematic motion data in tennis scenarios as reference. In this work, we propose LATENT, a system that Learns Athletic humanoid TEnnis skills from imperfect human motioN daTa.

[ LATENT ]


A beautifully designed robot inspired by Strandbeests.

[ Cranfield University ]

We believe we’re the first robotics company to demonstrate a robot peeling an apple with dual dexterous human-like hands. This breakthrough closes a key gap in robotics, achieving bimanual, contact-rich manipulation and moving far beyond the limits of simple grippers.


Today’s AI models (VLMs) are excellent at perception but struggle with action. Controlling high-degree-of-freedom hands for tasks like this is incredibly complex, and precise finger-level teleoperation is nearly impossible for humans. Our first step was a shared-autonomy system: rather than controlling every finger, the operator triggers pre-learned skills like a “rotate apple or tennis ball” primitive via a keyboard press or pedal. This makes scalable data collection and RL training possible.

How does the AI manage this? We created “MoDE-VLA” (Mixture of Dexterous Experts). It fuses vision, language, force, and touch data by using a team of specialist “experts,” making control in high-dimensional spaces stable and effective. The combination of these two innovations allows for seamless, contact-rich manipulation. The human provides high-level guidance, and the robot executes the complex in-hand coordination required.

[ Sharpa ]

Thanks, Alex!

It was great to see our name amongst the other “AI Native” companies during the NVIDIA GTC keynote. NVIDIA Isaac Lab helps us train reinforcement learning policies that enable the UMV to drive, jump, flip, and hop like a pro.

[ Robotics and AI Institute ]


This Finger-Tip Changer technology was jointly researched and developed through a collaboration between Tesollo and RoCogMan LaB at Hanyang University ERICA. The project integrates Tesollo’s practical robotic hand development experience with the lab’s expertise in robotic manipulation and gripper design.

I don’t know why more robots don’t do this. Also, those pointy fingertips are terrifying.

[ RoCogMan LaB ]

Here’s an upcoming ICRA paper from the Fluent Robotics Lab at the University of Michigan featuring an operational PR2! With functional batteries!!!


[ Fluent Robotics Lab ]

This video showcases the field tests and interaction capabilities of KAIST Humanoid v0.7, developed at the DRCD Lab featuring in-house actuators. The control policy was trained through deep reinforcement learning leveraging human demonstrations.

[ KAIST DRCD Lab ]


This needs to come in adult size.

[ DEEP Robotics ]

I did not know this, but apparently shoeboxes are really annoying to manipulate because if you grab them by the lid, they just open, so specialized hardware is required.


[ Nomagic ]

Thanks, Gilmarie!

This paper presents a method to recover quadrotor Unmanned Air Vehicles (UAVs) from a throw, when no control parameters are known before the throw.


[ MAVLab ]

Uh oh, robots can see glass doors now. We’re in trouble.

[ LimX Dynamics ]


This drone hugs trees <3

[ Stanford BDML ]

Electronic waste is one of the fastest-growing environmental problems in the world. As robotics and electronic systems become more widespread, their environmental footprint continues to increase. In this research, scientists developed a fully biodegradable soft robotic system that integrates electronic devices, sensors, and actuators, yet completely decomposes after use.


[ Nature ]

We developed a distributed algorithm that enables multiple aerial robots to flock together safely in complex environments, without explicit communication or prior knowledge of the surroundings, using only on-board sensors and computation. Our approach ensures collision avoidance, maintains proximity between robots, and handles uncertainties (tracking errors and sensor noise). Tested in simulations and real-world experiments with up to four drones in a dense forest, it proved robust and reliable.

[ RBL ]

The University of Pennsylvania’s 2025 President’s Sustainability Prize winner Piotr Lazarek has developed a system that uses satellite data to pinpoint inefficiencies in farmers’ fields, conducts real-time soil analysis with autonomous drones to understand why they occur, and generates precise fertilizer application maps. His startup Nirby aims to increase productivity in farm areas that are underperforming and reduce fertilizer in high-performing ones.


[ University of Pennsylvania ]

The production version of Atlas is a departure from the typical humanoid form factor, favoring industrial utility over human likeness. Intended for purposeful work in an industrial setting, Atlas has a form factor that signals its role as a machine rather than a companion or friendly assistant. Join two lead hardware engineers and our head of industrial design for a technical discussion of how key product requirements, ranging from passive thermal management to a modular architecture, dictated a bold new vision for a humanoid.

[ Boston Dynamics ]

Dr. Christian Hubicki gives a talk exploring the common themes of modern robotics research and his time on the reality competition show, Survivor.


[ Optimal Robotics Lab ]




Copyright © 2025