
Flapping Airplanes on the future of AI: ‘We want to try really radically different things’


A bunch of exciting research-focused AI labs have popped up in recent months, and Flapping Airplanes is one of the most interesting. Propelled by its young and curious founders, Flapping Airplanes is focused on finding less data-hungry ways to train AI. It's a potential game-changer for the economics and capabilities of AI models, and with $180 million in seed funding, the lab has plenty of runway to figure it out.

Last week, I spoke with the lab’s three co-founders — brothers Ben and Asher Spector, and Aidan Smith — about why this is an exciting moment to start a new AI lab and why they keep coming back to ideas about the human brain.

I want to start by asking, why now? Labs like OpenAI and DeepMind have spent so much on scaling their models. I’m sure the competition seems daunting. Why did this feel like a good moment to launch a foundation model company?


Ben: There's just so much to do. The advances that we've gotten over the last five to ten years have been spectacular. We love the tools. We use them every day. But the question is, is this the whole universe of things that needs to happen? We thought about it very carefully, and our answer was no, there's a lot more to do. In our case, we thought that the data efficiency problem was really the key thing to go look at. The current frontier models are trained on the sum total of human knowledge, and humans can obviously make do with an awful lot less. So there's a big gap there, and it's worth understanding.

What we're doing is really a concentrated bet on three things. It's a bet that this data efficiency problem is the important thing to be working on: that this is really a direction that is new and different, and that you can make progress on it. It's a bet that this will be very commercially valuable and will make the world a better place if we can do it. And it's also a bet that the right kind of team to do it is a creative and, in some ways, inexperienced team that can go look at these problems again from the ground up.

Aidan: Yeah, absolutely. We don't really see ourselves as competing with the other labs, because we think that we're looking at a very different set of problems. If you look at the human mind, it learns in an incredibly different way from transformers. And that's not to say better, just very different. So we see these different trade-offs. LLMs have an incredible ability to memorize and draw on this great breadth of knowledge, but they can't really pick up new skills very fast. It takes rivers and rivers of data to adapt. And when you look inside the brain, you see that the algorithms it uses are just fundamentally so different from gradient descent and some of the techniques that people use to train AI today. So that's why we're building a new guard of researchers to address these problems and really think differently about the AI space.

Asher: This question is just so scientifically interesting: why are the systems that we have built that are intelligent also so different from what humans do? Where does this difference come from? How can we use knowledge of that difference to make better systems? But at the same time, I also think it’s actually very commercially viable and very good for the world. Lots of regimes that are really important are also highly data constrained, like robotics or scientific discovery. Even in enterprise applications, a model that’s a million times more data efficient is probably a million times easier to put into the economy. So for us, it was very exciting to take a fresh perspective on these approaches, and think, if we really had a model that’s vastly more data efficient, what could we do with it?


This gets into my next question, which sort of ties in to the name, Flapping Airplanes. There's this philosophical question in AI about how much we're trying to recreate what humans do in their brains, versus creating some more abstract intelligence that takes a completely different path. Aidan is coming from Neuralink, which is all about the human brain. Do you see yourselves as pursuing a more neuromorphic view of AI?

Aidan: The way I look at the brain is as an existence proof. We see it as evidence that there are other algorithms out there. There’s not just one orthodoxy. And the brain has some crazy constraints. When you look at the underlying hardware, there’s some crazy stuff. It takes a millisecond to fire an action potential. In that time, your computer can do just so so many operations. And so realistically, there’s probably an approach that’s actually much better than the brain out there, and also very different than the transformer. So we’re very inspired by some of the things that the brain does, but we don’t see ourselves being tied down by it.

Ben: Just to add on to that: it's very much in our name, Flapping Airplanes. Think of the current systems as big Boeing 787s. We're not trying to build birds. That's a step too far. We're trying to build some kind of a flapping airplane. My perspective from computer systems is that the constraints of the brain and silicon are sufficiently different from each other that we should not expect these systems to end up looking the same. When the substrate is so different and you have genuinely very different trade-offs about the cost of compute, the cost of locality and moving data, you actually expect these systems to look a little bit different. But just because they will look somewhat different does not mean that we should not take inspiration from the brain and try to use the parts that we think are interesting to improve our own systems.

It does feel like there's now more freedom for labs to focus on research, as opposed to just developing products. It feels like a big difference for this generation of labs. You have some that are very research-focused, and others that are sort of “research-focused for now.” What does that conversation look like within Flapping Airplanes?


Asher: I wish I could give you a timeline. I wish I could say, in three years, we're going to have solved the research problem, and this is how we're going to commercialize. I can't. We don't know the answers. We're looking for truth. That said, I do think we have commercial backgrounds. I spent a bunch of time developing technology for companies that made those companies a reasonable amount of money. Ben has incubated a bunch of startups. And we actually are excited to commercialize. We think it's good for the world to take the value you've created and put it in the hands of people who can use it. So I don't think we're opposed to it. We just need to start by doing research, because if we start by signing big enterprise contracts, we're going to get distracted, and we won't do the research that's valuable.

Aidan: Yeah, we want to try really, really radically different things, and sometimes radically different things are just worse than the current paradigm. We're exploring a set of different trade-offs. It's our hope that they will be different in the long run.

Ben: Companies are at their best when they’re really focused on doing something well, right? Big companies can afford to do many, many different things at once. When you’re a startup, you really have to pick what is the most valuable thing you can do, and do that all the way. And we are creating the most value when we are all in on solving fundamental problems for the time being. 

I'm actually optimistic that reasonably soon, we might have made enough progress that we can then start to touch grass in the real world. And you learn a lot by getting feedback from the real world. The amazing thing about the world is that it teaches you things constantly, right? It's this tremendous vat of truth that you get to look into whenever you want. The main thing that I think has been enabled by the recent change in the economics and financing of these structures is the ability to let companies really focus on what they're good at for longer periods of time. That focus is the thing I'm most excited about; it will let us do really differentiated work.


To spell out what I think you're referring to: there's so much excitement around AI, and the opportunity for investors is so clear, that they are willing to give $180 million in seed funding to a completely new company full of very smart but also very young people who didn't just cash out of PayPal or anything. How was it engaging with that process? Did you know, going in, that this appetite existed, or was it something you discovered, like, actually, we can make this a bigger thing than we thought?

Ben: I would say it was a mixture of the two. The market has been hot for many months at this point, so it was not a secret that large rounds were starting to come together. But you never quite know how the fundraising environment will respond to your particular ideas about the world. This is, again, a place where you have to let the world give you feedback about what you're doing. Even over the course of our fundraise, we learned a lot and actually changed our ideas. And we refined our opinions of the things we should be prioritizing, and what the right timelines were for commercialization.

I think we were somewhat surprised by how well our message resonated, because it was something that was very clear to us, but you never know whether your ideas will turn out to be things that other people believe as well or if everyone else thinks you’re crazy. We have been extremely fortunate to have found a group of amazing investors who our message really resonated with and they said, “Yes, this is exactly what we’ve been looking for.” And that was amazing. It was, you know, surprising and wonderful.

Aidan: Yeah, a thirst for the age of research has kind of been in the water for a little bit now. And more and more, we find ourselves positioned as the player to pursue the age of research and really try these radical ideas.


At least for the scale-driven companies, there is this enormous cost of entry for foundation models. Just building a model at that scale is an incredibly compute-intensive thing. Research is a little bit in the middle, where presumably you are building foundation models, but if you're doing it with less data and you're not so scale-oriented, maybe you get a bit of a break. How much do you expect compute costs to limit your runway?

Ben: One of the advantages of doing deep, fundamental research is that, somewhat paradoxically, it is much cheaper to try really crazy, radical ideas than it is to do incremental work. Because when you do incremental work, in order to find out whether or not it works, you have to go very far up the scaling ladder. Many interventions that look good at small scale do not actually persist at large scale. So as a result, it's very expensive to do that kind of work. Whereas if you have some crazy new idea about some new architecture or optimizer, it's probably just gonna fail on the first run, right? So you don't have to run this up the ladder. It's already broken. That's great.

So, this doesn’t mean that scale is irrelevant for us. Scale is actually an important tool in the toolbox of all the things that you can do. Being able to scale up our ideas is certainly relevant to our company. So I wouldn’t frame us as the antithesis of scale, but I think it is a wonderful aspect of the kind of work we’re doing, that we can try many of our ideas at very small scale before we would even need to think about doing them at large scale.

Asher: Yeah, you should be able to use all the internet. But you shouldn't need to. We find it really, really perplexing that you need to use all the internet to really get this human-level intelligence.


So, what becomes possible if you're able to train more efficiently on data, right? Presumably the model will be more powerful and intelligent. But do you have specific ideas about where that goes? Are we looking at more out-of-distribution generalization, or are we looking at models that get better at a particular task with less experience?

Asher: So, first, we're doing science, so I don't know the answer, but I can give you three hypotheses. My first hypothesis is that there's a broad spectrum between just looking for statistical patterns and something that has really deep understanding. And I think the current models live somewhere on that spectrum. I don't think they're all the way towards deep understanding, but they're also clearly not just doing statistical pattern matching. And it's possible that as you train models on less data, you really force the model to have an incredibly deep understanding of everything it's seen. And as you do that, the model may become more intelligent in very interesting ways. It may know fewer facts, but get better at reasoning. So that's one potential hypothesis.

Another hypothesis is similar to what you said, that at the moment, it’s very expensive, both operationally and also in pure monetary costs, to teach models new capabilities, because you need so much data to teach them those things. It’s possible that one output of what we’re doing is to get vastly more efficient at post training, so with only a couple of examples, you could really put a model into a new domain. 

And then it's also possible that this just unlocks new verticals for AI. There are certain types of robotics, for instance, where for whatever reason, we can't quite get the type of capabilities that really makes it commercially viable. My opinion is that it's a limited-data problem, not a hardware problem. The fact that you can tele-operate the robots to do stuff is proof that the hardware is sufficiently good. And there's lots of domains like this, like scientific discovery.


Ben: One thing I’ll also double-click on is that when we think about the impact that AI can have on the world, one view you might have is that this is a deflationary technology. That is, the role of AI is to automate a bunch of jobs, and take that work and make it cheaper to do, so that you’re able to remove work from the economy and have it done by robots instead. And I’m sure that will happen. But this is not, to my mind, the most exciting vision of AI. The most exciting vision of AI is one where there’s all kinds of new science and technologies that we can construct that humans aren’t smart enough to come up with, but other systems can. 

On this aspect, I think that first axis that Asher was talking about, the spectrum between true generalization versus memorization or interpolation of the data, is extremely important to having the deep insights that will lead to these new advances in medicine and science. It is important that the models are very much on the creativity side of the spectrum. And so, part of why I'm very excited about the work that we're doing is that I think even beyond the individual economic impacts, I'm also just genuinely very mission-oriented around the question of, can we actually get AI to do stuff that fundamentally humans couldn't do before? And that's more than just, “Let's go fire a bunch of people from their jobs.”

Absolutely. Does that put you in a particular camp on, like, the AGI conversation, the out-of-distribution generalization conversation?

Asher: I really don't exactly know what AGI means. It's clear that capabilities are advancing very quickly. It's clear that there's a tremendous amount of economic value being created. I don't think we're very close to God-in-a-box, in my opinion. I don't think that within two months or even two years, there's going to be a singularity where suddenly humans are completely obsolete. I basically agree with what Ben said at the beginning, which is, it's a really big world. There's a lot of work to do. There's a lot of amazing work being done, and we're excited to contribute.


Well, the idea about the brain and the neuromorphic part of it does feel relevant. You’re saying, really the relevant thing to compare LLMs to is the human brain, more than the Mechanical Turk or the deterministic computers that came before.

Aidan: I’ll emphasize, the brain is not the ceiling, right? The brain, in many ways, is the floor. Frankly, I see no evidence that the brain is not a knowable system that follows physical laws. In fact, we know it’s under many constraints. And so we would expect to be able to create capabilities that are much, much more interesting and different and potentially better than the brain in the long run. And so we’re excited to contribute to that future, whether that’s AGI or otherwise.

Asher: And I do think the brain is the relevant comparison, just because the brain helps us understand how big the space is. Like, it’s easy to see all the progress we’ve made and think, wow, we like, have the answer. We’re almost done. But if you look outward a little bit and try to have a bit more perspective, there’s a lot of stuff we don’t know. 

Ben: We're not trying to be better, per se. We're trying to be different, right? That's the key thing I really want to hammer on here. All of these systems will almost certainly have different trade-offs. You'll get an advantage somewhere, and it'll cost you somewhere else. And it's a big world out there. There are so many different domains with so many different trade-offs that having more systems and more fundamental technologies to address them is very likely to help AI diffuse more effectively and more rapidly through the world.


One of the ways you've distinguished yourselves is in your hiring approach, getting people who are very, very young, in some cases still in college or high school. What is it that clicks for you when you're talking to someone and makes you think, I want this person working with us on these research problems?

Aidan: It’s when you talk to someone and they just dazzle you, they have so many new ideas and they think about things in a way that many established researchers just can’t because they haven’t been polluted by the context of thousands and thousands of papers. Really, the number one thing we look for is creativity. Our team is so exceptionally creative, and every day, I feel really lucky to get to go in and talk about really radical solutions to some of the big problems in AI with people and dream up a very different future.

Ben: Probably the number one signal that I'm personally looking for is just, do they teach me something new when I spend time with them? If they teach me something new, the odds that they're going to teach us something new about what we're working on are also pretty good. When you're doing research, those creative, new ideas are really the priority.

Part of my background is that during my undergrad and PhD, I helped start this incubator called Prod that worked with a bunch of companies that turned out well. And I think one of the things we saw from that was that young people can absolutely compete in the very highest echelons of industry. Frankly, a big part of the unlock is just realizing, yeah, I can go do this stuff. You can absolutely go contribute at the highest level.


Of course, we do recognize the value of experience. People who have worked on large scale systems are great, like, we’ve hired some of them, you know, we are excited to work with all sorts of folks. And I think our mission has resonated with the experienced folks as well. I just think that our key thing is that we want people who are not afraid to change the paradigm and can try to imagine a new system of how things might work.

One of the things I've been puzzling about is, how different do you think the resulting AI systems are going to be? It's easy for me to imagine something like Claude Opus that just works 20% better and can do 20% more things. But if it's something completely new, it's hard to think about where that goes or what the end result looks like.

Asher: I don't know if you've ever had the privilege of talking to the GPT-4 base model, but it had a lot of really strange emergent capabilities. For example, you could take a snippet of an unpublished blog post of yours and ask, who do you think wrote this, and it could identify you.

There’s a lot of capabilities like this, where models are smart in ways we cannot fathom. And future models will be smarter in even stranger ways. I think we should expect the future to be really weird and the architectures to be even weirder. We’re looking for 1000x wins in data efficiency. We’re not trying to make incremental change. And so we should expect the same kind of unknowable, alien changes and capabilities at the limit.


Ben: I broadly agree with that. I’m probably slightly more tempered in how these things will eventually become experienced by the world, just as the GPT-4 base model was tempered by OpenAI. You want to put things in forms where you’re not staring into the abyss as a consumer. I think that’s important. But I broadly agree that our research agenda is about building capabilities that really are quite fundamentally different from what can be done right now.

Fantastic! Are there ways people can engage with Flapping Airplanes? Is it too early for that? Or should they just stay tuned for when the research and the models come out?

Asher: So, we have Hi@flappingairplanes.com if you just want to say hi. We also have disagree@flappingairplanes.com if you want to disagree with us. We've actually had some really cool conversations where people send us very long essays about why they think it's impossible to do what we're doing. And we're happy to engage with it.

Ben: But they haven’t convinced us yet. No one has convinced us yet.


Asher: The second thing is, you know, we are looking for exceptional people who are trying to change the field and change the world. So if you're interested, you should reach out.

Ben: And if you have an unorthodox background, it's okay. You don't need two PhDs. We really are looking for folks who think differently.


Samsung S95H OLED TV Hands-on Review: Bright, Museum Quality OLED


The Samsung S95H is the company’s new flagship OLED TV for 2026, and it’s a decidedly different beast compared to previous Samsung S95 series OLEDs. What’s most immediately different about the Samsung S95H is a beveled metal frame surrounding the set’s screen. Samsung calls this new design FloatLayer, and it gives the TV a picture frame look when flush-mounted to a wall.

Along those lines, the S95H is the first OLED TV to support the Samsung Art Store, a subscription service that lets viewers display a selection of over 3,000 artworks on the TV, including ones from leading museums like the Met, the Museo del Prado, and the Louvre.

Adding to the S95H’s high art credentials is a new version of Samsung’s Glare Free screen with OLED HDR Pro to better maintain contrast even when viewing in brightly lit rooms. A new QD-OLED Penta Tandem display panel used in the S95H is also said to be 30% brighter than last year’s S95F, which is another factor that will help with bright room viewing. Samsung is a bit cagey about revealing which raw panels are used in which screen sizes, but we believe the QD-OLED panel will be used in 55-inch, 65-inch and 77-inch screen sizes while the 83-inch model will use a W-OLED panel.


With the S95H series, which is available in 55, 65, 77, and 83-inch screen sizes and priced from $2,499 to $6,499, Samsung is clearly attempting to port its The Frame TV concept to the premium OLED TV market. Is it a comfortable fit? Let's take a look and find out.

Before we do that, let's briefly cover two additional OLED TV series Samsung announced for 2026. The S90H series is available in 42, 48, 55, 65, 77, and 83-inch screen sizes priced from $1,399 to $5,299. These models also feature a Glare Free screen. Rounding out Samsung's new OLED offerings is the S85H series, which will be sold in 48, 55, 65, 77, and 83-inch sizes priced from $1,199.99 to $4,499.99.

The S95H's new, enhanced Glare Free screen rejects screen reflections while maintaining better black levels and contrast than previous versions of the tech.

Features

The S95H series is Wireless One Connect Ready. That's a new feature for a Samsung OLED TV, and one that gives you the option to pair it with Samsung's Wireless One Connect Box. Doing so gives you a total of eight HDMI 2.1 ports with 4K/165Hz support: four on the unit itself and four on the wireless One Connect Box. That's a lot of inputs. (The wireless connection is 4K/165Hz-capable.)

For the S95H series, Samsung is using the same NQ4 AI Gen3 Processor found in its 2025 flagship TVs. This processor brings a host of AI-based picture enhancements such as 4K AI Upscaling Pro for lower-resolution content, AI Motion Enhancer Pro, and an Adaptive Picture function that uses AI to optimize pictures based on the content genre. Another new feature is AI Customization, which can create a custom picture setting based on the viewer’s response to a series of images, and there’s also Real Depth Enhancer, a feature first introduced in 2025 Samsung TVs that analyzes pictures in real-time to enhance foreground detail.

Gamers who skip Samsung’s Wireless One Connect Box will still find plenty to work with. The S95H includes four HDMI 2.1 ports that support up to 165Hz, along with FreeSync Premium Pro and HDR10+ Gaming for smoother, more responsive play. Samsung’s Gaming Hub also returns with a deep bench of cloud services, including Xbox, NVIDIA GeForce Now, Luna, Blacknut, Antstream, and Boosteroid, giving players multiple ways to jump into games without a console.


A 4.2.2-channel, 70W speaker array is used for the S95H’s sound. Samsung’s Object Tracking Sound+ feature ensures that dialogue and sound effects accurately follow the onscreen action, and Active Voice Amplifier can be used to dynamically enhance dialogue. Owners of compatible Samsung soundbars can also take advantage of Q-Symphony, a feature that combines the output of the TV’s speakers with the soundbar for an enhanced presentation.

Last but not least, Samsung includes a set of support feet with the S95H for viewers who choose not to wall-mount the TV.

The S95H is designed for a flush wall mount, but also ships with support feet for stand installations.

Hands-on with the Samsung S95H OLED TV

Samsung invited eCoustics to its New Jersey headquarters in early March to spend some hands-on time with the S95H and some of its other new TVs. As part of that process, I was able to make a full set of measurements on a 65-inch S95H.

As mentioned above, Samsung has said that the new S95H OLED is 30% brighter than last year’s Samsung S95F. While I didn’t review the S95F, I can confirm that the S95H is the brightest OLED TV I’ve yet measured, topping even the very bright LG G5 OLED on that front.


Measured on a 10% white window pattern, the S95H’s peak HDR brightness in the Standard picture mode was 2,553 nits, and it measured 251 nits on a 100% (fullscreen) pattern. Peak HDR brightness was notably lower in Filmmaker Mode, measuring 1,072 nits on a 10% window, and 251 nits fullscreen.

The S95H’s peak HDR brightness (10% window) in Standard mode is comparable to some of the best mini-LED TVs on the market, even exceeding Samsung’s own flagship QN90F mini-LED TV from 2025 on that parameter.

At the S95H’s default Filmmaker Mode picture settings, P3 color space coverage measured 99.9% and BT.2020 coverage was 88.4%. Those are stellar results, and they also exceed what I measured on last year’s OLED flagship from LG, the G5.

For subjective testing, I opted to use the TV’s Movie picture mode, which produced a brighter picture than Filmmaker mode. I also watched content in both dark and bright room conditions to evaluate the effectiveness of the S95H’s Glare Free screen.


Checking out Spider-Man: Into the Spider-Verse (in 4K, using a Kaleidescape movie player as a source), the S95H’s picture looked nothing short of fantastic. Shadows were deep and detailed, and the movie’s rich colors popped on the screen. The computer-generated animation in Into the Spider-Verse is finely textured, and the S95H easily revealed the intricate patterns and graphic overlays in the backgrounds.

The S95H has exceptional peak HDR brightness in its Standard picture mode.

The movie Alpha is a known torture test for HDR tone mapping on TVs owing to its 4,000-nit HDR transfer. (Most 4K/HDR movies max out at around 1,000 nits.) Watching a scene where a figure is positioned against a bright sunset, the S95H's excellent tone mapping preserved image contrast without eliminating highlight detail.

Next up on the S95H was the opening sequence from the movie Baby Driver – another torture test, this one for motion handling. Samsung’s OLED showed a fair amount of motion judder on this scene in the default Movie mode. As usual with Samsung TVs, adjusting the blur and judder settings in the Custom motion preset fixed the issue, and it didn’t introduce any of the dreaded “soap opera” effect that makes movies look like TV shows.

Switching to the TV’s Standard picture mode to let the S95H display pictures at maximum brightness, I turned on the lights and watched some clips from the movie F1. While color accuracy took a hit in Standard mode, the picture looked incredibly bright for an OLED TV. And the set’s Glare Free screen also did a great job of preventing reflections from the room’s overhead lights without losing black depth and detail.

Samsung’s rechargeable Solar Cell remote is used to control the S95H.

The Bottom Line

The Samsung S95H’s fancy FloatLayer design may not be for everyone, but it’s not surprising given the trend of TV makers trying to make their flagship models more luxurious and living room friendly. If that idea doesn’t ring a bell, check out the LG OLED evo W6 Wallpaper TV the company introduced at CES 2026.


Along with making their best TVs more visually appealing, companies like Samsung are pushing the brightness capabilities of OLED and introducing screen glare-reduction tech to negotiate the picture quality compromises that come with installing TVs in well-lit living rooms. From what I saw during the hour or so that I spent going hands-on with it, Samsung’s new flagship OLED TV manages to look great in both dark and bright lighting conditions, and the performance of its Glare Free screen is a marked improvement over last year’s Samsung S95F (which itself improved on its S95D predecessor when it came to black level retention).

Will the Samsung S95H turn out to be the best OLED TV of 2026? It’s a bit early out of the gate to make that determination, but Samsung’s new flagship OLED is certain to grab attention.


Pros:

  • High brightness for an OLED TV
  • Refined overall picture quality
  • Effective Glare Free screen
  • Wireless One Connect Ready
  • Samsung Art Store support

Cons:

  • Pricey
  • New FloatLayer design not for everyone
  • Limited picture brightness in Filmmaker Mode

Where to buy:

  • 55-inch S95H: $2,499.99
  • 65-inch S95H: $3,399.99
  • 77-inch S95H: $4,499.99
  • 83-inch S95H: $6,499.99


Money transfer app Duc exposed thousands of driver’s licenses and passports to the open web


A publicly accessible Amazon-hosted storage server allowed anyone with a web browser to access potentially hundreds of thousands of people’s personal data without needing a password. This included driver’s licenses, passports, and other personal information collected by the Duc App, a money-transfer service owned by Toronto-based Duales.

The Canadian fintech company said it resolved the data exposure on Tuesday after TechCrunch alerted its chief executive that one of the company’s cloud storage servers was publicly listing its contents, without a password.

The data was also stored unencrypted, meaning anyone with a link to the data was able to view it in full.

Anurag Sen, a security researcher at CyPeace who discovered the security lapse earlier in the week, contacted TechCrunch in an effort to notify the data’s owner. Sen said that anyone could view and download the data using their browser just by knowing the easy-to-guess web address of the storage server.


According to Sen, the Amazon-hosted storage server listed over 360,000 files containing government-issued documents and other information used by customers to verify their identity through “know your customer” checks. These files included user-uploaded selfies to prove their real-world likeness.

TechCrunch could not ascertain the precise number of exposed driver’s licenses and passports; however, several folders in the exposed bucket each contained tens of thousands of user-uploaded files, a sampling of which listed driver’s licenses, passports, and selfies.

Duales touts its app as a way for users to send money to other users, including overseas in Cuba and elsewhere. Its Android app listing on the Google Play app store shows more than 100,000 user downloads to date.

The files, which dated back to September 2020 and were being uploaded daily, also contained spreadsheets listing customer names, home addresses, and the dates, times, and details of their transactions.


When reached by email, Duales chief executive Henry Martinez González told TechCrunch that the data was stored on a “staging site,” referring to a website used primarily for testing, but did not explain why customers’ personal information was publicly accessible in the same database.

“All protections are in place,” Martinez said. “We are notifying the appropriate parties. We have not contracted any services from you.”

After TechCrunch emailed the company, the files on the storage server were made inaccessible, though a list of the server’s contents is still visible.

Martinez would not say if the company had the technical means, such as logs, to determine who or how many people accessed the data. 


Duc App's website was briefly down on Thursday, displaying a “bad gateway” error.

It’s not clear how or for what reason Duales left its Amazon-hosted storage server publicly open to the internet. In recent years, Amazon has added security checks to prevent users from inadvertently exposing their data to the internet after a series of high-profile incidents where several corporate giants, including a U.S. spy agency, published sensitive data to the web due to misconfigurations.
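For readers wondering what the fix looks like in practice, here is a minimal sketch, using boto3, of the guardrail that prevents this class of exposure: blocking all public access on a bucket so anonymous listing and reads are impossible. The bucket name is hypothetical, and this illustrates the general misconfiguration described above, not Duales' actual setup.

```python
# Minimal sketch: audit and lock down an S3 bucket's public access with boto3.
# "example-kyc-uploads" is a hypothetical bucket name, not Duales' real one.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
BUCKET = "example-kyc-uploads"  # hypothetical, for illustration only

# Audit: is any public access block configured at all?
try:
    cfg = s3.get_public_access_block(Bucket=BUCKET)
    print(cfg["PublicAccessBlockConfiguration"])
except ClientError:
    print("No public access block set; the bucket may be listable by anyone.")

# Remediate: block public ACLs and public bucket policies, which also stops
# anonymous, directory-style listing of the bucket's contents.
s3.put_public_access_block(
    Bucket=BUCKET,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```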

When reached by TechCrunch as part of our outreach to contact the app’s owner, Canada’s privacy regulator said it was seeking more information from the company.

“The Office of the Privacy Commissioner of Canada has reached out to the company to obtain more information and determine next steps,” a spokesperson for the regulator told TechCrunch by email, declining to comment further.


Duc App is the latest in a string of recent security lapses involving the exposure of people's sensitive identity data. This data exposure comes as apps and websites increasingly require their users to upload government-issued documents to verify who they say they are, without taking enough steps to secure the data they collect.

Last year, popular app TeaOnHer exposed thousands of its users’ passports and driver’s licenses, which the app required users to upload before allowing them into the app’s gated community. Discord last year also confirmed a data breach affecting around 70,000 government-issued documents uploaded by users who sought to verify their age, amid a worldwide effort to enact online age checking laws.


‘This rootkit is highly persistent; a standard factory reset will not remove it’: “NoVoice” Android malware on Google Play infects 50 apps across 2.3 million devices, here’s what we know



  • McAfee uncovers NoVoice malware hidden in 50+ Google Play apps with 2.3 million downloads
  • Malware exploits old Android kernel and GPU flaws, persists even after factory reset
  • Injects code into apps like WhatsApp to hijack sessions; Google has removed apps but infected devices remain compromised

Millions of Android devices were infected with malware that spied on users' WhatsApp chats and that even a factory reset wouldn't wipe, experts have warned.

Researchers at McAfee have published an in-depth report on NoVoice, a new Android malware variant found in more than 50 apps hosted on the Google Play store, downloaded more than 2.3 million times combined.


Nvidia Rolls Out Its Fix For PC Gaming’s ‘Compiling Shaders’ Wait Times


Nvidia has begun rolling out a beta feature that automatically compiles game shaders while a PC is idle. It won't eliminate shader compilation the first time a game runs, but Ars Technica reports it could help reduce those repeated wait times. From the report: Nvidia's new Auto Shader Compilation system promises to “reduc[e] the frequency of game runtime compilation after driver updates” for users running Nvidia's GeForce Game Ready Driver 595.97 WHQL or later. When the feature is active and your machine is idle, the app will automatically start rebuilding DirectX shaders for your games so they're all set to roll the next time they launch.

While the feature defaults to being turned off when the Nvidia App is first downloaded, users can activate it by going to the Graphics Tab > Global Settings > Shader Cache. There, they can set aside disk space for precompiled shaders and decide how many system resources the compilation process should use. App users can also manually force shader recompilation through the app rather than waiting for the machine to go idle.

Unfortunately, Nvidia warns that users will still have to generate shaders in-game after downloading a title for the first time. The Auto Shader Compiler system only generates the new shaders needed after subsequent driver updates following that first run of a new title.
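To make the described behaviour concrete, here is a conceptual sketch (not Nvidia's code) of a shader cache keyed by driver version: a driver update invalidates every entry, and an idle-time pass rebuilds them before the next launch. All names here are illustrative.

```python
# Conceptual sketch of the caching behaviour described above. Compiled shaders
# are keyed by (driver version, shader hash), so a driver update invalidates
# every entry; an idle-time pass then warms the cache again.
import hashlib

def compile_shader(source: str) -> bytes:
    """Stand-in for the slow driver compile step."""
    return source.encode()

class ShaderCache:
    def __init__(self, driver_version: str):
        self.driver_version = driver_version
        self.store: dict[str, bytes] = {}

    def _key(self, source: str) -> str:
        # Keying on the driver version is what forces recompilation after
        # each driver update -- the stutter this feature targets.
        return hashlib.sha256(
            f"{self.driver_version}:{source}".encode()
        ).hexdigest()

    def get(self, source: str) -> bytes:
        """Fetch a compiled shader, compiling on a miss (the in-game
        'compiling shaders' stall)."""
        key = self._key(source)
        if key not in self.store:
            self.store[key] = compile_shader(source)
        return self.store[key]

    def precompile_when_idle(self, known_shaders: list[str]) -> None:
        """Idle-time pass: warm the cache against the current driver so the
        next launch only hits the fast path."""
        for src in known_shaders:
            self.get(src)
```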


What issues arise when code has the ability to write and review itself?


Agustin Huerta discusses Anthropic’s new Code Review feature and the importance of AI governance.

As more and more organisations and professionals utilise technologies that make coding simpler, they potentially also introduce additional dangers, as the speed at which code can now be generated can lead to poor security practices and risky behaviours.  

In March, US AI and research company Anthropic launched Code Review, a new feature designed to catch and eliminate bugs before they ever make it into a software's codebase. It's a move that Globant's senior vice-president of digital innovation, Agustin Huerta, said reflects a “shift in software development workflows as AI tools increasingly begin to own more of the software development lifecycle”.

He told SiliconRepublic.com, “It uses multiple specialised agents to review code for risks and bugs, cross-check amongst one another and prioritise the most relevant issues for reviewers.”
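As a rough illustration of that pattern (several specialised reviewers, cross-checking and prioritisation), here is a minimal sketch. It is not Anthropic's Code Review implementation; the toy run_agent function stands in for a call to an LLM-backed reviewer.

```python
# Minimal sketch of a multi-agent review pipeline: independent specialised
# reviewers, cross-checked by vote-counting, prioritised by agreement.
from collections import Counter

def run_agent(agent: str, diff: str) -> list[tuple[int, str]]:
    """Toy reviewer returning (line, issue) findings. A real system would
    prompt a model specialised for security, correctness, style, etc."""
    findings = {
        "security":    [(42, "unsanitised input"), (88, "hardcoded secret")],
        "correctness": [(42, "unsanitised input"), (17, "off-by-one bound")],
        "style":       [(42, "unsanitised input")],
    }
    return findings.get(agent, [])

def review(diff: str, agents: list[str], min_agreement: int = 2) -> list[dict]:
    # 1. Each specialised agent reviews the diff independently.
    all_findings = [f for a in agents for f in run_agent(a, diff)]
    # 2. Cross-check: count how many agents flagged the same (line, issue).
    votes = Counter(all_findings)
    # 3. Keep findings confirmed by >= min_agreement agents, most-agreed
    #    first, so reviewers see the most relevant issues at the top.
    return sorted(
        ({"line": l, "issue": i, "votes": v}
         for (l, i), v in votes.items() if v >= min_agreement),
        key=lambda f: -f["votes"],
    )

print(review("<diff>", ["security", "correctness", "style"]))
# The unsanitised-input finding (3 votes) outranks single-agent findings.
```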


But, he noted, while this does help teams better manage higher volumes of code, it doesn't replace human reviewers, and it raises a few concerns of its own when it comes to long-term security and best practice.

Critical coding concerns?

“The concern isn't that code can write and review itself, but that organisations may assume less oversight is needed,” said Huerta, who elaborated that, in reality, the same principles that govern traditional software development remain equally important when AI agents are involved, if not more so.

“The processes and workflow structures that once governed human coders should be adapted to govern agents, including workflow integration, human review, data readiness and observability. Teams need clear visibility into how code is generated, reviewed and promoted across environments, along with defined checkpoints to validate outputs.” 
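One way to picture such a checkpoint is the sketch below: a promotion gate where agent-generated code only moves between environments through automated checks plus an explicit human sign-off, with every decision logged for observability. Names and check functions are illustrative assumptions, not a specific product's API.

```python
# Sketch of a gated promotion workflow with a human checkpoint and an audit
# trail. Illustrative only; not tied to any particular tool.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Change:
    diff: str
    generated_by: str                      # which agent produced the code
    audit_log: list = field(default_factory=list)

def promote(change: Change, env: str,
            checks: list[Callable[[str], bool]],
            human_approver: Callable[[Change], bool]) -> bool:
    """Promote `change` to `env` only if every automated check passes AND a
    human signs off. The audit log records how the code was validated."""
    for check in checks:
        passed = check(change.diff)
        change.audit_log.append((env, check.__name__, passed))
        if not passed:
            return False                   # fail closed; never auto-promote
    approved = human_approver(change)      # the defined human checkpoint
    change.audit_log.append((env, "human_review", approved))
    return approved

# Example: a trivial automated check and an always-approving human.
def tests_pass(diff: str) -> bool:
    return "TODO" not in diff              # stand-in for running a test suite

change = Change(diff="fix: sanitise user input", generated_by="refactor-agent")
print(promote(change, "staging", [tests_pass], human_approver=lambda c: True))
print(change.audit_log)
```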

He said that though agents can carry out a number of tasks (for example, assisting with, recommending and even executing prompts within a set of defined guidelines), code quality and risk management should remain the responsibility of people who themselves follow a clear process.


He finds that nowadays too many organisations are electing to delegate tasks such as debugging and code writing to AI agents rather than to a real employee, amplifying the potential for risk. And it isn't only AI hallucinations and errors that are sneaking past the automated workforce.

“A more significant concern is an overreliance and unchecked trust in agent autonomy. Overdependence on agent-driven work without the right checks and balances can create blind spots and amplify small issues into larger problems, such as system outages or security risks.

“For example, version control systems and code repositories are a way to maintain observability over human-written code, supported by structured review processes. When these workflows become automated without incorporating an additional layer of human oversight, organisations risk compounding mistakes and introducing larger structural issues that are harder to detect or resolve.”

He finds that while human involvement is irreplaceable, organisational transparency across the development lifecycle is equally important. “Organisations need visibility into how agents are accessing data, how they're reasoning and why tasks are deemed complete. This level of observability is key in managing human-agent workflows, identifying areas for growth and maintaining accountability.”


Moreover, when agents are correctly implemented and supervised, there are clear and significant benefits.

Enterprising AI

AI agents undoubtedly bring a new element to the workplace, for better or for worse, but there are tangible benefits, such as the ability to boost productivity, minimise laborious, data-heavy tasks, support developers in the coding process and identify the issues or patterns that are often overlooked by people.

Huerta said, “By taking on repetitive work that was previously handled by people, agents allow teams to focus on higher-value tasks and activities. These benefits are best realised when AI is used as an enhancement, not a replacement, for human judgment.

“The most successful models are a hybrid of human-agent teams, where the speed and scale of AI are combined with human oversight to refine and improve workflows, instead of just automating them.”


A key challenge going forward, he explained, will be establishing a balance between the adoption and implementation of AI agents and responsible use. He said that, as agents become more advanced and more capable, organisations risk losing sight of basic best practices in crucial areas such as those that govern software development.

“Leaders must continue to prioritise observability, governance and human-agent collaboration despite pressures to prove ROI from AI systems.”


Asus VM670KA Review: A Beautiful All-in-One Desktop with Ryzen AI 7


AiOs, or all-in-one computers, have been around for quite some time. And their promise is simple: they give you the big-screen experience of using a desktop, without the hassle of finding the right components and building a PC yourself. Despite being a tech reviewer, I've been intrigued by AiOs for a long time, since, spoiler alert, I cannot build a PC myself. It's just intimidating, and the risk of ending up with something that doesn't really work well for my workflow isn't one I want to take. Asus is one of the few brands active in the AiO market, and their recently introduced VM670KA is the best of the bunch. That's because it packs a Ryzen AI 7 350, 16GB of RAM, and a 27-inch Full HD touchscreen display.

All this at a price of ₹1,12,990 sounds like a pretty sweet deal, especially considering the current world situation, which is plagued by sky-high RAM prices (blame your AI companions, please). But is it though? I called Asus and arranged to have the VM670KA AiO in for review. To do it justice, I swapped my MacBook and used the AiO as my primary WFH machine for over two weeks. Here’s how it stacked up.

Asus VM670KA Review

Hisan Kidwai


Summary

With the Asus VM670KA, you get a big-screen desktop to work or study on without fiddling with a separate PC. The display is plenty decent, albeit a little less pixel-dense than I’d like. The speakers are super, and the performance can handle everyone’s workdays and even some light gaming/video editing. Not to mention the beautiful white design that makes the AiO look sweet.


Design & Hardware


My job as a tech reviewer is to work from home, meaning all I do every day is stare at my MacBook’s screen. It never really occurred to me that a 13-inch screen might be too small. However, the minute I configured the VM670, it struck me how much I was missing out. Everything was spaced out to perfection, which put less strain on my eyes. Coming back to the design, I think Asus has done an excellent job. It’s a sober yet sophisticated AiO that looks premium without being too loud. I do love the white color. Asus has shaved off 25% of the thickness compared to the VM670’s predecessor, and the bottom bezel is now narrower. All this translates to a sleeker setup that can rival any modern monitor.

The AiO comes with a stand that attaches easily with a single screw. The stand is made from metal, and it's pretty sturdy; it held firm even when I accidentally bumped into the table a few times. While there are no height-adjustment settings, you can tilt the screen up or down, which came in handy when I wanted to work standing up. The only gripe I have with the design is the retractable camera. Sure, it's a great tool to protect one's privacy by hiding away the webcam, but it also takes away the ability to mount a monitor light bar. I'm a fan of those, so it was an annoyance. That said, the webcam quality was solid in artificial lighting.

Unlike modern laptops, the VM670 is full of useful ports. The backside houses three USB 3.2 Gen 1 Type A ports, a USB 3.2 Gen 1 Type-C port, a LAN, a DC-in (for power), an HDMI-in for making the AiO a secondary display for your laptop, and an HDMI-out to connect to external monitors. There’s more, as underneath the belly, there’s one more USB 2.0 port for connecting the keyboard and mouse, an HDMI mode switcher, a Kensington Lock, and a headphone/microphone jack.

Keyboard & Mouse


To help you get running quickly, Asus bundles a mouse and keyboard with the VM670, and both connect via a 2.4GHz dongle stored inside the mouse. While I wouldn't describe the keyboard as groundbreaking, it's not bad either. There's ample travel, and there's some feedback when the keys are pressed. It's just that the keys aren't as sharp as the ones on my MacBook. You can sometimes feel that mushiness, but it's not a big con, and I did get used to the keyboard quickly, without losing much of my typing speed.

The mouse, on the other hand, is plenty good. I had no problem with its tracking, even when playing some games, for that matter. The grips felt comfortable in my hand, and my wrists, which are super prone to fatigue, did not ache after long periods of use. Beyond that, the clicks were accurate, and the latency wasn’t noticeable to my eyes.

Display & Speakers


The Asus VM670KA features a 27-inch FHD IPS display with a 93% screen-to-body ratio and a 75Hz refresh rate. When I first got the AiO, I was worried that the 1080p resolution might not be enough for such a large display. Fortunately, I was proven wrong pretty quickly. From a normal viewing distance, I didn’t notice much pixelation when typing this review on the device. Still, I’d have loved to see a 1440p panel at this price. On the flip side, Asus has taken care of the color accuracy, with 100% coverage of the sRGB color space.

I recently caught up to the Breaking Bad hype train and decided to watch the season 3 finale on the VM670, and it was a very enjoyable experience. Colors looked super nice, the motion was smooth, and there wasn't any glare from the light behind me since the display is matte-coated. The Dolby Atmos stereo speakers deserve the same praise, as they can easily fill an entire room with powerful sound without sounding harsh at higher volumes. The bass is decent, and dialogue remains clear.


As mentioned earlier, the VM670KA has one more trick up its sleeve, and that’s a touchscreen. You might be wondering — what’s the point of a touchscreen on a desktop? The answer to that is children. An AiO makes perfect sense for parents to get for their children who might have online classes or need to work on a project. A touchscreen is a handy tool for that, and makes navigation much simpler.

Performance


Performance is what makes or breaks the experience with AiOs, or any desktop for that matter. If it can't handle everyday work, then it's of no use. At the beating heart of the Asus VM670KA sits the AMD Ryzen AI 7 350 processor, with 8 cores and 16 threads, rated for a maximum frequency of 5 GHz. Graphics is handled by the integrated Radeon 860M, and there's 16GB of LPDDR5x RAM and a 1TB M.2 NVMe PCIe 4.0 SSD.

All of this results in strong everyday performance. The VM670 doesn’t struggle with typical workloads at all. Run 30 Chrome tabs at once? Watch HDR videos on YouTube or quickly switch from a game to an eBook before your parents notice. Not a problem. Never once did I notice a stutter in these tasks, and if your work mainly involves the browser, as mine does, then the performance is more than good enough.

I’m no video editor, but as this is a review, I decided to try my hand at it. The experience? Not bad at all. For those who mainly edit reels in 1080p or even 4K, the VM670 packs a punch. The timeline played smoothly, and render times weren’t too high.

While benchmarks don’t tell the full story of performance, they do paint a picture of a device’s performance ceiling. The VM670 scored 2,833 in Geekbench’s single-core and 10,254 in the multi-core test. Then I moved away from stressing the CPU to stressing the GPU, where the Radeon 860M scored 22,042 in the Geekbench test. For context, this performance is similar to that of the Intel Core i7-13620H processor found in the Asus ExpertBook P1.


Can you game?


Given the decent performance and appeal towards children, gaming may be on your radar as well. And I will set the expectations straight. You won’t be able to play AAA titles like Cyberpunk 2077 without dropping the quality to PS3 levels on the Asus VM670KA. If that’s a priority for you, the Strix or ROG line would serve you better.

That said, if you play light titles like Counter-Strike 2, Valorant, Fall Guys, or even F1 2025, then the AiO could be handy. I played all four and got over 60 fps in both Counter-Strike 2 and Valorant at medium settings. Fall Guys hit 60 fps pretty easily, too, and F1 clocked about 45 fps at medium settings. GTA V also runs, but frame rates are limited to about 35-40.

Verdict


At ₹1,12,990, the Asus VM670KA isn’t cheap. But what it promises isn’t something anyone else can do. For the money, you get a big-screen desktop to work or study on without fiddling with a separate PC. The display is plenty decent, albeit a little less pixel-dense than I’d like. The speakers are super, and the performance can handle everyone’s workdays and even some light gaming/video editing. Not to mention the beautiful white design that makes the VM670KA look sweet.


Apple's brand-new M5 MacBook Air drops to record low price on Amazon


Amazon is kicking off April with steeper discounts on Apple’s brand-new M5 MacBook Air.

Apple’s M5 MacBook Air hits new low prices at Amazon – Image credit: Apple

Save $85 on the 13-inch Air with 24GB of RAM and 1TB of storage, bringing the price down to $1,415.50 in Silver.
Buy M5/24GB/1TB MacBook Air for $1,415


Quanscient and Haiqu ran a 15-step nonlinear quantum fluid simulation


A new quantum algorithm ran a 15-step nonlinear fluid simulation around a solid obstacle on real quantum hardware, the most physically complex publicly documented demonstration of its kind. The technique reduces qubit requirements and circuit depth, bringing industrial CFD applications closer to feasibility.


Finnish simulation company Quanscient and quantum middleware developer Haiqu have demonstrated what they describe as the most physically complex quantum computational fluid dynamics simulation run to date on real hardware.

The two companies ran a 15-step nonlinear fluid simulation around a solid obstacle (fluid flowing around a shape, the kind of problem relevant to aircraft wing design or vehicle aerodynamics) on IBM's Heron R3 quantum computer, using a new algorithm they developed together called the One-Step Simplified Lattice Boltzmann Method (OSSLBM).

Computational fluid dynamics, or CFD, is one of the most resource-intensive branches of engineering simulation. Modelling how fluids behave around complex shapes requires enormous classical computing power, and the demands grow non-linearly as simulations become more detailed.


Quantum computing has long been theorised as a potential path to simulations beyond classical limits, but turning that potential into practice has been constrained by the sheer number of qubits and the circuit depth (the length of the quantum computation) required to run even moderately complex scenarios without the calculation being overwhelmed by errors.

The OSSLBM algorithm addresses this directly. Built on the quantum Lattice Boltzmann Method (QLBM), an established approach to mapping classical fluid equations onto quantum computation, the new framework reduces the computational overhead of each step, allowing a longer multi-step simulation to stay within what current quantum hardware can reliably execute.
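For orientation, the classical method being mapped is compact enough to sketch. Below is a minimal classical D2Q9 Lattice Boltzmann step in NumPy (collide-and-stream with bounce-back at a solid disc, i.e. flow around an obstacle). It is the textbook starting point that QLBM and OSSLBM encode into quantum circuits, not the quantum algorithm itself, and the grid size, relaxation time and inflow speed are illustrative.

```python
# Classical D2Q9 Lattice Boltzmann sketch: collide-and-stream around a disc.
import numpy as np

NX, NY, TAU = 128, 64, 0.6                       # grid and BGK relaxation time
c = np.array([(0,0),(1,0),(0,1),(-1,0),(0,-1),
              (1,1),(-1,1),(-1,-1),(1,-1)])      # D2Q9 lattice velocities
w = np.array([4/9] + [1/9]*4 + [1/36]*4)         # lattice weights
opp = [0, 3, 4, 1, 2, 7, 8, 5, 6]                # index of reversed velocity

def equilibrium(rho, u):
    """Nonlinear equilibrium distribution -- the hard part for quantum LBM."""
    cu = np.einsum("ia,axy->ixy", c, u)
    usq = (u**2).sum(axis=0)
    return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

# Uniform rightward inflow hitting a solid disc.
u0 = np.zeros((2, NX, NY))
u0[0] = 0.1
f = equilibrium(np.ones((NX, NY)), u0)
X, Y = np.meshgrid(np.arange(NX), np.arange(NY), indexing="ij")
solid = (X - NX//4)**2 + (Y - NY//2)**2 < (NY//8)**2

for step in range(15):                           # 15 time steps, as in the demo
    rho = f.sum(axis=0)
    u = np.einsum("ia,ixy->axy", c, f) / rho
    f_post = f - (f - equilibrium(rho, u)) / TAU  # BGK collision
    for i in range(9):                            # bounce-back at the obstacle
        f_post[i, solid] = f[opp[i], solid]
    for i, (cx, cy) in enumerate(c):              # stream along each velocity
        f[i] = np.roll(np.roll(f_post[i], cx, axis=0), cy, axis=1)
```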

Haiqu’s middleware layer was central to this: it reduced circuit depth, developed new algorithmic subroutines, and applied targeted error-reduction techniques that allowed the system to complete a workflow that would otherwise have been out of reach for today’s devices.

The significance of the result lies in the obstacle. Previous quantum CFD demonstrations have largely focused on simpler linear scenarios: fluid behaviour without the complications of interacting with a solid boundary.


Modelling how a fluid moves around an object is a prerequisite for any industrially meaningful application. Professor Oleksandr Kyriienko, Chair in Quantum Technologies at the University of Sheffield, described the work as “an interesting and timely contribution to quantum CFD,” adding that more research of this kind is needed to reach industrially relevant quantum solutions.

Quanscient and Haiqu have been collaborating on quantum CFD since at least 2024, when they were finalists in the Airbus and BMW Quantum Mobility Challenge, and have previously demonstrated work on IonQ hardware via Amazon Braket. Industrial applications remain years away; the current work is a research milestone establishing that the approach is feasible on current hardware at this level of complexity.


Commonwealth Fusion Systems leans on magnets for near-term revenue


Commonwealth Fusion Systems said on Thursday it would sell high-temperature superconducting magnets to Realta Fusion, the second in a string of deals that suggests the company will lean heavily on its magnet technology in the coming years to bring in much-needed revenue.

“It’s the largest deal of this kind to date for CFS,” Rick Needham, the company’s COO, told reporters on a call. 

Commonwealth Fusion Systems, or CFS, previously sold magnets to the WHAM experiment at the University of Wisconsin, which fusion startup Realta collaborates closely with. The physics behind WHAM underpins Realta’s approach to fusion power, which is known as a magnetic mirror reactor. 

In a magnetic mirror, plasma is confined into a shape that resembles two 2-liter soda bottles connected at the base. On each end, powerful magnets pinch the plasma and force it back toward the center. Weaker magnets encircle the middle of the bottle shape.


To make a more powerful reactor, Khosla-backed Realta would only need to expand the middle section, and because those magnets are less powerful, they’re cheaper. Per kilowatt-hour costs should fall as Realta’s reactors increase in size.

CFS is pursuing another form of magnetic confinement fusion called a tokamak. In a tokamak, D-shaped magnets cast powerful fields to keep plasma circulating in a doughnut-like shape inside. Over the years, the company has refined its magnets in pursuit of putting electrons on the grid from Arc, its future commercial-scale reactor that’s slated to be built in Virginia.

Both CFS’s and Realta’s existence stems from the magnets themselves. CFS was founded in 2018 after scientists at MIT realized that a new class of commercially available high-temperature superconductors could underpin a viable tokamak design. Realta was founded a few years later when physicists at the University of Wisconsin “saw that there was a new technology, a game changer that would enable us to go back to the [magnetic] mirror and avail of those engineering advantages that the concept has,” co-founder and CEO Kieran Furlong said.


In addition to the Realta and WHAM deals, CFS has also licensed its high-temperature superconducting magnet technology to Type One Fusion, which is working on a third type of reactor design known as a stellarator. While the latter deal doesn’t include CFS building actual magnets for the company, it could lead to that one day, Christine Dunn, CFS’s head of external communications, told TechCrunch.


The deals will help CFS pay off its investment in magnet manufacturing. The startup spent seven years and hundreds of millions of dollars building a factory capable of producing high-temperature superconducting tape designed to fusion-power specifications. So far, that material has gone toward building Sparc, the company’s demonstration reactor, which won’t turn on until later this year. There will be a gap until work begins in earnest on its commercial-scale power plant Arc. These deals keep the factory running in between.

“With Sparc now 70% complete, it was excellent timing to start supporting Realta with our magnet manufacturing,” Needham said.

Because Realta and Type One are pursuing different reactor designs, CFS apparently doesn’t view them as directly competitive at the moment. In the marketplace, Realta and CFS are even further apart, with the former focusing initially on industrial applications that need large amounts of heat.

To date, CFS has raised nearly $3 billion — a large chunk of all venture dollars raised by fusion startups. That’s put the company in an enviable position, giving it the means to build key facilities like its magnet factory before competitors can. The startup pitches these deals as a service to the broader fusion industry, making available technologies that would cost many millions to replicate. That’s certainly true, but it also gives it access to even more venture investment, even if it’s in a roundabout way.



United’s mobile app now shows TSA wait times at select airports


United Airlines is updating its iOS and Android mobile apps with several new features, including estimated security wait times to give travelers a better idea of when they should arrive at the airport. The move comes as the ongoing partial government shutdown has left TSA checkpoints understaffed.

In the “Travel” section of the United mobile app, travelers can now view security wait times for the airline’s U.S. hub airports in Chicago, Denver, Houston, Los Angeles, New York/Newark, San Francisco, and Washington D.C. Users will see estimated wait times for specific lanes, including standard security and TSA PreCheck, throughout terminals serving United customers.

“We appreciate the work and professionalism of our TSA agents, and while most began receiving back pay earlier this week, the U.S. Department of Homeland Security shutdown continues and people want to stay informed about expected security wait times at our airports,” Jason Birnbaum, United’s chief information officer, said in a press release. “Our customers rely on our mobile app for all their travel needs, and this new feature lets them know what to expect and better plan their trip.”

The app is also rolling out updates designed for passengers with connecting flights. Travelers will now receive personalized, turn-by-turn directions to their next gate, complete with estimated walking times, real-time status updates, and tips for longer layovers. It will also provide a “heads up” if United can hold a plane for passengers with tight connections.


The app will offer automatic rebooking assistance as well. Instead of waiting in line to speak with an agent or manually searching for alternatives, United’s self-service tools will automatically present travelers with rebooking options, along with baggage tracking details and meal and hotel vouchers if they’re eligible for them, in cases where a flight is delayed or canceled.

The app has also integrated Apple’s “Share Item Location” feature for AirTag, allowing travelers who use an AirTag or other Find My network accessory to share their item’s location with United’s customer service team in the event that their baggage is lost.

Users will also receive text updates featuring real-time radar maps to inform them of how severe weather in one region of the country can affect flights in another.

