
Tech

What The FDA’s 2026 Wellness Device Update Means For Wearables


With more and more sensors being crammed into the consumer devices that many of us wear every day, the question of where medical devices begin and end, and how they should be regulated, becomes ever more pertinent. When a 'watch' no longer just shows the time but tracks a dozen vital measurements, and the line between 'earbud' and 'hearing aid' is fuzzy at best, institutions like the US FDA have to update their medical device rules, as the agency did recently in its 2026 update.

The update determines how exactly these devices are regulated, and to what extent their data can be used for medical purposes. An important clarification in the 2026 update is the distinction between 'medical information' and 'signals/patterns': while a non-calibrated fitness tracker or smartwatch does not provide medically valid information, it can still detect patterns and events that warrant a closer look, such as indications of arrhythmia or low blood oxygen saturation.

As detailed in the IEEE Spectrum article, these consumer devices are thus 'general wellness' devices and should be marketed as such, without embellished claims. Least of all should they be sold as devices that provide medical information.

Another major question with these general wellness devices is what happens to the data they generate. While not medical information, it does reveal health information about a person that, say, a marketing company would kill to obtain. This privacy issue remains unresolved in the US market, while other countries impose strict requirements on how such data is handled.


Effectively, this leaves the designers of wearables relatively free to do whatever they want, as long as they do not claim that the data produced by their sensors constitutes medical information. With data handling strictly regulated in most markets but not in the US, this is something US users should definitely be aware of.

As for other medical device purposes like hearing aids, the earbuds capable of this fortunately do not generally collect health information. They do need local regulatory approval to enable the feature, however, even if any geofencing can be bypassed with some creative hacking.


Apple Event Set for March 4, Claimed to be a ‘Special Experience’


Apple Event March 4 Products Invitation
Photo credit: Volodymyr Lenard via Yanko Design
Apple's March 4 event is reportedly going to be a big one. The company touts it as a "special Apple experience," with in-person meetups in New York, London, and Shanghai at 9 a.m. ET, but there will be no keynote from Cupertino. The invites were just a simple Apple logo broken into yellow, green, and blue sections, a little detail hinting that new hardware is on the way.



Laptops are expected to headline the event, with new MacBook Pros on the way. They'll reportedly include the M5 Pro and M5 Max chips, which offer a significant boost for applications like video editing and software development. Both the 14-inch and 16-inch variants are expected to receive an update following last year's refresh.


The MacBook Air will also be along for the ride, most likely running the standard M5 chip, which should be speedy and efficient for everyday use without the power consumption of the Pro variants. One intriguing rumor suggests Apple may be producing an entry-level MacBook priced below the Air. It would use an A-series chip, possibly the A18 Pro from the current iPhones, and a screen just shy of 13 inches. Its aluminum construction keeps it light, and Apple reportedly wants to offer it in a variety of colors beyond silver and space gray. Pricing appears to be aimed squarely at students and first-time Mac users who may not be able to stretch to the pricier models.



Tablets are getting a look too: the iPad Air is due for an update and could gain the M4 chip for a smoother experience and improved multitasking and creative capabilities. The base iPad is reportedly moving up to an A18 processor, which should keep it running smoothly for browsing, streaming, and basic work. Expect little change in design, though, as the emphasis is on increasing power.

In terms of phones, one rumor is looking very solid right now: the iPhone 17e. It would fill the budget slot in the lineup, with features like MagSafe charging and a better display than the 16e's.


I Tried Holafly eSIM in Vietnam For 2 Weeks: It’s the Best eSIM Service


International travel in 2025 is a tricky business, because there's so much to worry about. Booked your tickets? Now you've got to check whether your layover requires a transit visa from the connecting country. Applied for a visa for the destination? You need to check whether immigration is open 24/7, or you'll be stuck at the airport for hours at night. The list doesn't stop. And every time, I forget something, which often means my travel experiences involve running frantically through the airport. Another thing that has caused a lot of headaches is connectivity. Everyone needs a taxi from the airport to the hotel, and for that, you need data. Unfortunately, at most airports, I've only seen long queues to get a new SIM, which can sometimes take hours. Beyond that, buying a SIM at the airport is generally more expensive, making the overall experience less than ideal.

Fortunately, there is a solution to this headache, and that’s eSIM. As you may already know, eSIMs are just regular SIMs without the physical card. They function exactly the same and can be used in different countries. With an upcoming Vietnam trip, I decided to finally give eSIMs a go and chose Holafly for this test. These guys offer unlimited data plans in over 200 destinations, with no hidden charges or fuss.

Options


As mentioned, Holafly covers over 200 destinations spanning across North America, South America, Asia, Africa, Europe, Oceania, and even the Caribbean. For all these places, the company provides unlimited data, without a phone number. This means you can browse the web, watch content, and even make calls using apps like WhatsApp and Telegram.

In addition, if you're planning to visit multiple countries, Holafly also offers regional eSIMs and a global option that provides connectivity in 110+ countries and starts at $9.90 per day, with the per-day cost decreasing on longer plans. And if you use my code FOSSBYTES, you get 5% off your eSIM.

Beyond that, if your work requires you to travel to various countries each week, Holafly recently introduced a subscription, called Holafly Plans, with coverage in 160+ destinations. The 25GB plan costs $49.90 monthly, the unlimited data plan costs $64.90 monthly, and you can cancel the service at any time. Note that this is an introductory price, and you can get 10% off for 12 months using the coupon FOSSBYTES.


Holafly's eSIMs are not transferable, and, like every other eSIM, your phone must be unlocked to use the service. The eSIM does not work with carrier-locked phones, so be sure to check compatibility before buying. Older phones might not support eSIM at all, so check Holafly's website or app to see if your phone is compatible. Fortunately, if you do end up purchasing an eSIM that turns out to be incompatible, Holafly offers a generous 6-month return policy, which also applies if you cancel your travel plans. Plus, for any help, you can contact the company's 24/7 multilingual customer support team, staffed by real people who'd be happy to assist you.

The Setup

A person holding a phone with holafly installed

To have a pleasant travel experience, I needed to set up the eSIM before the actual flight date. After all, nobody wants to be stuck using airport Wi-Fi to install an eSIM. Fortunately, Holafly's setup is simple. I headed to the Holafly website, searched for Vietnam, and bought a 15-day plan for a total of $50.90, though you can configure plans anywhere from 1 to 90 days.

From there, installing it on my Android device was another straightforward journey, and there are multiple ways to do so: set it up manually, scan a QR code, or use the Holafly app for a one-button install (available for iPhone users on iOS 17.4 or above). I went the QR code route, and it took just a few minutes.

The Experience

a person holding the vivo x300 pro

Everything was set up a day before flying, and for the test, I was using my daily driver, the vivo X300 Pro. As soon as my plane touched down at Tan Son Nhat International Airport in HCMC, I was instantly connected to the service, with mobile data running. Naturally, the first order of business was to inform my parents since this was my first solo international trip, and they were obviously worried. I sent them a text, and it went pretty smoothly.

Once I got off the plane, the next item on the list was immigration. If you aren't already aware, immigration in Vietnam can take hours. However, there's a paid fast-track service that helps you get past all this nonsense. Since my Vietnamese speaking skills are basically non-existent, I used ChatGPT as my translator, which, thanks to the connection, worked just fine.

After reaching the hotel, it was time to push Holafly's connection to the limit. I started by downloading the new episodes of Squid Game Season 3; they finished in just a few minutes, and I could track my data usage in the Holafly app. I then ran a series of speed tests. On average, download speeds ranged from 45.6 Mbps to 56 Mbps, while upload speeds reached 39 Mbps. That's plenty for just about anyone: I could do everything from streaming content to playing PUBG with my friends. However, some streaming services like Netflix did not play well with the eSIM, throwing the odd "this content is not available at your location" error a couple of times. The Holafly app itself was also occasionally buggy, crashing while tracking my data usage.


After spending a few days in Ho Chi Minh City, the next destination on my travel list was Tuy Hoa. I’m the kind of person who likes to stay away from the touristy places, and it seemed like the best of the bunch. I took a train from Saigon to the region. Since Vietnamese trains don’t have Wi-Fi on board, I was relying on the Holafly connection, which, barring a few desolate forest regions, worked really well. There was decent coverage for about 90% of my journey. Even in the super quiet town of Tuy Hoa, the speeds were the same as in the big city, and I had no problems on my few excursions out of the city and into the wild as well.

Verdict

holafly campaign with fossbytes image

Holafly’s eSIM service is really good for just about anyone, simply because you get unlimited data. Throughout my Vietnam trip, I never worried about finishing my daily data quota, and that’s a very reassuring feeling. Beyond that, you get really solid coverage across almost the entire globe, a simple setup with various options, a global eSIM for frequent travellers, excellent customer support, and a generous 6-month refund policy. Sure, the app can be a bit buggy at times, and some streaming services can freak out, but Holafly is still a fantastic travel companion. If you have a trip coming up, be sure to give it a try, and don’t forget to save some money using my code FOSSBYTES.


Scottish Equity Partners invests $50m in UK AI insurtech MEA


Founded in 2021, MEA deploys AI for insurance underwriting, claims, and finance-related needs.

UK-based AI-native insurance technology company MEA has announced a $50m minority growth equity investment from Scottish Equity Partners (SEP).

According to data provided by MEA, operating costs account for around $2trn annually across the insurance industry. MEA's agentic AI products have, in some cases, reduced those costs by 60pc, the company said.

Founded in 2021, MEA deploys AI for insurance underwriting, claims, reinsurance and finance-related needs.


According to the company, its platform increases broker productivity and margins by 30pc and increases the average underwriting capacity by 40pc. Its products are pre-trained in the language and specificity of insurance requirements, easing customer deployment and integration.

MEA has four offices across Bermuda, India, the UK and the US, and has clients across 21 countries with more than $400bn in gross written premium processed through the platform. Its clientele includes AXIS, CAN, Accenture and ServiceNow.

The company is in its fourth consecutive year of profitable growth, MEA said. SEP's investment will support the company as it accelerates product development and customer engagement and continues the expansion plans it announced last October.

“We saw significant inbound interest from potential investors and chose SEP for their long-term perspective, collaborative style, and the strategic support they will provide as we enter our next phase of growth,” said MEA founder and CEO Martin Henley. Henley was previously the chief information officer at AXA.


“As the industry moves from AI experimentation to production, customers increasingly recognise the value of domain-specific technology that delivers results immediately.”

Angus Conroy, a managing partner at SEP added: “MEA has built a highly differentiated, production-grade platform with clear return on investment for global insurance groups.

“Strong customer adoption, growth, and capital efficiency reflect both the quality of the technology and the team’s deep insurance expertise.”



Deal Alert: A Favorite Flagship-Level Robot Vac is $700 Off


Autonomous robot vacuums are a category with a huge gulf in performance between the entry level and the high end. Casual users might be OK with something cheap, but a $200 robot vac and a $1,000 combination vacuum and mop with a base station are worlds apart.

To that end, YEEDI has made some of our favorite automated cleaning robots in the last couple of years, and they’re typically outfitted with industry-leading technology implemented via thoughtful engineering. While YEEDI offers a wide variety across all price ranges, the best ones aren’t cheap — except for right now. Equipped with flagship features like a remarkable 18,000Pa of suction and the impressively effective Ozmo roller mop (which cleans itself, by the way), the YEEDI S14 Plus is now a whopping $700 off.

The robot's outfitted with precisely tuned cleaning technology, including the brand's ZeroTangle 2.0 brushes that clean corners without getting lint or pet hair knotted up, and TruEdge 3D Sensor navigation that leverages a dToF sensor, a visual camera, and dual structured light projection for impressively intelligent guidance. Plus, the multifunctional base station means you rarely need to perform any maintenance or cleaning yourself. The robot does the vacuuming and mops up spills with a continuously self-cleaning spinning roller mop; it even constantly feeds in clean mop water, as opposed to common rotating pads that just smear messes around. The YEEDI S14 Plus conveniently empties itself and gets a hot-water wash-up back at the base station, to the tune of 167°F.

From a usability standpoint, the YEEDI S14 Plus benefits from plenty of refinement. The app is straightforward and supports a range of useful mapping and exclusion-zone functions, which makes it easy to get the most out of the exceptional 4+ hour battery life. It's easy to set timed schedules, plan routines for cleaning different rooms on a rotating basis, and monitor battery life and device health in real time. It's also compatible with Google Assistant and Amazon Alexa, and is newly Matter compatible, so you can call on it to spot-clean messes as soon as they happen or integrate it seamlessly into your broader smart home setup. If you're in the market for a robot vacuum, it's an awfully hard deal to pass up at $700 off.


One Simple Mistake Can Turn A Standard Oil Change Into A $20K Headache






For many of us, our vehicles rank among our most expensive and valuable possessions. A vehicle is not only a big investment, but also a necessity for daily life: it takes us to work, gets our kids to school, and carries our groceries. It's something we take care of, from rotating the tires to those vital oil changes that keep the engine healthy. Typically needed once or twice a year, or about every 7,000 to 10,000 miles, regular oil changes keep your engine well-lubricated, clear out debris, and protect engine parts from wear and tear.

You can often get your oil changed at the dealership where you bought your car or a mechanic, but many drivers instead choose fast-service locations where you simply drive in and wait, like Jiffy Lube or Valvoline. These locations typically offer faster service than your dealership, don’t require an appointment, and are sometimes less expensive. A Florida woman, however, will likely make a different choice the next time she needs an oil change — once her car is up and running again, that is.

Local ABC affiliate WFTV Channel 9 reported on Shannon Gerdauskas, who took her Mercedes to a Take 5 Oil Change in Deland in October 2025. Instead of changing her oil, an employee accidentally drained her transmission fluid, and she drove away without that fluid being replenished. She experienced issues almost immediately and returned to the shop, where the mistake was discovered. The estimated cost of repairs was more than $18,000, and Gerdauskas was almost on the hook for it.


Look for trusted shops

If you're asking yourself why the customer was almost responsible for damages caused by Take 5 Oil Change, Gerdauskas persistently asked the same thing. She did everything right, but Take 5 uses another company to handle damage claims. That company, Fleet Response, approved a transmission flush but not the transmission replacement that her dealership said her vehicle required, denying her claim. Only after WFTV got involved did Take 5 agree to cover the repair costs, issuing a statement to the news station: "While situations like this are rare, we strive to resolve matters fairly and transparently."

These incidents may not happen often, but Gerdauskas is far from alone in her story. In February 2025, Baton Rouge CBS affiliate WAFB reported on a woman who took her 2018 Hyundai to Walmart for an oil change, only to discover the following day that they failed to put the oil drain plug back in. It caused almost $10,000 in damages, and Walmart agreed to cover only $6,000. And in late 2025, Motor1.com shared a story of a man who found an oil-soaked rag in his brake system after he experienced problems following an oil change at a Texas Take 5 location, posing a “serious safety hazard.”


Of course, don't let horror stories keep you from regularly changing your vehicle's oil! We recommend taking your vehicle to a trusted location that is clean and well-maintained, checking reviews from other patrons, and looking for shops with certified technicians who don't pressure you into extra services.




Peter Steinberger joins OpenAI


Not long ago, Peter Steinberger was experimenting with a side project that quickly caught fire across the developer world. His open-source AI assistant, OpenClaw, wasn’t just another chatbot; it could act on your behalf, from managing emails to integrating with calendars and messaging platforms. 

Today, that project has a new chapter: Steinberger is joining OpenAI to help build the next generation of personal AI agents.

This move isn't just a talent acquisition. It marks a shift in how the AI industry thinks about assistants: from reactive systems you talk to, toward agents that take initiative and perform tasks autonomously, with potential implications for productivity, workflows, and personal automation.

OpenClaw first emerged in late 2025 under names like Clawdbot and Moltbot. What distinguished it was not fancy visuals or marketing, but its practical ambition: give users an AI that connects to their tools and executes workflows (booking flights, sorting messages, scheduling meetings) in ways that feel closer to agent than assistant.


It quickly went viral on GitHub, drawing more than 100,000 stars and millions of visits to its project page within weeks. 


Rather than turning OpenClaw into a standalone company, Steinberger chose to partner with OpenAI, a decision he explained in a blog post as driven by a simple goal: bring intelligent agents to a broader audience as quickly as possible.

According to him, OpenAI’s infrastructure, research resources, and product ecosystem offered the best path to scale such an ambitious idea. 

OpenAI’s CEO Sam Altman welcomed Steinberger’s move as strategic, underscoring that the company expects personal agents, systems capable of initiating, coordinating, and completing tasks across apps, to be an important part of future AI products.

Altman’s public post noted that OpenClaw will continue to exist as an open-source project under a new foundation supported by OpenAI, preserving its accessibility and community roots. 


The notion of AI that does things has been bubbling under the surface of tech discourse, but OpenClaw’s popularity crystallised it. Users interact with their agents through familiar interfaces like messaging platforms, but behind the scenes, these agents orchestrate API calls, automate scripts, handle notifications, and adapt to changing schedules, all without explicit commands after initial setup. 

This trajectory, from an experimental open-source project to a central piece of a major AI lab’s strategy, speaks to broader trends in the industry. Competitors from Anthropic to Google DeepMind have also indicated interest in multi-agent systems and autonomous workflows, but OpenAI’s move signals how seriously the category is now being taken.

It suggests a future where AI isn’t just conversational, but proactive and tightly integrated into everyday tooling. 

At the same time, this evolution raises fresh questions about governance and safety. OpenClaw’s open-source nature meant that developers could experiment freely, but that freedom also exposed potential attack surfaces; misconfigured agents with access to sensitive accounts or automation processes could be exploited if not properly safeguarded.


That is one reason why maintaining an open foundation with careful oversight matters as these tools scale. 

For OpenAI, Steinberger’s arrival embeds this agent-first thinking into its product roadmap at a critical moment. The company is already exploring “multi-agent” architectures, where specialised AIs coordinate with each other and with users to handle complex tasks more effectively than monolithic models alone. Steinberger brings an experimental sensibility and real-world experience that could accelerate those efforts. 

This could mean future versions of ChatGPT or other OpenAI products will be able to carry out tasks you define, rather than waiting for you to prompt them. That shift, from conversational replies to autonomous action, is the next frontier in how AI will fit into daily digital life.

And with OpenClaw’s creator now inside one of the most influential AI labs in the world, that future feels closer than ever.



Flapping Airplanes on the future of AI: ‘We want to try really radically different things’


There’s been a bunch of exciting research-focused AI labs popping up in recent months, and Flapping Airplanes is one of the most interesting. Propelled by its young and curious founders, Flapping Airplanes is focused on finding less data-hungry ways to train AI. It’s a potential game-changer for the economics and capabilities of AI models — and with $180 million in seed funding, they’ll have plenty of runway to figure it out.

Last week, I spoke with the lab’s three co-founders — brothers Ben and Asher Spector, and Aidan Smith — about why this is an exciting moment to start a new AI lab and why they keep coming back to ideas about the human brain.

I want to start by asking, why now? Labs like OpenAI and DeepMind have spent so much on scaling their models. I’m sure the competition seems daunting. Why did this feel like a good moment to launch a foundation model company?

Advertisement

Ben: There's just so much to do. The advances that we've gotten over the last five to ten years have been spectacular. We love the tools. We use them every day. But the question is, is this the whole universe of things that needs to happen? We thought about it very carefully, and our answer was no, there's a lot more to do. In our case, we thought that the data efficiency problem was really the key thing to go look at. The current frontier models are trained on the sum total of human knowledge, and humans can obviously make do with an awful lot less. So there's a big gap there, and it's worth understanding.

What we're doing is really a concentrated bet on three things. It's a bet that this data efficiency problem is the important thing to be working on: that this is really a direction that is new and different, and that you can make progress on it. It's a bet that this will be very commercially valuable and will make the world a better place if we can do it. And it's also a bet that the right kind of team to do it is a creative, and in some ways even inexperienced, team that can go look at these problems again from the ground up.

Aidan: Yeah, absolutely. We don't really see ourselves as competing with the other labs, because we think we're looking at a very different set of problems. If you look at the human mind, it learns in an incredibly different way from transformers. And that's not to say better, just very different. So we see these different trade-offs. LLMs have an incredible ability to memorize and draw on this great breadth of knowledge, but they can't really pick up new skills very fast. It takes rivers and rivers of data to adapt. And when you look inside the brain, you see that the algorithms it uses are just fundamentally so different from gradient descent and some of the techniques that people use to train AI today. So that's why we're building a new guard of researchers to address these problems and really think differently about the AI space.

Asher: This question is just so scientifically interesting: why are the systems that we have built that are intelligent also so different from what humans do? Where does this difference come from? How can we use knowledge of that difference to make better systems? But at the same time, I also think it’s actually very commercially viable and very good for the world. Lots of regimes that are really important are also highly data constrained, like robotics or scientific discovery. Even in enterprise applications, a model that’s a million times more data efficient is probably a million times easier to put into the economy. So for us, it was very exciting to take a fresh perspective on these approaches, and think, if we really had a model that’s vastly more data efficient, what could we do with it?

Advertisement


This gets into my next question, which ties into the name, Flapping Airplanes. There's this philosophical question in AI about how much we're trying to recreate what humans do in their brains, versus creating some more abstract intelligence that takes a completely different path. Aidan is coming from Neuralink, which is all about the human brain. Do you see yourselves as pursuing a more neuromorphic view of AI?

Aidan: The way I look at the brain is as an existence proof. We see it as evidence that there are other algorithms out there; there's not just one orthodoxy. And the brain has some crazy constraints. When you look at the underlying hardware, there's some crazy stuff: it takes a millisecond to fire an action potential, and in that time your computer can do so, so many operations. So realistically, there's probably an approach out there that's actually much better than the brain, and also very different from the transformer. We're very inspired by some of the things the brain does, but we don't see ourselves being tied down by it.

Ben: Just to add on to that, it's very much in our name: Flapping Airplanes. Think of the current systems as big Boeing 787s. We're not trying to build birds; that's a step too far. We're trying to build some kind of flapping airplane. My perspective from computer systems is that the constraints of the brain and silicon are sufficiently different that we should not expect these systems to end up looking the same. When the substrate is so different and you have genuinely very different trade-offs around the cost of compute, locality, and moving data, you actually expect these systems to look a little different. But just because they will look somewhat different does not mean we should not take inspiration from the brain and use the parts we find interesting to improve our own systems.

It does feel like there's now more freedom for labs to focus on research, as opposed to just developing products. That feels like a big difference for this generation of labs. You have some that are very research focused, and others that are sort of "research focused for now." What does that conversation look like within Flapping Airplanes?


Asher: I wish I could give you a timeline. I wish I could say, in three years we're going to have solved the research problem, and this is how we're going to commercialize. I can't. We don't know the answers. We're looking for truth. That said, I do think we have commercial backgrounds. I spent a bunch of time developing technology that made companies a reasonable amount of money, Ben has incubated a bunch of startups, and we actually are excited to commercialize. We think it's good for the world to take the value you've created and put it in the hands of people who can use it. So I don't think we're opposed to it. We just need to start by doing research, because if we start by signing big enterprise contracts, we're going to get distracted, and we won't do the research that's valuable.

Aidan: Yeah, we want to try really, really radically different things, and sometimes radically different things are just worse than the current paradigm. We’re exploring a set of different trade-offs. It’s our hope that they will pay off in the long run.

Ben: Companies are at their best when they’re really focused on doing something well, right? Big companies can afford to do many, many different things at once. When you’re a startup, you really have to pick what is the most valuable thing you can do, and do that all the way. And we are creating the most value when we are all in on solving fundamental problems for the time being. 

I’m actually optimistic that reasonably soon we might have made enough progress that we can then start to touch grass in the real world. And you learn a lot by getting feedback from the real world. The amazing thing about the world is that it teaches you things constantly, right? It’s this tremendous vat of truth that you get to look into whenever you want. The main thing that has been enabled by the recent change in the economics and financing of these companies is the ability to let them really focus on what they’re good at for longer periods of time. That focus is the thing I’m most excited about; it will let us do really differentiated work.

To spell out what I think you’re referring to: there’s so much excitement around, and the opportunity for investors is so clear, that they are willing to give $180 million in seed funding to a completely new company full of very smart but also very young people who didn’t just cash out of PayPal or anything. How was it engaging with that process? Did you know going in that this appetite existed, or was it something you discovered, like, actually, we can make this a bigger thing than we thought?

Ben: I would say it was a mixture of the two. The market has been hot for many months at this point, so it was not a secret that large rounds were starting to come together. But you never quite know how the fundraising environment will respond to your particular ideas about the world. This is, again, a place where you have to let the world give you feedback about what you’re doing. Even over the course of our fundraise, we learned a lot and actually changed our ideas. We refined our opinions of the things we should be prioritizing, and of what the right timelines were for commercialization.

I think we were somewhat surprised by how well our message resonated, because it was something that was very clear to us, but you never know whether your ideas will turn out to be things that other people believe as well or whether everyone else thinks you’re crazy. We have been extremely fortunate to have found a group of amazing investors with whom our message really resonated, and they said, “Yes, this is exactly what we’ve been looking for.” And that was amazing. It was, you know, surprising and wonderful.

Aidan: Yeah, a thirst for the age of research has kind of been in the water for a little bit now. And more and more, we find ourselves positioned as the player to pursue the age of research and really try these radical ideas.

At least for the scale-driven companies, there is this enormous cost of entry for foundation models. Just building a model at that scale is an incredibly compute-intensive thing. Research is a little bit in the middle: presumably you are building foundation models, but if you’re doing it with less data and you’re not so scale-oriented, maybe you get a bit of a break. How much do you expect compute costs to limit your runway?

Ben: One of the advantages of doing deep, fundamental research is that, somewhat paradoxically, it is much cheaper to pursue really crazy, radical ideas than it is to do incremental work. Because when you do incremental work, in order to find out whether or not it works, you have to go very far up the scaling ladder. Many interventions that look good at small scale do not actually persist at large scale. So as a result, it’s very expensive to do that kind of work. Whereas if you have some crazy new idea about some new architecture or optimizer, it’s probably just gonna fail on the first run, right? So you don’t have to run it up the ladder. It’s already broken. That’s great.

So, this doesn’t mean that scale is irrelevant for us. Scale is actually an important tool in the toolbox of all the things that you can do. Being able to scale up our ideas is certainly relevant to our company. So I wouldn’t frame us as the antithesis of scale, but I think it is a wonderful aspect of the kind of work we’re doing, that we can try many of our ideas at very small scale before we would even need to think about doing them at large scale.

Asher: Yeah, you should be able to use all the internet. But you shouldn’t need to. We find it really, really perplexing that you need to use the entire internet to get human-level intelligence.

So, what becomes possible if you’re able to train more efficiently on data? Presumably the model will be more powerful and intelligent. But do you have specific ideas about where that goes? Are we looking at more out-of-distribution generalization, or at models that get better at a particular task with less experience?

Asher: So, first, we’re doing science, so I don’t know the answer, but I can give you three hypotheses. My first hypothesis is that there’s a broad spectrum between just looking for statistical patterns and something that has really deep understanding. I think the current models live somewhere on that spectrum. I don’t think they’re all the way towards deep understanding, but they’re also clearly not just doing statistical pattern matching. And it’s possible that as you train models on less data, you really force the model to have an incredibly deep understanding of everything it’s seen. As you do that, the model may become more intelligent in very interesting ways. It may know fewer facts, but get better at reasoning. So that’s one potential hypothesis.

Another hypothesis is similar to what you said, that at the moment, it’s very expensive, both operationally and also in pure monetary costs, to teach models new capabilities, because you need so much data to teach them those things. It’s possible that one output of what we’re doing is to get vastly more efficient at post training, so with only a couple of examples, you could really put a model into a new domain. 

And then it’s also possible that this just unlocks new verticals for AI. There are certain types of robotics, for instance, where for whatever reason, we can’t quite get the type of capabilities that make them commercially viable. My opinion is that it’s a limited-data problem, not a hardware problem. The fact that you can tele-operate the robots to do stuff is proof that the hardware is sufficiently good. But there are lots of domains like this, like scientific discovery.

Ben: One thing I’ll also double-click on is that when we think about the impact that AI can have on the world, one view you might have is that this is a deflationary technology. That is, the role of AI is to automate a bunch of jobs, and take that work and make it cheaper to do, so that you’re able to remove work from the economy and have it done by robots instead. And I’m sure that will happen. But this is not, to my mind, the most exciting vision of AI. The most exciting vision of AI is one where there’s all kinds of new science and technologies that we can construct that humans aren’t smart enough to come up with, but other systems can. 

On this aspect, I think that first axis Asher was talking about, the spectrum between true generalization versus memorization or interpolation of the data, is extremely important for having the deep insights that will lead to these new advances in medicine and science. It is important that the models are very much on the creativity side of the spectrum. And so, part of why I’m very excited about the work that we’re doing is that, beyond the individual economic impacts, I’m also just genuinely very mission-oriented around the question of: can we actually get AI to do stuff that fundamentally humans couldn’t do before? And that’s more than just, “Let’s go fire a bunch of people from their jobs.”

Absolutely. Does that put you in a particular camp on the AGI conversation, the out-of-distribution generalization conversation?

Asher: I really don’t exactly know what AGI means. It’s clear that capabilities are advancing very quickly. It’s clear that there’s a tremendous amount of economic value being created. But I don’t think we’re very close to God-in-a-box. I don’t think that within two months or even two years, there’s going to be a singularity where suddenly humans are completely obsolete. I basically agree with what Ben said at the beginning: it’s a really big world. There’s a lot of work to do. There’s a lot of amazing work being done, and we’re excited to contribute.

Well, the idea about the brain and the neuromorphic part of it does feel relevant. You’re saying the relevant thing to compare LLMs to is the human brain, more than the Mechanical Turk or the deterministic computers that came before.

Aidan: I’ll emphasize, the brain is not the ceiling, right? The brain, in many ways, is the floor. Frankly, I see no evidence that the brain is not a knowable system that follows physical laws. In fact, we know it’s under many constraints. And so we would expect to be able to create capabilities that are much, much more interesting and different and potentially better than the brain in the long run. And so we’re excited to contribute to that future, whether that’s AGI or otherwise.

Asher: And I do think the brain is the relevant comparison, just because the brain helps us understand how big the space is. It’s easy to see all the progress we’ve made and think, wow, we have the answer, we’re almost done. But if you look outward a little bit and try to have a bit more perspective, there’s a lot of stuff we don’t know.

Ben: We’re not trying to be better, per se. We’re trying to be different, right? That’s the key thing I really want to hammer on here. All of these systems will almost certainly have different trade-offs. You’ll get an advantage somewhere, and it’ll cost you somewhere else. And it’s a big world out there. There are so many different domains with so many different trade-offs that having more systems, and more fundamental technologies that can address those domains, is very likely to make AI diffuse more effectively and more rapidly through the world.

One of the ways you’ve distinguished yourselves is in your hiring approach: getting people who are very, very young, in some cases still in college or high school. What is it that clicks for you when you’re talking to someone that makes you think, I want this person working with us on these research problems?

Aidan: It’s when you talk to someone and they just dazzle you, they have so many new ideas and they think about things in a way that many established researchers just can’t because they haven’t been polluted by the context of thousands and thousands of papers. Really, the number one thing we look for is creativity. Our team is so exceptionally creative, and every day, I feel really lucky to get to go in and talk about really radical solutions to some of the big problems in AI with people and dream up a very different future.

Ben: Probably the number one signal that I’m personally looking for is just: do they teach me something new when I spend time with them? If they teach me something new, the odds that they’re going to teach us something new about what we’re working on are also pretty good. When you’re doing research, those creative, new ideas are really the priority.

Part of my background is that during my undergrad and PhD, I helped start an incubator called Prod that worked with a bunch of companies that turned out well. And one of the things we saw from that was that young people can absolutely compete in the very highest echelons of industry. Frankly, a big part of the unlock is just realizing, yeah, I can go do this stuff. You can absolutely go contribute at the highest level.

Of course, we do recognize the value of experience. People who have worked on large-scale systems are great; we’ve hired some of them, and we are excited to work with all sorts of folks. I think our mission has resonated with the experienced folks as well. The key thing is that we want people who are not afraid to change the paradigm and can imagine a new system of how things might work.

One of the things I’ve been puzzling about is: how different do you think the resulting AI systems are going to be? It’s easy for me to imagine something like Claude Opus that just works 20% better and can do 20% more things. But if it’s completely new, it’s hard to think about where that goes or what the end result looks like.

Asher: I don’t know if you’ve ever had the privilege of talking to the GPT-4 base model, but it had a lot of really strange emergent capabilities. For example, you could take a snippet of an unpublished blog post of yours, ask who it thought wrote this, and it could identify you.

There’s a lot of capabilities like this, where models are smart in ways we cannot fathom. And future models will be smarter in even stranger ways. I think we should expect the future to be really weird and the architectures to be even weirder. We’re looking for 1000x wins in data efficiency. We’re not trying to make incremental change. And so we should expect the same kind of unknowable, alien changes and capabilities at the limit.

Ben: I broadly agree with that. I’m probably slightly more tempered in how these things will eventually become experienced by the world, just as the GPT-4 base model was tempered by OpenAI. You want to put things in forms where you’re not staring into the abyss as a consumer. I think that’s important. But I broadly agree that our research agenda is about building capabilities that really are quite fundamentally different from what can be done right now.

Fantastic! Are there ways people can engage with Flapping Airplanes? Is it too early for that? Or should they just stay tuned for when the research and the models come out?

Asher: So, we have Hi@flappingairplanes.com if you just want to say hi. We also have disagree@flappingairplanes.com if you want to disagree with us. We’ve actually had some really cool conversations where people send us very long essays about why they think it’s impossible to do what we’re doing. And we’re happy to engage with that.

Ben: But they haven’t convinced us yet. No one has convinced us yet.

Asher: The second thing is, we are looking for exceptional people who are trying to change the field and change the world. So if you’re interested, you should reach out.

Ben: And if you have an unorthodox background, that’s okay. You don’t need two PhDs. We really are looking for folks who think differently.


Memory shortages could delay PlayStation 6 launch until 2029, raise Switch 2 price

Multiple analysts and industry insiders have recently claimed that the next-generation PlayStation console could be delayed due to the AI-fueled memory crisis. Bloomberg has now reiterated the rumors, claiming that Sony is unlikely to release the PlayStation 6 next year, even though the next-gen Xbox is still said to be…

What my CS team was missing


I need to say something that might make CS leaders uncomfortable: most of what your team does before a renewal is valuable, but it’s listening to only one channel. Your EBRs, your health scores, your stakeholder maps. They capture what your customer is willing to tell you directly. What they don’t capture is the conversation happening everywhere else. And that’s usually where churn starts.

I know because I ran the standard playbook for years. EBRs, stakeholder mapping, health score reviews, and renewal prep meetings, where we rated our gut feeling on a scale of green to red. We had dashboards. We had strong CSMs who genuinely cared about their accounts. And we still got blindsided.

The $2M quarter is the one I can’t forget. Two enterprise accounts churned in the same 90-day window. Both were green in every system we had. One had an NPS of 72.

When I dug into what happened, I didn’t find a CS execution problem. I found a coverage gap. Every signal had been there. Just not in the places our process was designed to look. I sat in the post-mortem knowing we’d done everything our process asked us to do. That was the problem.


Later in this article, I’ll show you what both of those accounts would have looked like inside Renewal Fix, before anyone on my team knew there was a problem.

What your EBR captures, and what it can’t

I’m not saying EBRs are useless. A well-run EBR builds relationship depth, gives your champion ammunition internally, and surfaces problems the customer is willing to raise directly. But even the best EBR has a structural limitation: it only captures what someone chooses to say out loud, in a meeting, to a vendor.

The real conversation about your product is happening in a Slack channel you’ll never see, in a procurement review you weren’t invited to, and in a 1:1 between your champion and their new boss who just joined from a company that used your competitor. The EBR gives you one essential channel. The danger is treating it as the only one.

The signals are everywhere. Just not in your CRM.

Here’s what was actually happening in those two accounts that churned on me.

Account one: their engineering team had filed 23 support tickets about API latency over four months. Not “the product is broken” tickets. Small, specific, technical complaints that got resolved individually. Nobody in CS ever saw them because they never escalated to “critical.” But lined up chronologically, the pattern was unmistakable: this team was losing patience, one resolved ticket at a time.

Account two: three of their five power users updated their LinkedIn profiles in the same two-week window. One started posting about a competitor’s product. Our champion’s title changed from “Head of” to “Senior Manager.” A quiet demotion nobody noticed because we were watching product usage dashboards, not org charts.

Every CS leader I know has lost an account and later found out the champion left months ago. The customer’s reaction is always the same: “We assumed you knew.” They expect you to track publicly available professional changes, the same information any recruiter monitors. Not tracking them isn’t respectful. It’s a blind spot.

Neither signal lived in our CRM. Neither showed up in our health score. They were sitting in plain sight in systems our CS team had no reason to check.

What your health score measures, and the lag problem

Health scores aren’t the problem. Treating them as the whole picture is. A typical health score aggregates NPS, login frequency, support ticket count, and feature adoption. Green means safe. Red means act. But these are lagging indicators. By the time login frequency drops, the decision to evaluate alternatives may already be in motion.

When I started tracking leading indicators alongside our existing health model, the difference was striking. Across roughly 300 mid-market accounts over 18 months, we found that support ticket velocity, specifically the rate of increase in non-critical tickets over a rolling 90-day window, predicted churn at T-90 at roughly 2x the accuracy of our composite health score. The signals that actually predict churn aren’t the ones most CS platforms are designed to track.
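To make the ticket-velocity idea concrete, here is a minimal sketch of how such a leading indicator might be computed. This is an illustrative calculation, not the author’s actual model; the data shape (a flat list of non-critical ticket dates for one account) and the window comparison are assumptions.

```python
from datetime import date, timedelta

def ticket_velocity(ticket_dates, as_of, window_days=90):
    """Ratio of non-critical tickets filed in the most recent
    90-day window versus the window before it; values well above
    1.0 suggest a team losing patience one ticket at a time."""
    current_start = as_of - timedelta(days=window_days)
    previous_start = current_start - timedelta(days=window_days)
    current = sum(current_start <= d <= as_of for d in ticket_dates)
    previous = sum(previous_start <= d < current_start for d in ticket_dates)
    return current / previous if previous else float(current)

# A steady trickle that accelerates: 5 tickets in the prior window,
# 12 in the most recent one.
dates = [date(2024, 1, 1) + timedelta(days=7 * i) for i in range(5)] \
      + [date(2024, 4, 1) + timedelta(days=7 * i) for i in range(12)]
print(ticket_velocity(dates, as_of=date(2024, 6, 28)))  # 2.4
```

A composite health score tends to average this kind of movement away; tracked on its own, the same stream of individually resolved tickets becomes a visible trend.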

Building the Signal Coverage Model

The teams with the strongest renewal rates don’t abandon their existing processes. They add a signal layer on top. The highest-signal sources break into three tiers.

Tier 1: Support ticket patterns. Not the count, but the velocity, the sentiment trend, and whether the same team keeps filing. A steady trickle of “resolved” tickets from one engineering team is often a louder signal than a single P1 escalation. At scale, this becomes cohort-level complaint clustering across a segment.

Tier 2: People changes. Champion turnover, re-orgs, title changes, and new executives from a competitor’s customer base. The person who bought your product and the person renewing it are often not the same person. At scale, you’re watching for patterns of org instability across your book.

Tier 3: Competitive exposure. Whether your customer is being actively pitched, attending competitor events, or has team members engaging with competitor content online. At scale, you’re tracking which segments your competitors are targeting hardest.
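Stitched together, the three tiers can feed a simple account-level risk view. The field names and thresholds below are invented for illustration; a production model would tune them against historical churn data.

```python
from dataclasses import dataclass

@dataclass
class AccountSignals:
    # Tier 1: support ticket patterns
    ticket_velocity: float = 1.0      # recent window vs prior window
    # Tier 2: people changes
    champion_left: bool = False
    title_downgrades: int = 0
    # Tier 3: competitive exposure
    competitor_touches: int = 0       # events attended, content engaged

def risk_flags(s: AccountSignals) -> list[str]:
    """Return human-readable flags; thresholds are illustrative."""
    flags = []
    if s.ticket_velocity >= 2.0:
        flags.append("support: non-critical tickets accelerating")
    if s.champion_left or s.title_downgrades > 0:
        flags.append("people: champion instability")
    if s.competitor_touches >= 3:
        flags.append("competitive: active competitor exposure")
    return flags

# Account Two from the article: a quiet demotion plus competitor exposure,
# while product usage (and the health score) still look fine.
print(risk_flags(AccountSignals(title_downgrades=1, competitor_touches=3)))
```

The point of the structure is that each tier comes from a different system of record, so no single dashboard produces all three inputs on its own.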

The real challenge isn’t knowing what to track. It’s that these signals live in five or six different systems, and nobody’s job is to stitch them together. Your CSM sees Zendesk. Your SE sees Jira. Your AE sees Salesforce. The full picture only exists if someone manually assembles it.

What this looks like in practice

One team I worked with built a manual version of this: CSMs logging signals from six different sources every Friday. About 90 minutes per account per week. Their renewal rate hit 96%. But the approach doesn’t scale past a 25-account book.

At 80 accounts in a mid-market motion, you need automation. At 150+ in a PLG model, the signals are still there; you’re watching for cohort-level drops in feature adoption or clusters of the same complaint across a segment, but you cannot find them without automation.
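At cohort level, the detection itself is not complicated; it is essentially a group-by and a delta. A toy sketch follows, where the segment names, adoption numbers, and the 20% threshold are all made up for illustration:

```python
from collections import defaultdict

def cohort_adoption_drops(accounts, threshold=0.20):
    """Flag segments whose average feature adoption fell by more than
    `threshold` between the prior and current period. `accounts` is a
    list of (segment, prior_adoption, current_adoption) tuples."""
    prior, current = defaultdict(list), defaultdict(list)
    for segment, before, now in accounts:
        prior[segment].append(before)
        current[segment].append(now)
    flagged = {}
    for segment in prior:
        avg_before = sum(prior[segment]) / len(prior[segment])
        avg_now = sum(current[segment]) / len(current[segment])
        drop = (avg_before - avg_now) / avg_before if avg_before else 0.0
        if drop > threshold:
            flagged[segment] = round(drop, 2)
    return flagged

accounts = [
    ("mid-market", 0.80, 0.75),   # mild drift, below threshold
    ("mid-market", 0.70, 0.65),
    ("plg", 0.60, 0.30),          # sharp cohort-wide drop
    ("plg", 0.50, 0.35),
]
print(cohort_adoption_drops(accounts))  # {'plg': 0.41}
```

The hard part, as the article argues, is not this arithmetic but getting per-account adoption numbers out of the product analytics system on a schedule without a Friday ritual.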

The teams doing this manually are logging into six tools every Friday. The teams doing this with automation get a Slack message when something changes. No dashboard to check. No Friday ritual.

Detection without a playbook is just anxiety. The point of catching signals early isn’t to panic. It’s to have time to act. An executive sponsor who hasn’t logged in for 90 days needs a different intervention than an account with a competitor POC in their Salesforce sandbox. The signal tells you what’s happening. The response has to match.

That gap between knowing what to track and actually tracking it consistently is why I built Renewal Fix. Not to replace the manual process, but to remove the ceiling on it. The platform pulls signals from support tickets, call recordings, CRM data, and engineering channels automatically, stitches them into a single account view, and flags them before they become a renewal surprise.

See it for yourself

Enter your work email at renewalfix.com. In 30 seconds, you’ll get a one-page executive brief showing your blind spots: 10 accounts that look like they belong in your CS platform, built from your company’s products, competitive landscape, and integration stack, each with a health score and risk signals sourced from support tickets, call recordings, and org changes that your current dashboard would never surface. No demo, no sales call.

Find the account that looks most like Account One. Health score in the 70s, risk signals hiding underneath. Then click “Executive Brief” for a one-page summary of your portfolio’s total risk exposure, with dollar amounts and prioritized actions. That view is what Renewal Fix delivers weekly in production.

Your green accounts aren’t necessarily at risk. But they might be quieter than you realize.


HP Introduces Affordable DeskJet Printers for Indian Households


HP has introduced a new range of DeskJet All-in-One printers in India. The lineup is designed for students, parents, and working professionals who need a dependable, easy-to-use printer at home. With a simple setup, smooth wireless connectivity, affordable ink options, and a modern design, the new DeskJet series aims to make everyday printing more convenient and hassle-free for Indian households.

According to HP, families today want printers that are simple, compact, and connected. Therefore, the new DeskJet range offers plug-and-play installation and reliable Wi-Fi connectivity for smooth printing. The printers support wireless printing, so family members can print directly from their phones or laptops. Their compact design and fresh color options make them suitable for study tables, work desks, or small home offices.

Models in the New DeskJet Lineup

The new range includes three categories:

  • HP DeskJet
  • HP DeskJet Ink Advantage
  • HP DeskJet Ultra Ink Advantage

In total, HP has introduced six models, each geared towards different home printing requirements. Some models suit casual home use, while others are better suited to more frequent printing.

All of these models have a 60-sheet input tray, along with print speeds of up to 7.5 pages per minute in black and up to 5.5 pages per minute in color. Certain models offer faster black print speeds of up to 8.5 pages per minute. In addition, the DeskJet Ink Advantage 4388 features an Automatic Document Feeder, making it easier to scan and copy multi-page documents.

Easy Connectivity and Smart Features

The latest DeskJets promise strong, reliable networking. With dual-band Wi-Fi, you’ll benefit from improved signal strength and stability. And with self-healing Wi-Fi, the printer will automatically restore its network connection if it drops.

As a result, printing remains consistent and uninterrupted. Multiple users in the household can print wirelessly using the HP app through Wi-Fi or Bluetooth. The control panel is easy to understand, allowing quick operation without any technical knowledge.

Affordable Printing with High-Yield Inks

Another important advantage of the new DeskJet series is that these printers use affordable, high-yield ink cartridges. Since the cartridges last longer, you will not have to replace them as often, which reduces overall printing costs. The printers also produce crisp, clean black-and-white documents.

Price and Availability in India

HP has already launched some of its printer models in the country. The company will also introduce more products in the coming days. The HP DeskJet Ink Advantage 2986 and 2989 are priced at Rs. 6,999 each. The DeskJet Ink Advantage 4388 will cost Rs. 7,999. The Ultra Ink Advantage 5135 and 5185, along with the DeskJet 2931, will soon hit the market. Buyers can purchase the available models through HP World outlets or the HP Online Store.


Copyright © 2025