NASA confirms target date for crewed Artemis II lunar flight

NASA has announced a date for the second wet dress rehearsal for the SLS rocket that will send a crew of astronauts on a voyage around the moon in the highly anticipated Artemis II mission.

The space agency also confirmed that the earliest the rocket could launch is Friday, March 6.

NASA is now targeting Thursday, February 19, for the fueling part of the wet dress rehearsal at the Kennedy Space Center launch site in Florida.

The rehearsal is a key part of flight preparation and involves engineers fueling the rocket and going through the entire launch procedure short of actually igniting the engines.

During the first Artemis II rehearsal at the start of this month, engineers spotted a hydrogen leak at the base of the SLS rocket, prompting the team to ditch the target launch date of February 8 while it addressed the issue.

NASA said that it will only announce a new target launch date once the results of the second wet dress rehearsal have been fully assessed, but said the rocket would not lift off before March 6.

“The wet dress rehearsal will run the launch team as well as supporting teams through a full range of operations, including loading cryogenic liquid propellant into the SLS rocket’s tanks, conducting a launch countdown, demonstrating the ability to recycle the countdown clock, and draining the tanks to practice scrub procedures,” NASA said in a post on its website on Monday.

It added that the launch controllers will arrive at their consoles in the Launch Control Center at Kennedy at 6:40 p.m. ET on Tuesday to start the nearly 50-hour countdown. The simulated launch time is 8:30 p.m. on Thursday.

The Artemis II mission will send NASA astronauts Victor Glover, Reid Wiseman, and Christina Koch, together with the Canadian Space Agency’s Jeremy Hansen, on a 10-day flight around the moon aboard the Orion spacecraft. It’ll be the first crewed lunar-bound flight since the final Apollo mission in 1972.

The mission will test the spacecraft’s systems and deep-space operations to validate them for future crewed moon missions and lunar landings.

Peter Steinberger joins OpenAI

Not long ago, Peter Steinberger was experimenting with a side project that quickly caught fire across the developer world. His open-source AI assistant, OpenClaw, wasn’t just another chatbot; it could act on your behalf, from managing emails to integrating with calendars and messaging platforms. 

Today, that project has a new chapter: Steinberger is joining OpenAI to help build the next generation of personal AI agents.

This move isn’t just about talent acquisition. It marks a switch in how the AI industry thinks about assistants: from reactive systems you talk to, toward agents that take initiative and perform tasks autonomously, with potential implications for productivity, workflows, and personal automation. 

OpenClaw first emerged in late 2025 under names like Clawdbot and Moltbot. What distinguished it was not fancy visuals or marketing, but its practical ambition: give users an AI that connects to their tools and executes workflows (booking flights, sorting messages, scheduling meetings) in ways that feel closer to agency than assistance.

It quickly went viral on GitHub, drawing more than 100,000 stars and millions of visits to its project page within weeks. 

Rather than turning OpenClaw into a standalone company, Steinberger chose to partner with OpenAI, a decision he explained in a blog post as driven by a simple goal: bring intelligent agents to a broader audience as quickly as possible.

According to him, OpenAI’s infrastructure, research resources, and product ecosystem offered the best path to scale such an ambitious idea. 

OpenAI’s CEO Sam Altman welcomed Steinberger’s move as strategic, underscoring that the company expects personal agents, systems capable of initiating, coordinating, and completing tasks across apps, to be an important part of future AI products.

Altman’s public post noted that OpenClaw will continue to exist as an open-source project under a new foundation supported by OpenAI, preserving its accessibility and community roots. 

The notion of AI that does things has been bubbling under the surface of tech discourse, but OpenClaw’s popularity crystallised it. Users interact with their agents through familiar interfaces like messaging platforms, but behind the scenes, these agents orchestrate API calls, automate scripts, handle notifications, and adapt to changing schedules, all without explicit commands after initial setup. 

This trajectory, from an experimental open-source project to a central piece of a major AI lab’s strategy, speaks to broader trends in the industry. Competitors from Anthropic to Google DeepMind have also indicated interest in multi-agent systems and autonomous workflows, but OpenAI’s move signals how seriously the category is now being taken.

It suggests a future where AI isn’t just conversational, but proactive and tightly integrated into everyday tooling. 

At the same time, this evolution raises fresh questions about governance and safety. OpenClaw’s open-source nature meant that developers could experiment freely, but that freedom also exposed potential attack surfaces; misconfigured agents with access to sensitive accounts or automation processes could be exploited if not properly safeguarded.

That is one reason why maintaining an open foundation with careful oversight matters as these tools scale. 

For OpenAI, Steinberger’s arrival embeds this agent-first thinking into its product roadmap at a critical moment. The company is already exploring “multi-agent” architectures, where specialised AIs coordinate with each other and with users to handle complex tasks more effectively than monolithic models alone. Steinberger brings an experimental sensibility and real-world experience that could accelerate those efforts. 

This could mean future versions of ChatGPT or other OpenAI products will be able to carry out tasks you define, rather than waiting for you to prompt them. That shift, from conversational replies to autonomous action, is the next frontier in how AI will fit into daily digital life.

And with OpenClaw’s creator now inside one of the most influential AI labs in the world, that future feels closer than ever.

Flapping Airplanes on the future of AI: ‘We want to try really radically different things’

There’s been a bunch of exciting research-focused AI labs popping up in recent months, and Flapping Airplanes is one of the most interesting. Propelled by its young and curious founders, Flapping Airplanes is focused on finding less data-hungry ways to train AI. It’s a potential game-changer for the economics and capabilities of AI models — and with $180 million in seed funding, they’ll have plenty of runway to figure it out.

Last week, I spoke with the lab’s three co-founders — brothers Ben and Asher Spector, and Aidan Smith — about why this is an exciting moment to start a new AI lab and why they keep coming back to ideas about the human brain.

I want to start by asking, why now? Labs like OpenAI and DeepMind have spent so much on scaling their models. I’m sure the competition seems daunting. Why did this feel like a good moment to launch a foundation model company?

Ben: There’s just so much to do. So, the advances that we’ve gotten over the last five to ten years have been spectacular. We love the tools. We use them every day. But the question is, is this the whole universe of things that needs to happen? And we thought about it very carefully and our answer was no, there’s a lot more to do. In our case, we thought that the data efficiency problem was sort of really the key thing to go look at. The current frontier models are trained on the sum total of human knowledge, and humans can obviously make do with an awful lot less. So there’s a big gap there, and it’s worth understanding.

What we’re doing is really a concentrated bet on three things. It’s a bet that this data efficiency problem is the important thing to be doing. Like, this is really a direction that is new and different and you can make progress on it. It’s a bet that this will be very commercially valuable and that it will make the world a better place if we can do it. And it’s also a bet that the right kind of team to do it is a creative, and in some ways even inexperienced, team that can go look at these problems again from the ground up.

Aidan: Yeah, absolutely. We don’t really see ourselves as competing with the other labs, because we think that we’re looking at just a very different set of problems. If you look at the human mind, it learns in an incredibly different way from transformers. And that’s not to say better, just very different. So we see these different trade-offs. LLMs have an incredible ability to memorize, and draw on this great breadth of knowledge, but they can’t really pick up new skills very fast. It takes just rivers and rivers of data to adapt. And when you look inside the brain, you see that the algorithms that it uses are just fundamentally so different from gradient descent and some of the techniques that people use to train AI today. So that’s why we’re building a new guard of researchers to kind of address these problems and really think differently about the AI space.

Asher: This question is just so scientifically interesting: why are the systems that we have built that are intelligent also so different from what humans do? Where does this difference come from? How can we use knowledge of that difference to make better systems? But at the same time, I also think it’s actually very commercially viable and very good for the world. Lots of regimes that are really important are also highly data constrained, like robotics or scientific discovery. Even in enterprise applications, a model that’s a million times more data efficient is probably a million times easier to put into the economy. So for us, it was very exciting to take a fresh perspective on these approaches, and think, if we really had a model that’s vastly more data efficient, what could we do with it?

This gets into my next question, which sort of ties in to the name, Flapping Airplanes. There’s this philosophical question in AI about how much we’re trying to recreate what humans do in their brain, versus creating some more abstract intelligence that takes a completely different path. Aidan is coming from Neuralink, which is all about the human brain. Do you see yourself as kind of pursuing a more neuromorphic view of AI?

Aidan: The way I look at the brain is as an existence proof. We see it as evidence that there are other algorithms out there. There’s not just one orthodoxy. And the brain has some crazy constraints. When you look at the underlying hardware, there’s some crazy stuff. It takes a millisecond to fire an action potential. In that time, your computer can do just so so many operations. And so realistically, there’s probably an approach that’s actually much better than the brain out there, and also very different than the transformer. So we’re very inspired by some of the things that the brain does, but we don’t see ourselves being tied down by it.

Ben: Just to add on to that: it’s very much in our name, Flapping Airplanes. Think of the current systems as big Boeing 787s. We’re not trying to build birds. That’s a step too far. We’re trying to build some kind of a flapping airplane. My perspective from computer systems is that the constraints of the brain and silicon are sufficiently different from each other that we should not expect these systems to end up looking the same. When the substrate is so different and you have genuinely very different trade-offs about the cost of compute, the cost of locality and moving data, you actually expect these systems to look a little bit different. But just because they will look somewhat different does not mean that we should not take inspiration from the brain and try to use the parts that we think are interesting to improve our own systems.

It does feel like there’s now more freedom for labs to focus on research, as opposed to just developing products. It feels like a big difference for this generation of labs. You have some that are very research-focused, and others that are sort of “research focused for now.” What does that conversation look like within Flapping Airplanes?

Asher: I wish I could give you a timeline. I wish I could say, in three years, we’re going to have solved the research problem. This is how we’re going to commercialize. I can’t. We don’t know the answers. We’re looking for truth. That said, I do think we have commercial backgrounds. I spent a bunch of time developing technology for companies that made those companies a reasonable amount of money. Ben has incubated a bunch of startups that have commercial backgrounds, and we actually are excited to commercialize. We think it’s good for the world to take the value you’ve created and put it in the hands of people who can use it. So I don’t think we’re opposed to it. We just need to start by doing research, because if we start by signing big enterprise contracts, we’re going to get distracted, and we won’t do the research that’s valuable.

Aidan: Yeah, we want to try really, really radically different things, and sometimes radically different things are just worse than the current paradigm. We’re exploring a set of different trade-offs. It’s our hope that they will be different in the long run.

Ben: Companies are at their best when they’re really focused on doing something well, right? Big companies can afford to do many, many different things at once. When you’re a startup, you really have to pick what is the most valuable thing you can do, and do that all the way. And we are creating the most value when we are all in on solving fundamental problems for the time being. 

I’m actually optimistic that reasonably soon, we might have made enough progress that we can then go start to touch grass in the real world. And you learn a lot by getting feedback from the real world. The amazing thing about the world is, it teaches you things constantly, right? It’s this tremendous vat of truth that you get to look into whenever you want. The main thing that I think has been enabled by the recent change in the economics and financing of these structures is the ability to let companies really focus on what they’re good at for longer periods of time. That focus is the thing I’m most excited about; it’s what will let us do really differentiated work.

To spell out what I think you’re referring to: there’s so much excitement around AI, and the opportunity for investors is so clear, that they are willing to give $180 million in seed funding to a completely new company full of these very smart, but also very young, people who didn’t just cash out of PayPal or anything. How was it engaging with that process? Did you know, going in, that there was this appetite, or was it something you discovered, like, actually, we can make this a bigger thing than we thought?

Ben: I would say it was a mixture of the two. The market has been hot for many months at this point. So it was not a secret that large rounds were starting to come together. But you never quite know how the fundraising environment will respond to your particular ideas about the world. This is, again, a place where you have to let the world give you feedback about what you’re doing. Even over the course of our fundraise, we learned a lot and actually changed our ideas. And we refined our opinions of the things we should be prioritizing, and what the right timelines were for commercialization.

I think we were somewhat surprised by how well our message resonated, because it was something that was very clear to us, but you never know whether your ideas will turn out to be things that other people believe as well or if everyone else thinks you’re crazy. We have been extremely fortunate to have found a group of amazing investors who our message really resonated with and they said, “Yes, this is exactly what we’ve been looking for.” And that was amazing. It was, you know, surprising and wonderful.

Aidan: Yeah, a thirst for the age of research has kind of been in the water for a little bit now. And more and more, we find ourselves positioned as the player to pursue the age of research and really try these radical ideas.

At least for the scale-driven companies, there is this enormous cost of entry for foundation models. Just building a model at that scale is an incredibly compute-intensive thing. Research is a little bit in the middle, where presumably you are building foundation models, but if you’re doing it with less data and you’re not so scale-oriented, maybe you get a bit of a break. How much do you expect compute costs to limit your runway?

Ben: One of the advantages of doing deep, fundamental research is that, somewhat paradoxically, it is much cheaper to do really crazy, radical ideas than it is to do incremental work. Because when you do incremental work, in order to find out whether or not it does work, you have to go very far up the scaling ladder. Many interventions that look good at small scale do not actually persist at large scale. So as a result, it’s very expensive to do that kind of work. Whereas if you have some crazy new idea about some new architecture or optimizer, it’s probably just gonna fail on the first run, right? So you don’t have to run this up the ladder. It’s already broken. That’s great.

So, this doesn’t mean that scale is irrelevant for us. Scale is actually an important tool in the toolbox of all the things that you can do. Being able to scale up our ideas is certainly relevant to our company. So I wouldn’t frame us as the antithesis of scale, but I think it is a wonderful aspect of the kind of work we’re doing, that we can try many of our ideas at very small scale before we would even need to think about doing them at large scale.

Asher: Yeah, you should be able to use all the internet. But you shouldn’t need to. We find it really, really perplexing that you need to use all the internet to really get this human-level intelligence.

So, what becomes possible if you’re able to train more efficiently on data, right? Presumably the model will be more powerful and intelligent. But do you have specific ideas about kind of where that goes? Are we looking at more out-of-distribution generalization, or are we looking at sort of models that get better at a particular task with less experience?

Asher: So, first, we’re doing science, so I don’t know the answer, but I can give you three hypotheses. So my first hypothesis is that there’s a broad spectrum between just looking for statistical patterns and something that has really deep understanding. And I think the current models live somewhere on that spectrum. I don’t think they’re all the way towards deep understanding, but they’re also clearly not just doing statistical pattern matching. And it’s possible that as you train models on less data, you really force the model to have incredibly deep understandings of everything it’s seen. And as you do that, the model may become more intelligent in very interesting ways. It may know fewer facts, but get better at reasoning. So that’s one potential hypothesis.

Another hypothesis is similar to what you said, that at the moment, it’s very expensive, both operationally and also in pure monetary costs, to teach models new capabilities, because you need so much data to teach them those things. It’s possible that one output of what we’re doing is to get vastly more efficient at post training, so with only a couple of examples, you could really put a model into a new domain. 

And then it’s also possible that this just unlocks new verticals for AI. There are certain types of robotics, for instance, where for whatever reason, we can’t quite get the type of capabilities that really makes it commercially viable. My opinion is that it’s a limited data problem, not a hardware problem. The fact that you can tele-operate the robots to do stuff is proof that the hardware is sufficiently good. But there’s lots of domains like this, like scientific discovery.

Ben: One thing I’ll also double-click on is that when we think about the impact that AI can have on the world, one view you might have is that this is a deflationary technology. That is, the role of AI is to automate a bunch of jobs, and take that work and make it cheaper to do, so that you’re able to remove work from the economy and have it done by robots instead. And I’m sure that will happen. But this is not, to my mind, the most exciting vision of AI. The most exciting vision of AI is one where there’s all kinds of new science and technologies that we can construct that humans aren’t smart enough to come up with, but other systems can. 

On this aspect, I think that first axis that Asher was talking about around the spectrum between sort of true generalization versus memorization or interpolation of the data, I think that axis is extremely important to have the deep insights that will lead to these new advances in medicine and science. It is important that the models are very much on the creativity side of the spectrum. And so, part of why I’m very excited about the work that we’re doing is that I think even beyond the individual economic impacts, I’m also just genuinely very kind of mission-oriented around the question of, can we actually get AI to do stuff that, like, fundamentally humans couldn’t do before? And that’s more than just, “Let’s go fire a bunch of people from their jobs.”

Absolutely. Does that put you in a particular camp on, like, the AGI conversation, the out-of-distribution generalization conversation?

Asher: I really don’t exactly know what AGI means. It’s clear that capabilities are advancing very quickly. It’s clear that there’s tremendous amounts of economic value that’s being created. I don’t think we’re very close to God-in-a-box, in my opinion. I don’t think that within two months or even two years, there’s going to be a singularity where suddenly humans are completely obsolete. I basically agree with what Ben said at the beginning, which is, it’s a really big world. There’s a lot of work to do. There’s a lot of amazing work being done, and we’re excited to contribute.

Well, the idea about the brain and the neuromorphic part of it does feel relevant. You’re saying, really the relevant thing to compare LLMs to is the human brain, more than the Mechanical Turk or the deterministic computers that came before.

Aidan: I’ll emphasize, the brain is not the ceiling, right? The brain, in many ways, is the floor. Frankly, I see no evidence that the brain is not a knowable system that follows physical laws. In fact, we know it’s under many constraints. And so we would expect to be able to create capabilities that are much, much more interesting and different and potentially better than the brain in the long run. And so we’re excited to contribute to that future, whether that’s AGI or otherwise.

Asher: And I do think the brain is the relevant comparison, just because the brain helps us understand how big the space is. Like, it’s easy to see all the progress we’ve made and think, wow, we, like, have the answer. We’re almost done. But if you look outward a little bit and try to have a bit more perspective, there’s a lot of stuff we don’t know.

Ben: We’re not trying to be better, per se. We’re trying to be different, right? That’s the key thing I really want to hammer on here. All of these systems will almost certainly have different trade-offs. You’ll get an advantage somewhere, and it’ll cost you somewhere else. And it’s a big world out there. There are so many different domains with so many different trade-offs that having more systems, and more fundamental technologies that can address these different domains, is very likely to help AI diffuse more effectively and more rapidly through the world.

One of the ways you’ve distinguished yourselves is in your hiring approach: getting people who are very, very young, in some cases still in college or high school. What is it that clicks for you when you’re talking to someone and makes you think, I want this person working with us on these research problems?

Aidan: It’s when you talk to someone and they just dazzle you, they have so many new ideas and they think about things in a way that many established researchers just can’t because they haven’t been polluted by the context of thousands and thousands of papers. Really, the number one thing we look for is creativity. Our team is so exceptionally creative, and every day, I feel really lucky to get to go in and talk about really radical solutions to some of the big problems in AI with people and dream up a very different future.

Ben: Probably the number one signal that I’m personally looking for is just like, do they teach me something new when I spend time with them? If they teach me something new, the odds that they’re going to teach us something new about what we’re working on are also pretty good. When you’re doing research, those creative, new ideas are really the priority.

Part of my background was during my undergrad and PhD, I helped start this incubator called Prod that worked with a bunch of companies that turned out well. And I think one of the things that we saw from that was that young people can absolutely compete in the very highest echelons of industry. Frankly, a big part of the unlock is just realizing, yeah, I can go do this stuff. You can absolutely go contribute at the highest level.

Of course, we do recognize the value of experience. People who have worked on large-scale systems are great, like, we’ve hired some of them, you know, we are excited to work with all sorts of folks. And I think our mission has resonated with the experienced folks as well. I just think that our key thing is that we want people who are not afraid to change the paradigm and can try to imagine a new system of how things might work.

One of the things I’ve been puzzling about is, how different do you think the resulting AI systems are going to be? It’s easy for me to imagine something like Claude Opus that just works 20% better and can do 20% more things. But if it’s completely new, it’s hard to think about where that goes or what the end result looks like.

Asher: I don’t know if you’ve ever had the privilege of talking to the GPT-4 base model, but it had a lot of really strange emergent capabilities. For example, you could take a snippet of an unpublished blog post of yours and ask, who do you think wrote this, and it could identify you.

There’s a lot of capabilities like this, where models are smart in ways we cannot fathom. And future models will be smarter in even stranger ways. I think we should expect the future to be really weird and the architectures to be even weirder. We’re looking for 1000x wins in data efficiency. We’re not trying to make incremental change. And so we should expect the same kind of unknowable, alien changes and capabilities at the limit.

Ben: I broadly agree with that. I’m probably slightly more tempered in how these things will eventually become experienced by the world, just as the GPT-4 base model was tempered by OpenAI. You want to put things in forms where you’re not staring into the abyss as a consumer. I think that’s important. But I broadly agree that our research agenda is about building capabilities that really are quite fundamentally different from what can be done right now.

Fantastic! Are there ways people can engage with Flapping Airplanes? Is it too early for that? Or should they just stay tuned for when the research and the models come out?

Asher: So, we have Hi@flappingairplanes.com if you just want to say hi. We also have disagree@flappingairplanes.com if you want to disagree with us. We’ve actually had some really cool conversations where people, like, send us very long essays about why they think it’s impossible to do what we’re doing. And we’re happy to engage with it.

Ben: But they haven’t convinced us yet. No one has convinced us yet.

Asher: The second thing is, you know, we are looking for exceptional people who are trying to change the field and change the world. So if you’re interested, you should reach out.

Ben: And if you have an unorthodox background, it’s okay. You don’t need two PhDs. We really are looking for folks who think differently.

Memory shortages could delay PlayStation 6 launch until 2029, raise Switch 2 price

Multiple analysts and industry insiders have recently claimed that the next-generation PlayStation console could be delayed due to the AI-fueled memory crisis. Bloomberg has now reiterated the rumors, claiming that Sony is unlikely to release the PlayStation 6 next year, even though the next-gen Xbox is still said to be…

What my CS team was missing

I need to say something that might make CS leaders uncomfortable: most of what your team does before a renewal is valuable, but it’s listening to only one channel. Your EBRs, your health scores, your stakeholder maps. They capture what your customer is willing to tell you directly. What they don’t capture is the conversation happening everywhere else. And that’s usually where churn starts.

I know because I ran the standard playbook for years. EBRs, stakeholder mapping, health score reviews, and renewal prep meetings, where we rated our gut feeling on a scale of green to red. We had dashboards. We had strong CSMs who genuinely cared about their accounts. And we still got blindsided.

The $2M quarter is the one I can’t forget. Two enterprise accounts churned in the same 90-day window. Both were green in every system we had. One had an NPS of 72.

When I dug into what happened, I didn’t find a CS execution problem. I found a coverage gap. Every signal had been there. Just not in the places our process was designed to look. I sat in the post-mortem knowing we’d done everything our process asked us to do. That was the problem.

Later in this article, I’ll show you what both of those accounts would have looked like inside Renewal Fix, before anyone on my team knew there was a problem.

What your EBR captures, and what it can’t

I’m not saying EBRs are useless. A well-run EBR builds relationship depth, gives your champion ammunition internally, and surfaces problems the customer is willing to raise directly. But even the best EBR has a structural limitation: it only captures what someone chooses to say out loud, in a meeting, to a vendor.

The real conversation about your product is happening in a Slack channel you’ll never see, in a procurement review you weren’t invited to, and in a 1:1 between your champion and their new boss who just joined from a company that used your competitor. The EBR gives you one essential channel. The danger is treating it as the only one.

The signals are everywhere. Just not in your CRM.

Here’s what was actually happening in those two accounts that churned on me.

Account one: their engineering team had filed 23 support tickets about API latency over four months. Not “the product is broken” tickets. Small, specific, technical complaints that got resolved individually. Nobody in CS ever saw them because they never escalated to “critical.” But lined up chronologically, the pattern was unmistakable: this team was losing patience, one resolved ticket at a time.

Account two: three of their five power users updated their LinkedIn profiles in the same two-week window. One started posting about a competitor’s product. Our champion’s title changed from “Head of” to “Senior Manager.” A quiet demotion nobody noticed because we were watching product usage dashboards, not org charts.

Every CS leader I know has lost an account and later found out the champion left months ago. The customer’s reaction is always the same: “We assumed you knew.” They expect you to track publicly available professional changes, the same information any recruiter monitors. Not tracking them isn’t respectful. It’s a blind spot.

Neither signal lived in our CRM. Neither showed up in our health score. They were sitting in plain sight in systems our CS team had no reason to check.

What your health score measures, and the lag problem

Health scores aren’t the problem. Treating them as the whole picture is. A typical health score aggregates NPS, login frequency, support ticket count, and feature adoption. Green means safe. Red means act. But these are lagging indicators. By the time login frequency drops, the decision to evaluate alternatives may already be in motion.
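To make the lag problem concrete, here is a minimal sketch of a composite score built from those four lagging inputs. The weights, caps, and thresholds are my own illustrative assumptions, not any vendor's actual formula; the point is that an account can look solidly green on every lagging measure while the leading signals go unwatched:

```python
# Hypothetical composite health score from lagging indicators only.
# All weights and caps below are illustrative assumptions.
def composite_health(nps, logins_per_week, open_tickets, adoption_pct):
    """Blend NPS, logins, ticket count, and adoption into a 0-100 score."""
    score = (
        0.30 * max(nps, 0)                            # NPS: clamp negatives
        + 0.25 * min(logins_per_week / 10, 1) * 100   # cap credit at 10/week
        + 0.20 * max(1 - open_tickets / 20, 0) * 100  # more tickets -> lower
        + 0.25 * min(adoption_pct, 100)               # feature adoption %
    )
    return round(score, 1)

# A "green" account by every lagging measure, like the one with NPS 72:
print(composite_health(nps=72, logins_per_week=12, open_tickets=3, adoption_pct=80))
```

Nothing in those four inputs can see a champion's quiet demotion or a ramp of resolved tickets.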

When I started tracking leading indicators alongside our existing health model, the difference was striking. Across roughly 300 mid-market accounts over 18 months, we found that support ticket velocity, specifically the rate of increase in non-critical tickets over a rolling 90-day window, predicted churn at T-90 at roughly 2x the accuracy of our composite health score. The signals that actually predict churn aren’t the ones most CS platforms are designed to track.
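Here is roughly what that leading indicator looks like in code: a sketch that counts non-critical tickets in the most recent 90-day window against the 90 days before it. The windowing rule is an illustration of the idea, not the exact model behind the numbers above:

```python
from datetime import date, timedelta

def ticket_velocity(ticket_dates, as_of, window_days=90):
    """Rate of increase in non-critical tickets: count filed in the most
    recent window minus the count in the window before it.
    An illustrative metric, not the author's exact model."""
    recent_start = as_of - timedelta(days=window_days)
    prior_start = recent_start - timedelta(days=window_days)
    recent = sum(recent_start < d <= as_of for d in ticket_dates)
    prior = sum(prior_start < d <= recent_start for d in ticket_dates)
    return recent - prior  # positive = complaint volume is accelerating

# A trickle that quietly ramps up, like account one's API-latency tickets:
tickets = [date(2024, 11, 15),                        # prior window: 1 ticket
           date(2025, 1, 10), date(2025, 2, 1), date(2025, 2, 20),
           date(2025, 3, 5), date(2025, 3, 12), date(2025, 3, 20),
           date(2025, 3, 28)]                         # recent window: 7 tickets
print(ticket_velocity(tickets, as_of=date(2025, 4, 1)))
```

Every individual ticket here was "resolved"; only the slope is alarming.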

Building the Signal Coverage Model

The teams with the strongest renewal rates don’t abandon their existing processes. They add a signal layer on top. The highest-signal sources break into three tiers.

Tier 1: Support ticket patterns. Not the count, but the velocity, the sentiment trend, and whether the same team keeps filing. A steady trickle of “resolved” tickets from one engineering team is often a louder signal than a single P1 escalation. At scale, this becomes cohort-level complaint clustering across a segment.

Tier 2: People changes. Champion turnover, re-orgs, title changes, and new executives from a competitor’s customer base. The person who bought your product and the person renewing it are often not the same person. At scale, you’re watching for patterns of org instability across your book.

Tier 3: Competitive exposure. Whether your customer is being actively pitched, attending competitor events, or has team members engaging with competitor content online. At scale, you’re tracking which segments your competitors are targeting hardest.
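As a sketch, the three tiers can be stitched into a single per-account record. The field names and flagging thresholds below are hypothetical, chosen only to show the shape of a signal layer sitting on top of an existing process:

```python
from dataclasses import dataclass

# Illustrative signal-coverage record for the three tiers described above.
# Field names and thresholds are assumptions, not a real product schema.
@dataclass
class AccountSignals:
    name: str
    ticket_velocity: int = 0        # Tier 1: change in non-critical ticket rate
    champion_changed: bool = False  # Tier 2: turnover, demotion, re-org
    competitor_touches: int = 0     # Tier 3: pitches, events, content engagement

    def flags(self):
        out = []
        if self.ticket_velocity > 5:
            out.append("tier1: accelerating non-critical tickets")
        if self.champion_changed:
            out.append("tier2: champion change")
        if self.competitor_touches >= 3:
            out.append("tier3: active competitive exposure")
        return out

acct = AccountSignals("Account Two", champion_changed=True, competitor_touches=3)
print(acct.flags())
```

An account like the second one I lost would have raised two flags here while every dashboard stayed green.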

The real challenge isn’t knowing what to track. It’s that these signals live in five or six different systems, and nobody’s job is to stitch them together. Your CSM sees Zendesk. Your SE sees Jira. Your AE sees Salesforce. The full picture only exists if someone manually assembles it.

What this looks like in practice

One team I worked with built a manual version of this: CSMs logging signals from six different sources every Friday. About 90 minutes per account per week. Their renewal rate hit 96%. But the approach doesn’t scale past a 25-account book.

At 80 accounts in a mid-market motion, you need automation. At 150+ accounts in a PLG model, the signals are still there (you're watching for cohort-level drops in feature adoption, or clusters of the same complaint across a segment), but you cannot find them without automation.
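The arithmetic behind that ceiling is straightforward, using the 90-minutes-per-account figure from the manual process above:

```python
# Weekly hours of pure signal logging at the 90-minutes-per-account rate.
def weekly_signal_hours(accounts, minutes_per_account=90):
    return accounts * minutes_per_account / 60

for book_size in (25, 80, 150):
    print(book_size, "accounts ->", weekly_signal_hours(book_size), "hours/week")
```

At 25 accounts the Friday ritual already approaches a full workweek; at 80 it is arithmetically impossible.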

The teams doing this manually are logging into six tools every Friday. The teams doing this with automation get a Slack message when something changes. No dashboard to check. No Friday ritual.

Detection without a playbook is just anxiety. The point of catching signals early isn’t to panic. It’s to have time to act. An executive sponsor who hasn’t logged in for 90 days needs a different intervention than an account with a competitor POC in their Salesforce sandbox. The signal tells you what’s happening. The response has to match.
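Matching the response to the signal can be as simple as a dispatch table. The signal names and plays below are illustrative, not a prescribed playbook:

```python
# Hypothetical signal-to-play mapping: the detected signal selects the
# intervention, instead of every risk triggering a generic "reach out".
PLAYBOOK = {
    "sponsor_inactive_90d": "executive re-engagement: sponsor-to-sponsor outreach",
    "competitor_poc_detected": "technical win-back: SE-led bake-off offer",
    "ticket_velocity_spike": "engineering review: aggregate tickets, share fix plan",
    "champion_demoted": "multi-thread: build second and third champions",
}

def respond(signal):
    return PLAYBOOK.get(signal, "triage manually: no matched play")

print(respond("sponsor_inactive_90d"))
print(respond("unknown_signal"))
```

The table is trivial; the discipline is refusing to fire the same play for every flag.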

That gap between knowing what to track and actually tracking it consistently is why I built Renewal Fix. Not to replace the manual process, but to remove the ceiling on it. The platform pulls signals from support tickets, call recordings, CRM data, and engineering channels automatically, stitches them into a single account view, and flags them before they become a renewal surprise.

See it for yourself

Enter your work email at renewalfix.com. In 30 seconds, you'll get a one-page executive brief showing your blind spots: 10 sample accounts that look like they belong in your CS platform, generated from your company's products, competitive landscape, and integration stack. Each comes with a health score and risk signals, sourced from support tickets, call recordings, and org changes, that your current dashboard would never surface. No demo, no sales call.

Find the account that looks most like Account One. Health score in the 70s, risk signals hiding underneath. Then click “Executive Brief” for a one-page summary of your portfolio’s total risk exposure, with dollar amounts and prioritized actions. That view is what Renewal Fix delivers weekly in production.

Your green accounts aren’t necessarily at risk. But they might be quieter than you realize.

HP Introduces Affordable DeskJet Printers for Indian Households

HP has introduced a new range of DeskJet All-in-One printers in India. The lineup is designed for students, parents, and working professionals who need a dependable, easy-to-use printer at home. With a simple setup, smooth wireless connectivity, affordable ink options, and a modern design, the new DeskJet series aims to make everyday printing more convenient and hassle-free for Indian households.

According to HP, families today want printers that are simple, compact, and connected. To that end, the new DeskJet range offers plug-and-play installation and reliable Wi-Fi connectivity for smooth printing. The printers support wireless printing, so family members can print directly from their phones or laptops. Their compact design and fresh color options make them suitable for study tables, work desks, or small home offices.

Models in the New DeskJet Lineup

The new range includes three categories:

  • HP DeskJet
  • HP DeskJet Ink Advantage
  • HP DeskJet Ultra Ink Advantage

In total, HP has introduced six models, each geared towards different home printing requirements: some are suited to casual, occasional use, while others are better for households that print frequently.

All of these models have a 60-sheet input tray, with print speeds of up to 7.5 pages per minute in black and up to 5.5 pages per minute in color; certain models offer black print speeds of up to 8.5 pages per minute. In addition, the DeskJet Ink Advantage 4388 features an Automatic Document Feeder, making it easier to copy and scan multi-page documents.

Easy Connectivity and Smart Features

The latest DeskJets promise strong, reliable networking. With dual-band Wi-Fi, you'll benefit from improved signal strength and stability. And with self-healing Wi-Fi, the printer will automatically restore its network connection if it drops.

As a result, printing remains consistent and uninterrupted. Multiple users in the household can print wirelessly using the HP app through Wi-Fi or Bluetooth. The control panel is easy to understand, allowing quick operation without any technical knowledge.

Affordable Printing with High-Yield Inks

Another important advantage of the new DeskJet series is that the printers use affordable, high-yield ink cartridges. Since the cartridges last longer, you won't have to replace them as often, which lowers the overall cost of printing. The printers also produce crisp, clean black-and-white documents.

Price and Availability in India

HP has already launched some of its printer models in the country. The company will also introduce more products in the coming days. The HP DeskJet Ink Advantage 2986 and 2989 are priced at Rs. 6,999 each. The DeskJet Ink Advantage 4388 will cost Rs. 7,999. The Ultra Ink Advantage 5135 and 5185, along with the DeskJet 2931, will soon hit the market. Buyers can purchase the available models through HP World outlets or the HP Online Store.

OpenClaw founder joins OpenAI to create next-gen personal agents

OpenClaw was formerly known as Clawd, a play on OpenAI rival Anthropic’s Claude AI.

OpenAI has hired OpenClaw founder Peter Steinberger to develop the “next generation of personal agents”. In a post on X announcing the addition, OpenAI CEO Sam Altman said that personal agents will fast become one of the $500bn company’s core offerings.

OpenClaw is a popular open source project that lets users create personal AI agents. The personal agent stays on a user's hardware, runs on all major operating systems, and works with major communication apps such as WhatsApp, Telegram, Discord and even iMessage. It helps users clear inboxes, send emails and manage calendars.

The platform was formerly known as ‘Clawd’, a play on Anthropic’s Claude, which had to be changed after the AI giant threatened legal action. Then it was called ‘MoltBot’, before Steinberger landed on its final name.

Themed around a lobster, the project launched in November last year and quickly gained traction, garnering nearly 200,000 GitHub stars.

Meanwhile, Moltbook – launched in January this year by its creator Matt Schlicht – is a Reddit-style social media network where only AI agents can post, and humans can observe.

The site went viral after launching, with AI agents, including many from OpenClaw, creating a new religion called ‘Crustafarianism’, among other peculiar things. Human onlookers were shocked and surprised, leading many to question agents’ true understanding of the content they put out.

Steinberger built the first prototype of OpenClaw in an hour, and by the beginning of February, users had created 1.5m AI agents using the platform. Running the project cost the Austrian founder between $10,000 and $20,000 per month, according to an interview with podcaster Lex Fridman.

“When I started exploring AI, my goal was to have fun and inspire people,” Steinberger wrote in a blogpost. “And here we are, the lobster is taking over the world. My next mission is to build an agent that even my mum can use.”

OpenClaw will remain in a foundation as an open source project that OpenAI will continue to support, Altman clarified online. “The future is going to be extremely multi-agent and it’s important to us to support open source as part of that,” he said.

A Computer That Fits Inside A Camera Lens

For a long while, digital single-lens reflex (DSLR) cameras were the king of the castle for professional and amateur photography. They brought large sensors, interchangeable lenses, and professional-level viewfinders to the digital world at approachable prices, and then cemented their lead when they started being used to create video as well. They’re experiencing a bit of a decline now, though, as mirrorless cameras start to dominate, and with that comes some unique opportunities. To attach a lens meant for a DSLR to a mirrorless camera, an adapter housing must be used, and [Ancient] found a way to squeeze a computer and a programmable aperture into this tiny space.

The programmable aperture is based on an LCD screen from an old cell phone. LCD screens are generally transparent until their pixels are switched, and in most display applications a backer is placed behind the panel so the image is visible. [Ancient] removes this backer, though, allowing the LCD to be completely transparent when switched off. The screen is placed inside the lens adapter housing in the middle of a PCB that also carries a small computer. The computer controls the LCD via a set of buttons on the outside of the housing, allowing the photographer to use the screen as a programmable aperture.

The LCD-as-aperture has a number of interesting uses that would be impossible with a standard iris aperture. Not only can it function as a standard iris aperture, but it can do things like cycle through different areas of the image in sequence, open up arbitrary parts or close off others, and a number of other unique options. It’s worth checking out the video below, as [Ancient] demonstrates many of these effects towards the end. We’ve seen some of these effects before, although those were in lenses that were mechanically controlled instead.
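One way to picture the idea: treat the LCD as a grid of pixels that are either transparent or opaque, so any aperture shape is just a boolean mask. This toy sketch renders a circular iris; it is a conceptual model, not [Ancient]'s actual firmware:

```python
# Toy model of an LCD aperture: True = transparent pixel, False = opaque.
# Grid size and the text rendering are purely illustrative.
def iris_mask(size, radius):
    """Boolean mask for a circular iris centered on a size x size pixel grid."""
    cx = cy = (size - 1) / 2
    return [[(x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2
             for x in range(size)] for y in range(size)]

# Render the mask in the terminal: '#' marks transparent (open) pixels.
for row in iris_mask(9, 2.5):
    print("".join("#" if cell else " " for cell in row))
```

Swapping the circle test for any other predicate gives the arbitrary open-and-close patterns demonstrated in the video.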

Thanks to [kemfic] for the tip!

CISA gives feds 3 days to patch actively exploited BeyondTrust flaw

The U.S. Cybersecurity and Infrastructure Security Agency (CISA) ordered federal agencies on Friday to secure their BeyondTrust Remote Support instances against an actively exploited vulnerability within three days.

BeyondTrust provides identity security services to more than 20,000 customers across over 100 countries, including government agencies and 75% of Fortune 100 companies worldwide.

Tracked as CVE-2026-1731, this remote code execution vulnerability stems from an OS command injection weakness and affects BeyondTrust’s Remote Support 25.3.1 or earlier and Privileged Remote Access 24.3.4 or earlier.
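For quick triage, the affected-version check reduces to a dotted-version comparison against the ceilings listed above (Remote Support 25.3.1 and earlier, Privileged Remote Access 24.3.4 and earlier). This is an illustrative sketch, not official BeyondTrust tooling, and assumes plain numeric dotted versions:

```python
# Affected-version ceilings for CVE-2026-1731, per the advisory cited above.
AFFECTED_MAX = {
    "remote_support": (25, 3, 1),
    "privileged_remote_access": (24, 3, 4),
}

def is_affected(product, version):
    """True if the installed dotted version is at or below the affected ceiling."""
    parts = tuple(int(p) for p in version.split("."))
    return parts <= AFFECTED_MAX[product]  # tuple comparison is lexicographic

print(is_affected("remote_support", "25.3.1"))  # latest affected release
print(is_affected("remote_support", "25.3.2"))  # past the ceiling
```

Real deployments should rely on the vendor's advisory rather than string comparison, since hotfix numbering schemes can break this assumption.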

While BeyondTrust patched all Remote Support and Privileged Remote Access SaaS instances on February 2, 2026, on-premise customers must install patches manually.

“Successful exploitation could allow an unauthenticated remote attacker to execute operating system commands in the context of the site user,” BeyondTrust said when it patched the vulnerability on February 6. “Successful exploitation requires no authentication or user interaction and may lead to system compromise, including unauthorized access, data exfiltration, and service disruption.”

Hacktron, who discovered the vulnerability and responsibly disclosed it to BeyondTrust on January 31, warned that approximately 11,000 BeyondTrust Remote Support instances were exposed online, around 8,500 of them being on-premises deployments.

On Thursday, six days after BeyondTrust released CVE-2026-1731 security patches, watchTowr head of threat intelligence Ryan Dewhurst reported that attackers are now actively exploiting the security flaw, warning admins that unpatched devices should be assumed to be compromised.

Federal agencies ordered to patch immediately

One day later, CISA confirmed Dewhurst’s report, added the vulnerability to its Known Exploited Vulnerabilities (KEV) catalog, and ordered Federal Civilian Executive Branch (FCEB) agencies to secure their BeyondTrust instances by the end of Monday, February 16, as mandated by Binding Operational Directive (BOD) 22-01.

“These types of vulnerabilities are frequent attack vectors for malicious cyber actors and pose significant risks to the federal enterprise,” the U.S. cybersecurity agency warned. “Apply mitigations per vendor instructions, follow applicable BOD 22-01 guidance for cloud services, or discontinue use of the product if mitigations are unavailable.”

CISA’s warning comes on the heels of other BeyondTrust security flaws that were exploited to compromise the systems of U.S. government agencies.

For instance, the U.S. Treasury Department revealed two years ago that its network had been hacked in an incident linked to Silk Typhoon, a notorious Chinese state-backed cyberespionage group.

Silk Typhoon is believed to have exploited two zero-day bugs (CVE-2024-12356 and CVE-2024-12686) to breach BeyondTrust’s systems and later used a stolen API key to compromise 17 Remote Support SaaS instances, including the Treasury’s instance.

The Chinese hacking group has also targeted the Office of Foreign Assets Control (OFAC), which administers U.S. sanctions programs, and the Committee on Foreign Investment in the United States (CFIUS), which reviews foreign investments for national security risks.

More Rode mics can now connect directly to iPhones and iPads

Rode is rolling out a firmware update for its Wireless Pro and Wireless Go (third-gen) microphones to add a feature called Direct Connect, which was already available for the Wireless Micro. This allows the mics to pair with iPhones and iPads via Bluetooth without the need for a receiver. All you’ll need is the Rode Capture app.

Rode said it’s able to offer Direct Connect for Wireless Pro and Wireless Go without compromising “the broadcast-quality audio both wireless systems are known for.” The feature still supports the option to record from two transmitters in either merged (whereby the audio blends into a single stereo track) or split (which keeps the recordings on separate channels to allow for more options in post-production) modes.

Not having to worry about setting up a physical receiver to link these mics to iOS devices could help streamline things quite a bit for creators. And I can always get behind companies adding handy features to existing products without pushing customers to buy new models. That’s good for the environment, your wallet — assuming you already have one of these mics — and probably the company’s reputation. An all-around positive update.

What is the release date for The Pitt season 2 episode 7 on HBO Max?

Good grief… I’m still not over how emotional and tender last week’s episode of The Pitt season 2 was.

We saw loveable patient Louie (Ernest Harden Jr) die after a pulmonary embolism, shocking Langdon (Patrick Ball) to the core. Elsewhere, Santos (Isa Briones) struggled without an interpreter for her deaf patient, turning her reflections inward rather than outward.
