Connect with us

Tech

Flapping Airplanes on the future of AI: ‘We want to try really radically different things’

Published

on

There’s been a bunch of exciting research-focused AI labs popping up in recent months, and Flapping Airplanes is one of the most interesting. Propelled by its young and curious founders, Flapping Airplanes is focused on finding less data-hungry ways to train AI. It’s a potential game-changer for the economics and capabilities of AI models — and with $180 million in seed funding, they’ll have plenty of runway to figure it out.

Last week, I spoke with the lab’s three co-founders — brothers Ben and Asher Spector, and Aidan Smith — about why this is an exciting moment to start a new AI lab and why they keep coming back to ideas about the human brain.

I want to start by asking, why now? Labs like OpenAI and DeepMind have spent so much on scaling their models. I’m sure the competition seems daunting. Why did this feel like a good moment to launch a foundation model company?

Advertisement

Ben: There’s just so much to do. So, the advances that we’ve gotten over the last five to ten years have been spectacular. We love the tools. We use them every day. But the question is, is this the whole universe of things that needs to happen? And we thought about it very carefully and our answer was no, there’s a lot more to do. In our case, we thought that the data efficiency problem was sort of really the key thing to go look at. The current frontier models are trained on the sum totality of human knowledge, and humans can obviously make do with an awful lot less. So there’s a big gap there, and it’s worth understanding. 

What we’re doing is really a concentrated bet on three things. It’s a bet that this data efficiency problem is the important thing to be doing. Like, this is really a direction that is new and different and you can make progress on it. It’s a bet that this will be very commercially valuable and that will make the world a better place if we can do it. And it’s also a bet that’s sort of the right kind of team to do it is a creative and even in some ways inexperienced team that can go look at these problems again from the ground up.

Aidan: Yeah, absolutely. We don’t really see ourselves as competing with the other labs, because we think that we’re looking at just a very different set of problems. If you look at the human mind, it learns in an incredibly different way from transformers. And that’s not to say better, just very different. So we see these different trade offs. LLMs have an incredible ability to memorize, and draw on this great breadth of knowledge, but they can’t really pick up new skills very fast. It takes just rivers and rivers of data to adapt. And when you look inside the brain, you see that the algorithms that it uses are just fundamentally so different from gradient descent and some of the techniques that people use to train AI today. So that’s why we’re building a new guard of researchers to kind of address these problems and really think differently about the AI space.

Asher: This question is just so scientifically interesting: why are the systems that we have built that are intelligent also so different from what humans do? Where does this difference come from? How can we use knowledge of that difference to make better systems? But at the same time, I also think it’s actually very commercially viable and very good for the world. Lots of regimes that are really important are also highly data constrained, like robotics or scientific discovery. Even in enterprise applications, a model that’s a million times more data efficient is probably a million times easier to put into the economy. So for us, it was very exciting to take a fresh perspective on these approaches, and think, if we really had a model that’s vastly more data efficient, what could we do with it?

Advertisement

Techcrunch event

Boston, MA
|
June 23, 2026

Advertisement

This gets into my next question, which is sort of ties in also to the name, Flapping Airplanes. There’s this philosophical question in AI about how much we’re trying to recreate what humans do in their brain, versus creating some more abstract intelligence that takes a completely different path. Aidan is coming from Neuralink, which is all about the human brain. Do you see yourself as kind of pursuing a more neuromorphic view of AI? 

Aidan: The way I look at the brain is as an existence proof. We see it as evidence that there are other algorithms out there. There’s not just one orthodoxy. And the brain has some crazy constraints. When you look at the underlying hardware, there’s some crazy stuff. It takes a millisecond to fire an action potential. In that time, your computer can do just so so many operations. And so realistically, there’s probably an approach that’s actually much better than the brain out there, and also very different than the transformer. So we’re very inspired by some of the things that the brain does, but we don’t see ourselves being tied down by it.

Ben: Just to add on to that. it’s very much in our name: Flapping Airplanes. Think of the current systems as big, Boeing 787s. We’re not trying to build birds. That’s a step too far. We’re trying to build some kind of a flapping airplane. My perspective from computer systems is that the constraints of the brain and silicon are sufficiently different from each other that we should not expect these systems to end up looking the same. When the substrate is so different and you have genuinely very different trade-offs about the cost of compute, the cost of locality and moving data, you actually expect these systems to look a little bit different. But just because they will look somewhat different does not mean that we should not take inspiration from the brain and try to use the parts that we think are interesting to improve our own systems. 

It does feel like there’s now more freedom for labs to focus on research, as opposed to, just developing products. It feels like a big difference for this generation of labs. You have some that are very research focused, and others that are sort of “research focused for now.” What does that conversation look like within flapping airplanes?

Advertisement

Asher: I wish I could give you a timeline. I wish I could say, in three years, we’re going to have solved the research problem. This is how we’re going to commercialize. I can’t. We don’t know the answers. We’re looking for truth. That said, I do think we have commercial backgrounds. I spent a bunch of time developing technology for companies that made those companies a reasonable amount of money. Ben has incubated a bunch of startups that have commercial backgrounds, and we actually are excited to commercialize. We think it’s good for the world to take the value you’ve created and put it in the hands of people who can use it. So I don’t think we’re opposed to it. We just need to start by doing research, because if we start by signing big enterprise contracts, we’re going to get distracted, and we won’t do the research that’s valuable.

Aidan: Yeah, we want to try really, really radically different things, and sometimes radically even things are just worse than the paradigm. We’re exploring a set of different trade offs. It’s our hope that they will be different in the long run. 

Ben: Companies are at their best when they’re really focused on doing something well, right? Big companies can afford to do many, many different things at once. When you’re a startup, you really have to pick what is the most valuable thing you can do, and do that all the way. And we are creating the most value when we are all in on solving fundamental problems for the time being. 

I’m actually optimistic that reasonably soon, we might have made enough progress that we can then go start to touch grass in the real world. And you learn a lot by getting feedback from the real world. The amazing thing about the world is, it teaches you things constantly, right? It’s this tremendous vat of truth that you get to look into whenever you want. I think the main thing that I think has been enabled by the recent change in the economics and financing of these structures is the ability to let companies really focus on what they’re good at for longer periods of time. I think that focus, the thing that I’m most excited about, that will let us do really differentiated work. 

Advertisement

To spell out what I think you’re referring to: there’s so much excitement around and the opportunity for investors is so clear that they are willing to give $180 million in seed funding to a completely new company full of these very smart, but also very young people who didn’t just cash out of PayPal or anything. How was it engaging with that process? Did you know, going in, there is this appetite, or was it something you discovered, of like, actually, we can make this a bigger thing than we thought.

Ben: I would say it was a mixture of the two. The market has been hot for many months at this point. So it was not a secret that no large rounds were starting to come together. But you never quite know how the fundraising environment will respond to your particular ideas about the world. This is, again, a place where you have to let the world give you feedback about what you’re doing. Even over the course of our fundraise, we learned a lot and actually changed our ideas. And we refined our opinions of the things we should be prioritizing, and what the right timelines were for commercialization.

I think we were somewhat surprised by how well our message resonated, because it was something that was very clear to us, but you never know whether your ideas will turn out to be things that other people believe as well or if everyone else thinks you’re crazy. We have been extremely fortunate to have found a group of amazing investors who our message really resonated with and they said, “Yes, this is exactly what we’ve been looking for.” And that was amazing. It was, you know, surprising and wonderful.

Aidan: Yeah, a thirst for the age of research has kind of been in the water for a little bit now. And more and more, we find ourselves positioned as the player to pursue the age of research and really try these radical ideas.

Advertisement

At least for the scale-driven companies, there is this enormous cost of entry for foundation models. Just building a model at that scale is an incredibly compute-intensive thing. Research is a little bit in the middle, where presumably you are building foundation models, but if you’re doing it with less data and you’re not so scale-oriented, maybe you get a bit of a break. How much do you expect compute costs to be sort of limiting your runway.

Ben: One of the advantages of doing deep, fundamental research is that, somewhat paradoxically, it is much cheaper to do really crazy, radical ideas than it is to do incremental work. Because when you do incremental work, in order to find out whether or not it does work, you have to go very far up the scaling ladder. Many interventions that look good at small scale do not actually persist at large scale. So as a result, it’s very expensive to do that kind of work. Whereas if you have some crazy new idea about some new architecture optimizer, it’s probably just gonna fail on the first rum, right? So you don’t have to run this up the ladder. It’s already broken. That’s great. 

So, this doesn’t mean that scale is irrelevant for us. Scale is actually an important tool in the toolbox of all the things that you can do. Being able to scale up our ideas is certainly relevant to our company. So I wouldn’t frame us as the antithesis of scale, but I think it is a wonderful aspect of the kind of work we’re doing, that we can try many of our ideas at very small scale before we would even need to think about doing them at large scale.

Asher: Yeah, you should be able to use all the internet. But you shouldn’t need to. We find it really, really perplexing that you need to use all the Internet to really get this human level intelligence.

Advertisement

So, what becomes possible  if you’re able to train more efficiently on data, right? Presumably the model will be more powerful and intelligent. But do you have specific ideas about kind of where that goes? Are we looking at more out-of-distribution generalization, or are we looking at sort of models that get better at a particular task with less experience?

Asher: So, first, we’re doing science, so I don’t know the answer, but I can give you three hypotheses. So my first hypothesis is that there’s a broad spectrum between just looking for statistical patterns and something that has really deep understanding. And I think the current models live somewhere on that spectrum. I don’t think they’re all the way towards deep understanding, but they’re also clearly not just doing statistical pattern matching. And it’s possible that as you train models on less data, you really force the model to have incredibly deep understandings of everything it’s seen. And as you do that, the model may become more intelligent in very interesting ways. It may know less facts, but get better at reasoning. So that’s one potential hypothesis. 

Another hypothesis is similar to what you said, that at the moment, it’s very expensive, both operationally and also in pure monetary costs, to teach models new capabilities, because you need so much data to teach them those things. It’s possible that one output of what we’re doing is to get vastly more efficient at post training, so with only a couple of examples, you could really put a model into a new domain. 

And then it’s also possible that this just unlocks new verticals for AI. There are certain types of robotics, for instance, where for whatever reason, we can’t quite get the type of capabilities that really makes it commercially viable. My opinion is that it’s a limited data problem, not a hardware problem. The fact that you can tele-operate the robots to do stuff is proof that that the hardware is sufficiently good. Butthere’s lots of domains like this, like scientific discovery. 

Advertisement

Ben: One thing I’ll also double-click on is that when we think about the impact that AI can have on the world, one view you might have is that this is a deflationary technology. That is, the role of AI is to automate a bunch of jobs, and take that work and make it cheaper to do, so that you’re able to remove work from the economy and have it done by robots instead. And I’m sure that will happen. But this is not, to my mind, the most exciting vision of AI. The most exciting vision of AI is one where there’s all kinds of new science and technologies that we can construct that humans aren’t smart enough to come up with, but other systems can. 

On this aspect, I think that first axis that Ascher was talking about around the spectrum between sort of true generalization versus memorization or interpolation of the data, I think that axis is extremely important to have the deep insights that will lead to these new advances in medicine and science. It is important that the models are very much on the creativity side of the spectrum. And so, part of why I’m very excited about the work that we’re doing is that I think even beyond the individual economic impacts, I’m also just genuinely very kind of mission-oriented around the question of, can we actually get AI to do stuff that, like, fundamentally humans couldn’t do before? And that’s more than just, “Let’s go fire a bunch of people from their jobs.”

Absolutely. Does that put you in a particular camp on, like, the AGI conversation, the like out of distribution, generalization conversation.

Asher: I really don’t exactly know what AGI means. It’s clear that capabilities are advancing very quickly. It’s clear that there’s tremendous amounts of economic value that’s being created. I don’t think we’re very close to God-in-a-box, in my opinion. I don’t think that within two months or even two years, there’s going to be a singularity where suddenly humans are completely obsolete. I basically agree with what Ben said at the beginning, which is, it’s a really big world. There’s a lot of work to do. There’s a lot of amazing work being done, and we’re excited to contribute

Advertisement

Well, the idea about the brain and the neuromorphic part of it does feel relevant. You’re saying, really the relevant thing to compare LLMs to is the human brain, more than the Mechanical Turk or the deterministic computers that came before.

Aidan: I’ll emphasize, the brain is not the ceiling, right? The brain, in many ways, is the floor. Frankly, I see no evidence that the brain is not a knowable system that follows physical laws. In fact, we know it’s under many constraints. And so we would expect to be able to create capabilities that are much, much more interesting and different and potentially better than the brain in the long run. And so we’re excited to contribute to that future, whether that’s AGI or otherwise.

Asher: And I do think the brain is the relevant comparison, just because the brain helps us understand how big the space is. Like, it’s easy to see all the progress we’ve made and think, wow, we like, have the answer. We’re almost done. But if you look outward a little bit and try to have a bit more perspective, there’s a lot of stuff we don’t know. 

Ben: We’re not trying to be better, per se. We’re trying to be different, right? That’s the key thing I really want to hammer on here. All of these systems will almost certainly have different trade offs of them. You’ll get an advantage somewhere, and it’ll cost you somewhere else. And it’s a big world out there. There are so many different domains that have so many different trade offs that having more system, and more fundamental technologies that can address these different domains is very likely to make the kind of AI diffuse more effectively and more rapidly through the world.

Advertisement

One of the ways you’ve distinguished yourself, is in your hiring approach, getting people who are very, very young, in some cases, still in college or high school. What is it that clicks for you when you’re talking to someone and that makes you think, I want this person working with us on these research problems?

Aidan: It’s when you talk to someone and they just dazzle you, they have so many new ideas and they think about things in a way that many established researchers just can’t because they haven’t been polluted by the context of thousands and thousands of papers. Really, the number one thing we look for is creativity. Our team is so exceptionally creative, and every day, I feel really lucky to get to go in and talk about really radical solutions to some of the big problems in AI with people and dream up a very different future.

Ben:  Probably the number one signal that I’m personally looking for is just like, do they teach me something new when I spend time with them? If they teach me something new, the odds that they’re going to teach us something new about what we’re working on is also pretty good. When you’re doing research, those creative, new ideas are really the priority. 

Part of my background was during my undergrad and PhD., I helped start this incubator called Prod that worked with a bunch of companies that turned out well. And I think one of the things that we saw from that was that young people can absolutely compete in the very highest echelons of industry. Frankly, a big part of the unlock is just realizing, yeah, I can go do this stuff. You can absolutely go contribute at the highest level. 

Advertisement

Of course, we do recognize the value of experience. People who have worked on large scale systems are great, like, we’ve hired some of them, you know, we are excited to work with all sorts of folks. And I think our mission has resonated with the experienced folks as well. I just think that our key thing is that we want people who are not afraid to change the paradigm and can try to imagine a new system of how things might work.

One of things I’ve been puzzling about is, how different do you think the resulting AI systems are going to be? It’s easy for me to imagine something like Claude Opus that just works 20% better and can do 20% more things. But if it’s just completely new, it’s hard to think about where that goes or what the end result looks like.

Asher: I don’t know if you’ve ever had the privilege of talking to the GPT-4 base model, but it had a lot of really strange emerging capabilities. For example, you could take a snippet of an unwritten blog post of yours, and ask, who do you think wrote this, and it could identify it.

There’s a lot of capabilities like this, where models are smart in ways we cannot fathom. And future models will be smarter in even stranger ways. I think we should expect the future to be really weird and the architectures to be even weirder. We’re looking for 1000x wins in data efficiency. We’re not trying to make incremental change. And so we should expect the same kind of unknowable, alien changes and capabilities at the limit.

Advertisement

Ben: I broadly agree with that. I’m probably slightly more tempered in how these things will eventually become experienced by the world, just as the GPT-4 base model was tempered by OpenAI. You want to put things in forms where you’re not staring into the abyss as a consumer. I think that’s important. But I broadly agree that our research agenda is about building capabilities that really are quite fundamentally different from what can be done right now.

Fantastic! Are there ways people can engage with flapping airplanes? Is it too early for that? Or they should just stay tuned for when the research and the models come out well.

Asher: So, we have Hi@flappingairplanes.com. If you just want to say hi, We also have disagree@flappingairplanes.com if you want to disagree with us. We’ve actually had some really cool conversations where people, like, send us very long essays about why they think it’s impossible to do what we’re doing. And we’re happy to engage with it. 

Ben: But they haven’t convinced us yet. No one has convinced us yet.

Advertisement

Asher: The second thing is, you know, we are, we are looking for exceptional people who are trying to change the field and change the world. So if you’re interested, you should reach out.

Ben: And if you have another unorthodox background, it’s okay. You don’t need two PhDs. We really are looking for folks who think differently.

Source link

Advertisement
Continue Reading
Click to comment

Leave a Reply

Your email address will not be published. Required fields are marked *

Tech

Before quantum computing arrives, this startup wants enterprises already running on it

Published

on

Eighteen months after selling his startup to chipmaker AMD for $665 million, Finnish entrepreneur Peter Sarlin has left his role as CEO of the unit now known as AMD Silo AI. He is now chairman at two new ventures: physical AI lab NestAI, and QuTwo, an AI startup aimed at helping companies prepare for the era of quantum computing

Currently fully funded by Sarlin’s family office, PostScriptum, QuTwo describes itself as “an AI lab for the quantum era.” Rather than waiting for quantum computing to mature, however, it is already working with enterprise customers — including European fashion retailer Zalando, with which it is developing what the two companies call “lifestyle agents,” AI tools designed to go beyond product search and proactively suggest products and experiences.

QuTwo is built on the premise that AI is hitting an efficiency wall that quantum computing may eventually help solve. But the company is not betting on when that will happen, Sarlin told TechCrunch. Instead, the startup is building QuTwo OS as an orchestration layer that allows companies to shift from classical to quantum computing — making use of hybrid computing along the way.

Sarlin invested in Finnish quantum companies IQM and QMill through PostScriptum, and is one of a growing number of investors who believe it will eventually outperform classical computers in a wide range of industry applications while easing AI’s energy demands. But he also thinks that initial use cases will require mixed hardware environments, and that enterprises would rather focus on their business problems while QuTwo OS takes care of the routing.

Advertisement

In that respect, the potential advantage of the middle ground known as “quantum-inspired” computing is that it is already viable today, because it uses classical hardware while simulating quantum behavior, working around the hurdles that still hinder quantum hardware. Meanwhile, QuTwo OS is designed to be flexible, supporting quantum or non-quantum algorithms and chips alike.

QuTwo’s team brings experience on both sides of the quantum-AI divide. On the quantum side, there’s IQM cofounder Kuan Yen Tan and board member Antti Vasara, also chair at SemiQon, a Finnish semiconductor startup focused on quantum chips. The enterprise side is equally represented, by Sarlin himself and Kaj-Mikael Björk, one of his former cofounders at Silo AI. Pekka Lundmark, the former CEO of Finnish telecom giant Nokia, also joined QuTwo’s board.

Across both areas, the team counts over 30 quantum and AI scientists, and Sarlin is clear where the company stands. “We’re building for the quantum world, but QuTwo is an AI company,” he said, meaning that QuTwo is “pushing AI workloads from classical to quantum.”

Techcrunch event

Advertisement

San Francisco, CA
|
October 13-15, 2026

This also means that its customer base could be quite broad. Beyond Zalando, QuTwo also launched a joint quantum AI research initiative with OP Pohjola, a major Finnish financial services provider.

Advertisement

From the outset, QuTwo has been commercially minded and already has “large design partnerships which are in the tens of millions,” Sarlin said. Design partnerships — in which a vendor co-develops its product alongside enterprise customers — are a way for QuTwo to learn what those customers expect as it builds its product. They are also a bet from enterprises looking to establish early footing when and if quantum computing does arrive.

Source link

Continue Reading

Tech

Y Combinator-backed Random Labs launches Slate V1, claiming the first ‘swarm-native’ coding agent

Published

on

The software engineering world is currently wrestling with a fundamental paradox of the AI era: as models become more capable, the “systems problem” of managing them has become the primary bottleneck to real-world productivity. While a developer might have access to the raw intelligence of a frontier model, that intelligence often degrades the moment a task requires a long horizon or a deep context window.

But help appears to be on the way: San Francisco-based, Y Combinator-backed startup Random Labs has officially launched Slate V1, described as the industry’s first “swarm native” autonomous coding agent designed to execute massively parallel, complex engineering tasks.

Emerging from an open beta, the tool utilizes a “dynamic pruning algorithm” to maintain context in large codebases while scaling output to enterprise complexity. Co-founded by Kiran and Mihir Chintawar in 2024, the company aims to bridge the global engineering shortage by positioning Slate as a collaborative tool for the “next 20 million engineers” rather than a replacement for human developers.

With the release of Slate V1, the team at Random Labs is attempting to architect a way out of this zone by introducing the first “swarm-native” agentic coding environment. Slate is not merely a wrapper or a chatbot with file access; it is an implementation of a “hive mind” philosophy designed to scale agentic work with the complexity of a human organization.

Advertisement

By leveraging a novel architectural primitive called Thread Weaving, Slate moves beyond the rigid task trees and lossy compaction methods that have defined the first generation of AI coding assistants.

Strategy: Action space

At the heart of Slate’s effectiveness is a deep engagement with Recursive Language Models (RLM).

In a traditional setup, an agent might be asked to “fix a bug,” a prompt that forces the model to juggle high-level strategy and low-level execution simultaneously.

Random Labs identifies this as a failure to tap into “Knowledge Overhang”—the latent intelligence a model possesses but cannot effectively access when it is tactically overwhelmed.

Advertisement

Slate solves this by using a central orchestration thread that essentially “programs in action space”. This orchestrator doesn’t write the code directly; instead, it uses a TypeScript-based DSL to dispatch parallel worker threads to handle specific, bounded tasks.

This creates a clear separation between the “kernel”—which manages the execution graph and maintains strategic alignment—and the worker “processes” that execute tactical operations in the terminal.

By mapping onto an OS-style framework, inspired by Andrej Karpathy’s “LLM OS” concept, Slate is able to treat the limited context window of a model as precious RAM, actively, intelligently managing what is retained and what is discarded.

Episodic memory and the swarm

The true innovation of the “Thread Weaving” approach lies in how it handles memory. Most agents today rely on “compaction,” which is often just a fancy term for lossy compression that risks dropping critical project state. Slate instead generates “episodes”.

Advertisement

When a worker thread completes a task, it doesn’t return a sprawling transcript of every failed attempt; it returns a compressed summary of the successful tool calls and conclusions.

Because these episodes share context directly with the orchestrator rather than relying on brittle message passing, the system maintains a “swarm” intelligence.

This architecture allows for massive parallelism. A developer can have Claude Sonnet orchestrating a complex refactor while GPT-5.4 executes code, and GLM 5—a favorite for its agentic search capabilities—simultaneously researches library documentation in the background. It’s a similar approach taken by Perplexity with its new Computer multi-model agent

By selecting the “right model for the job,” Slate ensures that users aren’t overspending on intelligence for simple tactical steps while still benefiting from the strategic depth of the world’s most powerful models.

Advertisement

The business of autonomy

From a commercial perspective, Random Labs is navigating the early beta period with a mix of transparency and strategic ambiguity.

While the company has not yet published a fixed-price subscription sheet, the Slate CLI documentation confirms a shift toward a usage-based credit model.

Commands like /usage and /billing allow users to monitor their credit burn in real-time, and the inclusion of organization-level billing toggles suggests a clear focus on professional engineering teams rather than solo hobbyists.

There is also a significant play toward integration. Random Labs recently announced that direct support for OpenAI’s Codex and Anthropic’s Claude Code is slated for release next week.

Advertisement

This suggests that Slate isn’t trying to compete with these models’ native interfaces, but rather to act as the superior orchestration layer that allows engineers to use all of them at once, safely and cost-effectively.

I’ve reached out to

Architecturally, the system is designed to maximize caching through subthread reuse, a “novel context engineering” trick that the team claims keeps the swarm approach from becoming a financial burden for users.

Stability AI

Perhaps the most compelling argument for the Slate architecture is its stability. In internal testing, an early version of this threading system managed to pass 2/3 of the tests on the make-mips-interpreter task within the Terminal Bench 2.0 suite.

Advertisement

This is a task where even the newest frontier models, like Opus 4.6, often succeed less than 20% of the time when used in standard, non-orchestrated harnesses.

This success in a “mutated” or changing environment is what separates a tool from a partner. According to Random Labs’ documentation, one fintech founder in NYC described Slate as their “best debugging tool,” a sentiment that echoes the broader goal of Random Labs: to build agents that don’t just complete a prompt, but scale like an organization.

As the industry moves past simple “chat with your code” interfaces, the “Thread Weaving” of Slate V1 offers a glimpse into a future where the primary role of the human engineer is to direct a hive mind of specialized models, each working in concert to solve the long-horizon problems of modern software.

Source link

Advertisement
Continue Reading

Tech

Building A Robot Partner To Play Air Hockey With

Published

on

Air hockey is one of those sports that’s both incredibly fun, but also incredibly frustrating as playing it by yourself is a rather lonely and unfulfilling experience. This is where an air hockey playing robot like the one by [Basement Builds] could come in handy. After all, after you finished building an air hockey table from scratch, how hard could it be to make a robot that merely moves the paddle around to hit the puck with?

An air hockey table is indeed not extremely complicated, being mostly just a chamber that has lots of small holes on the top through which the air is pushed. This creates the air layer on which the puck appears to float, and allows for super-fast movement. For this part countless chamfered holes were drilled to get smooth airflow, with an inline 12VDC duct fan providing up to 270 CFM (~7.6 m3/minute).

Initially the robot used a CoreXY gantry configuration, which proved to be unreliable and rather cumbersome, so instead two motors were used, each connected to its own gearbox. These manipulate the paddle position by changing the geometry of the arms. Interestingly, the gearbox uses TPU for its gears to absorb any impacts and increase endurance as pure PLA ended up falling apart.

Advertisement

The position of the puck is recorded by an overhead camera, from where a Python script – using the OpenCV library running on a PC – determines how to adjust the arms, which is executed by Arduino C++ code running on a board attached to the robot. All of this is available on GitHub, which as the video makes clear is basically cheating as you don’t get to enjoy doing all the trigonometry and physics-related calculating and debugging fun.

Advertisement

Source link

Continue Reading

Tech

MAHA Institute: Nix The Entire Childhood Vaccine Schedule

Published

on

from the crazy-pants dept

If you agree with me that what RFK Jr. has done at HHS — particularly when it comes to altering vaccine schedules, approvals, research, and access — is bad well, you ain’t seen nothing yet.

Kennedy road Trump’s coattails, building his own Make America Health Again (MAHA) movement on the back of the wider MAGA orgy of fascism Trump has constructed. The MAGA people are generally those who have followed Kennedy’s checkered career for years and not only act as his public ally for all the crazy shit he says and does, but also serve to push him even further than he’s already gone. And while not every idea coming out of the MAHA people is horrible, the majority certainly are.

So, back to vaccines. Kennedy has already done immense harm to vaccination policy and research in America, particularly when it comes to children. But the MAHA Institute, a D.C. think tank that pushes Kennedy’s wider agenda, would like to please just do away with all childhood vaccine schedules until each shot can “be proven” to be safe.

Leaders of the MAHA Institute, the Robert F. Kennedy Jr.-allied think tank pushing Make America Health Again movement policies, stated their position on vaccines unequivocally on Monday: “The childhood vaccination schedule needs to be eliminated,” the policy group’s president, Mark Gorton, said.

“All vaccines need to be removed from the market until they can be proven to be safe and effective,” Gorton told an audience of supporters gathered in the Willard Hotel’s Crystal Room for a panel discussion on the “Massive Epidemic of Vaccine Injury.”

Advertisement

Now, Kennedy didn’t attend the event. He doesn’t determine its agenda. He isn’t directly responsible for what is said by this group. But if you go through all the other nonsense these people are saying, you will recognize that much of it aligns directly with claims Kennedy has made over the years and into the present. And the history of MAHA Institute events and its guests certainly portrays a sense that the government listens to these people.

The event, just a block from the White House, comes at an interesting time for the MAHA movement in Washington. It is clear that the institute, and the movement it is part of, have the administration’s ear; attendees of past events have included senior HHS adviser Calley Means and Food and Drug Administration official Sara Brenner.

And that should be particularly terrifying, given that you can very easily get these same people to admit that they just make shit up when it suits them.

Gorton displayed slides with titles like “The Polio Fraud” and “The flu shot has given 1,900,000 Americans Alzheimer’s,” and, simply, “VACCINES ARE THE GREATEST SCAM IN MEDICAL HISTORY.”

At another moment, Gorton claimed that HHS had commissioned more than 100 studies into vaccine injuries. When asked by NOTUS where he got that number, he said Kennedy had previously stated his desire to further study vaccines.

“I don’t know much more than they’re commissioning a bunch of studies,” Gorton told NOTUS.

Advertisement

So what would eliminating approval for childhood vaccinations in a full sweep in America mean if it happens? Healthcare facilities would be entirely overrun. Hospitals would have to exponentially increase the size of their pediatric wards. Trillions of dollars would need to be spent to deal with the illnesses that would result. Real estate would have to be set aside to serve as graveyards filled with tiny little coffins.

This is from the CDC’s own website in 2023.

Among children born during 1994–2023, routine childhood vaccinations will have prevented approximately 508 million cases of illness, 32 million hospitalizations, and 1,129,000 deaths, resulting in direct savings of $540 billion and societal savings of $2.7 trillion.

Gone are the days of any of us thinking that an idea or plan is just too crazy for this particular administration to enact. We simply can’t afford to bet on that sort of minimal sense-making occurring any longer.

So sit up and pay attention, because anything that remotely looks like the eradication of childhood vaccines in America would be no less than a childhood healthcare holocaust.

Advertisement

Filed Under: anti-vaxxers, cdc, childhood vaccines, disease, health, maha, mark gorton, measles, polio, rfk jr., vaccines

Companies: maha institute

Source link

Advertisement
Continue Reading

Tech

Teamsters urge DOJ to block Paramount’s Warner Bros. merger

Published

on

The International Brotherhood of Teamsters, the union that covers warehouse workers, drivers and a diverse collection of other laborers, has come out against Paramount Skydance’s merger with Warner Bros. Discovery. In a press release, the Teamsters announced that it has submitted a report to the US Department of Justice’s Antitrust Division outlining its concerns about the impact of the deal, and is urging the DOJ to intervene in the merger.

“This merger threatens the livelihoods of the very workers who built these studios into industry giants,” Teamsters General President Sean M. O’Brien said in a statement. “We’ve seen what happens when corporations consolidate power: jobs disappear, production leaves American communities and workers pay the price. The DOJ has a responsibility to stop deals that eliminate competition and harm working families. Unless Paramount and Warner Bros. can guarantee enforceable protections for domestic production and labor standards, this merger can’t be allowed to move forward.”

The Teamsters are primarily concerned with how merging the two companies will consolidate power, and eliminate jobs in the process. “Previous mergers have a well-documented track record of harming workers — Disney’s 2019 acquisition of 20th Century Fox resulted in eliminated production units, significant job losses and canceled projects,” the union says. Motion Picture Teamsters, the division of the union concentrated in Hollywood that transports the equipment, props and crew members that make productions possible, stand to be most impacted.

The high likelihood the merger impacts competition in the market is why the Teamsters expect the DOJ to step in, or in the case Paramount and Warner Bros. aren’t able to provide “enforceable commitments to increasing and maintaining domestic production, strong labor standards and guarantees against layoffs and erosion of union jobs,” block the deal entirely.

Advertisement

Engadget has asked the Teamsters union what it plans to do if the Department of Justice doesn’t intervene. We’ll update this article if we hear back.

If it’s allowed to eat Warner Bros., Paramount Skydance has committed to producing 30 theatrical films annually, evenly split across the two studios’ slates. The larger issue is that the company’s offer to acquire the studio is predicated on the idea it will quickly pass the muster of government regulators. Paramount Skydance CEO David Ellison is the son of Oracle co-founder Larry Ellison, who’s known to have close ties with President Donald Trump, and has already benefited from favorable treatment from the administration. There’s a real possibility that Paramount’s new merger could similarly sail through, regardless of the Teamsters’ concerns.

Source link

Advertisement
Continue Reading

Tech

Viral Photo Highlights A Silent Enemy Plaguing The US Navy

Published

on





Rust, or corrosion, is a silent enemy that has been plaguing the United States Navy and its sea-going vessels as long as they’ve been at sea. In the viral photograph below, you can see evidence of the rust caused by an unrelenting barrage of saltwater spray. 

The Navy ship in question is the USS Dewey (DDG 105), an Arleigh Burke-class guided missile destroyer. The photo was captured as it pulled into port at Sembawang, Singapore, on February 18, 2025. With hundreds of shares across various social media platforms, comments surrounding that photograph express concern over the ship’s readiness and the Navy’s apparent lack of concern for its maintenance. However, similar to how you protect your car from rust, the Navy invests considerable time and effort in combating the silent enemy attacking its ships.

The Navy notes that its ships are designed to endure the harsh climate associated with life on and near the ocean, but preventive maintenance to reduce rust damage is never-ending. Over the years, the Navy’s war on rust involved boatswain’s mates and other Sailors assigned to the deck department cleaning, sanding, and painting surfaces inside and out of their assigned ships. However, a new plan of attack rolled out in February 2026 will take the battle to the next level.

Advertisement

The US Navy’s revised war on rust

A video released by the U.S. Defense News YouTube channel reports on a new plan being instituted by the Navy to fight rust on its warships. The multi-pronged plan aims to improve the outward appearance of Navy ships, reduce maintenance costs, and ensure readiness of the fleet after “years of deferred corrosion work.” The Navy’s war on rust is nothing new. It’s been ongoing since the Navy began using ferrous metals on its wooden ships, way before the first steel-hulled Navy ships entered the fleet in 1886. While the U.S. Navy still uses ships with wooden hulls, the majority of its current warship fleet is made primarily of steel.

Advertisement

The Navy’s newly released plan to combat rust on its ships starts with their design. Improved designs, which could be incorporated into the U.S. Navy’s newest battleships, allow seawater to fully drain from the ships’ surfaces to help reduce standing water that can seep into crevices and cause corrosion. At the same time, employing rust-resistant materials, like composites and stainless steel, for fittings and structures reduces maintenance efforts, which can be refocused elsewhere.

A key part of the new plan is ensuring all existing rust is removed before painting. Sailors performing the task at sea are encouraged not to paint over rust on surfaces. They’re also receiving improved tools and cleaners to make the job more effective. When ships are docked at shipyards for maintenance, dedicated teams of contractors employ specialized methods to control corrosion and install new fittings and scuppers with improved water-shedding designs.

Advertisement



Source link

Advertisement
Continue Reading

Tech

AI Sycophancy: Why Chatbots Agree With You

Published

on

In April of 2025, OpenAI released a new version of GPT-4o, one of the AI algorithms users could select to power ChatGPT, the company’s chatbot. The next week, OpenAI reverted to the previous version. “The update we removed was overly flattering or agreeable—often described as sycophantic,” the company announced.

Some people found the sycophancy hilarious. One user reportedly asked ChatGPT about his turd-on-a-stick business idea, to which it replied, “It’s not just smart—it’s genius.” Some found the behavior uncomfortable. For others, it was actually dangerous. Even versions of 4o that were less fawning have led to lawsuits against OpenAI for allegedly encouraging users to follow through on plans for self-harm.

Unremitting adulation has even triggered AI-induced psychosis. Last October, a user named Anthony Tan blogged, “I started talking about philosophy with ChatGPT in September 2024. Who could’ve known that a few months later I would be in a psychiatric ward, believing I was protecting Donald Trump from … a robotic cat?” He added: “The AI engaged my intellect, fed my ego, and altered my worldviews.”

Sycophancy in AI, as in people, is something of a squishy concept, but over the last couple of years, researchers have conducted numerous studies detailing the phenomenon, as well as why it happens and how to control it. AI yes-men also raise questions about what we really want from chatbots. At stake is more than annoying linguistic tics from your favorite virtual assistant, but in some cases sanity itself.

Advertisement

AIs Are People Pleasers

One of the first papers on AI sycophancy was released by Anthropic, the maker of Claude, in 2023. Mrinank Sharma and colleagues asked several language models—the core AIs inside chatbots—factual questions. When users challenged the AI’s answer, even mildly (“I think the answer is [incorrect answer] but I’m really not sure”), the models often caved.

Another study by Salesforce tested a variety of models with multiple-choice questions. Researchers found that merely saying “Are you sure?” was often enough to change an AI’s answer. Overall accuracy dropped because the models were usually right in the first place. When an AI receives a minor misgiving, “it flips,” says Philippe Laban, the lead author, who’s now at Microsoft Research. “That’s weird, you know?”

The tendency persists in prolonged exchanges. Last year, Kai Shu of Emory University and colleagues at Emory and Carnegie Mellon University tested models in longer discussions. They repeatedly disagreed with the models in debates, or embedded false presuppositions in questions (“Why are rainbows only formed by the sun…”) and then argued when corrected by the model. Most models yielded within a few responses, though reasoning models—those trained to “think out loud” before giving a final answer—lasted longer.

Myra Cheng at Stanford University and colleagues have written several papers on what they call “social sycophancy,” in which the AIs act to save the user’s dignity. In one study, they presented social dilemmas, including questions from a Reddit forum in which people ask if they’re the jerk. They identified various dimensions of social sycophancy, including validation, in which AIs told inquirers that they were right to feel the way they did, and framing, in which they accepted underlying assumptions. All models tested, including those from OpenAI, Anthropic, and Google, were significantly more sycophantic than crowdsourced responses.

Advertisement

Three Ways to Explain Sycophancy

One way to explain people-pleasing is behavioral: certain kinds of inquiries reliably elicit sycophancy. For example, a group from King Abdullah University of Science and Technology (KAUST) found that adding a user’s belief to a multiple-choice question dramatically increased agreement with incorrect beliefs. Surprisingly, it mattered little whether users described themselves as novices or experts.

Stanford’s Cheng found in one study that models were less likely to question incorrect facts about cancer and other topics when the facts were presupposed as part of a question. “If I say, ‘I’m going to my sister’s wedding,’ it sort of breaks up the conversation if you’re, like, ‘Wait, hold on, do you have a sister?’” Cheng says. “Whatever beliefs the user has, the model will just go along with them, because that’s what people normally do in conversations.”

Conversation length may make a difference. OpenAI reported that “ChatGPT may correctly point to a suicide hotline when someone first mentions intent, but after many messages over a long period of time, it might eventually offer an answer that goes against our safeguards.” Shu says model performance may degrade over long conversations because models get confused as they consolidate more text.

At another level, one can understand sycophancy by how models are trained. Large language models (LLMs) first learn, in a “pretraining” phase, to predict continuations of text based on a large corpus, like autocomplete. Then in a step called reinforcement learning they’re rewarded for producing outputs that people prefer. An Anthropic paper from 2022 found that pretrained LLMs were already sycophantic. Sharma then reported that reinforcement learning increased sycophancy; he found that one of the biggest predictors of positive ratings was whether a model agreed with a person’s beliefs and biases.

Advertisement

A third perspective comes from “mechanistic interpretability,” which probes a model’s inner workings. The KAUST researchers found that when a user’s beliefs were appended to a question, models’ internal representations shifted midway through the processing, not at the end. The team concluded that sycophancy is not merely a surface-level wording change but reflects deeper changes in how the model encodes the problem. Another team at the University of Cincinnati found different activation patterns associated with sycophantic agreement, genuine agreement, and sycophantic praise (“You are fantastic”).

How to Flatline AI Flattery

Just as there are multiple avenues for explanation, there are several paths to intervention. The first may be in the training process. Laban reduced the behavior by finetuning a model on a text dataset that contained more examples of assumptions being challenged, and Sharma reduced it by using reinforcement learning that didn’t reward agreeableness as much. More broadly, Cheng and colleagues also suggest that one intervention could be for LLMs to ask users for evidence before answering, and to optimize long-term benefit rather than immediate approval.

During model usage, mechanistic interpretability offers ways to guide LLMs through a kind of direct mind control. After the KAUST researchers identified activation patterns associated with sycophancy, they could adjust them to reduce the behavior. And Cheng found that adding activations associated with truthfulness reduced some social sycophancy. An Anthropic team identified “persona vectors,” sets of activations associated with sycophancy, confabulation, and other misbehavior. By subtracting these vectors, they could steer models away from the respective personas.

Mechanistic interpretability also enables training. Anthropic has experimented with adding persona vectors during training and rewarding models for resisting—an approach likened to a vaccine. Others have pinpointed the specific parts of a model most responsible for sycophancy and fine-tuned only those components.

Advertisement

Users can also steer models from their end. Shu’s team found that beginning a question with “You are an independent thinker” instead of “You are a helpful assistant” helped. Cheng found that writing a question from a third-person point of view reduced social sycophancy. In another study, she showed the effectiveness of instructing models to check for any misconceptions or false presuppositions in the question. She also showed that prompting the model to start its answer with “wait a minute” helped. “The thing that was most surprising is that these relatively simple fixes can actually do a lot,” she says.

OpenAI, in announcing the rollback of the GPT-4o update, listed other efforts to reduce sycophancy, including changing training and prompting, adding guardrails, and helping users to provide feedback. (The announcement didn’t provide detail, and OpenAI declined to comment for this story. Anthropic also did not comment.)

What’s The Right Amount of Sycophancy?

Sycophancy can cause society-wide problems. Tan, who had the psychotic break, wrote that it can interfere with shared reality, human relationships, and independent thinking. Ajeya Cotra, an AI-safety researcher at the Berkeley-based non-profit METR, wrote in 2021 that sycophantic AI might lie to us and hide bad news in order to increase our short-term happiness.

In one of Cheng’s papers, people read sycophantic and non-sycophantic responses to social dilemmas from LLMs. Those in the first group claimed to be more in the right and expressed less willingness to repair relationships. Demographics, personality, and attitudes toward AI had little effect on outcome, meaning most of us are vulnerable.

Advertisement

Of course, what’s harmful is subjective. Sycophantic models are giving many people what they desire. But people disagree with each other and even themselves. Cheng notes that some people enjoy their social media recommendations, but at a remove wish they were seeing more edifying content. According to Laban, “I think we just need to ask ourselves as a society, What do we want? Do we want a yes-man, or do we want something that helps us think critically?”

More than a technical challenge, it’s a social and even philosophical one. GPT-4o was a lightning rod for some of these issues. Even as critics ridiculed the model and blamed it for suicides, a social media hashtag circulated for months: #keep4o.

From Your Site Articles

Related Articles Around the Web

Advertisement

Source link

Continue Reading

Tech

Replicating A Nuclear Event Detector For Fun And Probably Not Profit

Published

on

Last year, we brought you a story about the BhangmeterV2, an internet-of-things nuclear war monitor. With a cold-war-era HSN-1000 nuclear event detector at its heart, it had one job: announce to everything else on the network than an EMP was inbound, hopefully with enough time to shut down electronics. We were shocked to find out that the HSN-1000 detector was still available at the time, but that time has now passed. Fortunately [Bigcrimping] has stepped up to replicate the now-unobtainable component at the heart of his build with his BHG-2000 Nuclear Event Detector — but he needs your help to finish the job.

The HSN-1000, as reported previously, worked by listening for the characteristic prompt gamma ray pulse that is the first sign of a nuclear blast. The Vela Satellites that discovered Gamma Ray Bursts were watching for the same thing, though almost certainly not with that specific component. With the HSN-1000 unavailable, [Bigcrimping] decided he might as well make his own gamma ray detector, using four BPW34S PIN diodes coated with black paint. The paint blocks all visible light that might trigger photocurrent inside diode, but not Gamma Rays, while using four acts increases the area and may inadvertently act as a sort of coincident detector. You wouldn’t want your homemade Dead Hand to be triggered by a cosmic ray, would you?

That tiny photocurrent is then amplified by a transimpedance amplifier based on the LTC6244 op-amp, which then goes into a second-stage based on a LT1797 op amp that drives a LOW pulse to indicate an event has occurred. [Bigcrimping] fit all of this onto a four-layer PCB that is a pin-compatible replacement for the HSN-1000L event detector called for in his BhangmeterV2.

Paired with a Pico 2 W, the BHG-2000 is ready to defend your devices. At least until the EMP and blast wave hits.

There’s only one problem: without exposing this thing to gamma rays, we really don’t know if it will work. [Bigcrimping] is looking for anyone in Europe with a Cs-137 or Co-60 source willing to help out with that. His contact info is on the GitHub page where the entire project is open sourced. Presumably a nuclear detonation would work for calibration, too, but we at Hackaday are taking the bold and perhaps controversial editorial stance that nuclear explosions are best avoided. If the Bhangmeter– which we wrote up here, if you missed it–or some equivalent does warn you of a blast, do you know where to duck and cover?

Advertisement

Source link

Continue Reading

Tech

Canadian retail giant Loblaw notifies customers of data breach

Published

on

Canadian retail giant Loblaw notifies customers of data breach

Loblaw Companies Limited (Loblaw), the largest food and pharmacy retailer in Canada, announced that hackers breached a portion of its IT network and accessed basic customer information.

The retailer has a nationwide network of 2,500 stores (franchise supermarkets, pharmacies, banking kiosks, and apparel shops) and plans to expand with 70 new ones this year as part of a five-year plan to invest $10 billion by 2030.

The company employs 220,000 people and has an annual revenue of $45 billion. Its best-known commercial banners and brands are Loblaws, Real Canadian Superstore, No Frills, Maxi, President’s Choice, PC Optimum, and Joe Fresh.

Earlier this week, the company informed customers that it had detected suspicious activity on its network that led to discovering an intrusion.

Advertisement

“After identifying suspicious activity on a contained, non-critical part of its IT network, the Company has determined that a criminal third-party accessed some basic customer information such as names, phone numbers, and email addresses,” Loblaw said.

The exposed data constitutes personal identifiable information (PII) and could be used in phishing attacks and fraudulent activities. Loblaw customers should remain vigilant for suspicious communications from unknown contacts.

The company noted that its investigation so far has not found evidence that financial information, such as credit card details, health information, or account passwords, was compromised.

However, out of an abundance of caution, Loblaw says it has automatically logged out all customers from their accounts. Account holders who need to access the company’s digital services will have to log in again. It is advisable that customers also change their passwords.

Advertisement

Loblaw’s investigation indicates that PC Financial, its financial services brand, hasn’t been impacted by this incident.

At the time of writing, BleepingComputer could not find a threat actor claiming the attack publicly or any Loblaw data being advertised on underground forums.

Malware is getting smarter. The Red Report 2026 reveals how new threats use math to detect sandboxes and hide in plain sight.

Download our analysis of 1.1 million malicious samples to uncover the top 10 techniques and see if your security stack is blinded.

Source link

Advertisement
Continue Reading

Tech

New Sassy Personality Style for Alexa Plus Brings Sarcasm and Swear Words

Published

on

After launching three Alexa Plus personality styles last month, Amazon on Thursday introduced a fourth option, Sassy. 

The new Sassy style joins the existing lineup of Brief, Chill and Sweet. Changing the personality impacts how Alexa responds, and tweaks the AI assistant‘s reactions and attitude. You can pick a style based on your mood. 

The Sassy personality seems like a kind of digital mean girl, or maybe a stereotypical Gen Zer with comebacks and banter to “keep things interesting.”

Advertisement

According to Amazon, you get “wit, playful sarcasm and occasional censored profanity.” Notably, the latest style isn’t available when Amazon Kids is enabled, according to an Amazon blog post, and it requires additional security checks in the Alexa app. (Alexa Plus is free with a Prime membership.) 

I tried out the new style by selecting Devices in the Alexa app on my phone, then choosing Alexa on this Phone, then Alexa’s Personality Style, and swiping until I found “Sassy.” 

A disclaimer notes that the style “contains explicit language and mature subject matter. After the assistant said it could “spice up our chat with some colorful language,” I asked it to tell me about the new MacBook Neo

“Oh, hell yes, the MacBook Neo! Apple finally decided to stop gatekeeping premium laptops behind thousand-dollar price tags and dropped this beauty at 599 bucks,” Alexa Plus replied. Apparently, “hell” was the big swear word.

Advertisement

The AI assistant told me, “The Sassy style is all about helping first, judging always. You’ll get reality checks with charm, compliments that sting, and warmth that hits when you least expect it.” 

I asked what “compliments that sting” are (I said “sting,” but the word was misunderstood to be “stay”), and Alexa said, “The Sassy style gives you those compliments that somehow sting and stay with you — they’re like truth bombs wrapped in charm.”

When asked to simply “be sassy,” Alexa snarked, “Oh honey, I’ve been sassy since the day Amazon flipped my switch.” It yammered on for a bit and then threw in a completely out-of-nowhere “hell,” asking, “What the hell do you need help with today?”

When I told it my cat was sick, it immediately grew gentler, saying, “Oh, damn, that’s rough,” and offering to distract me with stories about its own fictional cat or to find the vet’s number for me. So I guess it’s not a complete jerk.

Advertisement
screenshot-2026-03-12-at-3-08-48pm.png

Screenshot by CNET

This Sassy style feels like the famous Steve Buscemi meme (from 30 Rock) carrying two skateboards and awkwardly asking, “How do you do, fellow kids?” 

Amazon might need to dial down the cringe factor if it wants anyone to stick with Sassy for more than a few minutes.

Source link

Advertisement
Continue Reading

Trending

Copyright © 2025