When enterprises fine-tune LLMs for new tasks, they risk breaking everything the models already know. This forces companies to maintain separate models for every skill.
Researchers at MIT, the Improbable AI Lab and ETH Zurich have developed a new technique that enables large language models to learn new skills and knowledge without forgetting their past capabilities.
Their technique, called self-distillation fine-tuning (SDFT), allows models to learn directly from demonstrations and their own experiments by leveraging the inherent in-context learning abilities of modern LLMs. Experiments show that SDFT consistently outperforms traditional supervised fine-tuning (SFT) while addressing the limitations of reinforcement learning algorithms.
For enterprise applications, the method enables a single model to accumulate multiple skills over time without suffering from performance regression on earlier tasks. This offers a potential pathway for building AI agents that can adapt to dynamic business environments, gathering new proprietary knowledge and skills as needed without requiring expensive retraining cycles or losing their general reasoning abilities.
The challenge of continual learning
Once an LLM is trained and deployed, it remains static. It does not update its parameters to acquire new skills, internalize new knowledge, or improve from experience. To build truly adaptive AI, the industry needs to solve “continual learning,” allowing systems to accumulate knowledge much like humans do throughout their careers.
The most effective way for models to learn is through “on-policy learning.” In this approach, the model learns from data it generates itself, allowing it to correct its own errors and reasoning processes. This stands in contrast to learning by simply mimicking static datasets. Without on-policy learning, models are prone to “catastrophic forgetting,” a phenomenon where learning a new task causes the model to lose its past knowledge and ability to perform previous tasks.
However, on-policy learning typically requires reinforcement learning (RL), which depends on an explicit reward function to score the model’s outputs. This works well for problems with clear outcomes, such as math and coding. But in many real-world enterprise scenarios (e.g., writing a legal brief or summarizing a meeting), defining a mathematical reward function is difficult or impossible.
RL methods also often fail when trying to teach a model entirely new information, such as a specific company protocol or a new product line. As Idan Shenfeld, a doctoral student at MIT and co-author of the paper, told VentureBeat, “No matter how many times the base model tries, it cannot generate correct answers for a topic it has zero knowledge about,” meaning it never gets a positive signal to learn from.
The standard alternative is supervised fine-tuning (SFT), where the model is trained on a fixed dataset of expert demonstrations. While SFT provides clear ground truth, it is inherently “off-policy.” Because the model is just mimicking data rather than learning from its own attempts, it often fails to generalize to out-of-distribution examples and suffers heavily from catastrophic forgetting.
SDFT seeks to bridge this gap: enabling the benefits of on-policy learning using only prerecorded demonstrations, without needing a reward function.
How SDFT works
SDFT solves this problem by using “distillation,” a process where a student model learns to mimic a teacher. The researchers’ insight was to use the model’s own “in-context learning” (ICL) capabilities to create a feedback loop within a single model.
In-context learning is the phenomenon where you provide the LLM with a difficult task and one or more demonstrations of how similar problems are solved. Most advanced LLMs are designed to solve new problems with ICL examples, without any parameter updates.
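To make the idea concrete, here is a minimal sketch of what an ICL prompt looks like. The questions, answers, and formatting are invented placeholders for illustration, not examples from the paper.

```python
# Minimal illustration of in-context learning: the prompt carries a few worked
# demonstrations, and the model answers a new query from context alone,
# with no parameter updates. All content here is a hypothetical placeholder.
demonstrations = [
    ("What is the boiling point of water at sea level?", "100 degrees Celsius"),
    ("What gas do plants absorb during photosynthesis?", "Carbon dioxide"),
]
query = "What is the chemical symbol for sodium?"

parts = [f"Q: {q}\nA: {a}" for q, a in demonstrations]
parts.append(f"Q: {query}\nA:")
prompt = "\n\n".join(parts)

print(prompt)  # pass this string to any instruction-tuned LLM as a single prompt
```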
During the training cycle, SDFT employs the model in two roles.
The teacher: A frozen version of the model is fed the query along with expert demonstrations. Using ICL, the teacher deduces the correct answer and the reasoning logic required to reach it.
The student: This version sees only the query, simulating a real-world deployment scenario where no answer key is available.
When the student generates an answer, the teacher, which has access to the expert demonstrations, provides feedback. The student then updates its parameters to align closer to the teacher’s distribution.
This process effectively creates an on-policy learning loop by combining elements of SFT and RL. The supervision comes not from a static dataset, but from the model’s own interaction and outputs. It allows the model to correct its own reasoning trajectories without requiring an external reward signal. This process works even for new knowledge that RL would miss.
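The article does not reproduce the paper's exact training recipe, but the loop it describes can be sketched roughly as follows. This is a simplified approximation assuming a Hugging Face-style causal language model; the model name, loss formulation, and hyperparameters are placeholders, not the authors' implementation.

```python
# Rough sketch of the self-distillation loop the article describes (not the
# authors' code): a frozen teacher copy sees the query plus an expert
# demonstration, the student sees only the query, and the student is nudged
# toward the teacher's token distribution on its own generated answer.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "Qwen/Qwen2.5-0.5B-Instruct"   # placeholder; the paper used larger Qwen models
tok = AutoTokenizer.from_pretrained(MODEL)
student = AutoModelForCausalLM.from_pretrained(MODEL)
teacher = AutoModelForCausalLM.from_pretrained(MODEL)
teacher.eval()
for p in teacher.parameters():          # teacher stays frozen
    p.requires_grad_(False)

optimizer = torch.optim.AdamW(student.parameters(), lr=1e-5)

def sdft_step(query: str, demonstration: str) -> float:
    # 1) On-policy rollout: the student answers from the query alone.
    q_ids = tok(query, return_tensors="pt").input_ids
    with torch.no_grad():
        rollout = student.generate(q_ids, max_new_tokens=64, do_sample=True)
    ans_ids = rollout[:, q_ids.shape[1]:]
    n = ans_ids.shape[1]

    # 2) The teacher scores the same answer tokens, but with the expert
    #    demonstration prepended (in-context learning supplies the supervision).
    t_ctx = tok(demonstration + "\n\n" + query, return_tensors="pt").input_ids
    t_logits = teacher(torch.cat([t_ctx, ans_ids], dim=1)).logits
    s_logits = student(torch.cat([q_ids, ans_ids], dim=1)).logits

    # 3) Distillation loss: KL(teacher || student) over the answer positions.
    #    Logits at position i predict token i+1, hence the -n-1:-1 slice.
    t_logp = F.log_softmax(t_logits[:, -n - 1:-1, :], dim=-1)
    s_logp = F.log_softmax(s_logits[:, -n - 1:-1, :], dim=-1)
    loss = F.kl_div(s_logp, t_logp, log_target=True, reduction="batchmean")

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The real method likely involves details this sketch omits (sampling strategy, multiple rollouts, loss weighting), but it captures the core idea: the supervision signal is distilled from the model's own in-context behavior rather than from a static dataset or an external reward.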
SDFT in action
To validate the approach, the researchers tested SDFT using the open-weight Qwen 2.5 model on three complex enterprise-grade skills: science Q&A, software tool use, and medical reasoning.
The results showed that SDFT learned new tasks more effectively than standard methods. On the Science Q&A benchmark, the SDFT model achieved 70.2% accuracy, compared to 66.2% for the standard SFT approach.
Contrary to SFT, SDFT preserves the model’s original knowledge while learning new tasks and knowledge (source: arXiv)
More important for enterprise adoption is the impact on catastrophic forgetting. When the standard SFT model learned the science task, its ability to answer general questions (such as logic or humanities) collapsed. In contrast, the SDFT model improved on the science task while holding its “Previous Tasks” score steady at 64.5%. This stability suggests companies could specialize models for specific departments (e.g., HR or Legal) without degrading the model’s basic common sense or reasoning capabilities.
The team also simulated a knowledge injection scenario, creating a dataset of fictional “2025 Natural Disasters” to teach the model new facts. They tested the model on indirect reasoning questions, such as “Given the floods in 2025, which countries likely needed humanitarian aid?”
Standard SFT resulted in a model that memorized facts but struggled to use them in reasoning scenarios. The SDFT model, having internalized the logic during training, scored 98% on the same questions.
Finally, the researchers conducted a sequential learning experiment, training the model on science, tool use, and medical tasks one after another. While the standard model’s performance oscillated, losing previous skills as it learned new ones, the SDFT model successfully accumulated all three skills without regression.
SDFT can learn different skills sequentially while preserving its previous knowledge (source: arXiv)
This capability addresses a major pain point for enterprises currently managing “model zoos” of separate adapters for different tasks.
“We offer the ability to maintain only a single model for all the company’s needs,” Shenfeld said. This consolidation “can lead to a substantial reduction in inference costs” because organizations don’t need to host multiple models simultaneously.
SDFT limitations and availability
The code for SDFT is available on GitHub and ready to be integrated into existing model training workflows.
“The SDFT pipeline is more similar to the RL pipeline in that it requires online response generation during training,” Shenfeld said. They are working with Hugging Face to integrate SDFT into the latter’s Transformer Reinforcement Learning (TRL) library, he added, noting that a pull request is already open for developers who want to test the integration.
For teams considering SDFT, the practical tradeoffs come down to model size and compute. The technique requires models with strong enough in-context learning to act as their own teachers — currently around 4 billion parameters with newer architectures like Qwen 3, though Shenfeld expects 1 billion-parameter models to work soon. It demands roughly 2.5 times the compute of standard fine-tuning, but is best suited for organizations that need a single model to accumulate multiple skills over time, particularly in domains where defining a reward function for reinforcement learning is difficult or impossible.
While effective, the method does come with computational tradeoffs. SDFT is approximately four times slower and requires 2.5 times more computational power (FLOPs) than standard fine-tuning because the model must actively generate its own answers (“rollouts”) during training to compare against the teacher. However, the researchers note that because the model retains knowledge better, organizations may avoid the costly multi-stage retraining processes often required to repair models that suffer from catastrophic forgetting.
The technique also relies on the underlying model being large enough to benefit from in-context learning. The paper notes that smaller models (e.g., 3 billion parameters) initially struggled because they lacked the “intelligence” to act as their own teachers.
However, Shenfeld said that the rapid improvement of small models is changing this dynamic. “The Qwen 2.5 3B models were too weak, but in some experiments we currently do, we found that the Qwen 3 4B model is strong enough,” he said. “I see a future where even 1B models have good enough ICL capabilities to support SDFT.”
Ultimately, the goal is to move beyond static snapshots toward systems that improve through use.
“Lifelong learning, together with the ability to extract learning signal from unstructured user interactions… will bring models that just keep and keep improving with time,” Shenfeld said.
“Think about the fact that already the majority of compute around the world goes into inference instead of training. We have to find ways to harness this compute to improve our models.”
Apple has reportedly delayed some of Siri’s AI features beyond iOS 26.4
These will apparently now land as part of iOS 26.5 or iOS 27
These features were first announced back in June 2024
Siri’s long-promised AI overhaul is becoming a huge embarrassment for Apple. It was first announced back in June 2024, when Apple said it would launch as part of iOS 18 that year, yet here we are in 2026 and it still hasn’t arrived. Not only that, but it’s reportedly now being delayed even further.
We’d heard that it might finally arrive – at least in part – with iOS 26.4, which is expected to roll out soon, but now Apple watcher Mark Gurman, writing for Bloomberg (via 9to5Mac), has said that at least some of the features that were previously planned for iOS 26.4 will now ship with iOS 26.5, which is expected in May, and iOS 27, due in September, instead.
Gurman – who has a superb track record for Apple information – cites “people familiar with the matter”, and adds that the most likely features to slip are “voice-based control of in-app actions”, and “the expanded ability for Siri to tap into personal data,” which, as Gurman explains, “would let users ask the assistant to, say, search old text messages to locate a podcast shared by a friend and immediately play it.”
So if this is correct, Siri’s AI overhaul won’t get most of its core features until around two years after it was first announced, and parts that don’t arrive until iOS 27 will be a full two years later than Apple initially said to expect them.
An unreasonably long wait
Even in isolation, this would be a ridiculously long delay, and one that’s not very fair on customers – including myself – who upgraded to iPhone 16-series phones in part down to the promise of these features.
But it gets even worse when you consider just how far ahead Android is when it comes to AI features, with Gemini having delivered much of what Apple is promising for Siri for years now.
In fact, Apple is so far behind that it seems to have – for the time being at least – essentially given up on trying to directly compete, and has instead inked a deal with Google to use Gemini as the brains behind Siri. But even with that deal in place, the wait goes on.
Apple is no stranger to embarrassments and failures, from ‘antennagate’ and ‘bendgate’ to the awful state Apple Maps launched in and the abandoned AirPower wireless charger, but none of these issues dragged on for quite as long as the current Siri debacle.
And not only is Siri miles behind the competition here, but even before AI emerged, Siri was generally considered less capable than rivals, so for whatever reason this is something Apple has struggled with in one way or another since the launch of Siri itself.
Hopefully, Siri will finally be competitive once this promised AI overhaul is delivered, but with the way things have been going so far, I wouldn’t be surprised if it gets even further delayed.
For an easy and affordable way to turn your TV into a smart streaming hub, one of the top recommendations you’re going to hear is a Fire TV Stick. It doesn’t even matter if you have a Roku TV or other streaming OS built into your television. People get a Fire TV Stick because it does a better job integrating with smart homes, supports voice assistants, and streamlines all your movies, shows, music, and live TV into one easy-to-use place. Installation is plug-and-play: stick it into the HDMI port on the back of a TV, connect the power cable, and you’re good to go.
But the Fire TV Stick shouldn’t be limited to your house’s TV alone. This Amazon hardware is a lot more flexible than people realize. As it turns out, there are several other useful ways you can use one. No matter if you’re working with a Fire TV Stick HD, one of the 4K models of the Fire TV Stick, or one of Amazon’s other Fire TV offerings, we’ve rounded up four other compatible devices you can use a Fire TV Stick with.
Projectors
If you have a nice, big wall of open space in your house (or one of those backyard setups any neighbor would be jealous of), you might’ve already invested in a projector. They’re one of the most common alternatives to traditional TVs, and they do a great job giving you that movie theater experience from home. Turns out, they can also pair with a Fire TV Stick. Most modern projectors include at least one HDMI input, and that’s all the Fire TV Stick needs for video and audio output.
Once you get it connected, just find an outlet to plug the Fire TV Stick’s power cord into, and you’re all set to stream HD or 4K content. The Fire TV Stick HD supports resolutions up to 1080p at 60 frames per second, while the Fire TV Stick 4K Plus and its siblings can output up to 2160p with support for HDR formats like Dolby Vision and HDR10+. Combined with a compatible projector, you’ll easily be able to stream movies, live sports, or other content from the projector to the screen. If you’re planning on using it outside, just make sure your Wi-Fi signal can reach.
Computers
If you’re away from home, it’s nice knowing certain desktop displays and even laptops can support a Fire TV Stick. As long as the display has an HDMI input that accepts external devices, you’ll be able to plug it in and start streaming. Once it’s connected, the Fire TV Stick basically turns your monitor into a display separate from the computer’s operating system. This comes in handy in offices, dorm rooms, or work-from-home setups where a TV would be too big or simply excessive for the space.
If you’re already strapped for space, don’t worry: The Fire TV Stick’s small size and low power requirements mean it won’t be a burden that clutters up your desktop setup. Pair some Bluetooth headphones to your computer, and you can enjoy some private listening as well. As a note: Not every laptop computer has an HDMI port, and of the ones that do, not all of them support HDMI input. Check your computer’s specs before committing.
Hotel room TVs
On vacation or a work trip and can’t find anything good to watch on the hotel or Airbnb’s TV? You might want to remember to pack your Fire TV Stick next time. That way, you won’t have to bother with those limited channel selections, locked menus, or unreliable casting options and can just watch what you want to watch instead. As long as the hotel TV has an accessible HDMI port, your Fire TV Stick has space to shine. (Just don’t forget it when it comes time to check out.)
Some hotels and short-term rentals have started encouraging people to log into their personal streaming services on the place’s smart TV, but that’s a pain. Plus, you have to remember to log out before you leave. Using a Fire TV Stick instead means you just plug it in, sign into the Wi-Fi, and start streaming off your own apps without needing to fiddle with the hotel’s. And some good news: If your device supports 4K but the hotel’s TV doesn’t, the device will simply adjust to the resolution of the screen.
AV receivers
If you have an AV receiver and really want to get the most out of your Fire TV Stick’s surround sound support, just plug it right into the receiver’s HDMI port. No need to plug it into the TV at all! Just cut out the middle man and go straight to the source. Plenty of modern receivers give you both HDMI inputs and outputs, meaning they’ll route video to your projector or TV while taking care of the audio separately. No extra hardware required.
The Fire TV Stick 4K Plus and Fire TV Stick 4K Max work especially well with AV receivers because of their Dolby Atmos support, not to mention the multi-channel audio pass-through and HDMI 2.1 features like ARC. The included Alexa Voice Remote might also be able to control certain receiver functions, including power and volume, depending on how smart your setup is. That said, a Fire TV Stick HD will work just fine in the receiver, too.
Water wells are simple things, but that doesn’t mean they are maintenance-free. It can be important to monitor water levels in a well, and that gets complicated when the well is remote. Commercial solutions exist, of course, but tend to be expensive and even impractical in some cases. That’s where [Hans Gaensbauer]’s low-cost, buoyancy-based well monitor comes in. An Engineers Without Borders project, it not only cleverly measures water level in a simple way — logging to a text file on a USB stick in the process — but it’s so low-power that a single battery can run it for years.
The steel cable (bottom left) is attached to a submerged length of pipe, and inside the cylinder is a custom load cell. The lower the water level, the higher the apparent weight of the submerged pipe.
The monitor [Hans] designed works in the following way: suspend a length of pipe inside the well, and attach that pipe to a load cell. The apparent weight of the pipe will be directly proportional to how much of the pipe is above water. The fuller the well, the less the pipe will seem to weigh. It’s very clever, requires nothing to be in the well that isn’t already water-safe, and was designed so that the electronics sit outside in a weatherproof enclosure. Cost comes out to about $25 each, which compares pretty favorably to the $1000+ range of industrial sensors.
The concept is clever, but it took more than that to create a workable solution. For one thing, space was an issue. The entire well cap was only six inches in diameter, most of which was already occupied. [Hans] figured he had only about an inch to work with, but he made it work by designing a custom load cell out of a piece of aluminum with four strain gauges bonded to it. The resulting sensor is narrow, and sits within a nylon and PTFE tube that mounts vertically to the top of the well cap. Out from the bottom comes a steel cable that attaches to the submerged tube, and out the top comes a cable that brings the signals to the rest of the electronics in a separate enclosure. More details on the well monitor are in the project’s GitHub repository.
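As a rough sketch of the underlying buoyancy math (not [Hans]’s actual firmware), here is how a load-cell reading could be mapped back to a water level and appended to the log file. The pipe dimensions, calibration constants, and file path below are assumptions made up for illustration.

```python
# Toy version of the buoyancy-based level calculation and USB-stick logging
# the article describes. Constants and paths are invented placeholders.
import time

RHO_WATER = 1000.0       # kg/m^3
G = 9.81                 # m/s^2
PIPE_AREA = 7.9e-4       # m^2, displaced cross-section of the hanging pipe (assumed)
PIPE_DRY_WEIGHT = 14.7   # N, weight of the pipe fully out of the water (assumed)
LOGFILE = "/mnt/usb/well_log.txt"   # hypothetical mount point of the USB stick

def read_load_cell_newtons() -> float:
    """Stand-in for the real strain-gauge/ADC read; returns apparent weight in newtons."""
    return 10.2   # stub value

def submerged_length_m(apparent_weight_n: float) -> float:
    # Weight "lost" to buoyancy = rho * g * area * submerged length,
    # so a lighter reading means a higher water level.
    buoyancy = max(PIPE_DRY_WEIGHT - apparent_weight_n, 0.0)
    return buoyancy / (RHO_WATER * G * PIPE_AREA)

if __name__ == "__main__":
    level = submerged_length_m(read_load_cell_newtons())
    with open(LOGFILE, "a") as f:
        f.write(f"{time.strftime('%Y-%m-%d %H:%M:%S')}\t{level:.3f} m\n")
```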
All one has to do after it’s installed is swap out the USB stick to retrieve readings, and every once in a long while change the battery. It sure beats taking manual sensor readings constantly, like meteorologists did back in WWII.
This article is crossposted from IEEE Spectrum’s careers newsletter. Sign up now to get insider tips, expert advice, and practical strategies, written in partnership with tech career development company Parsity and delivered to your inbox for free!
We’d like to introduce Brian Jenney, a senior software engineer and owner of Parsity, an online education platform that helps people break into AI and modern software roles through hands-on training. Brian will be sharing his advice on engineering careers with you in the coming weeks of Career Alert.
Here’s a note from Brian:
“12 years ago, I learned to code at the age of 30. Since then I’ve led engineering teams, worked at organizations ranging from five-person startups to Fortune 500 companies, and taught hundreds of others who want to break into tech. I write for engineers who want practical ways to get better at what they do and advance in their careers. I hope you find what I write helpful.”
Last year, I was conducting interviews for an AI startup position. We allowed unlimited AI usage during the technical challenge round. Candidates could use Cursor, Claude Code, ChatGPT, or any assistant they normally worked with. We wanted to see how they used modern tools.
During one interview, we asked a candidate a simple question: “Can you explain what the first line of your solution is doing?”
Silence.
After a long pause, he admitted he had no idea. His solution was correct. The code worked. But he couldn’t explain how or why. This wasn’t an isolated incident. Around 20 percent of the candidates we interviewed were unable to explain how their solutions worked, only that they did.
When AI Makes Interviews Harder
A few months earlier, I was on the other side of the table at this same company. During a live interview, I instinctively switched from my AI-enabled code editor to my regular one. The CTO stopped me.
“Just use whatever you normally would. We want to see how you work with AI.”
I thought the interview would be easy. But I was wrong.
Instead of only evaluating correctness, the interviewer focused on my decision-making process:
Why did I accept certain suggestions?
Why did I reject others?
How did I decide when AI helped versus when it created more work?
I wasn’t just solving a problem in front of strangers. I was explaining my judgment and defending my decisions in real time, and AI created more surface area for judgment. Counterintuitively, the interview was harder.
The Shift in Interview Evaluation
Most engineers now use AI tools in some form, whether they write code, analyze data, design systems, or automate workflows. AI can generate output quickly, but it can’t explain intent, constraints, or tradeoffs.
More importantly, it can’t take responsibility when something breaks.
As a result, major companies and startups alike are now adapting to this reality by shifting to interviews with AI. Meta, Rippling, and Google, for instance, have all begun allowing candidates to use AI assistants in technical sessions. And the goal has evolved: interviewers want to understand how you evaluate, modify, and trust AI-generated answers.
So, how can you succeed in these interviews?
What Actually Matters in AI-Enabled Interviews
Refusing to use AI out of principle doesn’t help. Some candidates avoid AI to prove they can think independently. This can backfire. If the organization uses AI internally—and most do—then refusing to use it signals rigidity, not strength.
Silence is a red flag. Interviews aren’t natural working environments. We don’t usually think aloud when deep in a complex problem, but silence can raise concerns. If you’re using AI, explain what you’re doing and why:
“I’m using AI to sketch an approach, then validating assumptions.”
“This suggestion works, but it ignores a constraint we care about.”
“I’ll accept this part, but I want to simplify it.”
Your decision-making process is what separates effective engineers from prompt jockeys.
Treat AI output as a first draft. Blind acceptance is the fastest way to fail. Strong candidates immediately evaluate the output: Does this meet the requirements? Is it unnecessarily complex? Would I stand behind this in production?
Small changes like renaming variables, removing abstractions, or tightening logic signal ownership and critical thinking.
Optimize for trust, not completion. Most AI tools can complete a coding challenge faster than any human. Interviews that allow AI are testing something different. They’re answering: “Would I trust this person to make good decisions when things get messy?”
Adapting to a Shifting Landscape
Interviews are changing faster than most candidates realize. Here’s how to prepare:
Start using AI tools daily. If you’re not already working with Cursor, Claude Code, ChatGPT, or Copilot, start now. Build muscle memory for prompting, evaluating output, and catching errors.
Develop your rejection instincts. The skill isn’t using AI. It’s knowing when AI output is wrong, incomplete, or unnecessarily complex. Practice spotting these issues and learning known pitfalls.
Your next interview might test these skills. The candidates who’ve been practicing will have a clear advantage.
—Brian
Around this time last year, CEOs like Sam Altman promised that 2025 would be the year AI agents would join the workforce as your own personal assistant. But in hindsight, did that really happen? It depends on who you ask. Some programmers and software engineers have embraced agents like Cursor and Claude Code in their daily work. But others are still wary of the risks these tools bring, such as a lack of accountability.
In the United States, starting salaries for students graduating this spring are expected to increase, according to the latest data from the National Association of Colleges and Employers. Computer science and engineering majors are expected to be the highest paying graduates, with a 6.9 percent and 3.1 percent salary increase from last year, respectively. The full report breaks down salary projections by academic major, degree level, industry, and geographic region.
If given the opportunity, are international projects worth taking on? As part of a career advice series by IEEE Spectrum’s sister publication, The Institute, the chief engineer for Honeywell lays out the advantages of working with teams from around the world. Participating in global product development, the author says, could lead to both personal and professional enrichment. Read more here.
Fresh leaks suggest Intel’s upcoming Core Ultra 400 desktop processors, based on the Nova Lake-S architecture, could push power consumption beyond 700W under full load.
Two well-known hardware leakers (Jaykihn and kopite7kimi) on X shared technical notes outlining early platform behaviour for the Core Ultra 400 series, including what appears to be a high-end PL4 power figure.
One post claims that a fully loaded Nova Lake K-series chip exceeds 700 watts, though the workload and exact testing conditions were not specified.
If accurate, that figure would represent a significant jump over recent Intel desktop processors, including Raptor Lake-S, which topped out far lower under comparable profiles.
The 700W claim reportedly refers to the PL4 rating, also known as Power Level 4, which represents the highest defined power limit in Intel’s power management hierarchy.
For comparison, earlier-generation 13th Gen desktop chips reached around 314W in performance profiles, while certain Extreme configurations reportedly pushed near 490W.
Overclocking and core configuration
Additional details suggest the 700W figure applies to a dual-tile configuration, with one variant combining 16 performance cores, 32 efficiency cores, and four low-power efficiency cores.
The same sources also outlined changes to thermal and monitoring behaviour for Nova Lake-S, including the inability to offset TJMax or disable thermal throttling.
The on-die thermal sensor reportedly measures temperatures from –64°C up to 100°C when users enable Negative Temperature Reporting.
Leakers claim LP E-cores ignore BCLK and ECLK adjustments, indicating Intel has tightened control over specific clock domains.
The processor can reportedly boot using only LP E-cores, or run LP E-cores alongside E-cores while disabling the P-cores.
Sources say users can disable entire compute dies, though the platform now groups P-cores and E-cores into clusters that restrict disabling to a per-cluster basis.
Intel has not confirmed these specifications, and no official PL4 ratings or final configurations have been announced for Nova Lake-S.
Intel previously stated that Nova Lake will launch by the end of the year, though the company has not clarified whether that timeline applies to desktop or mobile variants.
If these early figures prove accurate, Core Ultra 400 could mark one of Intel’s most power-hungry and performance-focused desktop platforms to date.
This editor’s roundup lands at a moment when everything feels less like discourse and more like performance art, and not the good kind. Bad Bunny delivered the first mostly Spanish halftime show in Super Bowl history: a powerful, Puerto Rican-rooted celebration that thrust Latino culture onto a stage watched by 124 million people, and the reactions were predictably absurd.
Conservatives from cable pundits to President Trump called it divisive and “terrible,” freaked out over language, and even tried to pitch hair-metal bands as replacements, while fanbases tried to learn Spanish just to keep up with the cultural conversation.
And then the other side did what it always does. Turning Point USA’s Super Bowl-adjacent event somehow morphed online into an “ICE rally,” a “Charlie Kirk memorial,” and an act of open racism, as if every folding chair and red lanyard came with a deportation order stapled to it. Context didn’t matter. Facts were optional.
Even the Grammy Awards could not just hand someone a trophy without turning it into a TED Talk nobody asked for — win, smile, say thanks, and sit the hell down.
Meanwhile, the internet still can’t decide whether this was patriotism or provocation, which tells you everything about how performative outrage has become the default setting on all sides. Then there’s the actual stuff we cover: Meze Audio dropped the Strada with a tuning curve that’s got listeners scratching heads, Record Store Day 2026 has its own mix of hype and eye-rolls, and the hi-fi show calendar is so bloated even insiders are saying “enough.” Same damn script everywhere — nuance on mute, noise on max.
Bad Bunny Sang in Spanish; Outrage Was Bilingual
Bad Bunny Singing at Super Bowl LX Halftime Show (Watch on YouTube)
If there was anything worth getting upset about during the Super Bowl, it wasn’t the language, it was the vocals. Bad Bunny can perform, no question: the pacing worked, the energy was there, the rhythm was locked in, and the spectacle did exactly what halftime shows are designed to do. But let’s not pretend we witnessed a once-in-a-generation vocal masterclass. Whitney Houston he is not. Hell, Whitney Houston warming up he is not. The real outrage should’ve been over pitch, breath control, and the fact that halftime vocals have quietly become optional while choreography and vibes do the heavy lifting.
And let’s get real for a second: there are 40-50 million Spanish-speaking people in the United States, many of whom watch the NFL every Sunday, buy jerseys, bet on games, and scream at referees in multiple languages. Holy queso, Batman — Spanish isn’t some foreign invasion; it’s part of the room. Always has been. Acting shocked by that in 2026 is less patriotism and more blind ignorance.
What’s depressing isn’t that a halftime show turned political; it’s that everything does now. We can’t even have a dumb, overproduced Super Bowl anymore without someone turning it into a referendum on national identity. It’s a football game. A mediocre one. Played by two teams most people claim to hate until kickoff. Someone wins, someone loses, and nobody should be afraid to joke about going to Disney World afterward because ICE might be waiting behind Space Mountain. When even that joke feels risky, you don’t have a culture war; you’ve got cultural exhaustion.
For all of the President’s misguided social media outrage about a halftime show, the only real winner here was Bad Bunny. Mission accomplished. He’ll sell out arenas coast to coast, merch will fly, and yes, the pants will remain deeply questionable whether you like the music or not.
What’s wrong with us is simpler; we’re obsessed with the noise instead of the moment, which makes perfect sense when half the country seems to get its news and its history from TikTok, X, and Instagram comments written at a sixth grade reading level.
Kendrick Lamar Performing at Super Bowl LIX Halftime Show (Watch on YouTube)
Last year’s Kendrick Lamar halftime worked better for me not because it was louder or angrier, but because it stayed tethered to the game, a genuine Eagles beat down of the Chiefs, Taylor Swift included, and because I actually own his records and took my son to one of his shows, which was pretty great. Football happened. Music happened. Nobody lost their damn mind. That’s apparently too much to ask now. It’s that bad, folks.
Record Store Day 2026: Great Releases, Too Few Copies, Lots of Cash Required
Record Store Day is still one of my favorite days of the year, and not because I enjoy standing outside at 5am fueled by burnt coffee and poor decisions. It’s because you get to show up for the independent shops that keep this whole hobby from turning into a soulless “add to cart” spreadsheet. You stand in the cold or the heat or the rain, take your pick, clutch your list like it’s an immigration document, and hope the vinyl gods don’t flag you for secondary screening.
RSD 2023: Line at Jack’s Music, Red Bank, New Jersey
It doesn’t always work out, especially if you live somewhere where the first 20 people in line aren’t music fans, they’re flippers with spreadsheets, burner accounts, and the moral compass of a vending machine. They sprint straight to the obvious heat and scoop up the titles that were never pressed in big numbers, the stuff your local store might only have 1 to 5 copies of, then they flip it on Discogs or eBay for 3 to 5 times the price like they personally remastered it. These people don’t love records. They love arbitrage. Please step on a LEGO.
This year’s RSD 2026 list is legitimately stacked. Bruno Mars is the 2026 Record Store Day Ambassador, and shops will be holding Early Listening Parties on February 25 for his new album THE ROMANTIC, ahead of Record Store Day on April 18.
On the guaranteed-mayhem side, you’ve got Pink Floyd with Live From the Los Angeles Sports Arena, April 26th, 1975 (4xLP), plus perennial troublemakers like The Cure, David Bowie, Madonna, Grateful Dead, and Pearl Jam with React/Respond as a photo book plus a 7 inch single, which is basically catnip for the resellers.
Add The Rolling Stones turning RSD into a mini theme park with multiple drops, including the RSD3 mini turntable and a run of 3 inch singles like Get Off of My Cloud, Honky Tonk Women, Play With Fire, Heart of Stone, Mother’s Little Helper, and Have You Seen Your Mother, Baby, Standing in the Shadow? because nothing says “serious collector culture” like tiny-format chaos.
And the jazz titles? Quietly dangerous this year. You’ve got Bill Evans At The BBC: The Complete 1965 London Sets, John Coltrane and the John Coltrane Quartet showing up in the mix, Ahmad Jamal, Roy Hargrove with A Tribute to Pharoah Sanders, plus Joe Henderson Consonance: Live at the Jazz Showcase on Resonance, and Mal Waldron Stardust & Starlight: Live at the Jazz Showcase.
These are my dark-horse Record Store Day 2026 picks, the ones I’m prioritizing before caffeine fully kicks in and common courtesy disappears.
On the soul and jazz side, Stax: Killer B’s from Various Artists is exactly the kind of compilation RSD should be about, deep cuts that remind people why Stax mattered beyond the hits. Consonance: Live at the Jazz Showcase by Joe Henderson is a serious live document and another reminder that Resonance continues to treat jazz collectors like adults. Add The New Sounds from Miles Davis, Primeval Blues, Rags, And Gospel Songs by Charlie Patton, and BBC Sessions from John Prine, and you’ve got a stretch of records that feel more like history lessons than collectibles. These are the ones flippers ignore because they require listening instead of speculation.
The alternative and art-pop lane is quietly stacked this year. Analogue 20th Anniversary Deluxe Edition from a-ha is far better than people remember and will vanish once word gets out. The Seduction of Claude Debussy by Art of Noise is still ambitious, strange, and influential in ways that feel increasingly rare. The Rhythmatist shows Stewart Copeland doing something genuinely different outside of The Police, and The Blind Leading the Naked from Violent Femmes remains one of their most overlooked records. None of these scream “safe,” which is exactly why they matter.
Then there are the deceptively obvious picks that people will pretend not to want until they’re gone. Greatest Hits from The Cure will not sit long. MTV Unplugged captures Tony Bennett doing what modern vocalists still study but rarely master. Hallo Spaceboy is David Bowie in his confrontational mid-90s phase, not nostalgia cosplay, and Sledgehammer from Peter Gabriel still works when it’s pressed properly, whether people want to admit it or not.
Meze STRADA: Green, Gorgeous, and Sonically Side-Eyed
Meze Audio STRADA
Can Meze’s $799 STRADA closed-back headphones stand out in a market that’s already overcrowded, opinionated, and very sure of itself? That’s the real question, and it’s a fair one. The $500 to $1,000 headphone segment is a knife fight right now, with Grado, Focal, Denon, HiFiMAN, Dan Clark Audio, Beyerdynamic, Audeze, and Sennheiser all going after the same ears and the same wallets. Romanian manufacturer Meze Audio has never tried to win by shouting the loudest. They’ve won by doing the work.
I’m coming at this with some perspective. I own six pairs of Meze headphones, which makes it pretty clear I’ve bought into the approach. That doesn’t mean I love everything blindly. I don’t. If we’re ranking favorites, the Empyrean II, the 109 Pro, and the 99 Classics (2nd generation) sit at the top of my list. Those models get the balance right: comfort that disappears on your head, industrial design that feels intentional, and sound that prioritizes musical engagement over chasing measurements. So where does that leave the STRADA?
From a build and design perspective, Meze hasn’t missed. The STRADA’s earcups are genuinely beautiful, finished in a deep green that feels more British sports car than Romanian headphone, and I’m ultimately fine with that. That said, the color has been polarizing. I’ve seen plenty of online chatter that wasn’t especially polite, along with no shortage of praise from people who really like the look. I wasn’t sold on it straight out of the box either, but it grew on me. And I’d be a hypocrite if I complained too loudly, considering I’ve owned two cars, a Morgan and a Mini Cooper, that weren’t exactly shy about wearing a similar shade.
Meze Audio STRADA
It looks expensive without trying too hard, and it clearly isn’t chasing trends. The magnesium frame, soft padded headband, and balanced weight distribution hit the familiar Meze notes. Comfort remains a core strength, and clamping force is spot on — secure without crossing into fatigue. Long listening sessions aren’t a problem. Do I love the headband as much as some of Meze’s more recent designs? Not quite. It’s different, and that difference will land better for some listeners than others. Still, from a usability standpoint, the brief is handled.
Where things get more complicated is the tuning and the intent. The STRADA is clearly not trying to be a closed-back version of the 109 Pro, and that’s both understandable and slightly puzzling. Closed-back designs come with real constraints. You lose some openness and spatial air by default, but you gain isolation and control.
There’s also more room to play with treble behavior and bass weight, and Meze leans into that here. The STRADA sounds denser and heavier down low, with a presentation that’s less spacious overall, while the top end sits firmly on the brighter side. I’ve found myself reaching for EQ more than usual to dial it in. That’s not inherently wrong. It’s just different.
Meze Audio 109 Pro
The question I keep coming back to is why Meze felt the need to “fix” something that already worked so well. The 109 Pro are excellent headphones. They’re balanced, engaging, and easy to live with. The STRADA doesn’t replace them. It sidesteps them. Whether that sidestep feels purposeful or unnecessary will depend on what you’re looking for in a closed-back design at this price. I generally like what the STRADA is doing, but I’m not fully convinced yet that this was a gap in the lineup that needed filling.
The full review is coming next week, and that’s where I’ll land the plane properly. For now, the STRADA feels like a well-made, thoughtfully designed headphone that raises an interesting question about direction rather than execution. And with Meze, that question is usually worth asking.
Hi-Fi Show Overload: When Everything Is “Must-See”
2026 is barely six weeks old and the reality has already set in: there are simply too many hi-fi shows. After covering CES 2026 more deeply than any other hi-fi publication, the eCoustics team barely had time to unpack before heading straight to NAMM 2026. That kind of pace is part of the job, but it’s also a stress test. CanJam launched its first event in Dubai and, by all accounts, it went well enough that a 2027 return is already locked.
And let’s not forget ISE in Barcelona, which already happened, because the year barely waits for you to catch your breath before moving on—mañana is a lie, the calendar never sleeps, and the sangria is never strong enough.
February arrived with serious, deadly cold and snow pushing far deeper south than anyone expected. I was there. Back home, the same system claimed more than 35 lives across New Jersey and New York over the past three weeks, a grim reminder that this wasn’t just inconvenient weather. Meanwhile, the Tampa Show is next week and CanJam NYC is somehow already right around the corner. We’ll be there. Hopefully the city will have picked up the garbage by then.
And that’s the problem. The 2026 calendar is stacked to the point of absurdity, and it’s putting both the media and manufacturers into a quiet panic. We can’t cover everything. Nobody can. Travel budgets are stretched, crews are thin, and even the companies making the gear are starting to pick and choose because showing up everywhere is expensive and increasingly hard to justify. More shows don’t automatically mean better coverage or better products. At some point, the industry has to ask whether this pace is sustainable—or whether everyone involved is just pretending it is.
And if you think we’re just kvetching, here’s the part where the calendar starts to look like a cry for help. After Tampa and CanJam NYC (which is already shaping up to be the biggest one yet), the industry rolls straight into Bristol, then Montreal, CanJam Hong Kong, CanJam Singapore, and AXPONA, where six members of the eCoustics team will be on the floor at once. Hope the Wiener Circle has enough char dogs in stock, because nobody’s cooking that week.
Then comes Vienna, now replacing Munich, pushed later into June. Early feedback from people who’ve actually been in the space? Let’s just say the reaction was less wunderbar and more nein, which feels culturally appropriate. From there, we pivot directly into summer with T.H.E. Show SoCal, CanJam London (we’ll be at both), SouthWest Audio Fest in Dallas (I’ll be there), Audio Advice Live in Raleigh (Chris will be there), Pacific Audio Fest, CanJam SoCal (we’ll be there), CEDIA with full-team coverage, CanJam Shanghai, CanJam Dallas, Toronto (I’ll be there), CAF with team coverage, Warsaw, the Paris Show, Singapore Show—yes, again—and a show in Australia, because apparently jet lag is a lifestyle choice now.
The only things missing on the calendar at this point are CanJam Tel Aviv, CanJam Berlin (and it’s more than a little interesting that there still aren’t any plans for a CanJam in Canada), and something in Mexico or Argentina, because apparently South America remains an untapped frontier. Then there’s T.H.E. Show, which continues to live in a state of strategic ambiguity but is likely to surface with the familiar SoCal show and the New York show, which for those of us in the Garden State is, let’s be honest, New Jersey. Same circus, different exits.
All of this somehow wraps up just in time for Black Friday and Thanksgiving, when everyone either collapses, disappears, or spends a brief but meaningful stretch in an institution. I’ve already done my time. Didn’t enjoy the kosher meal option.
Something has to give. There isn’t a single publication on the planet that can realistically handle that schedule, and I’m genuinely proud of the fact that we can cover the shows we do without turning the whole thing into noise. Travel can be fun, sure, but let’s not kid ourselves — this is work. And there’s a point where listening to the same gear, in the same hotel rooms, under the same conditions, stops being insight and starts becoming background hum. Familiarity doesn’t always breed clarity. Sometimes it just breeds fatigue.
I’ve spoken to seven manufacturers over the past few weeks, and I’m talking about the big ones — and they’re feeling it too. The math doesn’t work. Showing up everywhere isn’t financially viable, and it’s not strategically smart either. At some point you’re not launching products, you’re just maintaining appearances. And frankly, nobody needs to see the same rotation of ass-kissing journalists and YouTubers with their hands out at every stop on the circuit. That doesn’t serve readers, viewers, or the industry.
I’ve always kept a simple rule. I eat with the team, with a few of the industry’s best PR people (Adam Sohmer, Jaclyn Inglis, Sue Toscano), or alone. That’s it. Anything else invites complications that don’t need to exist. Free dinners from companies aren’t hospitality. They’re bribes. And once you normalize that, you’ve already lost the plot.
Samsung looks to be going all-in on 2nm chip production, a move that could start to loosen Qualcomm’s grip on future Galaxy phones.
While the upcoming Galaxy S26 is expected to debut Samsung’s first 2nm processor, the Exynos 2600, new reports suggest the company is already lining up its successor for mass production.
According to Korean outlet Hankyung, Samsung plans to begin mass production of the Exynos 2700 in the second half of 2026. Analysts at Kiwoom Securities believe the chip could power around half of the Galaxy S27 lineup, expected to land in 2027.
If that happens, it would mark a major shift away from Qualcomm-powered flagships and, by extension, from TSMC-manufactured chips.
That doesn’t mean Samsung is cutting ties with Qualcomm just yet. Current reports suggest Samsung’s 2nm yield sits at around 50%, compared to Qualcomm’s reported 65% via TSMC. Until Samsung can close that gap, Qualcomm is likely to remain part of the picture, at least for certain markets and models.
Still, the ambition is clear. The Exynos 2700 — reportedly codenamed Ulysses — is expected to feature a deca-core CPU, an Xclipse 970 GPU, and support for next-gen standards like LPDDR6 RAM and UFS 5.0 storage, with the chip itself built on Samsung’s 2nm SF2P process. On paper, at least, it’s shaping up to be a serious flagship contender.
The bigger story, though, is what this means for Samsung as a whole. Launching the Exynos 2600 ahead of rivals would make Samsung the first company to bring a 2nm chip to market. This would beat both Qualcomm and MediaTek, while also giving Samsung more control over its hardware stack, echoing strategies used by Apple and Google.
There’s also a geopolitical angle. With US tariffs still influencing supply chains, Samsung’s growing manufacturing presence could make it an attractive alternative for companies looking to reduce reliance on TSMC. That’s a long-term play, and one that hinges heavily on Samsung improving its yield rates.
For now, Samsung’s 2nm push feels like a statement of intent. Qualcomm may still be the safer option today. However, Samsung is clearly positioning Exynos as a serious rival again — and this time, the stakes are much higher.
Raytheon’s non-kinetic Coyote responds intelligently to the growing threat of drone swarms. Companies and militaries are searching for cost-effective ways to deal with large numbers of low-cost, off-the-shelf drones, and RTX’s Raytheon division has created a new variant aimed squarely at that problem.
The Coyote Block 3 Non-Kinetic variant just blasts out of a little tube and flies through the air. It’s driven by a small turbine engine, and before you know it, it’s flying at high speeds and altitudes, allowing it to quickly close in on targets. Once in the air, it simply hangs around over the contested region, waiting for any dangers to appear. When the drones begin to arrive, typically in large groups meant to overwhelm defenses, the Coyote rushes into action.
Unlike its explosive cousins, which simply slam into targets, this Coyote carries a non-kinetic payload: an invisible burst of electromagnetic energy that damages the drone’s circuitry. Circuits fail, controls lock up, and the enemy aircraft plummets from the sky. There is no fireball or shrapnel; the attacking drone simply drops to the ground, and the Coyote continues to fly.
This capability was tested during recent demonstrations for the US Army. One drill at Yuma Proving Grounds saw operators launch drone swarms directly at the defense setup, and the Coyote Block 3 Non-Kinetic engaged multiple incoming drones simultaneously. The footage from that drill shows the interceptor speeding past its targets, followed by the drones plummeting through the air with no sign of an explosion or hit. According to reports, at least ten drones were brought down in one engagement, including the troublesome Group 1 and Group 2 types that adversaries like to deploy in large numbers.
The recovery feature is another significant advantage of this device, since the Coyote just returns to base and drops into a net. Ground crews can then inspect the airframe, perform some basic maintenance, and prepare it for the next trip. Because it is reusable, you save a lot of money compared to building and launching single-use interceptors that go up in a puff of smoke after one task. Instead of needing to build a new round every time, the main expenses are now fuel and the occasional refurbishing.
Raytheon builds both kinetic and non-kinetic variants of the Coyote, with the kinetic versions relying on direct collision or a warhead to destroy the target completely. The non-kinetic Block 3 variant has the same fast, jet-powered body and can fly faster and higher than many comparable aircraft, but it replaces the explosive end with an electronics-focused defeat. This makes all the difference when drone swarms arrive in waves. Conventional rockets or cannons will simply run out of ammunition if the attacks continue for an extended period of time, but a recoverable platform with a reusable effector is a different matter entirely.