State-backed hackers are using Google’s Gemini AI model to support all stages of an attack, from reconnaissance to post-compromise actions.
Bad actors from China (APT31, Temp.HEX), Iran (APT42), North Korea (UNC2970), and Russia used Gemini for target profiling, open-source intelligence gathering, phishing lure generation, text translation, coding, vulnerability testing, and troubleshooting.
Cybercriminals are also showing increased interest in AI tools and services that could help in illegal activities, such as ClickFix social-engineering campaigns.
AI-enhanced malicious activity
The Google Threat Intelligence Group (GTIG) notes in a report today that APT adversaries use Gemini to support their campaigns “from reconnaissance and phishing lure creation to command and control (C2) development and data exfiltration.”
Chinese threat actors employed an expert cybersecurity persona to request that Gemini automate vulnerability analysis and provide targeted testing plans in the context of a fabricated scenario.
“The PRC-based threat actor fabricated a scenario, in one case trialing Hexstrike MCP tooling, and directing the model to analyze Remote Code Execution (RCE), WAF bypass techniques, and SQL injection test results against specific US-based targets,” Google says.
Another China-based actor frequently employed Gemini to fix their code, carry out research, and provide advice on technical capabilities for intrusions.
The Iranian adversary APT42 leveraged Google’s LLM both for social engineering campaigns and as a development platform to speed up the creation of tailored malicious tools (debugging, code generation, and researching exploitation techniques).
Additional threat actor abuse was observed for implementing new capabilities into existing malware families, including the CoinBait phishing kit and the HonestCue malware downloader and launcher.
GTIG notes that no major breakthroughs have occurred in that respect, though the tech giant expects malware operators to continue to integrate AI capabilities into their toolsets.
HonestCue is a proof-of-concept malware framework observed in late 2025 that uses the Gemini API to generate C# code for second-stage malware, then compiles and executes the payloads in memory.
HonestCue operational overview (Source: Google)
CoinBait is a React SPA-wrapped phishing kit masquerading as a cryptocurrency exchange for credential harvesting. It contains artifacts indicating that its development was advanced using AI code generation tools.
One indicator of LLM use is logging messages in the malware source code prefixed with “Analytics:”, which could help defenders track data exfiltration processes.
Based on the malware samples, GTIG researchers believe that the malware was created using the Lovable AI platform, as the developer used the Lovable Supabase client and lovable.app.
Cybercriminals also used generative AI services in ClickFix campaigns delivering the AMOS info-stealing malware for macOS. Users were lured into executing malicious commands via ads listed in search results for queries about troubleshooting specific issues.
AI-powered ClickFix attack (Source: Google)
The report further notes that Gemini has faced AI model extraction and distillation attempts, with organizations leveraging authorized API access to methodically query the system and reproduce its decision-making processes to replicate its functionality.
Although the practice is not a direct threat to users of these models or their data, it constitutes a significant commercial, competitive, and intellectual property problem for the creators of these models.
Essentially, actors take information obtained from one model and transfer it to another using a machine learning technique called “knowledge distillation,” which is used to train fresh models from more advanced ones.
“Model extraction and subsequent knowledge distillation enable an attacker to accelerate AI model development quickly and at a significantly lower cost,” GTIG researchers say.
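As a rough illustration of how that transfer works (our sketch, not anything from GTIG’s report), the core of knowledge distillation is training a smaller “student” model to imitate the softened output distribution of a larger “teacher.” In an extraction scenario, the teacher signal would be whatever an attacker can harvest from API responses rather than direct access to the model; the function names and temperature value below are arbitrary.

```python
# Minimal knowledge-distillation sketch in PyTorch (illustrative only).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions."""
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # Scale by T^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(log_soft_student, soft_teacher,
                    reduction="batchmean") * temperature ** 2

# Stand-ins: in model extraction, "teacher_logits" would be reconstructed
# from harvested API outputs, not from the teacher's weights.
teacher_logits = torch.randn(4, 10)
student_logits = torch.randn(4, 10, requires_grad=True)
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()  # gradients pull the student toward the teacher's behavior
```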
Google flags these attacks as a threat because they constitute intellectual property theft, are scalable, and severely undermine the business model of AI-as-a-service, with the potential to impact end users soon.
In a large-scale attack of this kind, Gemini AI was targeted by 100,000 prompts that posed a series of questions aimed at replicating the model’s reasoning across a range of tasks in non-English languages.
Google has disabled accounts and infrastructure tied to documented abuse, and has implemented targeted defenses in Gemini’s classifiers to make abuse harder.
The company assures that it “designs AI systems with robust security measures and strong safety guardrails” and regularly tests the models to improve their security and safety.
Amazon-owned eero is selling a new add-on called eero Signal 4G LTE, a compact box meant to keep your home network online during internet outages. Plug it into a compatible eero router and your Wi-Fi can fall back to cellular data, so work calls, cameras, and smart home routines don’t instantly go dark.
There’s a catch: the cellular data is tied to an annual eero Plus plan managed in the eero app. The hardware by itself won’t provide the fallback connection; you’re also committing to eero’s service to actually use the backup.
It plugs in, then takes over
Signal connects over USB-C to any USB-C-powered eero that supports Wi-Fi 6 or newer, plus the eero PoE Gateway. It can share a single power adapter with the eero it’s attached to, which keeps the setup from turning into another pile of bricks and cables.
After you add it in the eero app, Signal stays in standby until your primary connection fails. When it does, Signal switches the whole network over to LTE, then drops back to standby once your ISP is back. No extra steps.
Where you place it matters because reception is everything. eero’s guidance is to pair Signal with the eero located where cell service is strongest, ideally higher up and closer to an exterior wall.
The subscription caveat
The backup connection runs through eero Plus, with two data tiers. The standard annual eero Plus plan includes up to 10GB of backup data per year, aimed at brief, occasional outages. New annual eero Plus subscribers who buy Signal get six months included, then the service renews at $99.99 for the next 12 months.
If you need more breathing room, eero Plus 100 includes up to 100GB of backup data per month. eero lists it at $99 for the first year (50% off), then it renews at $199.99 per year.
What to watch next
Signal is designed as a safety net at one address and it still expects a working primary internet connection most of the time, so it’s not a replacement for broadband. eero includes a three-year warranty and says Signal receives updates for security patches and new features.
Before you buy, check LTE strength where your router lives, then decide whether 10GB a year matches your typical outage pattern. If you can wait, eero says a 5G version is planned for later in 2026 with a $199.99 price.
Signal is available in the US for $99.99 on eero.com and Amazon.
Apple has reportedly delayed some of Siri’s AI features beyond iOS 26.4
These will apparently now land as part of iOS 26.5 or iOS 27
These features were first announced back in June 2024
Siri’s long-promised AI overhaul is becoming a huge embarrassment for Apple: it was initially announced back in June 2024, when Apple said it would launch as part of iOS 18 that year, yet we’re now in 2026 and it still hasn’t arrived. Not only that, but it’s reportedly now being delayed even further.
We’d heard that it might finally arrive – at least in part – with iOS 26.4, which is expected to roll out soon, but now Apple watcher Mark Gurman, writing for Bloomberg (via 9to5Mac), has said that at least some of the features previously planned for iOS 26.4 will instead ship with iOS 26.5, which is expected in May, and iOS 27, due in September.
Gurman – who has a superb track record for Apple information – cites “people familiar with the matter”, and adds that the most likely features to slip are “voice-based control of in-app actions”, and “the expanded ability for Siri to tap into personal data,” which, as Gurman explains, “would let users ask the assistant to, say, search old text messages to locate a podcast shared by a friend and immediately play it.”
So if this is correct, Siri’s AI overhaul won’t get most of its core features until around two years after it was first announced, and parts that don’t arrive until iOS 27 will be a full two years later than Apple initially said to expect them.
An unreasonably long wait
Even in isolation, this would be a ridiculously long delay, and one that’s not very fair on customers – including myself – who upgraded to iPhone 16-series phones in part down to the promise of these features.
But it gets even worse when you consider just how far ahead Android is when it comes to AI features, with Gemini having delivered much of what Apple is promising for Siri for years now.
In fact, Apple is so far behind that it seems to have – for the time being at least – essentially given up on trying to directly compete, and has instead inked a deal with Google to use Gemini as the brains behind Siri. But even with that deal in place, the wait goes on.
Apple is no stranger to embarrassments and failures, from ‘antennagate’ and ‘bendgate’ to the awful state Apple Maps launched in and the abandoned AirPower wireless charger, but none of these issues dragged on for quite as long as the current Siri debacle.
And not only is Siri miles behind the competition here, but even before AI emerged, Siri was generally considered less capable than rivals, so for whatever reason this is something Apple has struggled with in one way or another since the launch of Siri itself.
Hopefully, Siri will finally be competitive once this promised AI overhaul is delivered, but with the way things have been going so far, I wouldn’t be surprised if it gets even further delayed.
For an easy and affordable way to turn your TV into a smart streaming hub, one of the top recommendations you’re going to hear is a Fire TV Stick. It doesn’t even matter if you have a Roku TV or other streaming OS built into your television. People get a Fire TV Stick because it does a better job integrating with smart homes, supports voice assistants, and streamlines all your movies, shows, music, and live TV into one easy-to-use place. Installation is super easy because it’s plug-and-play: stick it into the HDMI port on the back of a TV, connect the power cable, and you’re good to go.
But the Fire TV Stick shouldn’t be limited to your house’s TV alone. This Amazon hardware is a lot more flexible than people realize. As it turns out, there are several other useful ways you can use one. Whether you’re working with a Fire TV Stick HD, one of the 4K models of the Fire TV Stick, or one of Amazon’s other Fire TV offerings, we’ve rounded up four other compatible devices you can use a Fire TV Stick with.
Projectors
If you have a nice, big wall of open space in your house (or one of those backyard setups any neighbor would be jealous of), you might’ve already invested in a projector. They’re one of the most common alternatives to traditional TVs, and they do a great job giving you that movie theater experience from home. Turns out, they can also pair with a Fire TV Stick. Most modern projectors include at least one HDMI input, and that’s all the Fire TV Stick needs for video and audio output.
Once you get it connected, just find an outlet to plug the Fire TV Stick’s power cord into, and you’re all set to stream HD or 4K content. The Fire TV Stick HD supports resolutions up to 1080p at 60 frames per second, while the Fire TV Stick 4K Plus and its siblings can output up to 2160p with support for HDR formats like Dolby Vision and HDR10+. Combined with a compatible projector, you’ll easily be able to stream movies, live sports, or other content onto the big screen. If you’re planning on using it outside, just make sure your Wi-Fi signal can reach.
Computers
If you’re away from home, it’s nice knowing certain desktop displays and even laptops can support a Fire TV Stick. As long as it has an HDMI input that supports external devices, you’ll be able to plug it in and start streaming. Once it’s connected, the Fire TV Stick basically turns your monitor into a display separate from the computer’s operating system. This comes in handy in offices, dorm rooms, or work-from-home setups where a TV would be too big or excessive for the space.
If you’re already strapped for space, don’t worry: The Fire TV Stick’s small size and low power requirements mean it won’t be a burden that clutters up your desktop setup. Pair some Bluetooth headphones to your computer, and you can enjoy some private listening as well. As a note: Not every laptop computer has an HDMI port, and of the ones that do, not all of them support HDMI input. Check your computer’s specs before committing.
Hotel room TVs
On vacation or a work trip and can’t find anything good to watch on the hotel or Airbnb’s TV? You might want to remember to pack your Fire TV Stick next time. That way, you won’t have to bother with those limited channel selections, locked menus, or unreliable casting options and can just watch what you want to watch instead. As long as the hotel TV has an accessible HDMI port, your Fire TV Stick has space to shine. (Just don’t forget it when it comes time to check out.)
Some hotels and short-term rentals have started encouraging people to log into their personal streaming services on the place’s smart TV, but that’s a pain. Plus, you have to remember to log out before you leave. Using a Fire TV Stick instead means you just plug it in, sign into the Wi-Fi, and start streaming off your own apps without needing to fiddle with the hotel’s. And some good news: If your device supports 4K but the hotel’s TV doesn’t, the device will simply adjust to the resolution of the screen.
AV receivers
If you have an AV receiver and really want to get the most out of your Fire TV Stick’s surround sound support, just plug it right into the receiver’s HDMI port. No need to plug it into the TV at all! Just cut out the middle man and go straight to the source. Plenty of modern receivers give you both HDMI inputs and outputs, meaning they’ll route video to your projector or TV while taking care of the audio separately. No extra hardware required.
The Fire TV Stick 4K Plus and Fire TV Stick 4K Max work especially well with AV receivers because of their Dolby Atmos support, not to mention the multi-channel audio pass-through and HDMI 2.1 features like ARC. The included Alexa Voice Remote might also be able to control certain receiver functions, including power and volume, depending on how smart your setup is. That said, a Fire TV Stick HD will work just fine in the receiver, too.
Water wells are simple things, but that doesn’t mean they are maintenance-free. It can be important to monitor water levels in a well, and that gets complicated when the well is remote. Commercial solutions exist, of course, but tend to be expensive and even impractical in some cases. That’s where [Hans Gaensbauer]’s low-cost, buoyancy-based well monitor comes in. An Engineers Without Borders project, it not only cleverly measures water level in a simple way — logging to a text file on a USB stick in the process — but it’s so low-power that a single battery can run it for years.
The steel cable (bottom left) is attached to a submerged length of pipe, and inside the cylinder is a custom load cell. The lower the water level, the higher the apparent weight of the submerged pipe.
The monitor [Hans] designed works in the following way: suspend a length of pipe inside the well, and attach that pipe to a load cell. The apparent weight of the pipe will be directly proportional to how much of the pipe is above water. The fuller the well, the less the pipe will seem to weigh. It’s very clever, requires nothing to be in the well that isn’t already water-safe, and was designed so that the electronics sit outside in a weatherproof enclosure. Cost comes out to about $25 each, which compares pretty favorably to the $1000+ range of industrial sensors.
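To make the buoyancy math concrete, here is a minimal sketch of how a raw load-cell reading could be mapped to a water level. This is our illustration, not code from the project’s repository; it assumes two calibration readings (pipe hanging dry and pipe fully submerged), interpolates linearly between them, and uses hypothetical names and values throughout.

```python
# Illustrative conversion from load-cell reading to water level (not project code).
def water_level_m(reading, dry_reading, full_reading, pipe_length_m):
    """Interpolate water level from a raw load-cell reading.

    dry_reading  -- reading with the pipe hanging entirely above the water
    full_reading -- reading with the pipe fully submerged
    """
    frac_submerged = (dry_reading - reading) / (dry_reading - full_reading)
    frac_submerged = max(0.0, min(1.0, frac_submerged))  # clamp to [0, 1]
    return frac_submerged * pipe_length_m

# A reading halfway between the calibration points means half of a 3 m
# pipe is under water, i.e. a 1.5 m water column above the pipe's bottom.
print(water_level_m(reading=12000, dry_reading=16000, full_reading=8000,
                    pipe_length_m=3.0))  # -> 1.5
```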
The concept is clever, but it took more than that to create a workable solution. For one thing, space was an issue. The entire well cap was only six inches in diameter, most of which was already occupied. [Hans] figured he had only about an inch to work with, but he made it work by designing a custom load cell out of a piece of aluminum with four strain gauges bonded to it. The resulting sensor is narrow, and sits within a nylon and PTFE tube that mounts vertically to the top of the well cap. Out from the bottom comes a steel cable that attaches to the submerged tube, and out the top comes a cable that brings the signals to the rest of the electronics in a separate enclosure. More details on the well monitor are in the project’s GitHub repository.
All one has to do after it’s installed is swap out the USB stick to retrieve readings, and every once in a long while change the battery. It sure beats taking manual sensor readings constantly, like meteorologists did back in WWII.
This article is crossposted from IEEE Spectrum’s careers newsletter. Sign up now to get insider tips, expert advice, and practical strategies, written in partnership with tech career development company Parsity and delivered to your inbox for free!
We’d like to introduce Brian Jenney, a senior software engineer and owner of Parsity, an online education platform that helps people break into AI and modern software roles through hands-on training. Brian will be sharing his advice on engineering careers with you in the coming weeks of Career Alert.
Here’s a note from Brian:
“12 years ago, I learned to code at the age of 30. Since then I’ve led engineering teams, worked at organizations ranging from five-person startups to Fortune 500 companies, and taught hundreds of others who want to break into tech. I write for engineers who want practical ways to get better at what they do and advance in their careers. I hope you find what I write helpful.”
Last year, I was conducting interviews to fill a position at an AI startup. We allowed unlimited AI usage during the technical challenge round. Candidates could use Cursor, Claude Code, ChatGPT, or any assistant they normally worked with. We wanted to see how they used modern tools.
During one interview, we asked a candidate a simple question: “Can you explain what the first line of your solution is doing?”
Silence.
After a long pause, he admitted he had no idea. His solution was correct. The code worked. But he couldn’t explain how or why. This wasn’t an isolated incident. Around 20 percent of the candidates we interviewed were unable to explain how their solutions worked, only that they did.
When AI Makes Interviews Harder
A few months earlier, I was on the other side of the table at this same company. During a live interview, I instinctively switched from my AI-enabled code editor to my regular one. The CTO stopped me.
“Just use whatever you normally would. We want to see how you work with AI.”
I thought the interview would be easy. But I was wrong.
Instead of only evaluating correctness, the interviewer focused on my decision-making process:
Why did I accept certain suggestions?
Why did I reject others?
How did I decide when AI helped versus when it created more work?
I wasn’t just solving a problem in front of strangers. I was explaining my judgment and defending my decisions in real time, and AI created more surface area for judgment. Counterintuitively, the interview was harder.
The Shift in Interview Evaluation
Most engineers now use AI tools in some form, whether they write code, analyze data, design systems, or automate workflows. AI can generate output quickly, but it can’t explain intent, constraints, or tradeoffs.
More importantly, it can’t take responsibility when something breaks.
As a result, major companies and startups alike are now adapting to this reality by shifting to interviews with AI. Meta, Rippling, and Google, for instance, have all begun allowing candidates to use AI assistants in technical sessions. And the goal has evolved: interviewers want to understand how you evaluate, modify, and trust AI-generated answers.
So, how can you succeed in these interviews?
What Actually Matters in AI-Enabled Interviews
Refusing to use AI out of principle doesn’t help. Some candidates avoid AI to prove they can think independently. This can backfire. If the organization uses AI internally—and most do—then refusing to use it signals rigidity, not strength.
Silence is a red flag. Interviews aren’t natural working environments. We don’t usually think aloud when deep in a complex problem, but silence can raise concerns. If you’re using AI, explain what you’re doing and why:
“I’m using AI to sketch an approach, then validating assumptions.”
“This suggestion works, but it ignores a constraint we care about.”
“I’ll accept this part, but I want to simplify it.”
Your decision-making process is what separates effective engineers from prompt jockeys.
Treat AI output as a first draft. Blind acceptance is the fastest way to fail. Strong candidates immediately evaluate the output: Does this meet the requirements? Is it unnecessarily complex? Would I stand behind this in production?
Small changes like renaming variables, removing abstractions, or tightening logic signal ownership and critical thinking.
Optimize for trust, not completion. Most AI tools can complete a coding challenge faster than any human. Interviews that allow AI are testing something different. They’re answering: “Would I trust this person to make good decisions when things get messy?”
Adapting to a Shifting Landscape
Interviews are changing faster than most candidates realize. Here’s how to prepare:
Start using AI tools daily. If you’re not already working with Cursor, Claude Code, ChatGPT, or Copilot, start now. Build muscle memory for prompting, evaluating output, and catching errors.
Develop your rejection instincts. The skill isn’t using AI. It’s knowing when AI output is wrong, incomplete, or unnecessarily complex. Practice spotting these issues and learning known pitfalls.
Your next interview might test these skills. The candidates who’ve been practicing will have a clear advantage.
—Brian
Around this time last year, CEOs like Sam Altman promised that 2025 would be the year AI agents would join the workforce as your own personal assistant. But in hindsight, did that really happen? It depends on who you ask. Some programmers and software engineers have embraced agents like Cursor and Claude Code in their daily work. But others are still wary of the risks these tools bring, such as a lack of accountability.
In the United States, starting salaries for students graduating this spring are expected to increase, according to the latest data from the National Association of Colleges and Employers. Computer science and engineering majors are expected to be the highest paying graduates, with a 6.9 percent and 3.1 percent salary increase from last year, respectively. The full report breaks down salary projections by academic major, degree level, industry, and geographic region.
If given the opportunity, are international projects worth taking on? As part of a career advice series by IEEE Spectrum’s sister publication, The Institute, the chief engineer for Honeywell lays out the advantages of working with teams from around the world. Participating in global product development, the author says, could lead to both personal and professional enrichment. Read more here.
Fresh leaks suggest Intel’s upcoming Core Ultra 400 desktop processors, based on the Nova Lake-S architecture, could push power consumption beyond 700W under full load.
Two well-known hardware leakers (Jaykihn and kopite7kimi) on X shared technical notes outlining early platform behaviour for the Core Ultra 400 series, including what appears to be a high-end PL4 power figure.
One post claims that a fully loaded Nova Lake K-series chip exceeds 700 watts, though the workload and exact testing conditions were not specified.
If accurate, that figure would represent a significant jump over recent Intel desktop processors, including Raptor Lake-S, which topped out far lower under comparable profiles.
The 700W claim reportedly refers to the PL4 rating, also known as Power Limit 4, which represents the highest defined power limit in Intel’s power management hierarchy.
For comparison, earlier-generation 13th Gen desktop chips reached around 314W in performance profiles, while certain Extreme configurations reportedly pushed near 490W.
Overclocking and core configuration
Additional details suggest the 700W figure applies to a dual-tile configuration, with one variant combining 16 performance cores, 32 efficiency cores, and four low-power efficiency cores.
The same sources also outlined changes to thermal and monitoring behaviour for Nova Lake-S, including the inability to offset TJMax or disable thermal throttling.
The on-die thermal sensor reportedly measures temperatures from –64°C up to 100°C when users enable Negative Temperature Reporting.
Leakers claim LP E-cores ignore BCLK and ECLK adjustments, indicating Intel has tightened control over specific clock domains.
The processor can reportedly boot using only LP E-cores, or run LP E-cores alongside E-cores while disabling the P-cores.
Sources say users can disable entire compute dies, though the platform now groups P-cores and E-cores into clusters that restrict disabling to a per-cluster basis.
Intel has not confirmed these specifications, and no official PL4 ratings or final configurations have been announced for Nova Lake-S.
Intel previously stated that Nova Lake will launch by the end of the year, though the company has not clarified whether that timeline applies to desktop or mobile variants.
If these early figures prove accurate, Core Ultra 400 could mark one of Intel’s most power-hungry and performance-focused desktop platforms to date.
This editor’s roundup lands at a moment when everything feels less like discourse and more like performance art, and not the good kind. Bad Bunny delivered the first mostly Spanish halftime show in Super Bowl history: a powerful, Puerto Rican-rooted celebration that thrust Latino culture onto a stage watched by 124 million people, and the reactions were predictably absurd.
Conservatives from cable pundits to President Trump called it divisive and “terrible,” freaked out over language, and even tried to pitch hair-metal bands as replacements, while fanbases tried to learn Spanish just to keep up with the cultural conversation.
And then the other side did what it always does. Turning Point USA’s Super Bowl-adjacent event somehow morphed online into an “ICE rally,” a “Charlie Kirk memorial,” and an act of open racism, as if every folding chair and red lanyard came with a deportation order stapled to it. Context didn’t matter. Facts were optional.
Even the Grammy Awards could not just hand someone a trophy without turning it into a TED Talk nobody asked for — win, smile, say thanks, and sit the hell down.
Meanwhile, the internet still can’t decide whether this was patriotism or provocation, which tells you everything about how performative outrage has become the default setting on all sides. Then there’s the actual stuff we cover: Meze Audio dropped the Strada with a tuning curve that’s got listeners scratching heads, Record Store Day 2026 has its own mix of hype and eye-rolls, and the hi-fi show calendar is so bloated even insiders are saying “enough.” Same damn script everywhere — nuance on mute, noise on max.
Bad Bunny Sang in Spanish; Outrage Was Bilingual
Bad Bunny Singing at Super Bowl LX Halftime Show (Watch on YouTube)
If there was anything worth getting upset about during the Super Bowl, it wasn’t the language, it was the vocals. Bad Bunny can perform, no question: the pacing worked, the energy was there, the rhythm was locked in, and the spectacle did exactly what halftime shows are designed to do. But let’s not pretend we witnessed a once-in-a-generation vocal masterclass. Whitney Houston he is not. Hell, Whitney Houston warming up he is not. The real outrage should’ve been over pitch, breath control, and the fact that halftime vocals have quietly become optional while choreography and vibes do the heavy lifting.
And let’s get real for a second: there are 40-50 million Spanish-speaking people in the United States, many of whom watch the NFL every Sunday, buy jerseys, bet on games, and scream at referees in multiple languages. Holy queso, Batman — Spanish isn’t some foreign invasion; it’s part of the room. Always has been. Acting shocked by that in 2026 is less patriotism and more blind ignorance.
What’s depressing isn’t that a halftime show turned political; it’s that everything does now. We can’t even have a dumb, overproduced Super Bowl anymore without someone turning it into a referendum on national identity. It’s a football game. A mediocre one. Played by two teams most people claim to hate until kickoff. Someone wins, someone loses, and nobody should be afraid to joke about going to Disney World afterward because ICE might be waiting behind Space Mountain. When even that joke feels risky, you don’t have a culture war; you’ve got cultural exhaustion.
For all of the President’s misguided social media outrage about a halftime show, the only real winner here was Bad Bunny. Mission accomplished. He’ll sell out arenas coast to coast, merch will fly, and yes, the pants will remain deeply questionable whether you like the music or not.
What’s wrong with us is simpler: we’re obsessed with the noise instead of the moment, which makes perfect sense when half the country seems to get its news and its history from TikTok, X, and Instagram comments written at a sixth-grade reading level.
Kendrick Lamar Performing at Super Bowl LIX Halftime Show (Watch on YouTube)
Last year’s Kendrick Lamar halftime worked better for me not because it was louder or angrier, but because it stayed tethered to the game, a genuine Eagles beat down of the Chiefs, Taylor Swift included, and because I actually own his records and took my son to one of his shows, which was pretty great. Football happened. Music happened. Nobody lost their damn mind. That’s apparently too much to ask now. It’s that bad, folks.
Record Store Day 2026: Great Releases, Too Few Copies, Lots of Cash Required
Record Store Day is still one of my favorite days of the year, and not because I enjoy standing outside at 5am fueled by burnt coffee and poor decisions. It’s because you get to show up for the independent shops that keep this whole hobby from turning into a soulless “add to cart” spreadsheet. You stand in the cold or the heat or the rain, take your pick, clutch your list like it’s an immigration document, and hope the vinyl gods don’t flag you for secondary screening.
RSD 2023: Line at Jack’s Music, Red Bank, New Jersey
It doesn’t always work out, especially if you live somewhere where the first 20 people in line aren’t music fans, they’re flippers with spreadsheets, burner accounts, and the moral compass of a vending machine. They sprint straight to the obvious heat and scoop up the titles that were never pressed in big numbers, the stuff your local store might only have 1 to 5 copies of, then they flip it on Discogs or eBay for 3 to 5 times the price like they personally remastered it. These people don’t love records. They love arbitrage. Please step on a LEGO.
This year’s RSD 2026 list is legitimately stacked. Bruno Mars is the 2026 Record Store Day Ambassador, and shops will be holding Early Listening Parties on February 25 for his new album THE ROMANTIC, ahead of Record Store Day on April 18.
On the guaranteed-mayhem side, you’ve got Pink Floyd with Live From the Los Angeles Sports Arena, April 26th, 1975 (4xLP), plus perennial troublemakers like The Cure, David Bowie, Madonna, Grateful Dead, and Pearl Jam with React/Respond as a photo book plus a 7 inch single, which is basically catnip for the resellers.
Add The Rolling Stones turning RSD into a mini theme park with multiple drops, including the RSD3 mini turntable and a run of 3 inch singles like Get Off of My Cloud, Honky Tonk Women, Play With Fire, Heart of Stone, Mother’s Little Helper, and Have You Seen Your Mother, Baby, Standing in the Shadow? because nothing says “serious collector culture” like tiny-format chaos.
And the jazz titles? Quietly dangerous this year. You’ve got Bill Evans At The BBC: The Complete 1965 London Sets, John Coltrane and the John Coltrane Quartet showing up in the mix, Ahmad Jamal, Roy Hargrove with A Tribute to Pharoah Sanders, plus Joe Henderson Consonance: Live at the Jazz Showcase on Resonance, and Mal Waldron Stardust & Starlight: Live at the Jazz Showcase.
These are my dark-horse Record Store Day 2026 picks, the ones I’m prioritizing before caffeine fully kicks in and common courtesy disappears.
On the soul and jazz side, Stax: Killer B’s from Various Artists is exactly the kind of compilation RSD should be about, deep cuts that remind people why Stax mattered beyond the hits. Consonance: Live at the Jazz Showcase by Joe Henderson is a serious live document and another reminder that Resonance continues to treat jazz collectors like adults. Add The New Sounds from Miles Davis, Primeval Blues, Rags, And Gospel Songs by Charlie Patton, and BBC Sessions from John Prine, and you’ve got a stretch of records that feel more like history lessons than collectibles. These are the ones flippers ignore because they require listening instead of speculation.
The alternative and art-pop lane is quietly stacked this year. Analogue 20th Anniversary Deluxe Edition from a-ha is far better than people remember and will vanish once word gets out. The Seduction of Claude Debussy by Art of Noise is still ambitious, strange, and influential in ways that feel increasingly rare. The Rhythmatist shows Stewart Copeland doing something genuinely different outside of The Police, and The Blind Leading the Naked from Violent Femmes remains one of their most overlooked records. None of these scream “safe,” which is exactly why they matter.
Then there are the deceptively obvious picks that people will pretend not to want until they’re gone. Greatest Hits from The Cure will not sit long. MTV Unplugged captures Tony Bennett doing what modern vocalists still study but rarely master. Hallo Spaceboy is David Bowie in his confrontational mid-90s phase, not nostalgia cosplay, and Sledgehammer from Peter Gabriel still works when it’s pressed properly, whether people want to admit it or not.
Meze STRADA: Green, Gorgeous, and Sonically Side-Eyed
Meze Audio STRADA
Can Meze’s $799 STRADA closed-back headphones stand out in a market that’s already overcrowded, opinionated, and very sure of itself? That’s the real question, and it’s a fair one. The $500 to $1,000 headphone segment is a knife fight right now, with Grado, Focal, Denon, HiFiMAN, Dan Clark Audio, Beyerdynamic, Audeze, and Sennheiser all going after the same ears and the same wallets. Romanian manufacturer Meze Audio has never tried to win by shouting the loudest. They’ve won by doing the work.
I’m coming at this with some perspective. I own six pairs of Meze headphones, which makes it pretty clear I’ve bought into the approach. That doesn’t mean I love everything blindly. I don’t. If we’re ranking favorites, the Empyrean II, the 109 Pro, and the 99 Classics (2nd generation) sit at the top of my list. Those models get the balance right: comfort that disappears on your head, industrial design that feels intentional, and sound that prioritizes musical engagement over chasing measurements. So where does that leave the STRADA?
From a build and design perspective, Meze hasn’t missed. The STRADA’s earcups are genuinely beautiful, finished in a deep green that feels more British sports car than Romanian headphone, and I’m ultimately fine with that. That said, the color has been polarizing. I’ve seen plenty of online chatter that wasn’t especially polite, along with no shortage of praise from people who really like the look. I wasn’t sold on it straight out of the box either, but it grew on me. And I’d be a hypocrite if I complained too loudly, considering I’ve owned two cars, a Morgan and a Mini Cooper, that weren’t exactly shy about wearing a similar shade.
Meze Audio STRADA
It looks expensive without trying too hard, and it clearly isn’t chasing trends. The magnesium frame, soft padded headband, and balanced weight distribution hit the familiar Meze notes. Comfort remains a core strength, and clamping force is spot on — secure without crossing into fatigue. Long listening sessions aren’t a problem. Do I love the headband as much as some of Meze’s more recent designs? Not quite. It’s different, and that difference will land better for some listeners than others. Still, from a usability standpoint, the brief is handled.
Where things get more complicated is the tuning and the intent. The STRADA is clearly not trying to be a closed-back version of the 109 Pro, and that’s both understandable and slightly puzzling. Closed-back designs come with real constraints. You lose some openness and spatial air by default, but you gain isolation and control.
There’s also more room to play with treble behavior and bass weight, and Meze leans into that here. The STRADA sounds denser and heavier down low, with a presentation that’s less spacious overall, while the top end sits firmly on the brighter side. I’ve found myself reaching for EQ more than usual to dial it in. That’s not inherently wrong. It’s just different.
Meze Audio 109 Pro
The question I keep coming back to is why Meze felt the need to “fix” something that already worked so well. The 109 Pro are excellent headphones. They’re balanced, engaging, and easy to live with. The STRADA doesn’t replace them. It sidesteps them. Whether that sidestep feels purposeful or unnecessary will depend on what you’re looking for in a closed-back design at this price. I generally like what the STRADA is doing, but I’m not fully convinced yet that this was a gap in the lineup that needed filling.
The full review is coming next week, and that’s where I’ll land the plane properly. For now, the STRADA feels like a well-made, thoughtfully designed headphone that raises an interesting question about direction rather than execution. And with Meze, that question is usually worth asking.
Hi-Fi Show Overload: When Everything Is “Must-See”
2026 is barely six weeks old and the reality has already set in: there are simply too many hi-fi shows. After covering CES 2026 more deeply than any other hi-fi publication, the eCoustics team barely had time to unpack before heading straight to NAMM 2026. That kind of pace is part of the job, but it’s also a stress test. CanJam launched its first event in Dubai and, by all accounts, it went well enough that a 2027 return is already locked.
And let’s not forget ISE in Barcelona, which already happened, because the year barely waits for you to catch your breath before moving on—mañana is a lie, the calendar never sleeps, and the sangria is never strong enough.
February arrived with serious, deadly cold and snow pushing far deeper south than anyone expected. I was there. Back home, the same system claimed more than 35 lives across New Jersey and New York over the past three weeks, a grim reminder that this wasn’t just inconvenient weather. Meanwhile, the Tampa Show is next week and CanJam NYC is somehow already right around the corner. We’ll be there. Hopefully the city will have picked up the garbage by then.
And that’s the problem. The 2026 calendar is stacked to the point of absurdity, and it’s putting both the media and manufacturers into a quiet panic. We can’t cover everything. Nobody can. Travel budgets are stretched, crews are thin, and even the companies making the gear are starting to pick and choose because showing up everywhere is expensive and increasingly hard to justify. More shows don’t automatically mean better coverage or better products. At some point, the industry has to ask whether this pace is sustainable—or whether everyone involved is just pretending it is.
And if you think we’re just kvetching, here’s the part where the calendar starts to look like a cry for help. After Tampa and CanJam NYC, which is already shaping up to be the biggest one yet, the industry rolls straight into Bristol, then Montreal, CanJam Hong Kong, CanJam Singapore, and AXPONA, where six members of the eCoustics team will be on the floor at once. Hope the Wiener Circle has enough char dogs in stock, because nobody’s cooking that week.
Then comes Vienna, now replacing Munich, pushed later into June. Early feedback from people who’ve actually been in the space? Let’s just say the reaction was less wunderbar and more nein, which feels culturally appropriate. From there, we pivot directly into summer with T.H.E. Show SoCal, CanJam London (we’ll be at both), SouthWest Audio Fest in Dallas (I’ll be there), Audio Advice Live in Raleigh (Chris will be there), Pacific Audio Fest, CanJam SoCal (we’ll be there), CEDIA with full-team coverage, CanJam Shanghai, CanJam Dallas, Toronto (I’ll be there), CAF with team coverage, Warsaw, the Paris Show, Singapore Show—yes, again—and a show in Australia, because apparently jet lag is a lifestyle choice now.
The only things missing on the calendar at this point are CanJam Tel Aviv, CanJam Berlin (and it’s more than a little interesting that there still aren’t any plans for a CanJam in Canada), and something in Mexico or Argentina, because apparently South America remains an untapped frontier. Then there’s T.H.E. Show, which continues to live in a state of strategic ambiguity but is likely to surface with the familiar SoCal show and the New York show, which for those of us in the Garden State is, let’s be honest, New Jersey. Same circus, different exits.
All of this somehow wraps up just in time for Black Friday and Thanksgiving, when everyone either collapses, disappears, or spends a brief but meaningful stretch in an institution. I’ve already done my time. Didn’t enjoy the kosher meal option.
Something has to give. There isn’t a single publication on the planet that can realistically handle that schedule, and I’m genuinely proud of the fact that we can cover the shows we do without turning the whole thing into noise. Travel can be fun, sure, but let’s not kid ourselves — this is work. And there’s a point where listening to the same gear, in the same hotel rooms, under the same conditions, stops being insight and starts becoming background hum. Familiarity doesn’t always breed clarity. Sometimes it just breeds fatigue.
I’ve spoken to seven manufacturers over the past few weeks, and I’m talking about the big ones — and they’re feeling it too. The math doesn’t work. Showing up everywhere isn’t financially viable, and it’s not strategically smart either. At some point you’re not launching products, you’re just maintaining appearances. And frankly, nobody needs to see the same rotation of ass-kissing journalists and YouTubers with their hands out at every stop on the circuit. That doesn’t serve readers, viewers, or the industry.
I’ve always kept a simple rule. I eat with the team, with a few of the industry’s best PR people (Adam Sohmer, Jaclyn Inglis, Sue Toscano), or alone. That’s it. Anything else invites complications that don’t need to exist. Free dinners from companies aren’t hospitality. They’re bribes. And once you normalize that, you’ve already lost the plot.
Samsung looks to be going all-in on 2nm chip production, a move that could start to loosen Qualcomm’s grip on future Galaxy phones.
While the upcoming Galaxy S26 is expected to debut Samsung’s first 2nm processor, the Exynos 2600, new reports suggest the company is already lining up its successor for mass production.
According to Korean outlet Hankyung, Samsung plans to begin mass production of the Exynos 2700 in the second half of 2026. Analysts at Kiwoom Securities believe the chip could power around half of the Galaxy S27 lineup, expected to land in 2027.
If that happens, it would mark a major shift away from Qualcomm-powered flagships and, by extension, from TSMC-manufactured chips.
That doesn’t mean Samsung is cutting ties with Qualcomm just yet. Current reports suggest Samsung’s 2nm yield sits at around 50%, compared to Qualcomm’s reported 65% via TSMC. Until Samsung can close that gap, Qualcomm is likely to remain part of the picture, at least for certain markets and models.
Still, the ambition is clear. The Exynos 2700 — reportedly codenamed Ulysses — is expected to feature a deca-core CPU, an Xclipse 970 GPU, and support for next-gen standards like LPDDR6 RAM and UFS 5.0 storage. Importantly, all are built on Samsung’s 2nm SF2P process. On paper, at least, it’s shaping up to be a serious flagship contender.
The bigger story, though, is what this means for Samsung as a whole. Launching the Exynos 2600 ahead of rivals would make Samsung the first company to bring a 2nm chip to market. This would beat both Qualcomm and MediaTek, while also giving Samsung more control over its hardware stack, echoing strategies used by Apple and Google.
There’s also a geopolitical angle. With US tariffs still influencing supply chains, Samsung’s growing manufacturing presence could make it an attractive alternative for companies looking to reduce reliance on TSMC. That’s a long-term play, and one that hinges heavily on Samsung improving its yield rates.
For now, Samsung’s 2nm push feels like a statement of intent. Qualcomm may still be the safer option today. However, Samsung is clearly positioning Exynos as a serious rival again — and this time, the stakes are much higher.