Pros
Completely redesigned with upgraded components and a slightly better fit than the XM5
Top-notch sound that’s accurate, well-balanced and natural
Excellent noise-canceling and voice-calling performance with 8 microphones (4 in each bud)
Decent battery life
Cons
Pretty pricey
Included eartips may not be a good match for all ears
Android-only spatial audio features
When I first heard that Sony was coming out with new sixth-generation 1000X earbuds, I wasn’t quite sure what to expect. Companies like Bose and Apple have basically stuck with the same design — or a similar one anyway — for their flagship noise-canceling buds for the last few years. But Sony’s new WF-1000XM6 buds are completely overhauled inside and out and look nothing like the models that preceded them.
The end result is impressive: While expensive at $330, the WF-1000XM6 not only features great sound and excellent noise canceling, but their voice-calling performance is also top-notch. Are they the best noise-canceling earbuds out there right now? Aside from a caveat or two, I’d say so, though the AirPods Pro 3 remain a safer bet for Apple users from a fit and features standpoint (not to mention a lower price tag).
The WF-1000XM6’s design shift
Both the buds and their case are a little plain-looking. I’m OK with that, and from a practical standpoint, I liked that the case is flat on both its top and bottom, making it easy to place down on a flat surface, such as a wireless charging pad.
The XM5s have a partially glossy finish, but these have a full matte finish, which I prefer. That said, they don’t have anything to distinguish them as the XM4s did with their eye-catching copper ring that served as a microphone housing.
Sony calls this color silver. (David Carnoy/CNET)
The buds are more intricately molded than your typical stemless buds, and Sony says the new shape (11% slimmer overall than the XM5s and more aerodynamic to reduce wind noise) conforms better to the natural curves of your ears. I agree with that. I also appreciated the little ridge along the top side of each bud that lets you grip it better, so the bud is less likely to slip from your fingers when you’re putting it in or taking it out.
The buds have touch controls that are nicely responsive and are equipped with ear-detection sensors that pause audio when you take a bud out of your ear and resume playback when you put it back in. They’re IPX4 splashproof and seem fine for gym use, though I probably wouldn’t recommend them for running because I wasn’t certain they’d stay in my ears with a lot of jostling.
The buds now have eight microphones (four in each bud) instead of six. (David Carnoy/CNET)
Like a lot of high-end buds, they’re a little beefy and will stick out of your ears a bit. That didn’t really bother me. But once again, I can’t say I was thrilled with Sony’s included eartips, which are the same firm foam tips that were included with the XM5s. I was able to get a fairly secure fit with them, but I didn’t get a truly tight seal, according to the seal test in Sony’s SoundConnect app for iOS and Android. I didn’t find the tips super comfortable, either, so I went with a pair of large-size silicone tips from another set of buds I’d tested (I favor tips from Sennheiser and Bowers & Wilkins, which are wider and more rounded). With the tip change, sound quality and noise-canceling performance improved noticeably, which makes me wonder why Sony doesn’t include more tip options.
To be clear, many people should get a good fit from one of the included tips. But my ears fall into the 10% to 20% of ears that just aren’t a great match for Sony’s tips. And, as you may have read or heard me say too many times, it’s crucial to get a tight seal to get optimal sound quality and noise-canceling performance. That’s especially true of these buds because they deliver some real wow factor if you get a tight seal.
Sony’s tip on the left, my own on the right. Sound quality and noise-canceling performance improved when I swapped in my own tips and got a tight seal. (David Carnoy/CNET)
Upgraded components lead to better performance
Aside from the external makeover, the XM6s are upgraded on the inside with new drivers, a 3X more powerful QN3e chip with improved analog conversion technology, eight microphones — up from six — and an improved bone-conduction sensor that helps with voice-calling performance. The “HD Noise Canceling” QN3e processor is paired with Sony’s Integrated Processor V2, which now supports 32-bit processing compared with 24-bit processing. The same V2 chip is also found in Sony’s XM5 earbuds and its flagship WH-1000XM6 over-ear headphones.
Sony says the new XM6 buds feature 25% “further reduction in noise” than the XM5s, with gains made in the mid-to-high frequency range. I spent a lot of time comparing the XM6s to other leading premium noise-canceling earbuds, including Apple’s excellent AirPods Pro 3, the Bose QuietComfort Ultra Earbuds (2nd Gen) and Bowers and Wilkins’ Pi8. Both the AirPods Pro 3 and QuietComfort Ultra Earbuds have superb noise canceling. Sony says the XM6s have the best noise canceling for earbuds right now, based on international testing standards.
I compared the WF-1000XM6 buds to the Bose QuietComfort Ultra Earbuds (2nd Gen). (David Carnoy/CNET)
Alas, I don’t have access to expensive technical equipment to test noise-canceling performance, so I have to rely on a few less scientific tests, including comparing how well each set of buds muffles the noisy HVAC unit in my kitchen and wearing the buds in the noisy streets of New York and on the subway. In the HVAC test, they were all really close, though I thought the Sony had a very slight edge.
In the streets of New York, it’s really hard to sense that the noise canceling is any better than what you get with those competing models. All three are very close, and your experience could vary with the quality of the seal you get. It’s quite possible that these Sonys are able to muffle a wider range of frequencies with slightly more vigor, but they still can’t muffle higher frequencies as well as lower frequencies. That means you can still hear people’s voices and higher-pitched noises, albeit at significantly reduced volume levels.
I do think Sony has also made some improvements to its transparency mode. Apple’s is still the gold standard, but Sony’s now sounds quite natural at its highest setting. Previously, you had to play around with the level to find the most natural setting (the sound from the outside world was actually augmented at the highest setting).
Sony also now has an auto ambient mode that’s similar to Apple’s Adaptive Audio mode, which automatically adjusts the level of ambient sound filtered in, depending on the level of noise around you. Plus, you can toggle on a voice pass-through mode that filters in voices while suppressing ambient noise.
The buds have a little ridge on their side that helps you get a better grip on them when putting them in your ears and taking them out. (David Carnoy/CNET)
Superior sound
When it comes to sound, both the AirPods Pro 3 and Bose QC Ultras sound excellent, with the Ultras sounding smooth and clean across a variety of music genres. Some people complained that the AirPods Pro 3’s sound was a little too aggressive (not enough warmth) compared with the AirPods Pro 2’s, with more dynamic bass and treble and slightly recessed mids. I preferred the AirPods Pro 3’s sound — to my ears, it has a little more clarity and definition, and I was OK with the more energetic bass. But everybody has their own sound preferences, and you can experience some listening fatigue if you feel the treble has too much sizzle or the bass kicks too hard in the wrong way.
I think the XM6’s sound is better and more special than both the AirPods Pro 3’s and QC Ultra’s sound. Music sounds more accurate and natural, with better bass extension, overall clarity and refinement, along with a wide soundstage where all the instruments seem well-placed. Additionally, I found the XM6s came across as slightly more dynamic and bold-sounding than the Bowers & Wilkins Pi8 buds, which also deliver accurate, natural sound for Bluetooth earbuds.
As I said, all the models mentioned here sound impressive, but the tonal quality varies a bit. While companies often talk about how their buds and headphones deliver audio the way artists intended you to hear it, some do it better than others and are able to live up to audiophile standards — or close to them anyway. Such is the case for the WF-1000XM6 buds.
I tested them with an iPhone 16 Pro and a Google Pixel 9, listening to a variety of music genres on Spotify using the lossless audio setting. They handled everything with aplomb (virtually no distortion) and didn’t cause any listening fatigue. My connection was also rock solid with no Bluetooth hiccups. While I didn’t experience any major connectivity issues with the XM5s, some people apparently did, and Sony says it equipped the XM6s with a new wireless antenna that’s 1.5x larger than the XM5’s to improve the wireless connection, particularly in crowded signal areas (there are certain intersections in New York City that have a lot of wireless interference and can cause Bluetooth hiccups).
Testing the WF-1000XM6 earbuds on the bone-chilling streets of New York. (David Carnoy/CNET)
Top-notch voice-calling performance
They’re also hard to beat for voice-calling performance, which I’d grade an A. Callers said my voice sounded mostly natural and clear, and they didn’t really hear any background noise when I wasn’t speaking (and only a little when I did speak). If you want to hear a test, check out the one I did with fellow CNET editor Josh Goldman in my video review of the XM6 buds.
It’s worth noting that the buds have a side-tone feature, so you can hear your voice in the buds when you’re talking. And like previous 1000X models, these have Sony’s speak-to-chat feature, which lowers the volume of your audio and goes into ambient mode when you start to have a conversation with someone.
Also, Sony has redesigned the venting of the earbuds to increase airflow and reduce internal noises such as “footsteps and chewing sound.” I did notice some improvements there (yes, a lot of people don’t like having their ears feel occluded and hearing their footsteps).
As far as audio codecs go, the buds support AAC, SBC and LDAC, as well as multipoint Bluetooth pairing, which lets you pair the buds with two devices simultaneously. Sony says the buds are “ready for LE Audio,” which means that at some point they should support the LC3 audio codec and Auracast broadcast audio via a firmware update.
Sony has continued to streamline its SoundConnect app for iOS and Android, so it’s a little more user-friendly, though there are still a lot of settings to play around with, including scene-based listening settings and various equalizer settings.
Battery life is rated at up to 8 hours at moderate volume levels, with an extra two charges in the case. That’s a little better than what competing models offer and, again, the case supports wireless charging.
Sony WF-1000XM6 final thoughts
The XM6s are noticeably improved across the board from the XM5s, which I still like. And while these buds are certainly expensive, they’re pretty hard to beat from a performance standpoint across all the key areas, including sound quality, noise canceling and voice-calling, which is why I’ve awarded them an Editors’ Choice.
The one thing I can’t tell you is just how well they’ll fit your ears. While the AirPods Pro 3 don’t offer quite as good sound quality, they’re less expensive and are in some ways a safer pick for Apple users, as their lightweight stem design tends to fit a wide range of ears comfortably. They also have more features overall, including a Hearing Aid mode, Apple’s new Live Translation feature and personalized spatial audio (Sony’s spatial audio features are Android-only).
That said, if you’re able to get a good fit with a comfortable seal, the XM6s are truly impressive earbuds. They may just be the best out there at the moment.
What’s the role of vector databases in the agentic AI world? That’s a question that organizations have been coming to terms with in recent months.
The narrative had real momentum. As large language models scaled to million-token context windows, a credible argument circulated among enterprise architects: purpose-built vector search was a stopgap, not infrastructure. Agentic memory would absorb the retrieval problem. Vector databases were a RAG-era artifact.
The production evidence is running the other way.
Qdrant, the Berlin-based open source vector search company, announced a $50 million Series B on Thursday, two years after a $28 million Series A. The timing is not incidental. The company is also shipping version 1.17 of its platform. Together, they reflect a specific argument: The retrieval problem did not shrink when agents arrived. It scaled up and got harder.
“Humans make a few queries every few minutes,” Andre Zayarni, Qdrant’s CEO and co-founder, told VentureBeat. “Agents make hundreds or even thousands of queries per second, just gathering information to be able to make decisions.”
That shift changes the infrastructure requirements in ways that RAG-era deployments were never designed to handle.
Why agents need a retrieval layer that memory can’t replace
Agents operate on information they were never trained on: proprietary enterprise data, current information, millions of documents that change continuously. Context windows manage session state. They don’t provide high-recall search across that data, maintain retrieval quality as it changes, or sustain the query volumes autonomous decision-making generates.
“The majority of AI memory frameworks out there are using some kind of vector storage,” Zayarni said.
The implication is direct: even the tools positioned as memory alternatives rely on retrieval infrastructure underneath.
Three failure modes surface when that retrieval layer isn’t purpose-built for the load. At document scale, a missed result is not a latency problem — it is a quality-of-decision problem that compounds across every retrieval pass in a single agent turn. Under write load, relevance degrades because newly ingested data sits in unoptimized segments before indexing catches up, making searches over the freshest data slower and less accurate precisely when current information matters most. Across distributed infrastructure, a single slow replica pushes latency across every parallel tool call in an agent turn — a delay a human user absorbs as inconvenience but an autonomous agent cannot.
Qdrant’s 1.17 release addresses each directly. A relevance feedback query improves recall by adjusting similarity scoring on the next retrieval pass using lightweight model-generated signals, without retraining the embedding model. A delayed fan-out feature queries a second replica when the first exceeds a configurable latency threshold. A new cluster-wide telemetry API replaces node-by-node troubleshooting with a single view across the entire cluster.
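To make the delayed fan-out idea concrete, here is a minimal asyncio sketch of request hedging at the application level. It is illustrative only: the article describes delayed fan-out as a server-side Qdrant 1.17 feature, and the search_replica coroutine below is a hypothetical stand-in for a real vector search call, not part of any actual client API.

```python
import asyncio

HEDGE_TIMEOUT = 0.05  # configurable latency threshold (50 ms) before hedging

async def search_replica(replica_url: str, query: list[float]) -> list[dict]:
    # Hypothetical stand-in for a vector search call against one replica.
    # Simulate replica-a being slow so the hedge actually fires in this demo.
    await asyncio.sleep(0.2 if "replica-a" in replica_url else 0.01)
    return [{"replica": replica_url, "score": 0.93}]

async def delayed_fanout(query: list[float]) -> list[dict]:
    primary = asyncio.create_task(search_replica("http://replica-a:6333", query))
    done, _ = await asyncio.wait({primary}, timeout=HEDGE_TIMEOUT)
    if done:
        return primary.result()  # primary answered within the threshold
    # Primary is slow: query a second replica and take whichever answers first.
    backup = asyncio.create_task(search_replica("http://replica-b:6333", query))
    done, pending = await asyncio.wait({primary, backup}, return_when=asyncio.FIRST_COMPLETED)
    for task in pending:
        task.cancel()
    return done.pop().result()

print(asyncio.run(delayed_fanout([0.1, 0.2, 0.3])))
```

The relevance feedback and telemetry features are likewise server-side concerns; the sketch only shows the latency-hedging control flow that keeps one slow replica from stalling an agent turn.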
Why Qdrant doesn’t want to be called a vector database anymore
Nearly every major database now supports vectors as a data type — from hyperscalers to traditional relational systems. That shift has changed the competitive question. The data type is now table stakes. What remains specialized is retrieval quality at production scale.
That distinction is why Zayarni no longer wants Qdrant called a vector database.
“We’re building an information retrieval layer for the AI age,” he said. “Databases are for storing user data. If the quality of search results matters, you need a search engine.”
His advice for teams starting out: use whatever vector support is already in your stack. The teams that migrate to purpose-built retrieval do so when scale forces the issue.
“We see companies come to us every day saying they started with Postgres and thought it was good enough — and it’s not.”
Qdrant’s architecture, written in Rust, gives it memory efficiency and low-level performance control that higher-level languages don’t match at the same cost. The open source foundation compounds that advantage — community feedback and developer adoption are what allow a company at Qdrant’s scale to compete with vendors that have far larger engineering resources.
“Without it, we wouldn’t be where we are right now at all,” Zayarni said.
How two production teams found the limits of general-purpose databases
The companies building production AI systems on Qdrant are making the same argument from different directions: agents need a retrieval layer, and conversational or contextual memory is not a substitute for it.
GlassDollar helps enterprises including Siemens and Mahle evaluate startups. Search is the core product: a user describes a need in natural language and gets back a ranked shortlist from a corpus of millions of companies. The architecture runs query expansion on every request – a single prompt fans out into multiple parallel queries, each retrieving candidates from a different angle, before results are combined and re-ranked. That is an agentic retrieval pattern, not a RAG pattern, and it requires purpose-built search infrastructure to sustain it at volume.
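The fan-out-and-re-rank shape of that pipeline is easy to sketch. The snippet below is a schematic illustration under stated assumptions: expand_query, embed, vector_search, and rerank are hypothetical stand-ins for an LLM call, an embedding model, the retrieval layer, and a re-ranker, not GlassDollar’s or Qdrant’s actual API.

```python
# Schematic sketch of the agentic retrieval pattern: expand, fan out in parallel, merge, re-rank.
from concurrent.futures import ThreadPoolExecutor

def expand_query(prompt: str) -> list[str]:
    # An LLM would normally generate these "angles"; hardcoded here for illustration.
    return [f"{prompt} (by sector)", f"{prompt} (by technology)", f"{prompt} (by stage)"]

def embed(text: str) -> list[float]:
    return [float(len(text)), float(text.count(" "))]  # toy embedding

def vector_search(vector: list[float], limit: int = 20) -> list[dict]:
    # Stand-in for a call to the retrieval layer (e.g. a collection of company documents).
    return [{"id": f"co-{i}", "score": 1.0 / (i + 1)} for i in range(limit)]

def rerank(candidate_sets: list[list[dict]], top_k: int = 10) -> list[dict]:
    # Merge parallel result sets, deduplicate by id, keep the best score per company.
    best: dict[str, dict] = {}
    for results in candidate_sets:
        for hit in results:
            if hit["id"] not in best or hit["score"] > best[hit["id"]]["score"]:
                best[hit["id"]] = hit
    return sorted(best.values(), key=lambda h: h["score"], reverse=True)[:top_k]

def agentic_search(prompt: str) -> list[dict]:
    sub_queries = expand_query(prompt)
    with ThreadPoolExecutor() as pool:  # one retrieval pass per angle, in parallel
        candidate_sets = list(pool.map(lambda q: vector_search(embed(q)), sub_queries))
    return rerank(candidate_sets)

print(agentic_search("industrial computer-vision startups for quality inspection")[:3])
```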
The company migrated from Elasticsearch as it scaled toward 10 million indexed documents. After moving to Qdrant it cut infrastructure costs by roughly 40%, dropped a keyword-based compensation layer it had maintained to offset Elasticsearch’s relevance gaps, and saw a 3x increase in user engagement.
“We measure success by recall,” Kamen Kanev, GlassDollar’s head of product, told VentureBeat. “If the best companies aren’t in the results, nothing else matters. The user loses trust.”
Agentic memory and extended context windows aren’t enough to absorb the workload that GlassDollar needs, either.
“That’s an infrastructure problem, not a conversation state management task,” Kanev said. “It’s not something you solve by extending a context window.”
Another Qdrant user is &AI, which is building infrastructure for patent litigation. Its AI agent, Andy, runs semantic search across hundreds of millions of documents spanning decades and multiple jurisdictions. Patent attorneys will not act on AI-generated legal text, which means every result the agent surfaces has to be grounded in a real document.
“Our whole architecture is designed to minimize hallucination risk by making retrieval the core primitive, not generation,” Herbie Turner, &AI’s founder and CTO, told VentureBeat.
For &AI, the agent layer and the retrieval layer are distinct by design.
“Andy, our patent agent, is built on top of Qdrant,” Turner said. “The agent is the interface. The vector database is the ground truth.”
Three signals it’s time to move off your current setup
The practical starting point: use whatever vector capability is already in your stack. The evaluation question isn’t whether to add vector search — it’s when your current setup stops being adequate. Three signals mark that point: retrieval quality is directly tied to business outcomes; query patterns involve expansion, multi-stage re-ranking, or parallel tool calls; or data volume crosses into the tens of millions of documents.
At that point the evaluation shifts to operational questions: how much visibility does your current setup give you into what’s happening across a distributed cluster, and how much performance headroom does it have when agent query volumes increase.
“There’s a lot of noise right now about what replaces the retrieval layer,” Kanev said. “But for anyone building a product where retrieval quality is the product, where missing a result has real business consequences, you need dedicated search infrastructure.”
The Amsterdam-headquartered startup has been out of stealth for just eight months, but it already has 350 staff, production deployments across four continents, and a valuation reportedly approaching $1.7 billion.
There is a problem that every major enterprise AI deployment eventually runs into: the gap between a convincing demo and a working system in production. Models hallucinate. Integrations break. Compliance requirements differ by country.
Local languages do not behave the way US-centric training data assumes. The organisations best placed to close this gap, the argument goes, are not those with the best models but those with the most people on the ground.
That thesis is the foundation of Wonderful, the enterprise AI agent platform founded in early 2025 by Bar Winkler and Roey Lalazar.
The company has raised $150 million in a Series B round led by Insight Partners, with participation from existing backers Index Ventures, IVP, Bessemer Venture Partners, and Vine Ventures.
The raise brings Wonderful’s total disclosed funding to $286 million, a striking figure for a company that only emerged from stealth in mid-2025 with a $34 million seed round, then raised a $100 million Series A in November of the same year.
Wonderful is headquartered in Amsterdam, with Israeli founders and a model built on local deployment teams embedded inside client organisations. The company says it now operates in more than 30 countries across Europe, the Middle East, Asia-Pacific, and Latin America, serving enterprises in telecoms, financial services, manufacturing, and healthcare.
It will use the new capital to grow headcount from 350 to approximately 900 by year-end.
The company’s core product is an enterprise AI agent platform, model-agnostic by design, continuously benchmarking and selecting AI models for each use case.
The agents handle customer-facing workflows across voice, chat, and email, as well as internal workflows such as employee onboarding, compliance, and IT support.
What distinguishes Wonderful’s model is the deployment layer: rather than selling software and leaving clients to integrate it themselves, the company embeds local teams inside enterprise environments to manage rollout, integration, and post-deployment optimisation.
“In 2026, enterprises will be deciding who to partner with to operationalize AI across their organizations, and those decisions will hinge on who can deliver deep integrations across complex infrastructures and tailor solutions to each organization’s unique environment,” said Bar Winkler, CEO and co-founder of Wonderful.
“We built our platform and operating model around that reality, and the demand we’re seeing globally reflects it.”
The company says more than 70% of enterprises that begin with a single use case expand into additional workflows within three months, a retention dynamic that Bar Winkler attributes to Wonderful’s practice of building a shared architecture across an enterprise’s core systems from the outset. Once that foundation is in place, activating new use cases becomes progressively faster.
Wonderful also claims measurable operational results from production deployments: reductions in handling times of up to 60%, containment rates above 80%, and multi-million-dollar annual efficiency gains for individual clients. These figures are not independently audited.
“Over 70% of enterprises that begin with a single use case expand into additional workflows within the first three months,” Winkler added. “That expansion is possible because we built a shared foundation across core systems from day one.”
“Wonderful is establishing trust and deep partnerships inside complex enterprises at a critical moment for the market,” said Jeff Horing, managing director at Insight Partners. “We believe that the team’s combination of platform strength and execution position Wonderful as a strong enterprise partner in today’s ecosystem.”
Lalazar, the company’s CTO, framed the ambition in broader terms. “We’re deploying agents across every business function, while pioneering the next generation of application layers that will transform how organisations operate,” he said.
The enterprise AI agent market is crowded, and growing more so. Salesforce’s Agentforce, ServiceNow’s AI platform, and a wave of better-funded standalone startups are all pursuing the same budget line.
Wonderful’s differentiation rests on a bet that local deployment teams and multilingual agents will be decisive in markets where US-centric platforms struggle, that the structural complexity of global enterprise is, in effect, its moat.
Eight months out of stealth, the bet appears to be attracting capital. Whether it holds at scale is the question this round is funding.
Mayor Katie Wilson speaks at the Downtown Seattle Association’s annual event on Wednesday. (GeekWire Photo / Lisa Stiffler)
Seattle is witnessing a curious role reversal in its economic narrative. While the city finally gains ground on perennial challenges like crime and transportation, its traditional growth engine — the tech sector and downtown employment — is beginning to sputter.
The city has for years been a tech, retail and arts hub, but its total downtown jobs peaked in 2019 with more than 340,000 workers. Since the pandemic, that number has been creeping downwards, hitting approximately 317,000 jobs — which is roughly on par with 2018 numbers, according to a new report from the Downtown Seattle Association (DSA).
“We’re going in the wrong direction,” said Jon Scholes, DSA president and CEO, at the organization’s annual State of Downtown event on Wednesday.
“Over this period where we’ve seen a decrease in jobs, we’ve seen a record increase in taxes that employers in the city of Seattle are paying — that employers aren’t paying in Bellevue and other cities in our region,” he added. “We have become an outlier when it comes to the cost of doing business in our city.”
Those costs include the city’s JumpStart tax, which targets the payrolls of large employers with high‑earning employees, as well as last year’s restructuring of Seattle’s tax on gross revenue that shifted the burden from smaller businesses to large ones. Also on the horizon is the new state income tax on wealthier individuals that lawmakers just passed.
Taxes are taking a lot of the blame, but other major forces are at work as well. Across the country, companies are cutting headcount as AI tools replace some roles, economic uncertainty lingers, and leaders move to trim what they see as pandemic-era corporate “bloat.”
That said, key elected leaders on Wednesday acknowledged concerns about rising taxes and government budgets.
“I very much appreciate that it is not ideal for our tax environment for businesses to be wildly out of step with neighboring jurisdictions,” Mayor Katie Wilson told the packed hall at the Seattle Convention Center.
Wilson and King County Executive Girmay Zahilay both pledged to scrutinize their governments’ budgets. Wilson said she expects to make “significant” cuts and Zahilay plans to build the county’s spending plans “from the ground up” rather than following the model of rolling past budgets forward.
Jon Scholes, Downtown Seattle Association president and CEO, speaking at the Seattle Convention Center. (GeekWire Photo / Lisa Stiffler)
The fiscal caution comes even as the city’s social metrics trend upward. The 2025 DSA report highlighted several bright spots:
Crime: Incidents and violent crimes have decreased downtown since a 2021 peak.
Residential Growth: The number of downtown residents has reached nearly 110,000 — an 80% increase over the past 25 years.
Visitors: More than 15.3 million unique visitors came to downtown — an increase from 2019, but flat compared to the year before. People are also visiting more frequently.
Transit: Light rail boardings at downtown stations jumped 23% over 2024.
And yet that residential and visitor energy hasn’t yet translated into a full-scale recovery of the Monday-through-Friday workforce. Despite return-to-office mandates, daily worker foot traffic averages just 145,000 — still well below the 226,000 workers on average who filled downtown streets each day in 2019, according to DSA.
Amazon has helped with the rebound, but multiple rounds of layoffs have dampened the effect.
Once Seattle’s largest employer, Amazon recently lost that crown to the University of Washington, the Seattle Times reported. The company had a peak of about 60,000 workers in the city in 2020, but that headcount has slumped to less than 50,000. That figure could dip further as Amazon vacates a seven-story, 251,000-square-foot leased space downtown this spring.
A display at the Downtown Seattle Association’s annual event. (GeekWire Photo / Lisa Stiffler)
Beyond the tech giants, the broader commercial landscape is struggling with a growing volume of empty office spaces. Downtown vacancies reached a new high of 34.7% in the last quarter of 2025, according to CBRE. Before the pandemic, that number was hovering around 8%.
Despite these headwinds, the contractions aren’t universal. Some firms are doubling down on the city’s core: Impinj recently renewed and increased its downtown office space while DAT Solutions and Docker both took sublease space along the city’s waterfront.
In an interview after the event, Scholes emphasized that the health of the entire economic ecosystem depends on these major anchors.
“We need big employers in the city,” he said. “I was with some small businesses earlier this week, and they said, ‘You know, our best customers are big employers. They are our lifeblood … If you’re a restaurant, if you’re a barbershop downtown, you’re relying on people in those upper floors.’”
Many of us remember the halcyon days of being a kid in the ‘90s, spending a weekend afternoon with remote control in hand and a seemingly endless well of stuff to watch on TV. Now you can relive the experience thanks to the appropriately named Channel Surfer web app. It’s essentially a YouTube discovery tool that surfaces interesting videos, but presented in a retro homage to the cable channel screen.
Channel Surfer is the work of developer Steven Irby. He has 40 channels on the app right now, mostly grouping content by theme. There are channels for typical cable fare like news and sports, but also music, movies and a number of more tailored tech subjects like AI, gaming, gadgets and space.
“I built Channel Surfer because I’m tired of the algorithms and indecision fatigue,” he told TechCrunch, which is where we discovered the app. “I miss channel surfing and not having to decide what to watch. I want to just sit and tune into what’s on and not think about what to watch next.”
It seems Irby isn’t alone, because he posted on X that the number of views he’s getting for Channel Surfer already broke 10,000 on its first day.
Ali Farhadi has been CEO of the Allen Institute for AI since July 2023. (GeekWire File Photo / Todd Bishop)
Ali Farhadi is stepping down as the CEO of the Allen Institute for AI (Ai2), after a two-and-a-half-year tenure that brought growing recognition to the Seattle-based nonprofit research institute as a key player in the world of open-source artificial intelligence.
He will be replaced on an interim basis by Peter Clark, a founding member of Ai2, as the board begins a search for a permanent successor. Clark served in the same interim role after the departure of founding CEO Oren Etzioni in 2022. Farhadi’s last day is Friday.
The announcement was made late Thursday morning to the roughly 200-person Ai2 team, said board chair Bill Hilf, in an interview with GeekWire shortly after the internal meeting.
Hilf said he and Farhadi had been discussing the transition for about six months. Farhadi wants to pursue his research ambitions at the frontier of large-scale AI, where for-profit companies are spending billions of dollars a year on computing horsepower, Hilf said.
Asked why Farhadi couldn’t pursue that work at Ai2, Hilf cited the financial realities of competing against tech giants at the largest scale of AI model development as a nonprofit. He said the board has to weigh whether philanthropic dollars are best spent trying to keep pace.
“The cost to do extreme-scale open model research is extraordinary,” Hilf said, adding that it’s “really hard to do extreme-scale model work inside of a nonprofit.”
Hilf said Ai2 will continue its work on areas including OLMo, its open-source AI models, while also citing its focus on applying AI to real-world problems in areas such as climate, conservation, and health.
A computer vision specialist, Farhadi had deep roots at Ai2. He joined the institute in 2015 and co-founded the Ai2 spinout Xnor.ai, which Apple acquired in 2020 for an estimated $200 million in one of the institute’s biggest commercial successes.
He led machine learning efforts at Apple before returning to lead Ai2 in July 2023.
Farhadi has not said where he might go next. He is expected to remain a professor at the University of Washington’s Allen School of Computer Science and Engineering.
“Leading Ai2 has been a true privilege,” Farhadi said in a statement, citing the Ai2 team’s release of more than 300 models and artifacts with more than 33 million downloads.
He pointed to advances in health, science, and environmental research, and cited investments from the NSF and Nvidia and initiatives such as the Cancer AI Alliance as evidence of the institute’s impact.
“Ai2 is entering its next phase from a position of real strength, with growing global adoption of our work and an extraordinary team driving innovation,” Farhadi said. “I’m excited to see them continue pushing the boundaries of what AI can achieve for humanity.”
Farhadi will leave the Ai2 board. Chief Operating Officer Sophie Lebrecht is also leaving. Lebrecht worked alongside Farhadi at Xnor.ai and at Apple before joining him at Ai2.
Hilf noted that all programs planned for 2026 are fully funded and that Farhadi wanted to ensure that stability before stepping down.
Existing commitments are not affected, Hilf said, including a $152 million, five-year initiative backed by the National Science Foundation and Nvidia to build open AI models for scientific research, and Ai2’s role in the Cancer AI Alliance led by Seattle’s Fred Hutch Cancer Center.
Ai2 was founded in 2014 by the late Microsoft co-founder Paul Allen. It receives major funding from the Foundation for Science and Technology, an Allen entity. Jody Allen is on the Ai2 board.
Clark, the interim CEO, said in a statement that he is committed to a smooth transition.
“Our mission remains unchanged: advancing AI research and engineering for the common good, and turning our open breakthroughs into lasting, real-world impact,” he said.
Hilf said the board is looking for a new CEO who combines scientific depth with nonprofit management experience and a passion for open science, acknowledging that the combination is rare and that building an open community is harder than people think.
Each spring, the global hi-fi industry makes its pilgrimage to the Chicago suburbs for AXPONA, and AXPONA 2026 is shaping up to be one of the busiest yet. The Windy City show has become the premier North American stage for new hi-fi product launches, drawing hundreds of exhibitors from around the world and filling the halls with everything from statement loudspeakers and cutting edge amplifiers to the latest digital streaming gear and personal audio.
This year’s event (April 10-12, 2026 at the Renaissance Schaumburg Hotel & Convention Center near Chicago, IL) will once again offer a maze of listening rooms, product debuts, and late night industry debates about whether vinyl really does sound better than streaming. World premieres, U.S. debuts, and technology showcases are expected across the Exhibit Hall, Ear Gear section, Experience Car Audio showcase, and dozens of hotel listening rooms.
The eCoustics team will be there in force. Six of us are heading to Chicago to cover the show from every angle with daily reporting, listening impressions, interviews, podcasts, and video coverage. If something new, strange, brilliant, or outrageously expensive appears at AXPONA 2026, we’ll be in the room listening to it. And probably arguing about it five minutes later.
New Hi-Fi Product Launches Expected at AXPONA 2026
While AXPONA is always a great place to reconnect with the industry and hear familiar systems pushed to their limits, the real draw is the steady stream of new hi-fi products making their debut. Manufacturers routinely use the Chicago show as the North American launchpad for new loudspeakers, amplifiers, DACs, and source components, knowing that thousands of enthusiasts, dealers, and journalists will be walking the halls looking for the next big thing.
Audio Note (UK)
Audio Note (UK) will present the U.S. premiere of the latest version of its Oto integrated amplifier at AXPONA 2026. The Oto remains one of the company’s most recognizable tube integrated designs and continues Audio Note’s focus on minimalist circuits and traditional analog amplification. The new Oto SE 35 is available as of February 2026 starting at $5,950.
Location: Room 1403
AXISS Audio Distribution
AXISS Audio Distribution will host two notable debuts. The first is the U.S. introduction of a new optical phono system from Reed, which includes a customized optical cartridge paired with a dedicated phono EQ stage and an external power supply designed specifically for optical signal conversion. AXISS will also present the North American debut of the Soulution 787 turntable, a reference level linear tracking design that uses a moving platter combined with a pivoted tonearm architecture. Pricing for both products has not been announced.
Location: Room 1128 and Aster Suite (16th Floor)
Decibel+
Decibel Plus will showcase several new components, including the Neoson Genèse integrated amplifier ($7,885), the SoulNote A-1 ver2 integrated amplifier, Atlantis Lab AT-16 and AT-21 loudspeakers, and the AudioByte SuperVOX DAC and SuperHUB streaming hub. The components are aimed at building complete high-performance systems available through the retailer’s online store. Pricing for the individual products varies by component and has not been fully confirmed ahead of the show.
Location: Room 1109
Dinaburg Technology
Dinaburg Technology will debut its C2S (Concentric Coplanar Stabilizer) driver architecture at AXPONA 2026. Demonstrations will include a 60 x 72 mm headphone transducer and a 6.5-inch loudspeaker driver built around the same architecture, which is designed to repurpose energy typically lost in traditional driver designs. Pricing and commercial product timelines have not yet been announced.
Location: Ear Gear Booth 9425
Gato Audio & Copenhagen Loudspeaker Company
Gato Audio and Copenhagen Loudspeaker Company will introduce the CLC65 loudspeaker, described as the first commercially available three way loudspeaker built exclusively with Purifi drivers. The system is intended to demonstrate advanced driver integration and system coherence using Purifi’s driver technology. Pricing has not yet been announced.
Location: Room 731
Goldmund
Goldmund will present the Telos 8800 monoblock amplifiers, rated at 1,400 watts of Class AB power and priced at $790,000 per pair. The system will also include the Mimesis Reference preamplifier, priced at $159,000, both intended to drive reference level loudspeaker systems.
Location: Utopia Lounge
Jorma
Jorma will introduce the Paragon flagship cable series, designed as the company’s highest level signal transmission product line. Pricing varies depending on cable type and configuration.
Location: Utopia Lounge
Luxman
Luxman will showcase the L100-C integrated amplifier, a pure Class-A design priced at $11,995, alongside the D100-C SACD/CD player and DAC, priced at $18,995.
Location: Prosperity Room
Magico
Magico will host the world premiere of the new S7 (2026) loudspeaker, a five driver, three way floorstanding system that weighs approximately 384 pounds per speaker. The S7 represents the latest model in Magico’s S-Series lineup, priced at $200,000 per pair.
Location: Club Room 15
Marten
Marten Coltrane Quintet Loudspeaker in Dark Grey
Marten will demonstrate the Coltrane Quintet Extreme loudspeakers, priced at $510,000 per pair. The speakers incorporate beryllium and diamond drivers and are derived from the company’s flagship Coltrane series.
Location: Utopia Lounge
Mojo Audio
Mojo Audio will demonstrate the Mystique Z Quantum DAC (from $12,999) alongside its “One Box Wonder” music server (from $6,999), a single chassis server designed to function as a central digital source component in high end audio systems.
Location: Room 360
Nobius Audio
Nobius Audio N1-9
Nobius Audio will introduce several loudspeaker products including the N1-9B bass module, designed to complement the N1-9 bookshelf speaker, along with the S2-5 and S1-2 bookshelf loudspeakers. Pricing for the new models has not yet been confirmed.
Location: EX9315
Orchestalls
Orchestalls will showcase compact versions of its existing loudspeaker models, offering smaller scale versions of previously introduced designs intended to make the brand’s speaker lineup more accessible. Pricing has not been announced.
Location: Room 728
Paramax Corporation
Paramax Corporation will debut a new powered monitor speaker, representing the company’s entry into the powered speaker category. Pricing and final specifications have not yet been released.
Location: EX9326
Rockna (via Mimic Audio)
Mimic Audio will feature the Rockna Sinum digital source component, highlighting the company’s proprietary DAC architecture and digital signal processing design. Pricing has not yet been announced.
Location: Ear Gear Booth 8119
Quintessence Audio
Dohmann Helix One MKIII turntable (titanium)
Quintessence Audio will feature several notable systems including the Dohmann Helix One MKIII turntable, the dCS Varese digital playback system, and the Innuos Nazare music server. Pricing varies significantly depending on system configuration and was not fully confirmed prior to the show.
Location: Connection 1, Perfection 1, Knowledge 1
Silent Angel
Silent Angel will present the MX flagship audiophile network streamer and a new version of the N8 network switch, both designed for use in dedicated streaming audio systems. Pricing has not yet been confirmed.
Location: EX9112
SoundCrewe Audio
SoundCrewe SC-450 Loudspeakers
SoundCrewe Audio will display several models from its SC-750 loudspeaker lineup, including the Model 350 desktop speaker, Model 450 and Model 550 bookshelf speakers, and the Model 750 flagship loudspeaker using Accuton CELL drivers. Pricing varies depending on configuration and finish.
Location: Room 546
SVS
SVS 3000 Micro R|Evolution at CES 2026 atop stack of SVS 3000 Series subwoofers
SVS will debut the 3000 Micro R|Evolution compact subwoofer, which uses dual opposing 9-inch drivers powered by a 1,200-watt amplifier with over 4,000 watts peak output in an 11-inch enclosure. The smallest subwoofer in the SVS line-up replaces the original 3000 Micro, and completes the new 3000 R|Evolution series we first saw at CES 2026. Pricing has not yet been announced.
Location: Rooms 1442 and 1444
Swan Song Audio
Swan Song Audio will introduce several new products including 3D printed loudspeakers, a 3D printed turntable, battery powered phono and DAC preamplifiers, 11.5-watt battery powered Class A monoblock amplifiers, Argentium silver cables, and acoustic treatment products. Pricing varies by product and has not been finalized.
Location: Room 1644
T+A
T+A will demonstrate its new SDX 3100 HV streamer/DAC/preamplifier, along with the A 3100 HV stereo amplifier and PS 3100 HV external power supply, which together form part of the company’s HV electronics lineup. Pricing has not yet been confirmed.
Location: Schaumburg G
Vandersteen
Vandersteen will debut a new line stage preamplifier featuring purist circuitry and defeatable tone, stereo/mono, and matrix functions. Pricing has not yet been announced.
Location: Room 1434
VPE Electrodynamics
VPE Electrodynamics will introduce the Elevon active DSP loudspeaker, a three way powered speaker system with DSP based crossover and room adjustment controls, along with the Aileron bookshelf speaker that can function as a stereo or center channel design. Pricing has not yet been confirmed.
Location: Room 502
Woo Audio
Woo Audio will debut the WA300B-BAL tube power amplifier, designed and assembled in New York. The amplifier will be demonstrated driving Ø Audio and Aretai loudspeakers. Pricing has not yet been announced.
Location: Room 1125
The Bottom Line
AXPONA 2026 is already shaping up to deliver a wide range of new loudspeakers, amplifiers, turntables, DACs, and streaming components from across the high end audio world. The list above represents just the products we know about today, and we expect additional announcements and surprise debuts as April approaches. We will continue updating this preview as more manufacturers confirm launches and will also highlight the new products we expect to see and hear once the show begins. One thing is certain: when the industry gathers in the Chicago suburbs this spring, the Windy City is going to get very loud.
The Parallels Desktop virtualization tool is confirmed to work on Apple’s new MacBook Neo, but there are enough caveats to suggest that you’d be better off buying a MacBook Air or MacBook Pro instead.
Apple’s MacBook Neo works just fine with Parallels Desktop
Parallels Desktop has long been the go-to app for people who need to run virtualized machines on a Mac. It’s a great way for people to use Windows apps without having to use a separate machine or switch entirely. With the release of the MacBook Neo, some had wondered whether its A18 Pro chip would be capable of running Parallels Desktop. The A18 Pro first debuted in the iPhone 16 Pro, and this is the first time a Mac has used an iPhone A-series chip. M-series chips are normally found in Apple’s computers.
Training standard AI models against a diverse pool of opponents — rather than building complex hardcoded coordination rules — is enough to produce cooperative multi-agent systems that adapt to each other on the fly. That’s the finding from Google’s Paradigms of Intelligence team, which argues the approach offers a scalable and computationally efficient blueprint for enterprise multi-agent deployments without requiring specialized scaffolding.
The technique works by training an LLM agent via decentralized reinforcement learning against a mixed pool of opponents — some actively learning, some static and rule-based. Instead of hardcoded rules, the agent uses in-context learning to read each interaction and adapt its behavior in real time.
Why multi-agent systems keep fighting each other
The AI landscape is rapidly shifting away from isolated systems toward a fleet of agents that must negotiate, collaborate, and operate in shared spaces simultaneously. In multi-agent systems, the success of a task depends on the interactions and behaviors of multiple entities as opposed to a single agent.
The central friction in these multi-agent systems is that their interactions frequently involve competing goals. Because these autonomous agents are designed to maximize their own specific metrics, ensuring they don’t actively undermine one another in these mixed-motive scenarios is incredibly difficult.
Multi-agent reinforcement learning (MARL) tries to address this problem by training multiple AI agents operating, interacting, and learning in the same shared environment at the same time. However, in real-world enterprise architectures, a single, centralized system rarely has visibility over or controls every moving part. Developers must rely on decentralized MARL, where individual agents must figure out how to interact with others while only having access to their own limited, local data and observations.
Multi-agent reinforcement learning
One of the main problems with decentralized MARL is that the agents frequently get stuck in suboptimal states as they try to maximize their own specific rewards. The researchers refer to it as “mutual defection,” based on the Prisoner’s Dilemma puzzle used in game theory. For example, think of two automated pricing algorithms locked in a destructive race to the bottom. Because each agent optimizes strictly for its own selfish reward, they arrive at a stalemate where the broader enterprise loses.
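A toy payoff table makes the trap visible. In the sketch below (illustrative numbers, not taken from the paper), each agent myopically best-responds to the other’s last move, and both slide into mutual defection even though mutual cooperation pays more.

```python
# Toy illustration of the "mutual defection" trap: two pricing agents that each
# best-respond to the other's last move end up at the worst joint outcome.
# Payoffs are (row_agent, col_agent); "defect" = undercut price, "cooperate" = hold price.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def best_response(opponent_move: str) -> str:
    # Maximize my own payoff, assuming the opponent repeats their last move.
    return max(("cooperate", "defect"), key=lambda me: PAYOFFS[(me, opponent_move)][0])

a, b = "cooperate", "cooperate"
for step in range(4):
    a, b = best_response(b), best_response(a)
    print(step, a, b, PAYOFFS[(a, b)])
# Both agents converge on ("defect", "defect") with payoff (1, 1), even though
# ("cooperate", "cooperate") would have paid (3, 3) to each.
```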
Another problem is that traditional training frameworks are designed for stationary environments, meaning the rules of the game and the behavior of the environment are relatively fixed. In a multi-agent system, from the perspective of any single agent, the environment is fundamentally unpredictable and constantly shifting because the other agents are simultaneously learning and adapting their own policies.
While enterprise developers currently rely on frameworks that use rigid state machines, these methods often hit a scalability wall in complex deployments.
“The primary limitation of hardcoded orchestration is its lack of flexibility,” Alexander Meulemans, co-author of the paper and Senior Research Scientist on Google’s Paradigms of Intelligence team, told VentureBeat. “While rigid state machines function adequately in narrow domains, they can fail to scale as the scope and complexity of agent deployments broaden. Our in-context approach complements these existing frameworks by fostering adaptive social behaviors that are deeply embedded during the post-training phase.”
What this means for developers using LangGraph, CrewAI, or AutoGen
Frameworks like LangGraph require developers to explicitly define agents, state transitions, and routing logic as a graph. LangChain describes this approach as equivalent to a state machine, where agent nodes and their connections represent states and transition matrices. Google’s approach inverts that model: rather than hardcoding how agents should coordinate, it produces cooperative behavior through training, leaving the agents to infer coordination rules from context.
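For contrast, here is what explicitly defined transitions look like in their simplest possible form: a hand-rolled router in plain Python (deliberately not LangGraph’s API), where every hand-off between agents is written down in advance by the developer. In the trained approach described here, that edge table would not exist; the agents would infer when to hand off from the interaction context itself.

```python
# Minimal hardcoded state-machine router: node functions plus a developer-defined edge table.
def research_agent(state): return {**state, "notes": "findings..."}
def writer_agent(state):   return {**state, "draft": "summary..."}

NODES = {"research": research_agent, "writer": writer_agent}
EDGES = {"research": "writer", "writer": None}  # hardcoded transition table

def run(state, entry="research"):
    node = entry
    while node is not None:
        state = NODES[node](state)   # run the agent at this node
        node = EDGES[node]           # follow the developer-defined edge
    return state

print(run({"task": "summarize Q3 results"}))
```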
The researchers prove that developers can achieve advanced, cooperative multi-agent systems using the exact same standard sequence modeling and reinforcement learning techniques that already power today’s foundation models.
The team validated the concept using a new method called Predictive Policy Improvement (PPI), though Meulemans notes the underlying principle is model-agnostic.
“Rather than training a small set of agents with fixed roles, teams should implement a ‘mixed pool’ training routine,” Meulemans said. “Developers can reproduce these dynamics using standard, out-of-the-box reinforcement learning algorithms (such as GRPO).”
By exposing agents to diverse co-players (varying in system prompts, fine-tuned parameters, or underlying policies), teams create a robust learning environment. This produces strategies that are resilient when interacting with new partners and ensures that multi-agent learning leads toward stable, long-term cooperative behaviors.
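As a rough illustration of what that mixed-pool routine looks like in code, the sketch below trains a single tabular learner on the iterated Prisoner’s Dilemma against opponents sampled from a diverse pool of rule-based strategies, conditioning its choice only on the opponent’s recent moves. It is a stand-in for the shape of the loop, not an implementation of PPI or GRPO, and all names and payoffs are illustrative.

```python
import random
from collections import defaultdict

PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}  # learner's payoff

# A mixed pool of static, rule-based opponents: each maps the learner's previous move to a reply.
OPPONENT_POOL = {
    "tit_for_tat":   lambda prev: prev or "C",
    "always_defect": lambda prev: "D",
    "always_coop":   lambda prev: "C",
    "random":        lambda prev: random.choice("CD"),
}

q = defaultdict(float)        # value of (context, action); context = opponent's recent moves
epsilon, alpha = 0.1, 0.2     # exploration rate and learning rate

def act(context):
    if random.random() < epsilon:
        return random.choice("CD")
    return max("CD", key=lambda a: q[(context, a)])

for episode in range(5000):
    name, opponent = random.choice(list(OPPONENT_POOL.items()))  # sample a co-player each episode
    my_prev, opp_recent = None, ()
    for _ in range(10):                        # one short iterated Prisoner's Dilemma episode
        context = opp_recent                   # condition only on observed opponent behavior
        my_move = act(context)
        opp_move = opponent(my_prev)
        reward = PAYOFF[(my_move, opp_move)]
        # Simple bandit-style value update (no bootstrapping), enough to show the loop's shape.
        q[(context, my_move)] += alpha * (reward - q[(context, my_move)])
        my_prev = my_move
        opp_recent = (opp_recent + (opp_move,))[-2:]   # keep the last two opponent moves

# Inspect the greedy action the learner has settled on in two different contexts.
for ctx in [("D", "D"), ("C", "C")]:
    print(ctx, max("CD", key=lambda a: q[(ctx, a)]))
```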
How the researchers proved it works
To build agents that can successfully deduce a co-player’s strategy, the researchers created a decentralized training setup where the AI is pitted against a highly diverse, mixed pool of opponents composed of actively learning models and static, rule-based programs. This forced diversity requires the agent to dynamically figure out who it is interacting with and adapt its behavior on the fly, entirely from the context of the interaction.
Diverse multi-agent training
For enterprise developers, the phrase “in-context learning” often triggers concerns about context window bloat, API costs, and latency, especially when windows are already packed with retrieval-augmented generation (RAG) data and system prompts. However, Meulemans clarifies that this technique focuses on efficiency rather than token count. “Our method focuses on optimizing how agents utilize their available context during post-training, rather than strictly demanding larger context windows,” he said. By training agents to parse their interaction history to infer strategies, they use their allocated context more adaptively without requiring longer context windows than existing applications.
Using the Iterated Prisoner’s Dilemma (IPD) as a benchmark, the researchers achieved robust, stable cooperation without any of the traditional crutches. There are no artificial separations between meta and inner learners, and no need to hardcode assumptions about how the opponent’s algorithm functions. Because the agent is adapting in real-time while also updating its core foundation model weights over time across many interactions, it effectively occupies both roles simultaneously. In fact, the agents performed better when given no information about their adversaries and were forced to adapt to their behavior through trial and error.
Multi-agent training works best when given a pool of diverse agents and allowed to explore the rules by themselves (source: arXiv)
The developer’s role shifts from rule writer to architect
The researchers say that their work bridges the gap between multi-agent reinforcement learning and the training paradigms of modern foundation models. “Since foundation models naturally exhibit in-context learning and are trained on diverse tasks and behaviors, our findings suggest a scalable and computationally efficient path for the emergence of cooperative social behaviors using standard decentralized learning techniques,” they write.
As relying on in-context behavioral adaptation becomes the standard over hardcoding strict rules, the human element of AI engineering will fundamentally shift. “The AI application developer’s role may evolve from designing and managing individual interaction rules to designing and providing high-level architectural oversight for training environments,” Meulemans said. This transition elevates developers from writing narrow rulebooks to taking on a strategic role, defining the broad parameters that ensure agents learn to be helpful, safe, and collaborative in any situation.
‘It’s been 20 years, it’s surprising to me how little has changed’: Sonos CEO and former Pandora exec Tom Conrad reveals what he thinks is ‘holding us back’ from more music streaming innovation
But given the fact that Conrad’s history includes 10 years at Pandora in the early days of music streaming — he was Chief Technology Officer when he left in 2014 — and that Sonos is so deeply connected to the music-streaming services, I wanted to ask what he thought about these services today, both in terms of working with them now on the Sonos app, and personally as a streaming pioneer.
“One of the things I’m really excited about in terms of our software roadmap is working more closely with our music service partners,” he begins. “All I really care about with respect to listening to music on Sonos is getting the customer as quickly and seamlessly as possible to their outcome.
“If that means AirPlay or Bluetooth or Spotify Connect or experiences inside of Spotify versus experiences inside of our app… I don’t care. I just want it to work every time, and have it be completely seamless. I feel like we have a better relationship with Apple, Amazon, Spotify than we have in years, and I’m really excited about the work we’re driving together.”
I expect those who use the Sonos app instead of AirPlay or other direct streaming tech will be pleased with the idea of being able to get into music more quickly and easily, but it’s the more personal insights I’m most interested in, and Conrad shared some of those too.
“You know, the iPod invented the core conventions of modern digital music, and then in 2004, Pandora and Last FM, I suppose, kind of invented the modern conventions around personalized streaming audio,” he told me. “And it’s been 20 years, and it’s surprising to me how little has changed in that experience.
“We’ve gone from a world where you had access to just the CDs you bought to a world where you have access to hundreds of millions of songs in your pocket, and yet the user interface of it all is kind of just some hierarchical browsing, and then a fullscreen audio player with skip buttons and things.
“I guess quietly, at night, I sort of imagine a future where there’s more innovation and [questioning] what does it mean to navigate the whole entire world of music with something that wasn’t designed for 1,000 songs in your pocket.”
Motivational speaker
I asked if Conrad thinks the physically small size of phone screens is a restrictive element that holds us back from developing new ways of interacting with music.
“You know what I think is mostly holding us back in that regard? Apple is motivated by selling hardware, and Spotify is motivated by reducing licensing costs, and no one is motivated by: let’s make a great and innovative music discovery experience for the consumer.”
I point out that Qobuz and Tidal are more focused on music discovery but don’t have the bottomless resources that Spotify and Apple do, a point Conrad concedes. Overall, though, I agree with him.
I always say that the vinyl revival and the popularity of the best turntables here in the 2020s is in no small part because people want music to feel special, with the thrill of discovery. Physical media gives people the excitement of successfully finding something they didn’t have before when they’re looking through a record store’s boxes — the power of a surprise.
Obviously, it would be foolish to replicate the scarcity element of physical media in a streaming app, but Conrad’s suggestion of new ways to navigate and discover music seems like a way to scratch that same itch of making music apps exciting by providing more ways for you to find something you’ve never heard before, and to then explore that artist or genre.
The discovery features of music streaming services feel so narrow — they either replace radio or throw a pipeline of music at you without context, and with only the foggiest sense of why you’d be interested in it.
I’d like it to feel interactive, like by opening the correct door, searching in the correct box, or asking the correct source, I can find something new to experience. I hope we’ll see more innovative interfaces to create the experience of being a smart record hunter in the future.
For two decades, the RSAC Innovation Sandbox contest has been the industry’s most reliable crystal ball. With over $50.1 billion in investments and more than 100 acquisitions across its alumni, the contest has an extraordinary track record of spotting cybersecurity’s future leaders before the rest of the world knows their names.
The contest’s track record also offers a story of generational innovation that speaks for itself, says Cecilia Marinier, vice president of innovation and scholars at RSAC.
“We see one founder buying another founder buying another founder,” Marinier says. “Think about the amount of accumulated knowledge, and how powerful it is to continue to build on such solid foundations.”
It’s a pattern that repeats throughout the Sandbox’s alumni network. Last year, Donnchadh Casey and James White, CEOs of Calypso AI, sold their company to F5, whose current Chief Product Officer is Kunal Anand. Anand was on the RSAC 2016 Innovation Sandbox stage as co-founder of Prevoty; his company was bought by Imperva, which was the winner of the contest in 2007. It’s all resulted in a tight-knit cycle of founders, operators, and acquirers that continues to shape the cybersecurity ecosystem.
Oliver Friedrichs, currently a GM at CrowdStrike, appeared on the Innovation Sandbox stage twice, winning in 2016 with Phantom, which was acquired by Splunk. He then returned as a 2023 finalist with Pangea, which was later acquired by CrowdStrike. Ali Golshan, a 2017 finalist with StackRox, went on to sell Gretel AI to Nvidia. Rehan Jalil, the 2020 winner who brought Securiti AI to the stage, saw his company acquired by Veeam for $2.7 billion.
“That’s with a B,” Marinier notes, underscoring the scale of value emerging from the Sandbox alumni network. “Those numbers also speak for themselves.”
See the 2026 RSAC top 10 finalists live on stage
This year’s Top Ten finalists take the stage at Moscone Center in San Francisco on Monday, March 23, each delivering a three-minute pitch to a panel of seasoned industry judges. The lineup reads like a map of enterprise security’s most urgent pressure points in 2026: agentic AI governance, non-human identity management, social engineering defense, supply chain provenance, and AI-native code security, among others.
Finalists include:
Charm Security: uses its agentic AI workforce to target scams and human-centric fraud
Clearly AI: helps teams ship secure software fast by replacing manual work with AI-powered reviews
Crash Override: embeds in CI/CD to capture build execution data that APIs can’t access
Fig Security: finds and fixes broken security flows across the entire SecOps stack
Geordie AI: a security and governance platform purpose-built for AI agents
Glide Identity: verifies users instantly and securely—without passwords or SMS codes
Humanix: designed to stop social engineering by detecting and responding to attacks on people
Realm Labs: enables enterprises to see inside the AI’s “brain” and monitor its thoughts during inference
Token Security: focused on governing AI agents and machine identities at enterprise scale
ZeroPath: replaces traditional SAST, SCA, and secrets scanning with a single AI-native engine capable of detecting complex business logic flaws.
“The most disruptive technology right now is obviously AI, and it’s bringing with it some brand-new security challenges that are being developed at the same rate that AI is evolving,” Marinier says. “Our finalists are bringing cutting-edge solutions for tackling those problems and beating those nefarious actors.”
Agentic AI, in particular, emerged as a dominant theme this cycle.
“Governance for AI, continuous monitoring, automation, SecOps resilience, everything from threat modeling to how to use agentic AI, and then controlling against agentic AI getting into systems, it’s all there in our top 10,” she says. “It’s the call to action to today and tomorrow’s security leaders.”
Who selects the winners, and why it matters
One of the less-discussed secrets behind the Sandbox’s track record is the rigor of its judging panel. This year’s panel includes:
Nasrin Rezai, SVP & CISO at Verizon
Larry Feinsmith, head of global technology strategy at JPMorganChase
David Chen, head of global technology investment banking at Morgan Stanley
Paul Kocher, cryptographer and entrepreneur
Niloofar Razi, operating partner at Capitol Meridian Partners
“We’re very careful about how we put together the panel,” Marinier explains. “They have to represent a variety of perspectives, including an eye for startups that are likely to have positive trajectories. They’re top leaders in the industry, who are able to recognize the companies that have risen above the noise.”
Critically, RSAC itself plays no role in the selection, she adds.
“The judges select these companies,” she says. “They have for the past 20 years, and they will be going into the future.” That independence, she argues, is a core reason why the contest carries such weight with the industry.
The $5 million investment for the future of finalists
Beginning in 2025, as part of the contest’s 20th anniversary, all 10 finalists receive a $5 million investment in the form of a SAFE note, funded by Crosspoint Capital. It’s still early days for measuring the full impact, but Marinier points to the trajectory of ProjectDiscovery, last year’s winner.
The funding propelled ProjectDiscovery from a hopeful startup to a company with enough traction to hire experienced industry professionals who wouldn’t previously have considered an early-stage startup. The company didn’t just have the funds; it had the recognition, and it was able to attract great talent because it was obviously going somewhere.
“The money is ultimately about extending the runway,” Marinier adds. “The SAFE note gives finalists breathing room to scale infrastructure and capitalize on the visibility the contest generates, before the spotlight fades.”
RSAC’s broader innovation ecosystem
The Innovation Sandbox contest is the flagship, but it sits at the center of a significantly larger innovation infrastructure that Marinier has built over the past decade. In that time, RSAC’s innovation programming has touched more than 1,000 companies across multiple programs.
Launch Pad, now in its sixth year, functions as the Sandbox’s “little brother,” a Shark Tank-style forum where earlier-stage companies receive real feedback from judges without a winner being declared, though some of those companies are already starting to “graduate” to the next level of industry success. The Early Stage Expo, featuring 78 companies this year, gives attendees a window into what’s coming down the pipeline, sitting alongside the conference’s 600 main exhibitors.
The Innovation Showcase runs year-round, not just during conference week, with live Q&A sessions between entrepreneurs and audiences that are then carried into RSAC’s new membership platform, an effort to sustain connections across the full year, not just the five days in San Francisco.
There’s also a dedicated track for investors and entrepreneurs, featuring VCs sharing forward-looking perspectives, sessions on fundraising strategy, and design partnership frameworks. And for the next generation, RSAC’s Security Scholars program selects 60 students from universities across the country, with 22 presenting research posters on Wednesday of conference week.
“The security scholars are presenting their research that could lead to nascent technology,” Marinier says. “They’re in the early phase, working their way up the ladder. One day they’ll make it onto our stages, and after that, the world’s their oyster.”
Why RSAC Conference is unmissable
For anyone serious about the future of cybersecurity, whether you’re a CISO, a founder, an investor, or an engineer, Marinier makes the case plainly.
“Building a safer society requires bold ideas, and new technologies, and real-world solutions,” she says. “RSAC Conference is bringing together some of the newest, the smartest, the most innovative security perspectives in the industry for critical conversations about solving the security problems the world faces.”
The RSAC Innovation Sandbox contest kicks off at Moscone Center on Monday, March 23 at 9:30 AM PT. Winners will be announced by approximately noon the same day.
Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. For more information, contact sales@venturebeat.com.