In short: Intel has signed on as the primary foundry partner for Elon Musk’s Terafab, a $25 billion joint venture between Tesla, SpaceX, and xAI targeting a terawatt of AI compute per year, handing the struggling chip giant the marquee customer it has been searching for since pivoting to a foundry-first strategy.
On 7 April 2026, Intel announced it is joining the Terafab project, becoming the foundry partner for the most ambitious semiconductor facility ever proposed in the United States. The announcement came two weeks after Musk first unveiled Terafab, a joint venture between Tesla, SpaceX, and xAI that claims it will produce one terawatt of AI compute every year, at the North Campus of Giga Texas in Austin. Intel’s role is to contribute its most advanced process node, packaging expertise, and manufacturing scale to make that claim real. For Intel chief executive Lip-Bu Tan, who has spent the past year attempting to rebuild Intel around an external foundry business, the deal is the most significant external customer win the company has landed since he took the job.
What Terafab is claiming to build
Terafab is designed as a vertically integrated semiconductor complex, covering chip design, lithography, fabrication, memory production, advanced packaging, and testing under a single roof, with a stated goal of producing between 100 billion and 200 billion custom AI and memory chips per year. The initial buildout targets 100,000 wafer starts per month, with ambitions to eventually scale to one million wafer starts per month at full capacity. The project involves two separate facilities on the Giga Texas campus: one dedicated to chips for automotive and humanoid robotics applications, including Tesla’s Full Self-Driving system, its Cybercab robotaxi programme, and the Optimus robot line; and a second for high-performance AI data centre infrastructure and specialised processors for orbital deployments.
That orbital component is central to the project’s rationale. SpaceX, which completed its acquisition of xAI in an all-stock deal in February 2026, creating a combined entity valued at approximately $1.25 trillion, is building out a constellation of space-based AI satellites internally designated AI Sat Mini. Musk has said 80% of Terafab’s compute output will be directed toward that orbital infrastructure, with the remaining 20% for ground-based applications. The full cost of the project has been cited as between $20 billion and $25 billion, though independent analysts have been sharply sceptical of whether that figure is remotely sufficient to meet the stated production targets. A note from Bernstein Research estimated the true capital required to hit one terawatt of annual compute at approximately $5 trillion, more than 70% of the total annual United States federal budget.
Intel will contribute its 18A process node, the company’s most advanced logic manufacturing technology, currently ramping to high-volume production at Intel’s fabrication plants in Arizona and Oregon. Intel’s 18A is a 1.8-nanometre-class node, placing it in the same tier as the most advanced processes currently entering commercial production globally, and it represents the most sophisticated semiconductor capability manufactured entirely within the United States. Intel’s statement on joining Terafab was direct: “Intel is proud to join the Terafab project with SpaceX, xAI, and Tesla to help refactor silicon fab technology.” The company added: “Our ability to design, fabricate, and package ultra-high-performance chips at scale will help accelerate Terafab’s aim to produce 1 TW/year of compute to power future advances in AI and robotics.”
Tan’s post on X was more personal in its framing. “Elon has a proven track record of reimagining entire industries,” he wrote. “This is exactly what is needed in semiconductor manufacturing today. Terafab represents a step change in how silicon logic, memory and packaging will get built in the future. Intel is proud to be a partner.” Intel’s shares rose approximately 4% on the announcement, closing at $52.91. The market reaction reflects how significant the deal is for Intel’s foundry ambitions: in its most recent full year, Intel Foundry generated just $307 million in external customer revenue, a figure that makes the company a distant also-ran against Taiwan Semiconductor Manufacturing Company, which generates tens of billions annually from external customers. Terafab, if even partially realised, would transform Intel Foundry’s commercial profile entirely.
Intel’s recovery, and what this bet requires
Tan inherited an Intel in acute crisis. The company had lost ground to TSMC and AMD across almost every major product category, its own manufacturing roadmap had slipped repeatedly, and its foundry business, the effort to manufacture chips for external customers as TSMC does, had attracted little meaningful interest beyond government-supported contracts under the US CHIPS and Science Act. Tan’s restructuring has been aggressive: thousands of redundancies, a sharper focus on Intel’s 18A and 14A process nodes as the foundation of the foundry pitch, and a deliberate effort to position Intel’s domestic manufacturing capability as a geopolitical differentiator at a moment when US policymakers are intensely focused on reducing dependence on Taiwanese chipmaking.
Terafab is the clearest expression yet of where that pitch lands. The CHIPS Act tailwinds, the Trump administration’s desire to see advanced semiconductor production in the United States, and the specific demand Musk’s companies represent for high-volume, US-manufactured chips at the leading edge all converge in this partnership. Whether Intel’s 18A can deliver at the yields and volumes Terafab’s targets require is a separate question. The node has been in development for several years and is only now entering volume ramp; the gap between a controlled high-volume manufacturing ramp and the production scales Terafab envisions remains very large. Chipmakers building the largest foundries in the world require several years of construction and billions of dollars before the first wafer is processed. The scale of capital commitments now characterising AI infrastructure investment gives some context for what serious execution at Terafab’s claimed targets would actually require.
The credibility problem Terafab has not solved
The scepticism around Terafab is structural, not merely financial. Building a 2nm-class fabrication facility capable of 100,000 wafer starts per month costs roughly $25-35 billion on its own, according to Tom’s Hardware’s analysis of Bernstein’s research, meaning the entire stated Terafab budget is roughly enough to build a single fab operating at a fraction of the claimed full-capacity scale. Reaching one million wafer starts per month would require dozens of such facilities. The $20-25 billion figure appears to represent initial construction capital for the first phase, rather than the cost of the stated ambition.
There is also the question of the companies at the table. SpaceX-xAI’s internal situation has been turbulent: all 11 of xAI’s original co-founders have now left the company since the SpaceX acquisition, a rate of attrition that has raised questions about the organisation’s technical continuity. Musk’s companies have a documented history of announcing timelines for facilities and products that subsequently stretch by years. Tesla’s Cybertruck, Optimus, and Full Self-Driving have each missed multiple committed dates without affecting the company’s willingness to make new commitments. None of this disqualifies Terafab; Musk’s companies have also delivered on goals that were widely dismissed, most notably SpaceX’s orbital launch programme. But it establishes why analysts are not taking the one-terawatt headline at face value.
What the partnership means for the chip industry
Intel’s arrival at Terafab lands at a moment when the chip industry is navigating a broader restructuring of who makes what and for whom. The rise of custom AI silicon (Amazon’s Trainium, Google’s TPUs, Microsoft’s Maia) has been eating into the share of AI workloads that run on Nvidia hardware. Nvidia’s response has been to open its NVLink Fusion interconnect to third-party silicon, including Marvell’s custom AI accelerators, a strategy designed to keep custom chip buyers inside Nvidia’s ecosystem even as they move off pure Nvidia hardware. Terafab represents something different: a vertically integrated attempt to produce custom silicon at a scale that has no precedent outside the established foundry giants. If the project proceeds anywhere near its stated ambitions, it would add a third major domestic US semiconductor manufacturing ecosystem to a landscape currently dominated by TSMC’s Arizona expansion and Samsung’s Texas operations.
For Intel, the strategic logic is clear. As hyperscalers and technology companies increasingly pilot non-Nvidia chips for AI training and inference workloads, the market for foundry services from a domestically situated, leading-edge manufacturer is growing precisely when Intel has positioned itself to serve it. Whether Terafab is the vehicle that finally validates that positioning, or another ambitious announcement that tests the distance between Musk’s projections and physical reality, will become clearer as construction begins and wafer starts are counted rather than promised. Capital flowing into AI infrastructure at this scale has a way of turning implausible timelines into achieved ones, and Intel, for the first time in years, is positioned to benefit if that pattern holds.
In 2026, the high-end loudspeaker market has taken a hard turn into what can only be described as flagship meshugas. One week it is Wilson Audio unveiling the Autobiography floorstanding speakers, the next it is Børresen Acoustics pushing even further into the stratosphere, and not long before that, YG Acoustics dropped the Titan in active sub configuration with a nickel finish at a cool $910,000 per pair. At this level, the question is no longer about system matching or room treatment. It is whether you need a bigger listening room or a real estate agent.
Against that backdrop, Wilson Audio believes it has answered the “ultimate speaker” question with the new Autobiography. Standing 81 inches tall and tipping the scale at over 800 pounds per speaker, built from the company’s proprietary V Material, X Material, and S Material composites, this is not a product designed to blend in. It is a statement piece in every sense, and one that demands a closer look.
The Story Behind the Flagship Speaker
An autobiography is a personal account shaped by time, intent, and refinement. That framing is not accidental. With the Autobiography, Wilson Audio is presenting what it sees as a physical expression of its design history and engineering priorities.
The Autobiography draws on more than five decades of work inside the company, from the late David A. Wilson’s early experiments with time alignment, enclosure materials, and resonance control to the current generation’s continued focus on precision and consistency. Every aspect of the speaker, from cabinet geometry to material selection, is positioned as an extension of that ongoing development rather than a reset.
To be clear, this is not a retrospective product. Wilson Audio isn’t repackaging past ideas and calling it a day. The Autobiography builds on a lineage that stretches back to the original WAMM and WATT systems, but the goal here is forward momentum, taking what worked, understanding why it worked, and applying that knowledge with newer materials, tighter tolerances, and more advanced modeling.
If there’s a “story” here, it’s told through engineering choices rather than sentiment. The Autobiography is less about looking back and more about documenting where Wilson Audio believes it stands right now—and how far it can still push the envelope.
The Drivers
At the core of the Autobiography is an entirely new driver complement. This is a five-way loudspeaker built around an MTM array, and none of the drivers are off the shelf or repurposed. Wilson Audio designed each one specifically for this system, with the expectation that they function as a unified acoustic platform rather than a collection of individual parts.
The vertical layout is deliberate. Identical 7-inch midrange drivers anchor the top and bottom of the enclosure, while the center section uses a symmetrical MTM crescent array with dual 2-inch midrange drivers flanking Wilson’s CSLS front-firing tweeter. The goal here is controlled dispersion and consistent behavior through the critical midband, where most of the music actually lives.
Bass duties are handled by two dissimilar woofers, a 12-inch and a 15-inch unit, engineered to work together rather than cover separate ranges in isolation. Rounding things out is a rear-firing ambient tweeter intended to add spatial information without calling attention to itself.
On paper, the architecture is about maintaining timing, dynamic range, and tonal balance across the full spectrum. In practice, the intent is straightforward: every driver does its job without stepping on the others, so the system presents music as a single, coherent event rather than a stitched-together performance.
CSLS Front Firing Tweeter
The Convergent Synergy Laser Sintered (CSLS) front firing tweeter represents the latest evolution of Wilson Audio’s Convergent Synergy platform. It is not a cosmetic update. The CSLS unit incorporates a redesigned rear wave chamber intended to better manage back wave energy and reduce internal reflections that can smear high frequency detail.
The focus here is lowering mechanical and acoustic noise at the source. By improving energy dissipation behind the diaphragm, the tweeter operates with less interference from its own enclosure, which in turn helps preserve low level information and spatial cues.
In practice, the goal is refinement rather than emphasis. The CSLS front firing tweeter is engineered to extend high frequency performance without adding edge or artificial detail, allowing micro dynamics and harmonic texture to come through with greater stability and less effort.
2-inch Midrange
Flanking the CSLS tweeter are two newly developed 2-inch midrange drivers paired with optimized sonic faceplates. Referred to as the 2-inch MID (Midband Integration Driver), these units are designed to bridge the gap between the speed and articulation of the tweeter and the weight and texture of the larger midrange drivers. Their symmetrical placement supports even dispersion and consistent time alignment through the most sensitive region of human hearing, where small errors are easiest to detect. The intent is straightforward: Wilson Audio is using the 2-inch MID drivers to smooth the midband transition so the system behaves as a single acoustic source, allowing the listener to hear a continuous presentation rather than a collection of individual drivers.
PentaMag 7-inch Midrange Driver
Above and below the MTM assembly sit two 7-inch PentaMag midrange drivers. These build on Wilson’s earlier QuadraMag platform, now using five AlNiCo (aluminum, nickel, cobalt) magnets arranged to increase motor strength, improve flux stability, and maintain linearity under dynamic load. The objective is better control through the midrange under real listening conditions, not just at lower levels. In practice, that translates into a presentation that can scale with volume while retaining clarity and tonal consistency, allowing voices and instruments to carry weight and detail without sounding congested or strained.
Rear Firing Tweeter/RFT
Autobiography incorporates an inverted dome rear firing tweeter designed to enhance spatial depth, ambient retrieval, and harmonic decay. The driver uses aerospace grade unidirectional spread carbon fiber, chosen for its stiffness, consistency, and predictable behavior under load. The diaphragm features a variable thickness profile to reduce inertia while maintaining structural integrity, which is critical for low level detail and decay.
This is a wide dispersion design intended to reproduce ambient information without drawing attention to itself. Its operating range extends from 6 kHz to 22 kHz, focusing on spatial cues rather than primary tonal content. An attenuation control allows adjustment from 0 dB to minus 40 dB, with the maximum setting calibrated to begin at minus 7 dB at 10 kHz relative to the front firing tweeter at typical listening distances.
The goal is flexibility without excess. Wilson Audio gives the user the ability to fine tune how much ambient energy is introduced into the room, allowing for subtle reinforcement of space and decay without compromising the system’s overall balance.
The Woofers
The low frequency architecture of Autobiography is built around a clear objective: deliver bass that is fast, controlled, and authoritative without losing tonal nuance as it transitions into the midrange. To achieve this, Wilson Audio employs two purpose built woofers, a 12-inch and a 15-inch unit, engineered to operate in parallel as a unified system rather than as separate contributors covering different bands.
Using different sized drivers in this way introduces challenges in timing, pressure loading, and harmonic consistency. Those are addressed through dedicated motor structures, tuned suspension geometries, and a shared enclosure that allows both woofers to function as a single acoustic source. The result is a low frequency foundation that prioritizes speed, control, and scale, delivering extension and impact without excess or overhang that can compromise integration with the rest of the system.
Port Integration
In addition to the woofers, Autobiography uses a slot type bass reflex port that can be sealed without tools. Adjustments to the port cover and port ring allow the user to fine tune how the system interacts with the room, without adding unnecessary complexity to setup. The cross load flow porting system is designed to provide controlled adjustment of low frequency behavior rather than a fixed response.
In a forward firing configuration, output in the 10 Hz to 75 Hz range is reduced by approximately 1.0 to 1.5 dB, while output between 75 Hz and 130 Hz increases by roughly 1.5 to 2.0 dB. Switching to a rear firing configuration reverses that balance. The intent is straightforward: give the user a practical way to adapt low frequency performance to room boundaries and placement constraints without resorting to external processing.
Addressing Alignment
Wilson Audio designs have used different approaches to mechanical alignment over the years, but Autobiography introduces hardware developed specifically for this system, with a focus on improving how the individual driver modules are positioned for time alignment. From the module alignment sleds to the precision slide spikes, each element is designed to function as part of an integrated structure that supports consistent setup and repeatability. The emphasis here is on accuracy, durability, and ease of adjustment without adding unnecessary complexity.
Both the upper and lower 7-inch PentaMag midrange modules are independently adjustable using the alignment sled system. The indicators, gears, and reference scales are clearly marked and can be set using a rotating cam grip, allowing for precise positioning without specialized tools. The MTM crescent frame follows a similar approach. Compared to prior flagships like the WAMM Master Chronosonic and Chronosonic XVX, the goal is a higher degree of control over time domain alignment while making the process more straightforward to implement in an actual listening room.
Connectivity
Custom Wilson Audio spade connectors are used throughout, designed to mate with the company’s proprietary binding posts to ensure a secure and consistent electrical connection. Machined wire clasps are integrated into the gantry to provide structured cable management and reduce strain on the terminals.
Resistors are mounted to pure copper heatsinks to improve thermal dissipation and reduce the potential for performance drift under load. These components are accessible via a framed resistor mount plate on the rear of the woofer enclosure, allowing changes to be made without tools.
The Autobiography is clearly positioned as Wilson Audio’s current statement product and is less about chasing a single headline feature and more about consolidating decades of work in materials, driver integration, and time alignment into one platform. What makes it stand out is the level of system control: proprietary cabinet materials, a fully bespoke driver array, and a mechanical alignment scheme that is more refined than anything the company has previously offered.
It also sits in a very specific tier alongside products like the Børresen M8 Gold Signature and the Sonus faber Suprema. These are not incremental upgrades over “normal” high end speakers. They are part of a category where scale, cost, and engineering ambition are pushed to extremes, and where the conversation shifts from value to execution at any cost.
That leads to the real question: who is this for? Not someone building their first serious system, and not even most seasoned audiophiles. Buyers at this level already have the home, the space, and the budget. The issue is not whether you can afford the speakers — it is whether your room can support them. Systems built around speakers like the Autobiography typically involve six figure investments in amplification, sources, cabling, power, and acoustic work. You are not choosing between these and a more modest setup. You are deciding whether your environment can accommodate this level of scale and output without compromise.
Ultimately, the Autobiography and its peers are about removing limits as much as possible. Whether that translates into a meaningful improvement over less extreme systems will depend on setup and room more than anything else.
Price & Availability
The Wilson Audio Autobiography Floorstanding Loudspeakers are priced at $788,000 (US) per pair through Authorized Wilson Audio Dealers.
Last week, Taylor Swift filed a trio of trademark applications to protect her image and voice. One is meant to cover a well-known photograph of the pop singer holding a pink guitar during a concert on her record-breaking Eras tour, while the two sound trademarks are for simple identifying phrases: “Hey, it’s Taylor Swift” and “Hey, it’s Taylor.”
The move comes as AI deepfakes continue to proliferate across social media. Any individual stands to have their likeness exploited in the creation of nonconsensual AI-generated material; earlier this month, an Ohio man was the first person convicted under a new federal law criminalizing “intimate” visual deceptions of this sort. Celebrities, meanwhile, find themselves at risk of both explicit deepfakes and false endorsements.
A new report from the AI detection company Copyleaks shows that Swift and other stars have recently had their likenesses used in scammy advertisements. Researchers identified a cluster of sponsored videos on TikTok that appeared to show Swift, Kim Kardashian, Rihanna, and others promoting “potentially fraudulent or malicious services,” with the clips making use of what the researchers call “realistic-sounding voices” as well as “textured filters meant to mask some of the flaws in the AI-generated visuals.”
The fake ads show Swift et al. in what seem to be common interview settings—red carpet events or talk show sets. Rather than answering questions, however, the AI-generated celebrities talk up supposed rewards programs in which TikTok users are paid for offering feedback on content served to them.
“I was reading about digital behavior this week and came across a testing feature called TikTok Pay,” says a deepfaked Swift in an ad that uses manipulated footage from an appearance the real Swift made on The Tonight Show Starring Jimmy Fallon in October. “Certain users are being invited to watch videos and submit opinions.” The deepfaked Swift goes on to say that the program is in “limited rollout” for the moment but encourages viewers to see if they qualify for it, adding: “If the page opens for you, don’t overthink it.”
Naturally, anyone who clicks is accepted. These ads eventually lead the user to a third-party service that, despite the TikTok name and logo, has evidently been vibe coded using the AI platform Lovable, whose own branding appears on the page and in the URL. At this point, the researchers say, the user is prompted to begin entering their name and personal information.
While it’s not clear what the advertisers intend to do with all the data mined through their celebrity deepfake promotion, scam ads with similar objectives are exceedingly common. Last week, the nonprofit Consumer Federation of America sued Meta, alleging that the tech giant misled Facebook and Instagram users about its efforts to crack down on scam ads—and profited by allowing them to proliferate. On Monday, the US Federal Trade Commission reported that social media scams have surged overall, with Facebook scams accounting for the highest total of financial losses.
It’s no surprise that Swift and her peers are taking legal steps to distance themselves from this fraudulent economy. While Swift hasn’t publicly commented on the reasoning behind her trademark filings, the reputational damage that deceitful deepfakes pose to her billion-dollar brand can hardly be overlooked. The trouble is, they grow more sophisticated by the day.
A new NYT Strands puzzle appears at midnight each day for your time zone – which means that some people are always playing ‘today’s game’ while others are playing ‘yesterday’s’. If you’re looking for Wednesday’s puzzle instead then click here: NYT Strands hints and answers for Wednesday, April 29 (game #787).
Strands is the NYT’s latest word game after the likes of Wordle, Spelling Bee and Connections – and it’s great fun. It can be difficult, though, so read on for my Strands hints.
Want more word-based fun? Then check out my NYT Connections today and Quordle today pages for hints and answers for those games, and Marc’s Wordle today page for the original viral word game.
SPOILER WARNING: Information about NYT Strands today is below, so don’t read on if you don’t want to know the answers.
NYT Strands today (game #788) – hint #1 – today’s theme
What is the theme of today’s NYT Strands?
• Today’s NYT Strands theme is… Wet blankets
NYT Strands today (game #788) – hint #2 – clue words
Play any of these words to unlock the in-game hints system.
SPAT
MOIST
SORE
PIANO
VIOLA
TAME
NYT Strands today (game #788) – hint #3 – spangram letters
How many letters are in today’s spangram?
• Spangram has 12 letters
NYT Strands today (game #788) – hint #4 – spangram position
What are two sides of the board that today’s spangram touches?
First side: bottom, 3rd column
Last side: top, 3rd column
Right, the answers are below, so DO NOT SCROLL ANY FURTHER IF YOU DON’T WANT TO SEE THEM.
NYT Strands today (game #788) – the answers
(Image credit: New York Times)
The answers to today’s Strands, game #788, are…
DRIZZLE
MIST
STEAM
VAPOR
HUMIDITY
AEROSOL
SPANGRAM: CONDENSATION
My rating: Easy
My score: Perfect
The phrase “wet blankets” is used to describe someone who soaks the enjoyment out of a situation.
After failing to spot “killjoy” (the most obvious word connected to this theme) on the board, I decided that we were looking for a more literal connection to wet blankets, which could only mean CONDENSATION.
The two Zs helped me find DRIZZLE and from here on it was a synonym search, although among them AEROSOL was not one I expected.
Yesterday’s NYT Strands answers (Wednesday, April 29, game #787)
HOOK
REEL
LURE
WEIGHT
SWIVEL
PLIERS
COOLER
BOBBER
SPANGRAM: TACKLE
What is NYT Strands?
Strands is the NYT’s not-so-new-any-more word game, following Wordle and Connections. It’s now a fully fledged member of the NYT’s games stable, has been running for over a year, and can be played on the NYT Games site on desktop or mobile.
I’ve got a full guide to how to play NYT Strands, complete with tips for solving it, so check that out if you’re struggling to beat it each day.
Something shifted in enterprise RAG in Q1 2026. VB Pulse data spanning January through March tells a consistent story: the market stopped adding retrieval layers and started fixing the ones it already has. Call it the retrieval rebuild.
The survey covered three consecutive monthly waves from organizations with 100 or more employees, with between 45 and 58 qualified respondents per month across platform adoption, buyer intent, architecture outlook and evaluation criteria. The data should be treated as directional.
Enterprise intent to adopt hybrid retrieval tripled from 10.3% to 33.3% in a single quarter — even as 22% of qualified enterprise respondents reported having no production RAG systems at all. For data engineers and enterprise architects building agentic AI infrastructure, the data reveals a market in active transition: the RAG architecture most enterprises built to scale is not the one they expect to run by year-end.
Credit: VentureBeat Pulse survey
Hybrid retrieval has become the consensus enterprise strategy. Unlike single-method RAG pipelines that rely on vector similarity alone, hybrid retrieval combines dense embeddings with sparse keyword search and reranking layers, trading simplicity for the retrieval accuracy and access control that production agentic workloads require.
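The combination described above can be sketched in a few lines. This is a toy illustration, not production code: the term-count cosine stands in for a real dense embedding model, the keyword-overlap score stands in for a sparse index like BM25, and the corpus is invented for the example. The fusion step uses reciprocal rank fusion (RRF), one common way to merge the two rankings.

```python
from collections import Counter
import math

# Invented three-document corpus for illustration only.
DOCS = {
    "d1": "quarterly revenue report for the sales team",
    "d2": "employee access control policy and permissions",
    "d3": "vector database operational reliability at scale",
}

def sparse_scores(query, docs):
    """Keyword-overlap score (a stand-in for BM25)."""
    q_terms = set(query.lower().split())
    return {doc_id: len(q_terms & set(text.lower().split()))
            for doc_id, text in docs.items()}

def dense_scores(query, docs):
    """Cosine similarity over term-count vectors (a stand-in for embeddings)."""
    def vec(text):
        return Counter(text.lower().split())
    def cosine(a, b):
        dot = sum(a[t] * b[t] for t in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0
    q = vec(query)
    return {doc_id: cosine(q, vec(text)) for doc_id, text in docs.items()}

def hybrid_rank(query, docs, k=60):
    """Fuse the sparse and dense rankings with reciprocal rank fusion (RRF)."""
    fused = Counter()
    for scores in (sparse_scores(query, docs), dense_scores(query, docs)):
        ranked = sorted(scores, key=scores.get, reverse=True)
        for rank, doc_id in enumerate(ranked, start=1):
            fused[doc_id] += 1.0 / (k + rank)
    return [doc_id for doc_id, _ in fused.most_common()]

# "d2" ranks first: it scores highly on both the keyword and the vector side.
print(hybrid_rank("access control at scale", DOCS))
```

A real pipeline would add the access-control filtering the survey respondents cite, applied before fusion so restricted documents never enter either candidate list.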
The standalone vector database category is under pressure. Weaviate, Milvus, Pinecone and Qdrant each lost adoption share across the quarter in the VB Pulse data. Custom stacks and provider-native retrieval are absorbing their displaced share.
A growing minority of enterprises are stepping back from RAG altogether — a signal that the market’s maturity narrative has meaningful exceptions.
Organizations that went wide on RAG in 2025 are hitting the same failure point: the architecture built for document retrieval does not hold at agentic scale.
Enterprises that scaled RAG fast are now paying to rebuild it
The two largest intent movements in Q1 are directly connected — enterprises confronting retrieval quality problems at scale, and hybrid retrieval emerging as the consensus answer.
Investment priorities shifted in parallel. Evaluation and relevance testing led budget intent in January at 32.8% and fell to 15.6% by March. Retrieval optimization moved in the opposite direction, from 19.0% to 28.9% — overtaking evaluation as the top growth investment area for the first time.
Credit: VentureBeat Pulse survey
Steven Dickens, vice president and practice lead at HyperFRAME Research, described the operational burden enterprise data teams are facing in a VentureBeat interview in March on Oracle’s agentic AI data stack. “Data teams are exhausted by fragmentation fatigue,” Dickens said. “Managing a separate vector store, graph database and relational system just to power one agent is a DevOps nightmare.”
That fatigue shows directly in the platform data. The rise of custom stacks to 35.6% adoption is not a rejection of managed retrieval — many organizations run both. It is a consolidation response from engineering teams that have hit the limits of assembling too many components.
Not every enterprise has made it that far. The VB Pulse data includes a signal that complicates the market’s overall growth narrative: 22.2% of qualified respondents reported no production RAG by March, up from 8.6% in January. The report attributes this cohort to organizations that have “not yet committed to any retrieval infrastructure, or have paused programs” — concentrated in Healthcare, Education and Government, the same sectors showing the highest rates of flat budgets.
Standalone vector databases are losing the adoption argument but winning the reliability one
Recent VentureBeat reporting on two enterprises building on Qdrant illustrates why a dedicated, purpose-built retrieval layer still wins in production.
&AI builds patent litigation infrastructure and runs semantic search across hundreds of millions of documents. Grounding every result in a real source document is not optional — patent attorneys will not act on AI-generated text. That requirement makes the architectural choice clear.
“The agent is the interface,” Herbie Turner, &AI’s founder and CTO, told VentureBeat in March. “The vector database is the ground truth.”
GlassDollar, a startup that helps Siemens and Mahle evaluate startups, runs an agentic retrieval pattern across a corpus approaching 10 million indexed documents. A single user prompt fans out into multiple parallel queries, each retrieving candidates from a different angle before results are combined and re-ranked. That query volume and precision requirement is what drove the choice of purpose-built vector infrastructure.
“We measure success by recall,” Kamen Kanev, GlassDollar’s head of product, told VentureBeat in March. “If the best companies aren’t in the results, nothing else matters. The user loses trust.”
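The fan-out pattern GlassDollar describes can be sketched in a few lines. The `rewrite`, `retrieve`, and `rerank` functions here are hypothetical stand-ins for whatever query-rewriting model, vector store, and re-ranker an enterprise actually runs; this is a sketch of the shape of the pattern, not any vendor's implementation:

```python
from concurrent.futures import ThreadPoolExecutor

def fan_out_retrieve(prompt, rewrite, retrieve, rerank, n_queries=4, top_k=5):
    """One user prompt -> several parallel queries -> merged, re-ranked results.

    rewrite(prompt, i)  -> a query string attacking the prompt from angle i
    retrieve(query)     -> candidate doc ids for that query
    rerank(prompt, doc) -> relevance score of doc for the original prompt
    """
    queries = [rewrite(prompt, i) for i in range(n_queries)]
    with ThreadPoolExecutor() as pool:
        candidate_lists = list(pool.map(retrieve, queries))
    # Deduplicate across the parallel lists, then re-rank against the prompt.
    candidates = {doc for hits in candidate_lists for doc in hits}
    return sorted(candidates, key=lambda d: rerank(prompt, d), reverse=True)[:top_k]

# Toy demonstration: each "angle" surfaces a different slice of a corpus.
docs = {"q angle0": ["d1", "d2"], "q angle1": ["d2", "d3"],
        "q angle2": ["d4"], "q angle3": []}
result = fan_out_retrieve(
    "q",
    rewrite=lambda p, i: f"{p} angle{i}",
    retrieve=lambda q: docs[q],
    rerank=lambda p, d: {"d1": 0.9, "d2": 0.7, "d3": 0.5, "d4": 0.2}[d],
    top_k=3,
)
# -> ["d1", "d2", "d3"]
```

The pattern's cost is exactly the query multiplication the article describes: every user prompt becomes several retrieval calls, which is why query volume drives the infrastructure choice.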
The VB Pulse data shows that framing — retrieval as ground truth rather than feature — is gaining traction across the broader enterprise market, even as standalone vector database adoption declines.
Why enterprises say they need a dedicated vector layer shifted significantly across Q1. In January the top reasons were access control complexity (20.7%) and retrieval precision (19.0%). By March, operational reliability at scale had surged to 31.1% — more than doubling and overtaking everything else. Enterprises are no longer keeping vector infrastructure primarily for precision. They are keeping it because it is the part of the stack they can rely on when query volumes scale.
How enterprises are redefining what good retrieval means
How enterprises judge their retrieval systems shifted notably across Q1 — and the direction of that shift points to a market getting more sophisticated about what good retrieval actually means.
In January, response correctness dominated evaluation criteria at 67.2% — far above anything else. By March, response correctness (53.3%), retrieval accuracy (53.3%) and answer relevance (53.3%) had converged exactly. Getting the right answer is no longer enough if it came from the wrong document or missed the context of the question.
Answer relevance was the only criterion that rose across the quarter, gaining five percentage points. It is also the hardest to measure — whether the retrieved context is actually the right context for that specific question requires purpose-built evaluation infrastructure, not just pass-or-fail correctness checks. Its rise signals that a meaningful share of enterprise buyers have moved past basic RAG testing entirely.
Credit: VentureBeat Pulse survey
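The gap between pass-or-fail correctness checks and retrieval-level evaluation comes down to what gets scored. A minimal sketch of two standard retrieval metrics, recall@k and mean reciprocal rank; the labeled relevance judgments are assumed to exist, which is precisely the evaluation infrastructure the survey says buyers are now building:

```python
def recall_at_k(retrieved, relevant, k):
    """Fraction of the relevant documents found in the top-k results."""
    if not relevant:
        return 0.0
    return len(set(retrieved[:k]) & set(relevant)) / len(relevant)

def mean_reciprocal_rank(queries):
    """Mean reciprocal rank of the first relevant result per query.

    queries: list of (retrieved_ids, relevant_ids) pairs.
    """
    total = 0.0
    for retrieved, relevant in queries:
        for rank, doc in enumerate(retrieved, start=1):
            if doc in relevant:
                total += 1.0 / rank
                break
    return total / len(queries)

# Illustrative judgments, not survey data: 2 of 3 relevant docs in top 3.
r = recall_at_k(["a", "b", "c"], relevant=["a", "c", "e"], k=3)   # 2/3
m = mean_reciprocal_rank([(["x", "a"], ["a"]), (["a", "b"], ["a"])])  # 0.75
```

Answer relevance, by contrast, asks whether the retrieved context fits the specific question, which generally requires graded human or model judgments rather than set intersection; that is why it is the hardest criterion to measure.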
The market’s verdict: RAG isn’t dead. The original architecture is
The “RAG is dead” narrative had real momentum heading into 2026. It rested on two claims. The first: that long-context windows — models capable of processing hundreds of thousands of tokens in a single prompt — would make dedicated retrieval unnecessary. The second: that agentic memory systems, which store what an agent learns across sessions rather than retrieving it fresh each time, would absorb the knowledge access problem entirely.
The VB Pulse data is the enterprise market’s answer to the first claim. The long-context-as-dominant-architecture position collapsed from 15.5% in January to 3.5% in February before partially recovering to 6.7% in March. January’s sample was heavily weighted toward Technology and Software respondents — the segment most exposed to long-context model announcements in late 2025. As the sample diversified, the position evaporated.
On the memory question, Jonathan Frankle, chief AI scientist at Databricks, framed the architecture clearly in a March interview with VentureBeat: a vector database with millions of entries sits at the base of the agentic memory stack, too large to fit in context. The LLM context window sits at the top. Between them, new caching and compression layers are emerging — but none of them replace the retrieval layer at the base. New agentic memory systems like Hindsight, developed by Vectorize, and observational memory approaches like those in the Mastra framework address session continuity and agent context over time — a different problem than high-recall search across millions of changing enterprise documents.
The most consequential signal: the share of respondents not expecting large-scale RAG deployments by year-end grew from 3.4% to 15.6% — nearly 5x. That is not a verdict against retrieval. It is a verdict against the retrieval architecture most enterprises built first.
Credit: VentureBeat Pulse survey
The retrieval rebuild is not optional
The retrieval rebuild is the cost of scaling RAG without first deciding what architecture could actually support it.
If your organization is among the 43.1% that entered Q1 planning to expand RAG into more workflows, the VB Pulse data suggests that plan has already changed for many of your peers — and may need to change for you. Hybrid retrieval is the consensus destination. Custom stack growth to 35.6% reflects teams building retrieval infrastructure around requirements that off-the-shelf products do not fully address.
RAG is not dead. The architecture most enterprises used to implement it is. The data suggests the rebuild is not a future decision. For 33% of enterprises, the rebuild is already the stated priority.
Emergency first-responder leaders told federal regulators in a private meeting last month that they were frustrated with the performance of autonomous vehicles on their streets—that city firefighters, police officers, EMTs, and paramedics are forced to spend time during emergencies resolving issues with frozen or stuck cars. One fire official called them “a safety issue for our crews as well as the victims.” WIRED obtained an audio recording of the meeting.
Officials from San Francisco and Austin, where Waymo has been ferrying passengers without drivers for more than a year, said the vehicles’ performance is getting worse. “We are actually seeing something interesting: backsliding of some things that had improved upon,” Mary Ellen Carroll, the executive director of San Francisco’s Department of Emergency Management, told officials with the National Highway Traffic Safety Administration (NHTSA), which oversees self-driving vehicle safety in the US. “They are committing more traffic violations.”
“We’ve seen some behavior we haven’t seen in a few years … Waymo is frequently now blocking our fire stations from access,” added Chief Patrick Rabbitt, the head of the San Francisco Fire Department. “Their default is to freeze.” The situation can prevent firetrucks from responding to emergencies in a “timely and appropriate” way, he said.
In Austin, first responders have been frequently stymied by Waymos “freezing up,” said Lieutenant William White, head of Highway Enforcement Command at the Austin Police Department. White said that, contrary to what Waymo had told first responders, the vehicles often fail to recognize or respond to officers’ hand signals, which can lead to cascading delays during emergencies or unusual road incidents.
“I believe the technology was deployed too quickly in too vast amounts, with hundreds of vehicles, when it wasn’t really ready,” White said. NHTSA did not respond to WIRED’s request for comment.
The complaints come as Waymo embarks on an ambitious expansion across the US and the world. Today, the company offers driverless rides in parts of 10 US cities, with plans to launch service in 10 more before the end of the year, including London. Waymo said last month that it’s now providing 500,000 paid rides weekly—a figure that’s still dwarfed by human-powered ride-hail services (Uber provides some 400 times that number weekly) but has grown tenfold since last year.
But these comments from cities where the service is already operating threaten to slow the rollout of driverless technology, which, according to Waymo’s data, reduces serious crashes compared to human-driven cars. Waymo is already facing political opposition, especially from organized labor, in several dense, blue, and potentially lucrative cities, including Boston, New York City, Seattle, and Washington, DC.
In a statement, Waymo spokesperson Julia Ilina wrote: “We deeply value our partnership with first responders and our shared commitment to safety. Their ongoing feedback has been instrumental in driving impactful improvements to the Waymo service.” The company says it has conducted in-person training for more than 35,000 emergency responders across the country.
Public Comment Periods
The comments made in the private meeting are blunter than what government officials have generally said in public. But they reflect long-simmering and sometimes vocal frustrations expressed by city leaders since at least late last year. Since autonomous vehicle operations are regulated in California and Texas by state rather than city officials, local first-responder departments and those who represent them can generally only request that developers like Waymo make specific changes to their operations.
On Wednesday, Austin first responders appeared before the City Council to discuss Waymo’s response to an incident last month in which a driverless vehicle blocked, for two minutes, an ambulance responding to a downtown shooting that killed three people and injured at least 14. Though officers were able to connect quickly with Waymo operators to move the vehicle, they reported that it had taken up to three minutes to connect with a remote agent in the past. They reiterated that Waymos don’t always respond well to hand signals, especially ones from police mounted on motorcycles.
Waymo declined to attend the meeting, and two front-row chairs labeled “RESERVED FOR: WAYMO” remained empty throughout the two-hour session.
Joby Aviation has completed demonstration flights of its electric air taxi over New York City, testing real routes between JFK and Manhattan helipads as it prepares for a future commercial service. The company says its eVTOL could turn a 60- to 120-minute airport trip into a flight of under 10 minutes, though commercial launch still depends on FAA certification. Electrive reports: To launch operations in New York City, Joby acquired Blade Urban Air Mobility last year. Blade already enables helicopter flights for affluent travelers between Manhattan and airports such as JFK or Newark in just five minutes, avoiding up to two hours of traffic and typical airport hassles. Joby aims to replace this service with quiet, electric air taxis as soon as possible, transitioning Blade’s existing customers to the new technology.
However, introducing a new aircraft into commercial service requires a years-long certification process, overseen in the US by the Federal Aviation Administration (FAA). Joby is now in the final phase of FAA certification. Following a series of demonstration flights in the San Francisco Bay Area, the company has tested its air taxi in New York City on real flight routes and under real-world conditions. During these tests, Joby demonstrated the acoustics and performance metrics critical for entering the urban air taxi market.
During these demonstration flights, Joby’s air taxi took off from John F. Kennedy International Airport (JFK) and landed at various helipads across the city, including Downtown Skyport and the helipads at West 30th Street and East 34th Street in Midtown, where Blade Air Mobility’s premium passenger lounges are located. These locations represent some of the commercial routes Joby plans for New York […]. Fun fact: Joby’s eVTOL aircraft are roughly 100 to 1,000 times quieter than a conventional helicopter, operating at around 55-65 dB during takeoff and landing compared with 90+ dB for helicopters.
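That "100 to 1,000 times quieter" claim is roughly consistent with the decibel figures quoted, at least in terms of sound power: a level difference of N dB corresponds to a power ratio of 10^(N/10). A quick check of the arithmetic, using the takeoff/landing figures from the report:

```python
def power_ratio(db_difference):
    """Sound-power ratio implied by a sound level difference in decibels."""
    return 10 ** (db_difference / 10)

# Helicopter at ~90 dB vs. eVTOL at 55-65 dB during takeoff and landing.
low_gap = power_ratio(90 - 65)   # 25 dB gap -> ~316x less acoustic power
high_gap = power_ratio(90 - 55)  # 35 dB gap -> ~3162x less acoustic power
```

So the quoted gap works out to roughly 300x to 3,000x in acoustic power. Note that perceived loudness follows a different rule of thumb (about 10 dB per doubling), so "times quieter" claims depend heavily on which ratio is meant.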
Canonical’s plan to add AI features to Ubuntu has sparked pushback from users who are concerned it could follow Windows 11’s AI-heavy direction. “After Canonical’s announcement earlier this week that it’s bringing AI features to Ubuntu, replies included requests for an AI ‘kill switch‘ or a way to disable the upcoming features,” reports The Verge. Canonical says it has no plans for a “global AI kill switch” but it will allow users to remove any AI features they don’t want. From the report: In his original post, [Canonical’s VP of engineering, Jon Seager] said the upcoming AI features will include accessibility tools like AI speech-to-text and text-to-speech, along with agentic AI features for tasks like troubleshooting and automation. Canonical is also encouraging its engineers to use AI more and plans to begin introducing AI features in Ubuntu “throughout the next year.”
In a follow-up comment, Seager clarified that, “my plan is to introduce AI-backed features as a ‘preview’ on a strictly opt-in basis in [Ubuntu version] 26.10. In subsequent releases, my plan is to have a step in the initial setup wizard that allows the user to choose whether or not they’d like the AI-native features enabled.” Ultimately, he said, “All of these capabilities will be delivered as Snaps to the OS, layered on top of the existing Ubuntu stack. That means there will always be the option of removing those Snaps.” Users who prefer to avoid AI entirely could switch to other distros like Linux Mint, Pop!_OS, or Zorin OS. “These distros have some similarities to Ubuntu, but may not necessarily adopt the new AI features Canonical is rolling out,” adds The Verge.
Beijing’s commerce ministry has formally submitted a 30-page document warning the European Commission that its draft Cybersecurity Act, which would make vendor removal mandatory for the first time, could trigger reciprocal measures against European companies in China.
China has formally threatened the European Union with retaliation if a sweeping new cybersecurity law leads to the exclusion of Chinese firms, including Huawei and ZTE, from European critical infrastructure.
The Chinese Ministry of Commerce submitted a 30-page document to the European Commission, reported earlier by the South China Morning Post, explicitly warning that Beijing is prepared to invoke its Foreign Trade Law and State Council Supply Chain Security Regulations, legal frameworks that allow China to restrict trade, investigate foreign entities, and impose reciprocal bans on European companies, if Chinese firms face what it calls discriminatory treatment.
The document was submitted on April 17 to the Commission. MOFCOM spokesperson He Yongqian confirmed the submission at a press briefing on April 24, framing China’s core objection as the draft law’s use of ‘non-technical risk’ factors, a mechanism Beijing argues is a subjective political tool designed to exclude Chinese companies regardless of the actual security properties of their equipment.
What the EU Cybersecurity Act proposes
The revised EU Cybersecurity Act, announced by the European Commission on January 20, represents a fundamental shift in how Brussels approaches network security. Since 2020, the EU’s ‘5G toolbox’ has recommended that member states avoid high-risk vendors in 5G networks.
That recommendation has been implemented unevenly: only 13 of 27 member states had acted on it by the time the new law was announced, and several of the bloc’s most significant economies, including Germany, where Huawei provided equipment across approximately 60% of 5G sites as recently as late 2024, had been slow to act.
The new law changes the legal basis from recommendation to obligation. It would require member states to remove equipment from vendors designated as high-risk suppliers from communications networks within three years of the law entering into force.
It also creates a mechanism under which the Commission can designate an entire country as a ‘cybersecurity threat,’ which would trigger exclusions extending beyond telecoms into 18 critical sectors, including energy, transport, and information technology.
The law does not name Huawei or ZTE explicitly, but the intent is unambiguous: EU Tech Commissioner Henna Virkkunen said it would give the bloc ‘the means to better protect our critical supply chains,’ and Strand Consult data puts Chinese vendors’ share of European 5G infrastructure at between 33% and 40%. A full removal would be the largest forced replacement of telecoms infrastructure in European history.
The precedent that makes Beijing’s threat credible
China’s retaliation threats have a documented track record. When Sweden banned Chinese vendors from its 5G networks in 2020, Ericsson’s revenues in China fell 46% the following year.
The company has never recovered that business. Nokia, which has maintained a small footprint in the Chinese market, has watched its China revenues fall from roughly €2.5 billion in 2018 to approximately €913 million last year.
Nokia executives have said internally that the company faces a total ban in China for national security reasons, and Nokia’s president of mobile networks, Tommi Uitto, has publicly stated that the combined China market share of the two Nordic vendors has dropped to 3%.
The asymmetry is pointed. China has already been restricting Nokia and Ericsson, the two European companies that stand to benefit most from a Huawei ban, while simultaneously warning the EU that it will face consequences if it formalises its own exclusions.
That double standard is increasingly being called out. Nokia CEO Justin Hotard has contrasted Europe’s continued openness to Huawei with China’s restrictions on European vendors, and Ericsson’s Börje Ekholm has estimated the EU revenue opportunity from replacing Chinese kit at a ‘sizeable’ number given Huawei and ZTE’s combined European market share.
Earlier removal mandates also illustrate the implementation challenge the EU faces independently of Chinese pressure. The UK mandated the removal of Huawei from 5G networks by the end of 2027; BT missed the 2023 deadline for its core network.
Germany ordered Huawei removed from the 5G core by the end of 2026, a deadline that applies to a part of the network Huawei was not even present in when the rules were announced, while allowing retention of Huawei’s radio access network until 2029. The practical reality of a three-year EU-wide rip-and-replace at scale is, as Light Reading noted, ‘ambitious and compliance is not certain.’
What Beijing is threatening, and why
China’s 30-page submission argues on four grounds. First, that the ‘non-technical risk’ framework is discriminatory on its face, targeting companies by country of origin rather than by demonstrated security flaw. Second, that the law violates WTO principles of non-discrimination and proportionality.
Third, that designating China as a ‘country of cybersecurity concern’ would, if triggered, extend exclusions far beyond telecoms into clean energy, automotive, and industrial supply chains where Chinese companies are deeply embedded in European markets.
Fourth, that European companies operating in China (German automakers with €90 billion in annual exports, Dutch chipmakers, French luxury and aerospace firms) would face reciprocal market access restrictions.
The legal mechanisms cited (China’s Foreign Trade Law and the State Council’s Supply Chain Security Regulations) are the same frameworks Beijing has used in previous technology trade disputes. They permit retaliatory trade restrictions, procurement bans, investigations into foreign entities, and entity-list designations that mirror the US model China publicly decries.
The spokesperson’s framing, that China ‘still views cooperative dialogue as the correct path’, is the standard diplomatic hedging that accompanies formal coercive submissions of this kind.
A geopolitically loaded moment
The Trump administration has simultaneously been pressuring the EU to accelerate Huawei removal while threatening tariffs over EU enforcement actions against US tech companies.
The EU is navigating a position in which it faces pressure from Washington to act on Huawei and pressure from Beijing not to, while also trying to maintain economic relationships with both.
Germany, the member state with the most at stake in both Huawei infrastructure and its automotive sector’s exposure to the Chinese market, has been the most cautious about implementation pace.
For Nokia and Ericsson, the stakes are direct. Both were among the companies expected to meet EU leadership to discuss European tech competitiveness and strategic supply chain policy.
A full European Huawei ban would represent the single largest new revenue opportunity the Nordic vendors have had in years. Whether the EU actually follows through, given member state reluctance, the implementation timeline, and Beijing’s explicit threat, is now the central question.
The Cybersecurity Act must still be negotiated with EU governments and the European Parliament before it becomes law. No timeline for that process has been confirmed.
China’s formal submission is designed to influence that negotiation, and the governments most exposed to Chinese trade retaliation (Germany, the Netherlands, and France) are also the ones whose implementation of the existing 5G toolbox has been most limited.
Schiit Audio is taking a more practical swing at tubes with the $99 Buf. It’s a compact tube buffer designed to sit in your signal chain, not take it over.
Buf isn’t a preamp and it’s not a DAC. There are no inputs for sources beyond basic line level, no volume control, and no system control duties. You place it between components, typically between a DAC and an amplifier or between a source and powered speakers, and it inserts a tube stage into the chain. If you don’t want that, it can be switched out and used as a straight pass-through.
That makes it easy to experiment without committing to a full tube setup. It can be used to slightly reshape a solid state system, take the edge off a brighter chain, or add some variation to a desktop or headphone rig. It’s also flexible enough to move around depending on the system or use case, which fits Schiit’s usual modular approach.
At $99, Buf is less about replacing components and more about giving users a simple way to try a tube stage in different setups and decide if it works for them.
Schiit Buf
Tubes Anywhere Without Rebuilding Your System
Schiit Audio isn’t pretending the $99 Buf is neutral. In fact, they’re leaning in the opposite direction.
“Buf is a tube buffer. It adds a ton of low-order harmonic distortion, without adding a bunch of noise. It destroys the big perfect of ‘measurement gear.’ At the same time, lots of people, including those who use Audio Precisions all day, think it sounds better. So we figured we’d make this thing and let you find out for yourself,” said Schiit co-founder Jason Stoddard.
That’s the pitch. Not accuracy in the lab sense, but a different presentation that some listeners may prefer, especially in systems that lean hard into ultra-low distortion solid state performance.
Despite the price, this isn’t a stripped-down implementation. The Schiit Buf runs a 100V plate voltage, uses a linear power supply, and incorporates higher-grade parts including Panasonic film capacitors. Schiit’s Coherence topology is also in play, maintaining absolute phase and offering selectable gain: either 0dB or 12dB from a front-panel switch.
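For voltage gain, decibels convert as 20·log10(ratio), so the 12dB setting works out to roughly a 4x voltage gain. A quick sanity check against the Buf’s published 8.2V RMS maximum output; the ~2V RMS source level is an illustrative assumption (a common line-level DAC output), not a Schiit specification:

```python
def db_to_voltage_ratio(db):
    """Voltage ratio for a gain expressed in decibels: 20*log10(ratio) = db."""
    return 10 ** (db / 20)

gain = db_to_voltage_ratio(12)  # ~3.98x voltage gain

# A typical ~2V RMS source (assumed) through 12 dB of gain asks for ~8V RMS,
# which sits just inside the Buf's stated 8.2V RMS maximum output.
needed_output = 2.0 * gain
```

In other words, the high-gain setting leaves very little headroom above a standard line-level source, which is consistent with the 0dB setting being the default for most chains.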
Stoddard is also clear that this isn’t a generic design.
“It’s not just another cathode follower,” he explained. “It’s more akin to our Lyr and Vali tube amps, but it’s different than both—a simple Class A stage optimized for line level use, rather than driving headphones.”
That last part matters. Buf isn’t trying to power anything. It’s there to sit in the chain and influence it, subtly or not, depending on the system and gain setting.
Connectivity is simple: RCA in, RCA out. It will work in most two-channel or desktop setups without much thought. And because it can be fully bypassed, it’s easy to evaluate in real time without pulling cables.
Performance & Design
High Gain (12dB): 20Hz–20kHz ±1dB, THD <0.2%, IMD <0.4%, SNR >97dB, Crosstalk -80dB
Output Impedance: 75 ohms
Input Impedance: 470k ohms
Maximum Output: 8.2V RMS
Topology: Coherence tube gain with BJT inverter, Class A
Protection: Delayed start, fast shutdown, muting relay
Power & Build
Power Supply: External 24VAC and 6VAC wall adapter, linear regulated rails, 6V AC heater
Power Consumption: 6W
Dimensions: 5 x 3.5 x 2.75 inches
Weight: 1 lb
Schiit Buf Setup, Tube Use, and Basics
Buf can be added to a system in two straightforward ways. You can place it between a preamp and power amplifier, which allows it to affect every source connected to the preamp. Alternatively, you can insert it between a single source such as a DAC and an integrated amplifier, preamp, or headphone amplifier. In that setup, Buf only influences that specific source.
Tube lifespan is typically around 5,000 hours of use. That figure reflects active operating time, so it’s best to turn the unit off when it’s not in use to extend tube life.
Buf is compatible with tubes that use a standard 6922 pinout, with heater current up to 600mA. Supported types include 6N1P, 6922, ECC88, and 6DJ8. For simplicity and consistency, using the included tube is the most straightforward option.
Like the rest of Schiit’s lineup, Buf is built in the USA, with assembly in Texas and chassis work in California. It carries a 3-year warranty, with the tube itself covered for 90 days.
The Bottom Line
The $99 Schiit Buf is a simple way to add a tube stage to almost any system without changing your core components, and you can bypass it when you don’t want it. What’s unique is the price, true tube implementation, and flexibility. What’s missing is everything else: no volume control, no inputs beyond basic RCA switching, no DAC, no remote. This is for users who already have a system and want to experiment with tube character without committing to a full tube preamp or amplifier.