Saudi Arabia’s Six Flags Qiddiya City officially opened its doors at the end of 2025, and one of its rides piqued everyone’s interest, which is not surprising given that Falcons Flight is now the world’s tallest, fastest, and longest roller coaster.
The ride commences at a height of 534.8 feet and plunges 518.4 feet in one fell swoop. The track runs an astounding 13,943.6 feet, more than 2.6 miles, and trains reach a mind-blowing 155.3 miles per hour; no other completed circuit coaster comes anywhere close. Former record holder Top Thrill 2 reaches only 420 feet, Formula Rossa travels at a comparatively sedate 149.1 mph, and Steel Dragon 2000 lags far behind at 8,133 feet.
The team knew that they were working in a very special environment and did everything in their power to integrate the natural landscape of the desert. They used three powerful linear synchronous motor launches to accelerate the trains in phases, with the first launch getting the trains moving, the second launch taking effect along the cliff face and accelerating the trains to over 99 mph, and the third launch giving them a final boost as they descended. The launches are controlled by an astonishing 700 LSM modules, many more than any other coaster before them. The custom-designed trains feature 16-inch wheels to withstand the sand and heat, curved windshields to shield the passengers, and a machined structure with no welds.
There are six trains in total, each carrying 14 riders across four cars, with a dual loading station to keep lines moving. The minimum height to ride is 4 feet 3 inches. The entire ride takes three minutes and has airtime hills to keep your stomach in your throat, tremendous positive Gs through the hairpin turns, and an intense feeling of speed as you whizz by on the long straightaway.
All this was brought to fruition through years of planning. Work on this started around 2021, and the track was finally completed towards the end of 2024, with the final piece being placed right on the edge of the cliff. Intamin has said that Falcons Flight is a step forward in coaster design, and this is definitely reflected in the final product.
Spotify is working on a new feature to help users discover music. The immensely popular streaming service recently introduced SongDNA, an “immersive” experience that provides extended credits to the people who contributed to one or more music productions.
Meanwhile, a Los Angeles jury is deliberating a social media addiction lawsuit against Meta and Google.
A New Mexico jury has found that Meta endangered children by misleading users about the safety of its platforms. The verdict, delivered after a nearly seven-week trial, orders Meta to pay $375m in damages.
“The jury’s verdict is a historic victory for every child and family who has paid the price for Meta’s choice to put profits over kids’ safety,” said New Mexico attorney general Raúl Torrez, who filed the lawsuit against Meta in 2023.
In the suit, he claimed Meta knowingly exposes children to sexual exploitation and mental harm for profit. Meta owns Instagram, Facebook and WhatsApp.
According to the New Mexico Department of Justice, evidence presented at the trial established that Meta’s design features enable bad actors to engage in child sexual exploitation, and that its platforms are designed to addict young people.
“Meta executives knew their products harmed children, disregarded warnings from their own employees, and lied to the public about what they knew. Today the jury joined families, educators, and child safety experts in saying enough is enough,” Torrez added. The company has been found to have violated parts of the state’s Unfair Practices Act.
Meta plans to appeal the decision. “We respectfully disagree with the verdict and will appeal,” Meta said in a statement yesterday (24 March).
“We work hard to keep people safe on our platforms and are clear about the challenges of identifying and removing bad actors or harmful content. We will continue to defend ourselves vigorously, and we remain confident in our record of protecting teens online.”
Meanwhile, Torrez will ask the presiding judge to impose additional penalties on Meta during a bench trial scheduled for early May. He will also ask the court to force Meta to make its apps safer for children.
The New Mexico verdict is a major loss for Meta, which is gearing up for a number of trials set for this year. A jury in Los Angeles is currently deliberating a social media addiction suit against Meta and Google. TikTok and Snapchat were involved in the original suit, but have since settled out of court.
Thousands have filed lawsuits against social media companies over the alleged harm they pose to their users, including more than 40 US state attorneys general.
A coalition suit filed in 2023 accused Meta of designing and deploying “harmful features” on Instagram and Facebook that get young people addicted to the platforms.
Don’t miss out on the knowledge you need to succeed. Sign up for the Daily Brief, Silicon Republic’s digest of need-to-know sci-tech news.
Big Tech is not alone in the AI innovation race. Four startup founders took the stage at GeekWire’s Agents of Transformation event Tuesday in Seattle for a rapid-fire pitch competition.
Ideas from Pay-i, Cascade, Autessa and GemaTEG were pitched to the crowd and a panel of judges, with Pay-i founder David Tepper emerging as winner and most impressive under pressure.
Judges Bryan Hale of Anthos Capital, Yifan Zhang of AI House, and T.A. McCann of Pioneer Square Labs said they were looking for someone who was “both great at presenting but also fantastic at answering the questions.”
Read more about each pitch:
Pay-i (pitch by David Tepper, founder/CEO)
David Tepper, right, founder and CEO of Pay-i, pitches alongside judges, from left, Bryan Hale of Anthos Capital, Yifan Zhang of AI House, and T.A. McCann of Pioneer Square Labs during GeekWire’s Agents of Transformation event in Seattle on Tuesday. (GeekWire Photo / Kevin Lisota)
An AI spend management platform that tracks ROI across an organization’s entire AI footprint — not just tokens, but the full cost stack including models, tools, and GPU resources.
David Tepper argued that tokens account for only 72% of the total expense associated with AI, and that the complexity multiplies fast when agents are drawing on multiple models, enterprise discounts, and rented GPU banks simultaneously.
Born from his days tracking Microsoft’s internal Gen AI spend on Excel spreadsheets — a period when he says he once saved his division $300,000 a week by simply asking the right questions — the company targets enterprises spending at least $500,000 on AI annually.
“After all the hype and FOMO wears off, there’s three letters that are going to survive the AI revolution, and that’s ROI,” he said.
Cascade AI (pitch by Ana-Maria Constantin, co-founder/CEO)
Ana-Maria Constantin, co-founder and CEO of Cascade, pitches during GeekWire’s Agents of Transformation event in Seattle. (GeekWire Photo / Kevin Lisota)
An agentic HR and IT support platform that deploys AI agents to handle sensitive employee situations — benefits navigation, mental health resources, leave management — confidentially and around the clock, freeing HR teams for higher-stakes human judgment.
Ana-Maria Constantin opened her pitch with a show of hands, asking the audience whether they’d ever hesitated to go to HR because they weren’t sure whose side HR would be on.
“Imagine if that’s the case for the people in this room, senior leaders working for some of the most successful companies in the world,” she said. “Imagine how regular employees are feeling. That’s the problem we’re working on at Cascade AI.”
Autessa (pitch by Roshnee Sharma, CEO)
Roshnee Sharma, CEO of Autessa, pitches at GeekWire’s Agents of Transformation event in Seattle. (GeekWire Photo / Kevin Lisota)
A platform that replaces off-the-shelf SaaS with custom-built software staffed by “AI employees” — agents that handle workflows like lead qualification and order processing.
Roshnee Sharma’s pitch opened with a crowd-participation moment: what does SaaS really stand for? “Software as a spend,” she declared.
The company targets mid-market businesses with $20 million to $500 million in revenue, and prices its AI employees at roughly $7 to $10 each.
Judges pushed back on whether results were truly cost-saving or merely cost-neutral; Sharma argued the savings are real because clients avoid having to hire additional headcount: “We didn’t fire people. We got people able to do more of the work that they wanted to do.”
GemaTEG (pitch by Manfred Markevitch, co-founder/CEO)
Is this any way to cool an overheating data center processor? Manfred Markevitch, co-founder and CEO of GemaTEG, makes the case for his startup’s alternative at GeekWire’s Agents of Transformation event in Seattle. (GeekWire Photo / Kevin Lisota)
The outlier of the group: a hardware thermal management company targeting AI data centers, using solid-state cooling technology that requires no water and uses 40% less power than conventional systems.
“AI runs on hardware. It’s not only software,” co-founder and CEO Manfred Markevitch told the crowd, noting that a conventional hyperscale data center can consume a million gallons of water per day.
GemaTEG’s granular approach cools at the individual chip level rather than the whole building, and the company claims its systems perform twice as well as conventional ones on a per-watt basis. The company already has installations with the U.S. Department of Energy, and partners in Italy and Switzerland.
Hyperscaler deployment is one to two years out, with chip manufacturer design-in conversations already underway. Judges pressed hard on customer lock-in risk; Markevitch compared the stickiness of their solution to Intel Inside — once designed in, it spans multiple chip generations.
‘Wispit 2C’ is estimated to be about 5m years old and most likely 10 times the mass of Jupiter.
Galway native Chloe Lawlor has discovered a new planet – the second one to be found forming near an infant star called ‘Wispit 2’, some 437 light years away.
As a child, Lawlor wanted to be an artist, she tells SiliconRepublic.com. However, she changed her mind once she started university. “I moved into physics because I did like physics in school, so I thought, ‘Oh, maybe I’ll just try this out.’”
The 25-year-old says discoveries such as these feed the innate curiosity humans have in wanting to know how we came to be, how we evolved and why we are here. Lawlor is a PhD student at the University of Galway’s Centre for Astronomy at the School of Natural Sciences and the Ryan Institute.
She is working in collaboration with project lead Richelle van Capelleveen, a PhD student from Leiden Observatory in the Netherlands, and postdoctoral researcher Guillaume Bourdarot from the Max Planck Institute for Extraterrestrial Physics in Germany, to learn more about young planets and how they’re forming.
“Most of the planets that we’ve observed have been much older,” Lawlor says. “We don’t know how they get to those sort of final stages like something like our solar system. This is really key for these formation theories and it’s hopefully going to tell us a lot about these young systems, how they’re forming, and then how they evolve.”
Lawlor’s new discovery, an exoplanet named ‘Wispit 2C’, is thought to be about 5m years old. ‘Wispit 2B’, a nearby planet, was discovered last year by van Capelleveen and Dr Laird Close from the University of Arizona.
Both these exoplanets are at early stages of formation in the disc around Wispit 2, which is located in the Constellation of the Eagle, a prominent equatorial constellation visible in the northern hemisphere summer along the Milky Way.
Lawlor’s discovery makes Wispit 2 the second known young and still forming multi-planet system. The only other system yet discovered with more than one planet developing is PDS 70, some 400 light years away from Earth.
Wispit 2C is a gas giant, likely around 10 times the mass of Jupiter. It is twice as massive as Wispit 2B and orbits four times closer to its host star, which makes it incredibly difficult to detect with ground-based telescopes.
Wispit 2B and Wispit 2C forming around Wispit 2. Image: ESO/C Lawlor, R F van Capelleveen et al.
The exoplanet was detected using the European Southern Observatory’s Very Large Telescope in Chile’s Atacama desert. By linking several telescopes together to act as one giant instrument, the research team was able to observe regions very close to the star. In their analysis, the team was able to detect carbon monoxide gas, a chemical commonly found in the atmospheres of young giant planets.
Lawlor said earlier this week: “At first, we weren’t sure if it was a planet or a very large dust clump. We very quickly made follow-up observations using the Very Large Telescope Interferometer, an incredible setup where multiple telescopes can be connected to form a large virtual telescope.
“This allowed us to take what we call a spectrum, which is essentially a chemical fingerprint revealing the elements and molecules in an object’s atmosphere.”
Prof Frances Fahy, director of the Ryan Institute, said: “The discovery of the planet Wispit 2C is a remarkable achievement and highlights the world-class astrophysics research taking place at University of Galway.”
The team will continue its efforts in the hope of finding more planets in the system.
Go ahead and get ready to cue your inevitable comparisons: the new trailer for HBO’s Harry Potter series dropped on Wednesday, giving audiences a first look at muggles, magical Hogwarts students and The Boy Who Lived. Due to hit HBO and HBO Max for Christmas 2026, the TV show will be a direct adaptation of the wizarding books, starting off with The Philosopher’s Stone.
To fans familiar with the movie franchise, this may feel like a rediscovery — or reintroduction — of the live-action world of Harry Potter. The trailer shows a young Harry and his signature scar, his tyrant of an aunt and the moment he receives his invitation to Hogwarts. There is a first meeting with Hagrid, tender moments with Ron and Hermione, and a look at Lucius Malfoy and Snape.
The series features Dominic McLaughlin as Harry Potter, Arabella Stanton as Hermione Granger and Alastair Stout as Ron Weasley, and the expansive cast also includes Nick Frost as Hagrid, John Lithgow as Hogwarts headmaster Dumbledore and Paapa Essiedu as Professor Snape.
Nintendo might finally be doing something gamers have been asking for… forever. The company has officially confirmed that Switch 2 games will have different pricing for digital and physical versions, with digital copies expected to be cheaper.
The change begins in May 2026, starting with titles like Yoshi and the Mysterious Book. For example, early listings on the eShop show the game priced at $59.99 digitally vs $69.99 physically, marking a clear shift in how Nintendo handles game pricing.
Why is Nintendo doing this?
Let’s be real, physical games are comparatively expensive to make. Nintendo says the change reflects the higher costs of manufacturing and distributing cartridges, compared to digital downloads. This aligns with what the industry has been doing for years, except Nintendo has been one of the few holdouts where digital and physical games often cost the same.
There’s also a bigger strategy at play here. By making digital games cheaper, Nintendo could nudge more players toward digital purchases. That essentially translates to higher margins, fewer logistics headaches, and a tighter grip on its ecosystem. In other words, this isn’t just about fairness in pricing… It’s also about where Nintendo wants its future sales to go.
What does this mean for players?
So, does this mean all games going forward will have different prices? Well, not exactly, and this is where things get a little messy. While Nintendo is setting a lower MSRP for digital games, actual pricing can still vary depending on the title and retailer. Plus, not every game will follow the same pattern, so bigger releases could still carry higher price tags, making the gap between digital and physical a bit inconsistent.
For players, though, this is still a win. Digital games are finally getting a clear pricing advantage after years of being priced the same as (or sometimes higher than) physical copies. That said, the trade-off remains: physical games can be resold or shared, while digital ones stay locked to your account.
A jury found Meta and YouTube negligent in a landmark social media addiction case, ruling that addictive design features such as infinite scroll and algorithmic recommendations harmed a young user and contributed to her mental health distress. The verdict awards $3 million in compensatory damages so far and could pave the way for more lawsuits seeking financial penalties and product changes across the social media industry. “Meta is responsible for 70 percent of that cost and YouTube for the remainder,” notes The New York Times. “TikTok and Snap both settled with the plaintiff for undisclosed terms before the trial started.” From the report: The bellwether case, which was brought by a now 20-year-old woman identified as K.G.M., had accused social media companies of creating products as addictive as cigarettes or digital casinos. K.G.M. sued Meta, which owns Instagram and Facebook, and Google’s YouTube over features like infinite scroll and algorithmic recommendations that she claimed led to anxiety and depression.
The jury of seven women and five men will deliberate further to decide what punitive damages the companies should pay for malice or fraud. The verdict in K.G.M.’s case — one of thousands of lawsuits filed by teenagers, school districts and state attorneys general against Meta, YouTube, TikTok and Snap, which owns Snapchat — was a major win for the plaintiffs. The finding validates a novel legal theory that social media sites or apps can cause personal injury. It is likely to factor into similar cases expected to go to trial this year, which could expose the internet giants to further financial damages and force changes to their products. The verdict also comes on the heels of a New Mexico jury ruling that found Meta liable for violating state law by failing to protect users of its apps from child predators.
Standard RAG pipelines break when enterprises try to use them for long-term, multi-session LLM agent deployments. This is a critical limitation as demand for persistent AI assistants grows.
xMemory, a new technique developed by researchers at King’s College London and The Alan Turing Institute, solves this by organizing conversations into a searchable hierarchy of semantic themes.
Experiments show that xMemory improves answer quality and long-range reasoning across various LLMs while cutting inference costs. According to the researchers, it drops token usage from over 9,000 to roughly 4,700 tokens per query compared to existing systems on some tasks.
For real-world enterprise applications like personalized AI assistants and multi-session decision support tools, this means organizations can deploy more reliable, context-aware agents capable of maintaining coherent long-term memory without blowing up computational expenses.
RAG wasn’t built for this
In many enterprise LLM applications, a critical expectation is that these systems will maintain coherence and personalization across long, multi-session interactions. To support this long-term reasoning, one common approach is to use standard RAG: store past dialogues and events, retrieve a fixed number of top matches based on embedding similarity, and concatenate them into a context window to generate answers.
However, traditional RAG is built for large databases where the retrieved documents are highly diverse. The main challenge is filtering out entirely irrelevant information. An AI agent’s memory, by contrast, is a bounded and continuous stream of conversation, meaning the stored data chunks are highly correlated and frequently contain near-duplicates.
To understand why simply increasing the context window doesn’t work, consider how standard RAG handles a concept like citrus fruit.
Imagine a user has had many conversations saying things like “I love oranges,” “I like mandarins,” and separately, other conversations about what counts as a citrus fruit. Traditional RAG may treat all of these as semantically close and keep retrieving similar “citrus-like” snippets.
“If retrieval collapses onto whichever cluster is densest in embedding space, the agent may get many highly similar passages about preference, while missing the category facts needed to answer the actual query,” Lin Gui, co-author of the paper, told VentureBeat.
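To make that failure mode concrete, here is a minimal sketch of standard embedding-similarity retrieval. The snippets, three-dimensional vectors, and k value are illustrative stand-ins (a real system would use an embedding model), not anything from the paper:

```python
import math

def cosine(a, b):
    # Cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# Toy memory store of (text, embedding) pairs. The hand-made vectors place
# the two preference snippets very close together in embedding space.
memory = [
    ("I love oranges", [0.90, 0.10, 0.0]),
    ("I like mandarins", [0.88, 0.12, 0.0]),
    ("Mandarins are a citrus fruit", [0.20, 0.90, 0.1]),
]

def retrieve(query_vec, k=2):
    # Standard RAG: rank every stored chunk by similarity and keep the top-k.
    ranked = sorted(memory, key=lambda m: cosine(query_vec, m[1]), reverse=True)
    return " | ".join(text for text, _ in ranked[:k])

# A query near the dense "preference" cluster spends both retrieved slots on
# near-duplicate preference snippets and misses the category fact entirely.
context = retrieve([0.90, 0.15, 0.0])
print(context)  # I like mandarins | I love oranges
```

Both retrieved slots go to near-duplicates of the same preference, which is exactly the collapse Gui describes: the category fact needed to answer a question like "does the user like citrus fruit?" never makes it into the context.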
Why the fix most teams reach for makes things worse
A common fix for engineering teams is to apply post-retrieval pruning or compression to filter out the noise. These methods assume that the retrieved passages are highly diverse and that irrelevant noise patterns can be cleanly separated from useful facts.
This approach falls short in conversational agent memory because human dialogue is “temporally entangled,” the researchers write. Conversational memory relies heavily on co-references, ellipsis, and strict timeline dependencies. Because of this interconnectedness, traditional pruning tools often accidentally delete important bits of a conversation, leaving the AI without vital context needed to reason accurately.
Naive RAG vs structured memory (source: arXiv)
To overcome these limitations, the researchers propose a shift in how agent memory is built and searched, which they describe as “decoupling to aggregation.”
Instead of matching user queries directly against raw, overlapping chat logs, the system organizes the conversation into a hierarchical structure. First it decouples the conversation stream into distinct, standalone semantic components. These individual facts are then aggregated into a higher-level structural hierarchy of themes.
When the AI needs to recall information, it searches top-down through the hierarchy, going from themes to semantics and finally to raw snippets. This approach avoids redundancy. If two dialogue snippets have similar embeddings, the system is unlikely to retrieve them together if they have been assigned to different semantic components.
For this architecture to succeed, it must balance two vital structural properties. The semantic components must be sufficiently differentiated to prevent the AI from retrieving redundant data. At the same time, the higher-level aggregations must remain semantically faithful to the original context to ensure the model can craft accurate answers.
A four-level hierarchy that shrinks the context window
The researchers developed xMemory, a framework that combines structured memory management with an adaptive, top-down search strategy.
xMemory continuously organizes the raw stream of conversation into a structured, four-level hierarchy. At the base are the raw messages, which are first summarized into contiguous blocks called “episodes.” From these episodes, the system distills reusable facts as semantics that disentangle the core, long-term knowledge from repetitive chat logs. Finally, related semantics are grouped together into high-level themes to make them easily searchable.
xMemory architecture (source: arXiv)
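As a rough sketch of the four-level structure described above, the hierarchy can be modeled as nested records. The class and field names here are our own shorthand, not the paper's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Episode:
    summary: str            # LLM summary of a contiguous block of messages
    messages: list[str]     # the raw messages it covers

@dataclass
class Semantic:
    fact: str               # reusable long-term fact distilled from episodes
    episodes: list[Episode] = field(default_factory=list)

@dataclass
class Theme:
    label: str              # high-level grouping of related semantics
    semantics: list[Semantic] = field(default_factory=list)

# Build bottom-up: raw messages -> episode -> semantic fact -> theme.
ep = Episode(
    summary="User discusses favourite fruit over several turns",
    messages=["I love oranges", "I like mandarins", "Clementines too"],
)
sem = Semantic(fact="User prefers citrus fruit", episodes=[ep])
theme = Theme(label="Food preferences", semantics=[sem])

# Retrieval traverses top-down: theme -> semantic -> episode -> messages.
for s in theme.semantics:
    print(theme.label, "->", s.fact, "->", s.episodes[0].summary)
```

The key point of the layering is that retrieval can stop at any level: often the distilled fact alone is enough, and the raw messages are only fetched when finer detail is needed.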
xMemory uses a special objective function to constantly optimize how it groups these items. This prevents categories from becoming too bloated, which slows down search, or too fragmented, which weakens the model’s ability to aggregate evidence and answer questions.
When it receives a prompt, xMemory performs a top-down retrieval across this hierarchy. It starts at the theme and semantic levels, selecting a diverse, compact set of relevant facts. This is crucial for real-world applications where user queries often require gathering descriptions across multiple topics or chaining connected facts together for complex, multi-hop reasoning.
Once it has this high-level skeleton of facts, the system controls redundancy through what the researchers call “Uncertainty Gating.” It only drills down to pull the finer, raw evidence at the episode or message level if that specific detail measurably decreases the model’s uncertainty.
“Semantic similarity is a candidate-generation signal; uncertainty is a decision signal,” Gui said. “Similarity tells you what is nearby. Uncertainty tells you what is actually worth paying for in the prompt budget.” It stops expanding when it detects that adding more detail no longer helps answer the question.
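The drill-down logic can be sketched as follows. The uncertainty function below is a toy proxy (it just decays with the amount of distinct evidence), standing in for xMemory's model-based uncertainty measure; the threshold and snippets are likewise illustrative:

```python
def uncertainty(context: list[str]) -> float:
    # Toy proxy: uncertainty falls with the amount of distinct evidence,
    # with diminishing returns. A real system would use a model-based signal.
    return 1.0 / (1 + len(set(context)))

def retrieve_with_gating(fact: str, evidence: list[str], min_gain: float = 0.06):
    # Start from the high-level fact (the "skeleton"), then drill down to
    # raw snippets only while each one measurably reduces uncertainty.
    context = [fact]
    for snippet in evidence:  # candidate raw snippets, best first
        gain = uncertainty(context) - uncertainty(context + [snippet])
        if gain < min_gain:   # adding more detail no longer helps: stop
            break
        context.append(snippet)
    return context

ctx = retrieve_with_gating(
    "User prefers citrus fruit",
    ["I love oranges", "I like mandarins", "Clementines too", "Kumquats as well"],
)
print(ctx)  # the fact plus only the first two snippets; the rest are gated out
```

Here the first two snippets each clear the gain threshold, but the third falls below it, so expansion stops: the prompt budget is spent only where it demonstrably reduces uncertainty.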
What are the alternatives?
Existing agent memory systems generally fall into two structural categories: flat designs and structured designs. Both suffer from fundamental limitations.
Flat approaches such as MemGPT log raw dialogue or minimally processed traces. This captures the conversation but accumulates massive redundancy and increases retrieval costs as the history grows longer.
Structured systems such as A-MEM and MemoryOS try to solve this by organizing memories into hierarchies or graphs. However, they still rely on raw or minimally processed text as their primary retrieval unit, often pulling in extensive, bloated contexts. These systems also depend heavily on LLM-generated memory records that have strict schema constraints. If the AI deviates slightly in its formatting, it can cause memory failure.
xMemory addresses these limitations through its optimized memory construction scheme, hierarchical retrieval, and dynamic restructuring of its memory as it grows larger.
When to use xMemory
For enterprise architects, knowing when to adopt this architecture over standard RAG is critical. According to Gui, “xMemory is most compelling where the system needs to stay coherent across weeks or months of interaction.”
Customer support agents, for instance, benefit greatly from this approach because they must remember stable user preferences, past incidents, and account-specific context without repeatedly pulling up near-duplicate support tickets. Personalized coaching is another ideal use case, requiring the AI to separate enduring user traits from episodic, day-to-day details.
Conversely, if an enterprise is building an AI to chat with a repository of files, such as policy manuals or technical documentation, “a simpler RAG stack is still the better engineering choice,” Gui said. In those static, document-centric scenarios, the corpus is diverse enough that standard nearest-neighbor retrieval works perfectly well without the operational overhead of hierarchical memory.
The write tax is worth it
xMemory cuts the latency bottleneck associated with the LLM’s final answer generation. In standard RAG systems, the LLM is forced to read and process a bloated context window full of redundant dialogue. Because xMemory’s precise, top-down retrieval builds a much smaller, highly targeted context window, the reader LLM spends far less compute time analyzing the prompt and generating the final output.
In their experiments on long-context tasks, both open and closed models equipped with xMemory outperformed other baselines, using considerably fewer tokens while increasing task accuracy.
xMemory increases performance on different tasks while reducing token costs (source: arXiv)
However, this efficient retrieval comes with an upfront cost. For an enterprise deployment, the catch with xMemory is that it trades a massive read tax for an upfront write tax. While it ultimately makes answering user queries faster and cheaper, maintaining its sophisticated architecture requires substantial background processing.
Unlike standard RAG pipelines, which cheaply dump raw text embeddings into a database, xMemory must execute multiple auxiliary LLM calls to detect conversation boundaries, summarize episodes, extract long-term semantic facts, and synthesize overarching themes.
Furthermore, xMemory’s restructuring process adds additional computational requirements as the AI must curate, link, and update its own internal filing system. To manage this operational complexity in production, teams can execute this heavy restructuring asynchronously or in micro-batches rather than synchronously blocking the user’s query.
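One way to keep that restructuring off the user-facing path is a background worker that consolidates new turns in micro-batches. This is a generic pattern sketch, not code from the xMemory repository; the batch size and the consolidation step (here just collecting the batch) are placeholders for the real summarize/extract/theme calls:

```python
import queue
import threading

updates: queue.Queue = queue.Queue()
consolidated: list[list[str]] = []
BATCH = 3

def worker():
    # Drain the queue, consolidating turns in micro-batches so the heavy
    # LLM-based restructuring never blocks the user's query path.
    batch = []
    while True:
        item = updates.get()
        if item is None:                    # shutdown sentinel
            if batch:
                consolidated.append(batch)  # flush any partial batch
            break
        batch.append(item)
        if len(batch) >= BATCH:
            # A real system would call the LLM here to summarize episodes,
            # extract semantic facts, and update themes for this batch.
            consolidated.append(batch)
            batch = []

t = threading.Thread(target=worker)
t.start()
for turn in ["msg1", "msg2", "msg3", "msg4", "msg5"]:
    updates.put(turn)                       # user-facing path: just enqueue
updates.put(None)
t.join()
print(len(consolidated))                    # two micro-batches: 3 turns, then 2
```

The user-facing path pays only the cost of an enqueue; the write tax is amortized in the background.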
For developers eager to prototype, the xMemory code is publicly available on GitHub under an MIT license, making it viable for commercial uses. If you are trying to implement this in existing orchestration tools like LangChain, Gui advises focusing on the core innovation first: “The most important thing to build first is not a fancier retriever prompt. It is the memory decomposition layer. If you get only one thing right first, make it the indexing and decomposition logic.”
Retrieval isn’t the last bottleneck
While xMemory offers a powerful solution to today’s context-window limitations, it clears the path for the next generation of challenges in agentic workflows. As AI agents collaborate over longer horizons, simply finding the right information won’t be enough.
“Retrieval is a bottleneck, but once retrieval improves, these systems quickly run into lifecycle management and memory governance as the next bottlenecks,” Gui said. Navigating how data should decay, handling user privacy, and maintaining shared memory across multiple agents is exactly “where I expect a lot of the next wave of work to happen,” he said.
Liquid cooling is rewriting the rules of AI infrastructure, but most deployments have not fully crossed the line. GPUs and CPUs have moved to liquid cooling, while storage has depended on airflow, creating an operationally inefficient hybrid architecture.
What appears to be a pragmatic transition strategy is, in practice, a structural liability.
“A hybrid cooling approach is an operationally inefficient situation,” explains Hardeep Singh, thermal-mechanical hardware team manager at Solidigm. “You’re paying for and maintaining two entirely separate, expensive cooling infrastructures, and could be exposed to the worst-of-both-worlds problems.”
While liquid cooling requires pumps, fluid manifolds, and coolant distribution units (CDUs), air-cooled components require CRAC units, cold aisles, and evaporative cooling towers. Organizations moving to a hybrid solution by just adding some liquid cooling are absorbing the cost premium without capturing the full TCO benefit.
The thermal physics makes things worse. Bulky liquid-cooling cold plates, thick hoses, and manifolds physically obstruct airflow inside the GPU server chassis. This concentrates thermal stress on the remaining air-cooled components, including storage drives, memory, and network cards, because server fans cannot push adequate airflow around the liquid plumbing. The components most reliant on fans end up in the worst possible thermal environment.
Water consumption is an all but ignored, equally serious problem. Traditional air-cooled components rely on server fans to move heat into ambient air, which is then absorbed by a water loop and pumped to evaporative cooling towers. These systems can consume millions of gallons of water over time. As rack power densities continue to climb to support modern AI workloads, the evaporative water penalty becomes, as Singh puts it, “environmentally and economically indefensible.”
As AI infrastructure evolves toward liquid-cooled and fanless GPU systems, the true constraints on scale are shifting from compute performance to system-level thermal design. Modern AI platforms are no longer built server by server; they are engineered as tightly integrated rack- and pod-level systems where power delivery, cooling distribution, and component placement are inseparable.
In this environment, storage architectures designed for airflow-dependent data centers are becoming a limiting factor. As GPU platforms move fully into shared liquid-cooling domains, anchored by rack-level CDUs, every component in the system must operate natively within the same thermal and mechanical design. Storage can no longer rely on isolated cooling paths or bespoke thermal assumptions without introducing inefficiency, complexity, or density trade-offs at the system level.
Why storage is no longer a passive subsystem
For infrastructure leaders, this marks a fundamental transition. Storage is no longer a passive subsystem attached to compute, but instead an active participant in system-level cooling, serviceability, and GPU utilization. The ability to scale AI now depends on whether storage can integrate cleanly into liquid-cooled GPU systems, without fragmenting cooling architectures or constraining rack-level design.
And the race to scale AI is no longer just about who has the most GPUs, but instead about who can keep them cool, says Scott Shadley, director of leadership narrative and evangelist at Solidigm.
“Finding a way to enable liquid-cooled storage while still making it user serviceable has been one of the biggest challenges in designing fanless system solutions,” Shadley says. “As AI workloads evolve, the pressure on storage will only intensify.”
Techniques like KV cache offload, which move data between GPU memory and high-speed storage during inference, make storage latency and thermal performance directly relevant to model serving efficiency. In these architectures, a storage subsystem that throttles under thermal load, because it still depends on airflow the liquid-cooled chassis cannot provide, slows down both reads and the model itself.
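The offload pattern described above can be illustrated with a toy two-tier cache. This is a minimal sketch, not any vendor's implementation: an in-memory `OrderedDict` stands in for GPU memory and pickle files on disk stand in for NVMe storage, with LRU eviction spilling cold entries to the slower tier.

```python
import os
import pickle
import tempfile
from collections import OrderedDict

class KVCacheOffload:
    """Toy two-tier KV cache: hot entries stay in memory, cold ones spill to disk.

    Real inference systems move attention KV tensors between GPU HBM and NVMe;
    here plain Python objects and pickle files stand in for both tiers.
    """

    def __init__(self, capacity, spill_dir=None):
        self.capacity = capacity                      # max in-memory entries
        self.hot = OrderedDict()                      # in-memory tier, in LRU order
        self.spill_dir = spill_dir or tempfile.mkdtemp()

    def _spill_path(self, key):
        return os.path.join(self.spill_dir, f"{key}.kv")

    def put(self, key, value):
        self.hot[key] = value
        self.hot.move_to_end(key)
        while len(self.hot) > self.capacity:          # evict coldest to "storage"
            cold_key, cold_val = self.hot.popitem(last=False)
            with open(self._spill_path(cold_key), "wb") as f:
                pickle.dump(cold_val, f)

    def get(self, key):
        if key in self.hot:
            self.hot.move_to_end(key)                 # refresh LRU position
            return self.hot[key]
        with open(self._spill_path(key), "rb") as f:  # miss: reload from disk
            value = pickle.load(f)
        os.remove(self._spill_path(key))
        self.put(key, value)                          # promote back to memory
        return value
```

In this toy model, the `get` path that reloads from disk is exactly where storage latency (and any thermal throttling of the drive) would stall the model's forward pass.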
Moving to integrated liquid cooling
Moving from traditional air-cooled GPU servers to integrated liquid-cooled racks improves power usage effectiveness (PUE) and reduces the operational cost for the datacenter. It also replaces the noisy computer room air handler (CRAH) and introduces a modern, efficient liquid CDU, with potential scope to eliminate chillers if racks can be cooled with liquid at 45°C.
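The PUE improvement is straightforward to quantify: PUE is total facility power divided by IT equipment power, so shrinking the cooling overhead pushes the ratio toward the ideal of 1.0. The numbers below are illustrative assumptions, not figures from the article.

```python
def pue(total_facility_kw, it_equipment_kw):
    """Power usage effectiveness: total facility power / IT equipment power.

    1.0 is the theoretical ideal (every watt goes to compute); cooling,
    power conversion, and lighting overhead push the ratio above 1.0.
    """
    return total_facility_kw / it_equipment_kw

# Illustrative assumption: a 1,000 kW IT load, with an air-cooled hall
# spending 500 kW on CRAHs and chillers versus a liquid-cooled hall
# spending 150 kW on CDUs and pumps.
air_cooled = pue(1_000 + 500, 1_000)      # 1.5
liquid_cooled = pue(1_000 + 150, 1_000)   # 1.15
```

Under these assumed loads, the liquid-cooled hall delivers the same compute for roughly 23% less total facility power.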
When storage is liquid-cooled in the absence of fans, it must also support serviceability with no liquid leakage. The shift also creates a new requirement that many infrastructure teams are only beginning to grapple with: every component in the rack must operate natively within the same cooling architecture.
Storage as an active participant in system design
Storage design is no longer an isolated engineering problem. It is a direct variable in GPU utilization, system reliability, and operational efficiency. The solution is to redesign storage from the ground up for liquid-cooled, fanless environments. This is harder than it sounds. Traditional SSD design assumes airflow for thermal management and places components on both sides of a thermally insulated PCB. Neither assumption holds in a CDU-anchored architecture.
“SSDs need to be designed with a best-in-class thermal solution to specifically conduct heat from internal components efficiently and transfer it to fluid,” says Singh. “The design must include a low-resistance path for heat to transfer to a single cold plate attached on one side.”
At the same time, drives must support serviceability without liquid leakage during insertion and removal, and without degrading the thermal interface between the drive and the cold plate.
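Singh's "low-resistance path" requirement can be made concrete with a steady-state conduction model: the component's temperature is the coolant inlet temperature plus its power dissipation times the summed thermal resistance of each stage between the die and the fluid. The stage values below are illustrative assumptions, not Solidigm specifications.

```python
def component_temp_c(coolant_in_c, power_w, resistances_c_per_w):
    """Steady-state component temperature through a series conduction path.

    Each stage (die -> case -> interface material -> cold plate -> coolant)
    adds power * R_theta of temperature rise on top of the coolant inlet.
    """
    return coolant_in_c + power_w * sum(resistances_c_per_w)

# Assumed example: a 25 W SSD controller conducting through a 0.8 C/W
# internal path plus a 0.2 C/W cold-plate interface, with 45 C facility
# water (the warm-water target mentioned above).
temp = component_temp_c(45.0, 25.0, [0.8, 0.2])   # 45 + 25 * 1.0 = 70.0 C
```

The arithmetic shows why every milli-kelvin-per-watt matters: with 45°C inlet water, a single extra 0.4 C/W of interface resistance on a 25 W drive adds 10°C, which is often the difference between full speed and thermal throttling.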
Solidigm has worked with NVIDIA to address SSD liquid-cooling challenges, such as hot-swappability and single-side cooling, reducing the thermal footprint of storage within the shared liquid loop, and ensuring GPUs receive their proportional share of coolant.
“If storage is not designed for a liquid-cooled environment efficiently, it will either throttle to lower performance or require more liquid volume,” he says. “Which directly and indirectly leads to under-utilization of GPU capability.”
Alignment on standards and the path to interoperability
Solidigm is not working on this in isolation. The broader industry is coalescing around standards to ensure liquid-cooled AI systems are interoperable rather than a patchwork of custom solutions. The SNIA and the Open Compute Project (OCP) are the primary bodies driving this work.
Solidigm led the industry standard for liquid cooling in SFF-TA-1006 for the E1.S form factor and is an active participant in OCP work streams covering rack design, thermal management, and sustainability. Custom, bespoke cooling solutions for storage are giving way to standards-aligned, production-ready designs that integrate cleanly into liquid-cooled GPU platforms.
“There are several organizations involved in this work,” says Shadley, who is also a SNIA board member. “They started with component-level solutions, driven heavily by SNIA and the SFF TA TWG. The next level is solution-level work, which is currently being heavily driven by OCP.”
Solidigm’s roadmap is leading the way
Liquid and immersion cooling have changed the design rules for system-level architectures, relaxing old constraints and removing some longstanding barriers. The ability to build NVMe SSD-only platforms also removes the platter-based box constraint that comes with HDD solutions, Shadley says.
“Solidigm customers have an active and lead role in roadmap decisions for our products due to their deep technical alignment with the ecosystem,” he says. “We do not simply make and sell products, we integrate, co-design, co-develop, and innovate with and alongside our partners, customers, and their customers.”
Adds Singh: “Solidigm’s key strength is innovation and customer-inspired system level engineering. This will continue to aggressively lead the way for liquid cooling adoption for storage.”
Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. For more information, contact sales@venturebeat.com.
Apple released iOS 26.4 on Tuesday, March 24, about a week after the tech giant released iOS 26.3.1 (a), the company’s first Background Security Improvement update. The most recent update brings a slew of features to your iPhone, including new emoji and video podcasts.
You can download iOS 26.4 now by going to Settings and tapping General. Next, select Software Update, tap Update Now and follow the prompts on your screen.
Here are some of the new features iOS 26.4 brings to your iPhone.
New emoji
All the new emoji iOS 26.4 brings to your iPhone.
With iOS 26.4, your iPhone gets eight new emoji, including an orca, a trombone, a landslide, a ballet dancer and a distorted face.
The Unicode Consortium is responsible for creating emoji, and it approved these new emoji in September as part of Unicode 17.0. But this is the first time the emoji are showing up on iPhones.
Video podcasts come to Apple Podcasts
The iOS 26.4 update also brings video to your Podcasts app. To view these video podcasts, open the Podcasts app and start listening to an episode with the video player icon in the top right corner of the title card. Once you’re listening, open the media player and tap the Turn Video On button near the podcast’s progress bar. The podcast’s artwork will be replaced with the video. To turn the video off again, tap Turn Video Off and the podcast’s artwork will return.
Video podcasts are a fun addition to the Podcasts app.
Reduce some Liquid Glass effects across your device
Apple’s iOS 26.4 update adds another setting to minimize Liquid Glass effects across your device: Reduce Bright Effects. Here’s where to find this setting.
1. Tap Settings.
2. Tap Accessibility.
3. Tap Display & Text Size.
4. Scroll down the menu to find Reduce Bright Effects.
Reduce Bright Effects can eliminate some Liquid Glass effects.
Apple says the setting will minimize highlighting and flashing when interacting with on-screen elements, such as buttons or the keyboard. So if you find certain flashing effects annoying, you can now disable them.
Playlist Playground in Apple Music
The iOS 26.4 update also introduces a new playlist generator for Apple Music subscribers called Playlist Playground. Apple says the feature can create a playlist based on your description. Once you enter your description, it will create a playlist with a title, tracklist and general description.
To access Playlist Playground, first you have to be an Apple Music subscriber. Then, open Apple Music and go to your Library. In your Library, you’ll see a new icon at the top of your screen with a plus and a few lines next to it. Tap this, and you’ll be prompted to describe your playlist.
Playlist Playground can generate a playlist for you in no time.
Apple notes this feature is still in beta, so it might create unexpected results. So you might ask for a good gym mix and end up with some Whitney Houston — but who’s to say Whitney isn’t good gym music?
Find nearby concerts with the aptly named Concerts feature
iOS 26.4 brings a new Concerts feature to your Apple Music app.
“Concerts helps you discover nearby shows from artists in your library and recommends new artists based on what you listen to,” Apple writes in the update’s description. That way, you can easily find nearby shows.
To find Concerts, tap the magnifying glass icon at the bottom of your Apple Music screen, then tap Concerts. The feature may ask for your location the first time you use it. Then you’ll see popular shows nearby, along with their dates, times and locations. Tapping into any of these shows gives you more information on the show, as well as a link to buy tickets.
The Concerts tab in Apple Music makes it easy to see upcoming shows in your area.
Shazam works offline, kind of
With iOS 26.4, the Shazam-powered music recognition in Control Center works in more situations. Now, if you aren’t connected to the internet when you use it to identify a song, it will tell you the song’s identity once you’re back online.
Ambient Music home screen widgets
Apple introduced two new Ambient Music widgets for your home screen with iOS 26.4. These widgets let you easily access the four Ambient Music playlists: Sleep, Chill, Productivity and Wellbeing. You can quickly turn on a relaxing playlist to unwind after a long day, or one to help you focus on the task at hand.
The Ambient Music widget makes it easy to play music for just the right setting.
Apple introduced these playlists to your iPhone alongside iOS 18.4 in 2025. At the time, however, you could only access them from your Control Center.
Let other adults in your Family pay for themselves
In iOS 26.4, other adults in your Family Sharing group can now use their own payment method instead of depending on the group organizer’s. That means if you share a group with your parents, siblings or other family members, you can now purchase a new game, movie or something else with your own payment information instead of borrowing someone else’s and paying them back afterward.
This can be a helpful feature that allows you to avoid the hassle of paying someone else back for using their payment information. And if you’re the person whose card is always used, it can be a nice way to ensure others pay for their own stuff and don’t freeload off you.
More caption options when viewing videos
With iOS 26.4, you can easily change the caption style while watching content in certain apps, such as Apple TV.
To see these options, start playing a video, then tap the speech bubble icon in the bottom-right corner of your screen to open the subtitle menu. Tap Style, and you’ll see the subtitle options Classic (the default setting), Large Text, Outline Text and Transparent Background. So if you and a few others are watching something on your iPhone and want to make sure everyone can see the captions, you might choose Large Text.
You can adjust the subtitles in some apps thanks to iOS 26.4.
More control over wallpaper Collections
The iOS 26.4 update also gives you more control over which wallpaper Collections are on your iPhone. Now, if you go to Settings > Wallpaper > Add New Wallpaper, you can tap Get under Collections like Weather and Astronomy.
If you want to delete a Collection from your device, tap the check mark to the right of the downloaded Collection, and the option to Remove from Gallery appears. Tap this to delete the Collection from your iPhone, saving you some precious space.
You can remove wallpaper Collections from your iPhone if you want to save a little more space.
Here are the release notes for iOS 26.4.
Apple Music
Playlist Playground (beta) generates a playlist from your description, complete with a title, description and tracklist.
Concerts helps you discover nearby shows from artists in your library and recommends new artists based on what you listen to.
Offline Music Recognition in Control Center identifies songs without an internet connection and delivers results automatically when you’re back online.
Ambient Music widget for Sleep, Chill, Productivity and Wellbeing brings curated playlists to the Home Screen.
Full-screen backgrounds give album and playlist pages a more immersive look.
Accessibility
Reduce bright effects setting minimizes bright flashes when tapping on elements like buttons.
Subtitle and caption settings are available from the captions icon while viewing media, making them easier to find, customize and preview.
Reduce Motion setting more reliably reduces the animations of Liquid Glass for users sensitive to on-screen motion.
This update also includes the following enhancements:
Support for AirPods Max 2.
8 new emoji, including an orca, trombone, landslide, ballet dancer and distorted face, are available in the emoji keyboard.
Freeform gains advanced image creation and editing tools, and a premium content library, joining Apple Creator Studio.
Mark reminders as urgent from the Quick Toolbar or by touching and holding, and filter for urgent reminders in your Smart Lists.
Purchase Sharing lets adult members in Family Sharing groups use their own payment method when making purchases, without relying on the family organizer.
Improved keyboard accuracy when typing quickly.
For more iOS news, check out what features were included in iOS 26.3 and iOS 26.2. You can also take a look at our iOS 26 cheat sheet for other tips and tricks.