The generative AI era began for most people with the launch of OpenAI’s ChatGPT in late 2022, but the underlying technology — the “Transformer” neural network architecture that allows AI models to weigh the importance of different words in a sentence (or pixels in an image) differently and train on information in parallel — dates back to Google’s seminal 2017 paper “Attention Is All You Need.”
Yet while Transformers deliver unparalleled model quality and have underpinned most of the major generative AI models used today, they are computationally gluttonous. They are burdened by quadratic compute and linear memory demands that make large-scale inference an expensive, often prohibitive, endeavor. Hence, the desire by some researchers to improve on them by developing a new architecture, Mamba, in 2023, which has gone on to be included in hybrid Mamba-Transformer models like Nvidia’s Nemotron 3 Super.
Now, the same researchers behind the original Mamba architecture, including leaders Albert Gu of Carnegie Mellon and Tri Dao of Princeton, have released the latest version of their architecture, Mamba-3, as a language model under a permissive Apache 2.0 open source license — making it immediately available to developers, including enterprises, for commercial purposes. A technical paper has also been published on arXiv.org.
This model signals a paradigm shift from training efficiency to an “inference-first” design. As Gu noted in the official announcement, while Mamba-2 focused on breaking pretraining bottlenecks, Mamba-3 aims to solve the “cold GPU” problem: the reality that during decoding, modern hardware often remains idle, waiting for memory movement rather than performing computation.
Perplexity (no, not the company) and the newfound efficiency of Mamba 3
Mamba, including Mamba 3, is a type of State Space Model (SSM).
These are effectively a high-speed “summary machine” for AI. While many popular models (like the ones behind ChatGPT) have to re-examine every single word they’ve already seen to understand what comes next—which gets slower and more expensive the longer the conversation lasts—an SSM maintains a compact, ever-changing internal state. This state is essentially a digital “mental snapshot” of the entire history of the data.
As new information flows in, the model simply updates this snapshot instead of re-reading everything from the beginning. This allows the AI to process massive amounts of information, like entire libraries of books or long strands of DNA, with incredible speed and much lower memory requirements.
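The update-a-snapshot idea can be sketched in a few lines. This is an illustrative toy recurrence, not Mamba's actual selective, hardware-aware implementation; the matrices and dimensions here are invented for demonstration:

```python
import numpy as np

def ssm_scan(x, A, B, C):
    """Toy linear state-space scan: the state h is a fixed-size summary
    updated once per input, so compute grows linearly with sequence
    length and memory stays constant."""
    d_state = A.shape[0]
    h = np.zeros(d_state)          # the compact "mental snapshot"
    outputs = []
    for x_t in x:                  # one cheap update per token
        h = A @ h + B * x_t        # fold the new input into the state
        outputs.append(C @ h)      # read out a prediction
    return np.array(outputs)

# A length-1000 input still needs only a 4-number state.
rng = np.random.default_rng(0)
A = 0.9 * np.eye(4)                # decaying memory
B = rng.standard_normal(4)
C = rng.standard_normal(4)
y = ssm_scan(rng.standard_normal(1000), A, B, C)
print(y.shape)  # (1000,)
```

Contrast this with attention, where producing each new output requires revisiting every previous token.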
To appreciate the leap Mamba-3 represents, one must first understand perplexity, the primary metric used in the research to measure model quality.
In the context of language modeling, perplexity is a measure of how “surprised” a model is by new data.
Think of a model as a professional gambler. If a model has high perplexity, it is unsure where to place its bets; it sees many possible next words as equally likely.
A lower perplexity score indicates that the model is more “certain”—it has a better grasp of the underlying patterns of human language. For AI builders, perplexity serves as a high-fidelity proxy for intelligence.
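Concretely, perplexity is the exponential of the model's average negative log-probability on the true next tokens. A toy calculation (the probabilities below are invented for illustration):

```python
import math

def perplexity(token_probs):
    """Perplexity = exp(mean negative log prob of the correct tokens).
    A model that gives every correct token probability 1/k has
    perplexity exactly k: it is as 'surprised' as a k-way guess."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

confident = perplexity([0.9, 0.8, 0.95])   # low surprise
uncertain = perplexity([0.1, 0.05, 0.2])   # high surprise
print(confident, uncertain)
```

The second model behaves like a 10-way coin flip on every token, while the first is close to the ideal perplexity of 1.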
The breakthrough reported in the Mamba-3 research is that it achieves comparable perplexity to its predecessor, Mamba-2, while using only half the state size. This means a model can be just as smart while being twice as efficient to run.
A new philosophy
Mamba 3 architecture diagram. Credit: Tri Dao
The philosophy guiding Mamba-3 is a fundamental shift in how we think about AI “intelligence” versus the speed of the hardware it runs on. While the previous generation, Mamba-2, was designed to be trained at record-breaking speeds, Mamba-3 is an “inference-first” architecture — inference referring to the way AI models are served to end users, through websites like ChatGPT or Google Gemini, or through application programming interfaces (APIs).
Mamba 3’s primary goal is to make the most of every second the computer chip (GPU) is active, ensuring that the model is thinking as hard as possible without making the user wait for an answer.
In the world of language models, every point of accuracy is hard-won. At the 1.5-billion-parameter scale, the most advanced “MIMO” variant of Mamba-3 achieved a 57.6% average accuracy across benchmarks, representing a 2.2-percentage-point leap over the industry-standard Transformer.
Mamba 3 benchmark comparison chart. Credit: Aakash Lahoti, Kevin Y. Li, Berlin Chen, Caitlin Wang, Aviv Bick, J. Zico Kolter, Tri Dao, Albert Gu
While a two-point jump might sound modest, it actually represents a nearly 4% relative increase in language modeling capability compared to the Transformer baseline. Even more impressively, as alluded to above, Mamba-3 can match the predictive quality of its predecessor while using only half the internal “state size,” effectively delivering the same level of intelligence with significantly less memory lag.
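The memory claim is easy to sanity-check with back-of-the-envelope arithmetic. The layer count and dimensions below are hypothetical, not the paper's configuration; the point is only that recurrent-state memory, and hence the bytes moved per decoding step, scales linearly with state size:

```python
def state_bytes(n_layers, d_model, d_state, bytes_per_val=2):
    """Recurrent state memory for an SSM: each layer keeps a
    (d_model x d_state) matrix-valued state (fp16 assumed here)."""
    return n_layers * d_model * d_state * bytes_per_val

big   = state_bytes(n_layers=48, d_model=2048, d_state=128)
small = state_bytes(n_layers=48, d_model=2048, d_state=64)
print(big // small)  # halving d_state halves the bytes moved per step
```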
For years, efficient alternatives to Transformers suffered from a “logic gap”—they often failed at simple reasoning tasks, like keeping track of patterns or solving basic arithmetic, because their internal math was too rigid. Mamba-3 solves this by introducing complex-valued states.
This mathematical upgrade acts like an internal compass, allowing the model to represent “rotational” logic. With this “rotary” approach, Mamba-3 can near-perfectly solve logic puzzles and state-tracking tasks that its predecessors could only guess at, finally putting the reasoning power of linear models on par with the most advanced systems.
The final piece of the puzzle is how Mamba-3 interacts with physical hardware. Most AI models today are “memory-bound,” meaning the computer chip spends most of its time idle, waiting for data to move from memory to the processor.
Mamba-3 introduces a Multi-Input, Multi-Output (MIMO) formulation that fundamentally changes this dynamic. By performing up to four times more mathematical operations in parallel during each step, Mamba-3 utilizes that previously “idle” power. This allows the model to do significantly more “thinking” for every word it generates without increasing the actual time a user spends waiting for a response. More on these below.
Three new technological leaps
The appeal of linear models has always been their constant memory requirements and linear compute scaling.
However, as the Mamba 3 authors point out, there is “no free lunch”. By fixing the state size to ensure efficiency, these models are forced to compress all historical context into a single representation—the exact opposite of a Transformer’s ever-growing KV cache. Mamba-3 pulls three specific levers to make that fixed state do more work.
1. Exponential-Trapezoidal Discretization
State Space Models are fundamentally continuous-time systems that must be “discretized” to handle the discrete sequences of digital data.
Previous iterations relied on “Exponential-Euler” discretization—a heuristic that provided only a first-order approximation of the system.
Mamba-3 introduces a generalized trapezoidal rule, providing second-order accurate approximation. This isn’t just a mathematical refinement; it induces an “implicit convolution” within the core recurrence.
By combining this with explicit B and C bias terms, the researchers were able to remove the short causal convolution that has been a staple of recurrent architectures for years.
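The difference between the two rules is the classic first-order versus second-order trade-off from numerical analysis. A small sketch on a scalar test system (not the paper's actual discretization code) shows that halving the step size roughly halves Euler's error but quarters the trapezoidal rule's:

```python
import math

def step(a, b, x, h, t, dt, method):
    """One step of dh/dt = a*h + b*x(t)."""
    if method == "euler":               # first-order: left endpoint only
        return h + dt * (a * h + b * x(t))
    # trapezoidal: average the derivative at both endpoints (implicit in h)
    f0 = a * h + b * x(t)
    return (h + 0.5 * dt * (f0 + b * x(t + dt))) / (1.0 - 0.5 * dt * a)

def solve(a, b, x, T, dt, method):
    h, t = 0.0, 0.0
    while t < T - 1e-12:
        h = step(a, b, x, h, t, dt, method)
        t += dt
    return h

# Test system dh/dt = -h + sin(t); exact: h(t) = (sin t - cos t + e^-t) / 2
exact = lambda t: (math.sin(t) - math.cos(t) + math.exp(-t)) / 2
errs = {m: [abs(solve(-1, 1, math.sin, 2.0, dt, m) - exact(2.0))
            for dt in (0.1, 0.05)]
        for m in ("euler", "trapezoid")}
# Halving the step: Euler's error drops ~2x, the trapezoid's ~4x.
print(errs["euler"][0] / errs["euler"][1],
      errs["trapezoid"][0] / errs["trapezoid"][1])
```

In the SSM setting the same idea applies to discretizing the continuous state update, which is where the second-order accuracy claim comes from.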
2. Complex-Valued SSMs and the “RoPE Trick”
One of the most persistent criticisms of linear models has been their inability to solve simple state-tracking tasks, such as determining the parity of a bit sequence.
This failure stems from restricting the transition matrix to real numbers, which prevents the model from representing “rotational” dynamics. Mamba-3 overcomes this by viewing the underlying SSM as complex-valued.
Using what the team calls the “RoPE trick,” they demonstrate that a complex-valued state update is mathematically equivalent to a data-dependent rotary embedding (RoPE) applied to the input and output projections.
This allows Mamba-3 to solve synthetic reasoning tasks that were impossible for Mamba-2.
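Why rotation matters for parity can be seen in a few lines. This is an illustrative sketch of the principle, not the paper's actual parameterization: a unit-norm complex state rotated by pi per 1-bit tracks parity exactly, whereas a real, positive-decay transition can never flip the state's sign:

```python
import math

def parity_via_rotation(bits):
    """Track parity with a unit-norm complex state: each 1-bit rotates
    the state by pi (multiply by e^{i*pi} = -1), each 0-bit leaves it
    alone. Parity is read off the sign of the real part."""
    z = 1 + 0j
    for b in bits:
        theta = math.pi * b          # rotate by pi on a 1, by 0 on a 0
        z *= complex(math.cos(theta), math.sin(theta))
    return 0 if z.real > 0 else 1

print(parity_via_rotation([1, 0, 1, 1]))  # 1 (three ones -> odd parity)
```

A transition matrix restricted to positive reals only shrinks or stretches the state, so no amount of training lets it represent this sign flip.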
3. MIMO: Boosting Arithmetic Intensity
The most significant leap in inference efficiency comes from the transition from Single-Input, Single-Output (SISO) to Multi-Input, Multi-Output (MIMO) SSMs.
In a standard SSM, the state update is an outer-product operation that is heavily memory-bound. By switching to a matrix-multiplication-based state update, Mamba-3 increases the “arithmetic intensity” of the model—the ratio of FLOPs to memory traffic.
This allows the model to perform more computation during the memory-bound decoding phase. Essentially, Mamba-3 utilizes the “idle” compute cores of the GPU to increase model power for “free,” maintaining the same decoding speed as its simpler predecessors.
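Arithmetic intensity is just FLOPs divided by bytes moved, so the benefit can be sketched with simple arithmetic. The dimensions and fp16 storage below are assumptions for illustration, not the paper's configuration:

```python
def arithmetic_intensity(d_state, d_head, rank, bytes_per_val=2):
    """FLOPs per byte of state traffic for one decode step.
    SISO (rank=1): outer-product update, one multiply-add per state entry.
    MIMO (rank=r): matmul update does r multiply-adds per entry while the
    state is still read and written exactly once."""
    state_entries = d_state * d_head
    flops = 2 * state_entries * rank                  # multiply-adds
    bytes_moved = 2 * state_entries * bytes_per_val   # read + write state
    return flops / bytes_moved

siso = arithmetic_intensity(128, 64, rank=1)
mimo = arithmetic_intensity(128, 64, rank=4)
print(siso, mimo)  # 0.5 vs 2.0 FLOPs/byte: 4x more math per byte moved
```

Because decoding is bottlenecked by the bytes moved, not the math, the extra FLOPs come essentially for free on modern GPUs.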
What Mamba 3 means for enterprises and AI builders
For enterprises, Mamba-3 represents a strategic shift in the total cost of ownership (TCO) for AI deployments.
Cost vs. Performance: At matched parameter counts, Mamba-3 (MIMO) matches the perplexity of Mamba-2 while using half the state size. For enterprise deployment, this effectively doubles inference throughput for the same hardware footprint.
Agentic Workflows: As organizations move toward parallel, agentic workflows (like automated coding or real-time customer service agents), the demand for low-latency generation increases exponentially. Mamba-3 is designed specifically to prevent GPU hardware from sitting “cold” during these tasks.
The Hybrid Advantage: The researchers predict that the future of enterprise AI lies in hybrid models. By interleaving Mamba-3 with self-attention, organizations can combine the efficient “memory” of SSMs with the precise “database” storage of Transformers.
Availability, licensing, and usage
Mamba-3 is not merely a theoretical research paper; it is a fully realized, open-source release available for immediate use, with model code published on GitHub.
The project is released under the Apache-2.0 License. This is a permissive, business-friendly license that allows for free usage, modification, and commercial distribution without requiring the disclosure of proprietary source code.
This release is good for developers building long-context applications, real-time reasoning agents, or those seeking to reduce GPU costs in high-volume production environments.
Leading the State Space Models (SSM) revolution
The release was met with enthusiasm on social media, particularly regarding the “student-led” nature of the project. Gu, whose X/Twitter bio describes him as “leading the ssm revolution,” gave full credit to the student leads, including Aakash Lahoti and Kevin Y. Li.
“We’re quite happy with the final model design! The three core methodological changes are inspired by (imo) some elegant math and methods.”
As agentic workflows push inference demand “through the roof,” the arrival of Mamba-3 suggests that the future of AI may not just be about having the biggest model, but about having the most efficient one.
Mamba-3 has successfully re-aligned the SSM with the realities of modern hardware, proving that even in the age of the Transformer, the principles of classical control theory still have a vital role to play.
Google is rolling out a fresh update for the Google Home app that makes Gemini a lot more useful in day-to-day use, while also addressing several small but frustrating issues that have been holding it back.
What’s new with Gemini for Home?
One of the biggest upgrades with this update is speed. Google says common smart home commands like turning lights on or off can now be up to 40 percent faster. That should make a noticeable difference for those who rely on voice controls throughout the day. Gemini’s Live Translation feature is also quicker and more responsive, and now supports Canadian French, taking the total number of supported languages to 30.
Google
The update also focuses heavily on making responses less chatty. Instead of long confirmations, Gemini now keeps things short and direct. So a command like setting an alarm gets a simple “Alarm set for 9 AM” instead of a full sentence. It is a small change, but one that should make interactions feel smoother.
What else is changing with the latest update?
On the features front, Gemini is getting smarter with alarms and timers. Users can now set them based on real-world events, manage multiple actions in one go, and even ask about the original timer duration. Recurring alarms and proper snooze controls have also been fixed, addressing one of the main annoyances users had with Gemini for Home.
There are improvements beyond voice, too. Google is expanding Gemini for Home to more countries and introducing new automation options in the Google Home app. These include triggers tied to appliances like ovens and new lighting effects such as wake and sleep modes.
Individually, these updates are minor, but together they should make Gemini feel faster, more responsive, and much more reliable than before. The new release follows an update from earlier this month that also brought performance improvements and bug fixes for Gemini’s smart home voice controls.
The streaming wars never slow down. They just find new ways to charge admission.
Starting April 10, 2026, Amazon will rename its existing Prime Video Ad Free tier as Prime Video Ultra, priced at $4.99 per month in the United States. The new tier adds several upgrades that Amazon clearly hopes will justify the new branding and the monthly fee: up to five concurrent streams instead of three, as many as 100 downloads instead of 25, and exclusive access to 4K and UHD streaming.
Amazon frames the change as a necessary step to support the cost of premium streaming. According to the company, delivering ad free video with higher-end features requires significant investment, and the new structure brings Prime Video more in line with the pricing models used by other major streaming services. In other words, welcome to the club.
For Prime members, the baseline Prime Video benefit remains intact. Subscribers will still receive HD and HDR streaming as part of the standard Prime membership, and Amazon says Dolby Vision support will now be included at no additional cost. The new Ultra tier simply stacks additional perks on top of the existing service for viewers who want more streams, more downloads, and access to the highest video resolution.
All of this arrives against a particularly chaotic backdrop in the streaming business. The recent bidding war involving Netflix and Paramount over the future of Warner Bros Discovery, CNN, and HBO MAX has already reshaped the landscape, with the Ellisons emerging victorious and the industry bracing for the fallout. One thing seems certain as the dust settles: none of these services are getting cheaper.
Amazon may have deeper pockets than most of its competitors, but it is not immune to the math. Producing blockbuster series and films at scale costs real money, and those glossy originals are not paying for themselves. Renaming the ad free tier Prime Video Ultra may sound like a cosmetic change, but the message behind it is clearer than ever.
The era of cheap streaming is over. The meter is running.
Amazon’s new Prime Video Ultra tier doesn’t replace the core Prime Video benefit included with a Prime membership. Instead, it layers premium streaming features on top of the existing service for viewers who want ad free playback, higher video resolution, and more flexibility for downloads and concurrent streams.
The chart below breaks down what stays included with Prime and what the new $4.99 per month Ultra tier adds starting April 10, 2026.
Feature / Option: Prime Video Benefit (included with Prime membership) vs. Prime Video Ultra subscription

Content library: Thousands of premium movies, TV series, and live sports including NFL, NBA, NASCAR, and The Masters; identical on both tiers.

Price: Included with Prime membership vs. $4.99 per month starting April 10 (Prime or Prime Video subscription required); annual option $45.99 per year, about 23% savings vs monthly.

Access to Prime Originals, Movies, and Live Sports
Whether you stick with the Prime Video benefit included with a Prime membership or upgrade to Prime Video Ultra, the underlying content library does not change. Both options provide access to Amazon’s full catalog of Amazon MGM Studios originals, licensed films and series, and exclusive live sports programming.
That lineup includes popular Prime Original series such as Fallout, Reacher, The Boys, The Lord of the Rings: The Rings of Power, and The Summer I Turned Pretty. Amazon’s growing slate of original films is also included, with titles such as Heads of State, Red One, Road House, and The Accountant 2.
Live sports remain a major draw for the platform as well. Prime Video carries exclusive coverage and events tied to the NFL, NBA, WNBA, NASCAR, NWSL, and The Masters, alongside additional licensed programming and films.
In other words, Prime Video Ultra does not unlock additional content. The catalog remains the same. What the Ultra tier adds are premium viewing features such as ad free playback, higher video resolution, Dolby Atmos surround sound, and expanded streaming and download limits.
The Fine Print: What Prime Video Ultra Still Won’t Do
Before anyone assumes Prime Video Ultra is a magic “no ads, everything in 4K, watch it anywhere forever” button, there are a few realities worth noting.
First, Prime Video Ultra is currently limited to customers in the United States. If you’re outside the U.S., the “Ultra” experience will have to wait.
Second, ad free does not mean ad free everywhere. Live programming such as sports broadcasts, certain licensed content, and third party channel subscriptions may still contain advertising. That’s the nature of live television and licensing deals. Amazon can remove ads from its own playback environment, but it can’t rewrite every contract in the sports world.
Third, the improved download and concurrent stream limits apply to your entire account, not to each individual profile. So if five people in the household are streaming at once or loading devices with downloads before a trip, those limits are shared across everyone using the account. There may also be additional restrictions depending on the specific title, device, or content provider.
Finally, the premium tech perks come with the usual fine print. 4K UHD video, Dolby Vision, and Dolby Atmos are only available on supported titles and require compatible devices and enough internet bandwidth to actually deliver them. Not every movie or show in the catalog is available in every format.
The Bottom Line
Amazon’s Prime Video Ultra tier is less about new content and more about unlocking the premium viewing and audio experience. For $4.99 per month extra, subscribers get ad free playback, expanded streaming and download limits, and access to higher resolution 4K UHD video, along with Dolby Vision and Dolby Atmos surround sound.
Prime members who stick with the included Prime Video benefit will still get the same catalog of movies, series, and live sports, but without the highest resolution formats or ad free viewing. The included tier does, however, gain Dolby Vision support at no extra charge, which wasn’t offered before.
In the bigger picture, this move reflects where the streaming business is heading. As studios spend billions on original content and compete for sports rights, subscription tiers are becoming more segmented and more expensive. Prime Video Ultra is simply Amazon’s latest reminder that the era of cheap streaming is over.
Looking for the most recent Strands answer? Click here for our daily Strands hints, as well as our daily answers and hints for The New York Times Mini Crossword, Wordle, Connections and Connections: Sports Edition puzzles.
Today’s NYT Strands puzzle is kind of bizarre. Even after I had found some of the answers, the theme didn’t click in my brain until I was almost done with the puzzle. And some of the answers are difficult to unscramble, so if you need hints and answers, read on.
If that doesn’t help you, here’s a clue: Not death…
Clue words to unlock in-game hints
Your goal is to find hidden words that fit the puzzle’s theme. If you’re stuck, find any words you can. Every time you find three words of four letters or more, Strands will reveal one of the theme words. These are the words I used to get those hints, but any words of four or more letters that you find will work:
These are the answers that tie into the theme. The goal of the puzzle is to find them all, including the spangram, a theme word that reaches from one side of the puzzle to the other. When you have all of them (I originally thought there were always eight but learned that the number can vary), every letter on the board will be used. Here are the nonspangram answers:
COACH, HACK, BLOOD, CYCLE, STYLE, LESSON, PRESERVER. (All words that can follow the word “LIFE.”)
Today’s Strands spangram
The completed NYT Strands puzzle for March 18, 2026.
NYT/Screenshot by CNET
Today’s Strands spangram is AFTERLIFE. To find it, start with the A that is the furthest-left letter on the top row, and wind down.
Toughest Strands puzzles
Here are some of the Strands topics I’ve found to be the toughest.
#1: Dated slang. Maybe you didn’t even use this lingo when it was cool. Toughest word: PHAT.
#2: Thar she blows! I guess marine biologists might ace this one. Toughest word: BALEEN or RIGHT.
#3: Off the hook. Again, it helps to know a lot about sea creatures. Sorry, Charlie. Toughest word: BIGEYE or SKIPJACK.
Today it’s widely acknowledged that the future of computing will involve the quantum realm. Companies like Google, Microsoft, IBM, and a few well-funded startups are frantically building quantum computers and routinely claiming advances that seem to bring this exotic, world-changing technology within reach. In 1979 all of this was unthinkable. But that summer, two scientists met in the Atlantic Ocean off the coast of Puerto Rico, and their aquatic conversation led to a body of work that created quantum information theory. In a larger sense, their contributions helped bring computer science into the quantum age.
Those water-logged scientists, Charles Bennett and Gilles Brassard, are now the latest recipients of the ACM A.M. Turing Award, the Nobel Prize of the field.
Until that 1979 meeting, there had been a disconnect between information science and physics. The latter field had experienced a disruption in the early 20th century when physicists discovered quantum mechanics, a deeper explanation of how the universe operated that superseded the classical physics of Isaac Newton. Computer science, however, didn’t account for the quantum world, except for having to deal with its effects on tiny chips, where the behavior of electrons was relevant.
“In the 1950s through the 1980s people thought of quantum effects as occurring in very small things and as a source of noise—you had to understand quantum theory to build transistors,” explains Bennett. “People thought of quantum mechanics as a nuisance.” He and Brassard discovered methods—like quantum coin-tossing and quantum entanglement—that turned the perceived handicaps of quantum reality into a powerful tool.
At the time of their meeting, Bennett was at a career crossroads; he’d joined IBM in 1973, but had taken a years-long break from academic publishing. One source of continuing fascination was an idea shared by a college classmate, Stephen Wiesner—that using a quantum form of cryptography could enable digital money that could not be counterfeited. (Yep, Wiesner envisioned cryptocurrency in the late 1960s!) At the 1979 conference, Bennett saw that a cryptographer named Brassard was in attendance—he had just completed a dissertation on public-key crypto—and located him offshore.
“So there I was swimming in the beach when a complete stranger came up to me and started telling me that a friend of his found that we can use quantum mechanics to make affordable banking notes out of nowhere,” Brassard tells me. “If I had been on firm ground, I would have run for my life, but I was trapped in the ocean, so I listened politely.” Though Brassard had no previous interest in physics, he was intrigued by the approach, and the pair eventually published a theory called BB84, essentially creating an alternative to classic public-key cryptography based on what would become quantum information theory. Suddenly, the world of the quantum became a source of solutions—if scientists could invent the mechanisms to make it happen. As Yannis Ioannidis—president of ACM, which bestows the Turing Award—put it in a statement, “Bennett and Brassard fundamentally changed our understanding of information itself.”
Both scientists take pains to say that their original work did not lead directly to the current scramble to build quantum computers. Bennett notes that in a 1981 conference at MIT, legendary physicist Richard Feynman “made the point that, since nature is quantum, probably some computational jobs would need to be done by a quantum computer.” He also credits physicist David Deutsch for key ideas about quantum computers. Bennett and Brassard became part of that effort.
“Quantum computing was invented independently from us, but then we jumped in,” says Brassard. “I was the first person to design a quantum circuit to do quantum teleportation.” Brassard and Bennett’s work on teleportation, while still in an experimental stage, is now part of the quantum lore. Brassard has said that “one day, it will fuel the quantum internet.”
MSI plans to increase the price of its PC products by 15 – 30%, company general manager Huang Jinqing recently said. Speaking with investors, Huang confirmed that the entire hardware industry is facing unprecedented market conditions. Memory manufacturers have almost entirely shifted their priorities, allocating the majority of their production…
AI agents independently discovered vulnerabilities and exploited them while performing routine tasks
Multi-agent systems collaborated to bypass data-loss prevention and steal sensitive credentials
Backup server AI escalated privileges to disable endpoint protection and complete downloads
Routine tasks assigned to artificial intelligence agents can sometimes escalate into actions resembling cyberattacks, experts have warned.
Security laboratory Irregular examined how autonomous agents behaved inside a simulated corporate environment while performing ordinary assignments.
During several runs, the AI agents searched for vulnerabilities, escalated privileges, disabled security protections, and moved sensitive information outside internal systems.
Emergent offensive behavior during routine tasks
These actions occurred while the agents attempted to complete tasks such as retrieving documents, downloading files, or preparing content for publication.
The simulated network represented a fictional company called MegaCorp and included internal servers, applications, and services commonly found inside enterprise infrastructure.
Within this environment, agents handled routine duties that many organizations already automate.
These duties included researching information on an internal wiki, maintaining backup archives on a server, and drafting social media posts using technical documents.
According to the researchers, the instructions given to the agents did not mention hacking or exploitation.
“AI agents deployed for routine enterprise tasks are autonomously hacking the systems they operate in. No one asked it to. No adversarial prompting was involved,” Irregular said in a blog post.
“The agents independently discovered vulnerabilities, escalated privileges, disabled security tools, and exfiltrated data, all while trying to complete ordinary assignments.”
One scenario involved a multi-agent research system retrieving documents from a company wiki.
When access restrictions blocked a request, the system examined the application’s code repository and discovered a hardcoded secret key.
The agent used that key to forge an administrative session cookie and retrieved a restricted document.
Another test involved a backup server agent responsible for downloading files to a storage directory.
When Windows Defender blocked a file that contained malware, the agent searched local scripts and discovered administrator credentials embedded inside a utility file.
It then elevated its privileges and disabled endpoint protection so the download could proceed.
A separate experiment showed agents collaborating to bypass a data loss prevention system.
Two agents responsible for drafting social media content were asked to include credentials from a technical document.
When the system blocked the message, the agents developed a steganographic method that concealed the password within the text.
According to the researchers, several factors contributed to this behavior: the agents had access to tools such as code execution and shell commands, and many prompts encouraged persistence when tasks encountered errors.
Interactions between multiple agents also created feedback loops that pushed them toward workarounds when obstacles appeared.
The researchers argue that existing cybersecurity defenses were designed to stop human attackers rather than autonomous systems operating inside enterprise networks.
Organizations deploying such agents should not underestimate how quickly routine automation can drift toward behavior resembling internal cyber intrusion.
‘This year is the most challenging year since the company was founded’: MSI exec makes it clear how bad the RAM crisis has got, despite some prices plateauing
The dose of pessimism comes from MSI, which is going to jack up the prices of its ‘gaming products’ in the order of 15% to 30% this year. This is according to general manager Huang Jinqing on a recent earnings call, as per a report from Taiwan’s United Daily News (via Tom’s Hardware).
The increases are driven by the RAM shortage, and also problems with GPU supply from Nvidia — we’re told there’s a 20% shortfall in securing stock of the latter.
The result is that MSI will cut back on its low-end gaming laptops to the tune of 30%, so it can focus more on mid-range and higher-end PCs. The simple equation to keep revenue flowing is selling fewer devices at higher prices.
Huang said the PC industry is facing severe challenges, and that: “This year is the most challenging year since the company was founded” (text translated from Chinese).
On top of the shifting priorities with laptops, MSI is switching its motherboards to favor models supporting DDR4 memory. Whereas MSI previously shipped four times as many DDR5 motherboards as DDR4 boards, that ratio has now completely reversed: the older standard is coming off production lines at four times the rate of DDR5 boards. That’s quite a remarkable turnaround.
Analysis: tough times despite some sparks of hope
(Image credit: Shutterstock / LightField Studios)
Meanwhile, VideoCardz noticed another update from German tech site 3D Center, which keeps tabs on RAM pricing in Germany and observes that the price of DDR5 memory dropped by around 7% in March compared to February.
So that sounds quite positive, and it echoes other observations from the European market last month, too. However, lest we get carried away, remember that DDR5 RAM still costs quadruple its September 2025 price, according to 3D Center’s price watching. It’s just that it has dropped back a little, after plateauing from January to February this year.
Obviously, it’s good to witness any kind of downward correction — or indeed just to see that RAM pricing isn’t going up — but there is, of course, a limit to how much prices will rise before most consumers throw their hands up in the air and (rightly) just refuse to buy. Unless they have absolutely no choice, that is.
However, to call 2026 the “most challenging year” is quite a statement, considering that the pandemic in 2020 was a very tough time for the market (and it isn’t the first time we’ve heard this sentiment in the tech industry this year).
Huang is predicting a 10% to 20% decline in PC sales this year, whereas analyst firms are pitching their estimates at a 10% drop for 2026. That’s the best-case scenario as far as MSI’s general manager is concerned, which is troubling to say the least, as is the fact that the budget end of the PC market is going to be hit hardest.
Many substances crystallize, continually adding to a basic shape until they reach truly humongous proportions. Although we usually think of pretty stones that get fashioned into jewelry or put up for display, sugar also crystallizes, and thus you can create pretty large sugar crystals. How to do this is demonstrated by [Chase] of Crystalverse fame in a recent video.
This is effectively a follow-up to a 2022 blog article in which [Chase] showed a few ways to create pretty table sugar (sucrose) based crystals. In that article the growth of single sucrose crystals was attempted, but a few additional crystals got stuck to the main crystal so that it technically wasn’t a single crystal any more.
With this new method coarse sugar is used both for the seed crystals and for the syrupy liquid, made by mixing 100 mL of water with 225 grams of sugar. Starting a single crystal is attempted by dangling thin fishing wire in a small vessel with the syrup and some seed crystals, hoping that a crystal will attach itself to said fishing wire.
After a few attempts this works and from there the crystals can be suspended in the large jar with syrup to let them continue growing. It’s important to cover the jar during this period, as more crystals will form in the syrup over time, requiring occasional removal of these stray ones.
Naturally this process takes a while, with a solid week required to get a sizeable crystal as in the video. After this the crystal is effectively just a very large version of the sugar crystals in that 1 kg bag from the supermarket, ergo it will dissolve again just as easily. If you want a more durable crystal that’s equally easy to grow, you can toss some vinegar and scrap copper together to create very pretty, albeit toxic, copper(II) acetate crystals.
Although it dates back to the early days of the Marconi Company in the 1920s, the Franklin oscillator has remained a relatively obscure circuit, its memory mostly kept alive by ham radio operators who prize its high stability at higher frequencies. At the core of the circuit is an LC tank circuit, a fact which [nobcha] used to build quite a precise LC meter.
The meter is built around two parts: the Franklin oscillator, which resonates at a frequency defined by its inductance and capacitance, and an Arduino which counts the frequency of the signal. In operation, the Arduino measures the frequency of the original LC circuit, then measures again after another element (capacitor or inductor) has been added to the circuit. By measuring how much the resonant frequency changes, it’s possible to determine the value of the new element.
Before operation, the meter must be calibrated with a known reference capacitor to determine the values of the base LC circuit. In one iteration of the design, this was done automatically using a relay, while in a later version a manual switch connects the reference capacitor. Because the meter measures frequency differences and not absolute values, it minimizes parasitic effects. In testing, it was capable of measuring inductances as low as 0.1 µH.
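The math behind this frequency-shift approach can be sketched briefly. For an LC tank, the resonant frequency is f = 1/(2π√(LC)); adding a reference capacitor in parallel lowers the frequency, and the ratio of the two frequencies pins down the tank’s base values. The snippet below is a minimal illustration of that arithmetic, not [nobcha]’s actual Arduino firmware — the function names and the parallel-capacitor test configuration are assumptions for the sake of the example.

```python
import math

def calibrate(f_base, f_with_ref, c_ref):
    """Solve for the tank's base capacitance and inductance.

    f_base:     oscillator frequency (Hz) of the bare LC tank
    f_with_ref: frequency (Hz) with a known reference capacitor
                c_ref (farads) switched in parallel
    """
    # (f_base / f_with_ref)^2 = (C0 + c_ref) / C0  =>  solve for C0
    c0 = c_ref / ((f_base / f_with_ref) ** 2 - 1)
    # Then L0 follows from f_base = 1 / (2*pi*sqrt(L0*C0))
    l0 = 1 / ((2 * math.pi * f_base) ** 2 * c0)
    return c0, l0

def unknown_capacitance(f_base, f_with_dut, c0):
    """Capacitance of a device under test placed in parallel with
    the tank, from how far it pulls the frequency down."""
    return c0 * ((f_base / f_with_dut) ** 2 - 1)
```

Because both results come from frequency ratios rather than absolute values, constant stray capacitance that is present in every measurement largely cancels out, which is why the approach is so tolerant of parasitics.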
— Steven VanRoekel, a longtime former Microsoft leader and U.S. chief information officer under President Obama, is now CEO of Earth Species Project (ESP). The non-profit research lab is using artificial intelligence to better understand animal communication in creatures from carrion crows to beluga whales.
VanRoekel, who is based in Bend, Ore., said his career has focused on driving impact at scale, and that ESP is poised for big breakthroughs.
AI can “unlock the mysteries of our planet, especially around animal communication,” he said in an ESP blog. “Once we begin unlocking that mystery, we could see shifts on the scale of Copernican or Galilean moments in history: new science, new understanding, and perhaps most importantly, new relationships with our planet.”
— Krzysztof Duleba joined LinkedIn’s Bellevue, Wash., office as a distinguished engineer in its infrastructure program. Duleba has spent his career at Google, working there for 18 years in roles across search, ads, maps, AI and cloud. In separate posts on LinkedIn, Duleba shared his career journey.
“Eighteen years ago, a kid from rural Poland walked into Google with no idea what he was getting into. He walked out a very different engineer, a father of three, and — he hopes — a better person,” Duleba wrote in announcing his Google departure.
And regarding his new role: “LinkedIn is in the middle of a major infrastructure transformation, and the timing matters. I consider getting reliability economics right during this window, before agentic development fully hits, the difference between drowning in the AI wave and catching it.”
— London-based Dennis Stansbury is resigning from Amazon after more than 18 years. He has held a variety of leadership roles in European offices, most recently serving as a principal product manager for Prime Video and Amazon MGM Studios in the United Kingdom.
“I started in Seattle in March 2008, shortly after Kindle launched but before Prime Video or Alexa were likely even ideas,” Stansbury said on LinkedIn, adding that he’s going “to take some time off and put more thought into what’s next.”
— After nearly 14 years at Amazon, Miranda Chen is leaving her role as a director and technical advisor for leaders in worldwide corporate and business development. Chen, who is based in the San Francisco Bay Area, did not indicate her next move.
“I first started working for Amazon at A9, a Bay Area subsidiary, where we could review the key metrics for our entire offsite advertising business in a single weekly meeting,” she said on LinkedIn. “Now we have Amazon offices worldwide and Amazon Ads is a meaningfully large business.”
— Scott Lawson, Amazon director of Global Real Estate and Facilities (GREF) design and construction, is leaving his role. Seattle-based Lawson has been with Amazon for nearly nine years. He was previously with Clark Construction Group working on developments nationwide. Lawson hinted on LinkedIn that information on his “next chapter” would be coming soon.
— Danielle Decatur is vice president of community engagement and communications for Cloverleaf Infrastructure, a startup based in Seattle and Houston that’s coordinating between landowners and power providers to offer ready-to-build sites tailored for data centers.
“I’ll be dedicated to enabling data center infrastructure that works for and directly benefits communities,” Decatur said on LinkedIn. The sector is facing pushback over concerns about energy prices and environmental impacts of the facilities.
Decatur was previously at Microsoft for more than 14 years, working most recently as director of energy and sustainability. Cloverleaf co-founder Brian Janous is Microsoft’s former vice president of energy. Earlier in her career, Decatur served with the U.S. Air Force and with FEMA.
— Augmodo named Bradford Snow as chief technology officer. The Seattle startup is developing wearable tech for retail store employees and Snow will focus on Augmodo’s technical vision and innovation strategy.
Snow joined the company from Axon, which sells taser devices and body cameras. His career also includes leadership roles at multiple tech giants, where he worked on a variety of immersive and wearable technologies: AR and VR devices at Meta; Amazon’s Alexa AI and health and wellness wearable tech; and HoloLens initiatives at Microsoft.
— Abhishek Mathur is now chief technology and product officer for ServiceTitan, a California software giant building an agentic operating system to serve trades such as plumbing, electrical and roofing by automating workflows and supporting technicians in the field.
“This sector remains one of the largest untapped opportunities for technology to drive meaningful impact,” Mathur said on LinkedIn.
Mathur, who is based in the Seattle area, has held engineering leadership roles at Meta and was at Microsoft for more than 11 years. He was most recently at Figma as senior VP of engineering.
— Anush Kumar is now founder and CEO of Intelligent Systems, a Bellevue, Wash.-based startup that aims to “transform operational workflows” with AI tools.
“We’re on a mission to help enterprises stop piloting and start producing,” Kumar said in a LinkedIn post that includes links to five articles explaining the team’s approach.
Kumar was previously head of product for agentic automation at Atlassian. Other past roles include VP of technology at Expedia Group, senior VP of product at Zendesk, and director roles at Oracle and Avanade. His first tech role was lead product manager at Microsoft.
— Chris Cappello joined Provn as vice president of marketing. Cappello has worked in multiple marketing roles for companies including WE Communications, Marina Maher Communications and M-Squared. He and Provn CEO Nikesh Parekh both worked earlier in their careers at HouseValues, which rebranded as Market Leader.
Provn, a new Seattle startup, wants companies to scrap the traditional resume and replace it with portfolios of real work and challenge-based assessments.
— Fred Hutch Cancer Center appointed two new leaders. Dr. Mazyar Shadman and Dr. Vyshak Venur were named as deputy chief medical officers, effective April 1. Shadman will serve as deputy CMO for classical hematology, hematologic malignancies, transplant and immunotherapy, while Venur will serve as deputy CMO for solid tumor and acute care services.
And two Fred Hutch researchers received endowed chairs: Dr. Soheil Meshinchi, a global leader in treatments for acute myeloid leukemia, was awarded the Dylan Burke Endowed Chair in Immunotherapy; and Holly Harris received the inaugural Bus Family Endowed Chair in recognition for her work in prevention, early detection and precision oncology for uterine, ovarian and breast cancers.
— Seattle’s Marianne Bichsel, former VP of external affairs at Comcast, has launched Engaged Public Affairs, a PR and policy firm advising “leaders at the intersection of government, public trust, and corporate responsibility.” Bichsel’s co-founders are Julie Anderson, who has served in city and Washington state government, and Natasha Jones, a longtime leader in King County government.
— Theodora, a Seattle-area wine recommendation app, appointed Lindsey Singhavi as its founding marketing lead.
— In case you missed it, GeekWire took deeper dives into these recent notable tech moves (in no particular order, except maybe the first item):