Sometimes a visually compelling metaphor is all you need to get an otherwise complicated idea across. In the summer of 2001, a Tulane physics professor named John P. Perdew came up with a banger. He wanted to convey the hierarchy of computational complexity inherent in the behavior of electrons in materials. He called it “Jacob’s Ladder.” He was appropriating an idea from the Book of Genesis, in which Jacob dreamed of a ladder “set up on the earth, and the top of it reached to heaven. And behold the angels of God ascending and descending on it.”
Jacob’s Ladder represented a gradient and so too did Perdew’s ladder, not of spirit but of computation. At the lowest rung, the math was the simplest and least computationally draining, with materials represented as a smoothed-over, cartoon version of the atomic realm. As you climbed the ladder, using increasingly intensive mathematics and computing power, descriptions of atomic reality became more precise. And at the very top, nature was perfectly described via impossibly intensive computation—something like what God might see.
With this metaphor in mind, we propose to extend Jacob’s Ladder beyond Perdew’s version, to encompass all computational approaches to simulating the behavior of electrons. And instead of climbing rung by rung toward an unreachable summit, we have an idea to bend the ladder so that even the very top lies within our grasp. Specifically, we at Microsoft envision a hybrid approach. It starts with using quantum computers to generate exquisitely accurate data about the behavior of electrons—data that would be prohibitively expensive to compute classically. This quantum-generated data will then train AI models running on classical machines, which can predict the properties of materials with remarkable speed. By combining quantum accuracy with AI-driven speed, we can ascend Jacob’s Ladder faster, designing new materials with novel properties and at a fraction of the cost.
At the base of Jacob’s Ladder are classical models that treat atoms as simple balls connected by springs—fast enough to handle millions of atoms over long times but with the lowest precision. Moving up along the black line, semiempirical methods add some quantum mechanical calculations. Next are approximations based on Hartree-Fock (HF) and density functional theory (DFT), which include full quantum behavior of individual electrons but model their interactions in an averaged way. The greater accuracy requires significant computing power, which limits them to simulating molecules with no more than a few hundred atoms. At the top are coupled-cluster and full configuration interaction (FCI) methods—exquisitely accurate but, at the moment, restricted to tiny molecules or subsets of electrons due to the large computational costs involved. Quantum computing can bend the accuracy-versus-cost curve at the top of Jacob’s Ladder [orange line], making highly accurate calculations feasible for large systems. AI, trained on this quantum-accurate data, can flatten this curve [purple line], enabling rapid predictions for similar systems at a fraction of the cost of classical computing. Source: Microsoft Quantum
In our approach, the base of Jacob’s Ladder still starts with classical models that treat atoms as simple balls connected by springs—models that are fast enough to handle millions of atoms over long times, but with the lowest precision. As we ascend the ladder, some quantum mechanical calculations are added to semiempirical methods. Eventually, we’ll get to the full quantum behavior of individual electrons but with their interactions modeled in an averaged way; this greater accuracy requires significant compute power, which means you can only simulate molecules of no more than a few hundred atoms. At the top will be the most computationally intensive methods—prohibitively expensive on classical computers but tractable on quantum computers.
In the coming years, quantum computing and AI will become critical tools in the pursuit of new materials science and chemistry. When combined, their forces will multiply. We believe that by using quantum computers to train AI on quantum data, the result will be hyperaccurate AI models that can reach ever higher rungs of computational complexity without the prohibitive computational costs.
This powerful combination of quantum computing and AI could unlock unprecedented advances in chemical discovery, materials design, and our understanding of complex reaction mechanisms. Chemical and materials innovations already play a vital—if often invisible—role in our daily lives. These discoveries shape the modern world: new drugs to help treat disease more effectively, improving health and extending life expectancy; everyday products like toothpaste, sunscreen, and cleaning supplies that are safe and effective; cleaner fuels and longer-lasting batteries; improved fertilizers and pesticides to boost global food production; and biodegradable plastics and recyclable materials to shrink our environmental footprint. In short, chemical discovery is a behind-the-scenes force that greatly enhances our everyday lives.
The potential is vast. Anywhere AI is already in use, this new quantum-enhanced AI could drastically improve results. These models could, for instance, scan for previously unknown catalysts that could fix atmospheric carbon and so mitigate climate change. They could discover novel chemical reactions to turn waste plastics into useful raw materials and remove toxic “forever chemicals” from the environment. They could uncover new battery chemistries for safer, more compact energy storage. They could supercharge drug discovery for personalized medicine.
And that would just be the beginning. We believe quantum-enhanced AI will open up new frontiers in materials science and reshape our ability to understand and manipulate matter at its most fundamental level. Here’s how.
How Quantum Computing Will Revolutionize Chemistry
To understand how quantum computing and AI could help bend Jacob’s Ladder, it’s useful to look at the classical approximation techniques that are currently used in chemistry. In atoms and molecules, electrons interact with one another in complex ways called electron correlations. These correlations are crucial for accurately describing chemical systems. Many computational methods, such as density functional theory (DFT) or the Hartree-Fock method, simplify these interactions by replacing the intricate correlations with averaged ones, assuming that each electron moves within an average field created by all other electrons. Such approximations work in many cases, but they can’t provide a full description of the system.
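To make the distinction concrete, here is a minimal sketch using the open-source PySCF package (chosen purely for illustration; it is not mentioned in this article). It compares the mean-field Hartree-Fock energy of a hydrogen molecule with the full-configuration-interaction energy, which is exact within the chosen basis; the difference between the two is the correlation energy that averaged treatments miss.

```python
from pyscf import gto, scf, fci

# A hydrogen molecule near its equilibrium bond length, in a minimal basis.
mol = gto.M(atom="H 0 0 0; H 0 0 0.74", basis="sto-3g")

# Mean-field treatment: each electron sees an averaged field.
mf = scf.RHF(mol).run()

# Full configuration interaction: exact within this basis set.
e_fci = fci.FCI(mf).kernel()[0]

print(f"Hartree-Fock energy: {mf.e_tot:.6f} hartree")
print(f"FCI energy:          {e_fci:.6f} hartree")
print(f"Correlation energy:  {e_fci - mf.e_tot:.6f} hartree")
```

For two electrons the exact answer comes cheaply; the trouble is that the gap between mean-field and exact persists, and the exact calculation blows up, as systems grow.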
A joint project between Microsoft and Pacific Northwest National Laboratory used AI and high-performance computing to identify potential materials for battery electrolytes. The most promising were synthesized [top and middle] and tested [bottom] at PNNL. Dan DeLong/Microsoft
Electron correlation is particularly important in systems where the electrons are strongly interacting—as in materials with unusual electronic properties, like high-temperature superconductors—or when there are many possible arrangements of electrons with similar energies—such as compounds containing certain metal atoms that are crucial for catalytic processes.
In these cases, the simplified approach of DFT or Hartree-Fock breaks down, and more sophisticated methods are needed. As the number of possible electron configurations increases, we quickly reach an “exponential wall” in computational complexity, beyond which classical methods become infeasible.
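The wall is easy to quantify. In full configuration interaction, the number of electron configurations grows combinatorially with system size; the sketch below counts them for half-filled systems (a standard back-of-the-envelope estimate, not tied to any particular molecule):

```python
from math import comb

def fci_configurations(orbitals: int, n_up: int, n_down: int) -> int:
    """Number of FCI configurations: ways to place spin-up and
    spin-down electrons among the available spatial orbitals."""
    return comb(orbitals, n_up) * comb(orbitals, n_down)

for m in (10, 20, 30, 40):
    n = m // 2  # half filling: m electrons in m orbitals
    print(f"{m} electrons in {m} orbitals: {fci_configurations(m, n, n):.2e} configurations")
```

Forty electrons in 40 orbitals already implies more than 10²¹ configurations.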
Enter the quantum computer. Unlike classical bits, which are either on or off, qubits can exist in superpositions—effectively coexisting in multiple states simultaneously. This should allow them to represent many electron configurations at once, mirroring the complex quantum behavior of correlated electrons. Because quantum computers operate on the same principles as the electron systems they will simulate, they will be able to accurately simulate even strongly correlated systems—where electrons are so interdependent that their behavior must be calculated collectively.
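A rough way to see the advantage: describing the quantum state of n qubits classically requires storing 2ⁿ complex amplitudes. A few lines of arithmetic show how quickly that becomes impossible:

```python
# Classical memory needed to store a full n-qubit state vector,
# at 16 bytes per complex amplitude (complex128).
for n in (20, 40, 60):
    amplitudes = 2 ** n
    gib = amplitudes * 16 / 2**30
    print(f"{n} qubits: {amplitudes:.2e} amplitudes ≈ {gib:.3g} GiB")
```

Twenty qubits fit comfortably in a laptop’s memory; 40 already demand a large supercomputer; 60 are far beyond any machine ever built, while a quantum device holds such a state natively.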
AI’s Role in Advancing Computational Chemistry
At present, even the computationally cheap methods at the bottom of Jacob’s Ladder are slow, and the ones higher up the ladder are slower still. AI models have emerged as powerful accelerators to such calculations because they can serve as emulators that predict simulation outcomes without running the full calculations. The models can speed up the time it takes to solve problems up and down the ladder by orders of magnitude.
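The emulator pattern itself is simple, even if the production models are sophisticated. The sketch below is generic, with an arbitrary stand-in for the “expensive” physics: it pays the simulation cost once to build a training set, then answers new queries near-instantly.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

def expensive_simulation(x: np.ndarray) -> float:
    # Stand-in for a slow physics calculation; the formula is arbitrary.
    return float(np.sin(x).sum() + 0.1 * (x**2).sum())

# Pay the simulation cost once, up front, to build a training set...
X_train = rng.normal(size=(2000, 6))
y_train = np.array([expensive_simulation(x) for x in X_train])

# ...then fit an emulator that predicts outcomes without running it again.
emulator = RandomForestRegressor(n_estimators=200, random_state=0)
emulator.fit(X_train, y_train)

X_new = rng.normal(size=(3, 6))
print("emulator: ", emulator.predict(X_new).round(3))
print("simulator:", [round(expensive_simulation(x), 3) for x in X_new])
```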
This acceleration opens up entirely new scales of scientific exploration. In 2023 and 2024, we collaborated with researchers at Pacific Northwest National Laboratory (PNNL) on using advanced AI models to evaluate over 32 million potential battery materials, looking for safer, cheaper, and more environmentally friendly options. This enormous pool of candidates would have taken about 20 years to explore using traditional methods. Yet in less than a week, the list was narrowed to 500,000 stable materials and then to 800 highly promising candidates. Throughout the evaluation, the AI models replaced expensive and time-consuming quantum chemistry calculations, in some cases running half a million times as fast as the calculations they replaced.
We then used high-performance computing (HPC) to validate the most promising materials with DFT and AI-accelerated molecular dynamics simulations. The PNNL team then spent about nine months synthesizing and testing one of the candidates—a solid-state electrolyte that uses sodium, which is cheap and abundant, and some other materials, with 70 percent less lithium than conventional lithium-ion designs. The team then built a prototype solid-state battery that they tested over a range of temperatures.
This potential battery breakthrough isn’t unique. AI models have also dramatically accelerated research in climate science, fluid dynamics, astrophysics, protein design, and chemical and biological discovery. By replacing traditional simulations that can take days or weeks to run, AI is reshaping the pace and scope of scientific research across disciplines.
However, these AI models are only as good as the quality and diversity of their training data. Whether sourced from high-fidelity simulations or carefully curated experimental results, these data must accurately represent the underlying physical phenomena to ensure reliable predictions. Poor or biased data can lead to misleading outcomes. By contrast, high-quality, diverse datasets—such as those from full-accuracy quantum simulations—enable models to generalize across systems and uncover new scientific insights. This is the promise of using quantum computing for training AI models.
How to Accelerate Chemical Discovery
The real breakthrough will come from strategically combining quantum computing’s and AI’s unique strengths. AI already excels at learning patterns and making rapid predictions. Quantum computers, which are still being scaled up to be practically useful, will excel at capturing electron correlations that classical computers can only approximate. So if you train classical models on quantum-generated data, you’ll get the best of both worlds: the accuracy of quantum delivered at the speed of AI.
As we learned from the Microsoft-PNNL collaboration on electrolytes, AI models alone can greatly speed up chemical discovery. In the future, quantum-accurate AI models will tackle even bigger challenges. Consider the basic discovery process, which we can think of as a funnel. Scientists begin with a vast pool of candidate molecules or materials at the wide-mouthed top, narrowing them down using filters based on desired properties—such as boiling point, conductivity, viscosity, or reactivity. Crucially, the effectiveness of this screening process depends heavily on the accuracy of the models used to predict these properties. Inaccurate predictions can create a “leaky” funnel, where promising candidates are mistakenly discarded or poor ones are mistakenly advanced.
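In code, the funnel is nothing more than a cascade of filters, and the “leak” lives entirely in the property predictions feeding those filters. A toy sketch, with all names, values, and cutoffs invented for illustration:

```python
# Toy discovery funnel. In practice the property values come from
# predictive models; everything here is invented for illustration.
candidates = [
    {"name": "A", "stability": 0.92, "conductivity": 1.3},
    {"name": "B", "stability": 0.97, "conductivity": 0.7},
    {"name": "C", "stability": 0.55, "conductivity": 1.8},
]

stage1 = [c for c in candidates if c["stability"] > 0.8]   # keep stable materials
stage2 = [c for c in stage1 if c["conductivity"] > 1.0]    # then good conductors

# If the model under-predicts C's true stability, a promising candidate
# leaks out at stage 1 and never reaches the lab.
print([c["name"] for c in stage2])  # -> ['A']
```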
Quantum-accurate AI models will dramatically improve the precision of chemical-property predictions. They’ll be able to help identify “first-time right” candidates, sending only the most promising molecules to the lab for synthesis and testing—which will save both time and cost.
Another key aspect of the discovery process is understanding the chemical reactions that govern how new substances are formed and behave. Think of these reactions as a network of roads winding through a mountainous landscape, where each road represents a possible reaction step, from starting materials to final products. The outcome of a reaction depends on how quickly it travels down each path, which in turn is determined by the energy barriers along the way—like mountain passes that must be crossed. To find the most efficient route, we need accurate calculations of these barrier heights, so that we can identify the lowest passes and chart the fastest path through the reaction landscape.
Even small errors in estimating these barriers can lead to incorrect predictions about which products will form. Case in point: A slight miscalculation in the energy barrier of an environmental reaction could mean the difference between labeling a compound a “forever chemical” or one that safely degrades over time.
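That sensitivity follows directly from the Arrhenius relation, in which rates depend exponentially on the barrier height. A quick calculation at room temperature shows how little room there is for error:

```python
import math

R = 1.987e-3   # gas constant, kcal/(mol·K)
T = 298.15     # room temperature, K

def rate_error_factor(barrier_error_kcal: float) -> float:
    """Factor by which an error in the barrier height multiplies the predicted rate."""
    return math.exp(barrier_error_kcal / (R * T))

for err in (0.5, 1.0, 1.4, 2.0):
    print(f"barrier off by {err} kcal/mol -> rate off by {rate_error_factor(err):.0f}x")
```

An error of just 1.4 kcal/mol, well within the typical uncertainty of approximate methods, throws the predicted rate off by roughly a factor of 10.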
Accurate modeling of reaction rates is also essential for designing catalysts—substances that speed up and steer reactions in desired directions. Catalysts are crucial in industrial chemical production, carbon capture, and biological processes, among many other things. Here, too, quantum-accurate AI models can play a transformative role by providing the high-fidelity data needed to predict reaction outcomes and design better catalysts.
Once trained on quantum-generated data, these AI models will revolutionize computational chemistry by delivering quantum-level precision while running on classical computers. Researchers will be able to run high-accuracy simulations on laptops or desktop computers, rather than relying on massive supercomputers or future quantum hardware. By making advanced chemical modeling more accessible, these tools will democratize discovery and empower a broader community of scientists to tackle some of the most pressing challenges in health, energy, and sustainability.
Remaining Challenges for AI and Quantum Computing
By now, you’re probably wondering: When will this transformative future arrive? It’s true that quantum computers still struggle with error rates and the limited lifetimes of usable qubits, and they still need to scale to the size required for meaningful chemistry simulations. Chemistry simulations beyond the reach of classical computation will require hundreds to thousands of high-quality qubits with error rates of around 10⁻¹⁵, or one error in a quadrillion operations. Achieving this level of reliability will require fault tolerance through redundant encoding of quantum information in logical qubits, each consisting of hundreds of physical qubits, thus requiring a total of about a million physical qubits. Current AI models for chemical-property predictions may not have to be fully redesigned. We expect that it will be sufficient to start with models pretrained on classical data and then fine-tune them with a few results from quantum computers.
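That pretrain-then-fine-tune workflow can be sketched in a few lines. In the toy below, abundant labels carry a systematic bias (standing in for an approximate classical method) and a small set of exact labels stands in for quantum-computer results; all data is synthetic and the model is a generic off-the-shelf network, not one of ours.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic "molecules": 8-dimensional descriptors with a hidden true property.
X = rng.normal(size=(5000, 8))
y_true = np.sin(X).sum(axis=1)
y_cheap = y_true + 0.3 * X[:, 0]   # plentiful labels with a systematic bias

# Pretrain on the abundant approximate data.
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=300, random_state=0)
model.fit(X, y_cheap)

# Fine-tune on a small batch of exact, "quantum-accurate" labels.
X_q, y_q = X[:200], y_true[:200]
for _ in range(100):
    model.partial_fit(X_q, y_q)

print("mean error vs truth:", float(np.abs(model.predict(X) - y_true).mean()))
```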
Despite some open questions, the potential rewards in terms of scientific understanding and technological breakthroughs make our proposal a compelling direction for the field. The quantum computing industry has begun to move beyond the early noisy prototypes, and high-fidelity quantum computers with low error rates could be possible within a decade.
Realizing the full potential of quantum-enhanced AI for chemical discovery will require focused collaboration between chemists and materials scientists who understand the target problems, experts in quantum computing who are building the hardware, and AI researchers who are developing the algorithms. Done right, quantum-enhanced AI could start to tackle the world’s toughest challenges—from climate change to disease—years ahead of anyone’s expectations.
European Commission President Ursula von der Leyen says the app is technically ready and will be available to citizens soon.
The European Commission yesterday (15 April) unveiled a digital age verification app aimed at shielding children from harmful content online, with European Commission president Ursula von der Leyen declaring there are “no more excuses” for platforms that fail to act.
Announcing the tool in Brussels on Wednesday (15 April), von der Leyen painted a stark picture of the risks children face in the digital world. “One child in six is bullied online. One child in eight is bullying another child online,” she said, warning that social media platforms use “highly addictive designs” that damage young minds and leave children vulnerable to predators.
Users set up the app using a passport or ID card, after which they can confirm their age anonymously. The free app, which the Commission says is technically ready and will soon be available to citizens, allows users to verify their age when accessing online platforms “without revealing any other personal data”, according to von der Leyen. “Users cannot be tracked,” von der Leyen stressed, adding that the app is fully open source and compatible with any device.
Drawing a comparison with the EU’s Covid certificate – adopted in record time and used across 78 countries – von der Leyen said the age verification tool follows “the same principles, the same model.” Seven member states, including France, Italy, Spain and Ireland, are already planning to integrate the app into their national digital wallets.
The announcement comes ahead of the second meeting of the Commission’s Special Panel on Children’s Safety Online, which is due to deliver its recommendations by summer. Von der Leyen was unambiguous about the Commission’s direction of travel on enforcement. “Children’s rights in the European Union come before commercial interest. And we will make sure they do.”
Platforms were put on notice that voluntary compliance alone will not suffice. “We will have zero tolerance for companies that do not respect our children’s rights,” she said, adding that the Commission is “moving ahead with full speed and determination on the enforcement of our European rules”.
Consumer Intelligence Research Partners estimates the Mac Mini accounted for roughly 3% of Apple’s US Mac unit sales last year. That position has shifted quickly.
Jeff Bezos’ space company Blue Origin successfully re-used one of its New Glenn rockets for the first time ever on Sunday, but the company failed at its primary mission: delivering a communications satellite to orbit for customer AST SpaceMobile.
AST SpaceMobile issued a statement Sunday afternoon that the upper stage of the New Glenn rocket placed BlueBird 7 satellite into an orbit that was “lower than planned.” The satellite successfully separated from the rocket and powered on, the company said, but the altitude is too low “to sustain operations” and will now have to be de-orbited — left to burn up in the atmosphere of Earth.
The cost of the lost satellite is covered by AST SpaceMobile’s insurance policy, according to the company, and successor BlueBird satellites will be completed in around a month. AST SpaceMobile has launch contracts with more than just Blue Origin, and the company said it expects to be able to launch 45 more satellites by the end of 2026.
But this represents the first major failure for Blue Origin’s New Glenn program, which only made its first flight in January 2025 after more than a decade in development. This was the second mission where New Glenn carried a customer payload to space, after launching twin spacecraft bound for Mars on behalf of NASA last November. The company did not immediately respond to a request for comment.
The apparent failure of New Glenn’s second stage could have wider implications beyond Blue Origin’s near-term commercial ambitions. The company is pushing hard to become one of the main launch providers for NASA’s Artemis missions to the moon and beyond. The space agency — and the Trump administration — has put pressure on Blue Origin and SpaceX to be able to put landers on the moon by the end of President Donald Trump’s second term, before advancing to returning humans to the lunar surface.
Blue Origin CEO Dave Limp has even said his company “will move heaven and Earth” to help NASA get back to the moon faster.
Blue Origin recently completed testing its first version of its own lunar lander, which the company is expected to try and launch at some point this year (without any crew). Blue Origin had suggested last year that it was considering launching this lander on New Glenn’s third mission, but ultimately decided to launch the AST SpaceMobile satellite instead.
The third New Glenn launch seemed to start just fine on Sunday, with the mega-rocket lifting off at 7:35 a.m. local time from Cape Canaveral, Florida. It was the first time Blue Origin re-used a previously-flown New Glenn booster — the same one that flew during New Glenn’s second mission. Roughly 10 minutes after liftoff, the booster came back down and landed on a drone ship in the ocean, just like it had last November. Jeff Bezos even shared drone footage of the booster’s landing on X, the social media site owned by his rival Elon Musk. (Musk offered congratulations.)
Roughly two hours after the launch, though, Blue Origin announced in its own post that the New Glenn upper stage had placed the AST SpaceMobile satellite in an “off-nominal orbit.” The company has not released any more information since that post.
Blue Origin spent a long time developing New Glenn, and it has been taken as a sign of confidence in that process that the company decided to start launching commercial payloads during these early missions. By comparison, SpaceX has spent the last few years flying test versions of its massive Starship, but has stuck with using dummy payloads as it works out the rocket’s kinks.
SpaceX did lose payloads deeper into its Falcon 9 program. In 2015, on the 19th Falcon 9 mission, the rocket blew up mid-flight and lost an entire International Space Station cargo spacecraft. In 2016, a Falcon 9 exploded on the launch pad during testing, causing the loss of an internet satellite for Meta.
A new NYT Connections puzzle appears at midnight each day for your time zone – which means that some people are always playing ‘today’s game’ while others are playing ‘yesterday’s’. If you’re looking for Sunday’s puzzle instead then click here: NYT Connections hints and answers for Sunday, April 19 (game #1043).
Good morning! Let’s play Connections, the NYT’s clever word game that challenges you to group answers in various categories. It can be tough, so read on if you need Connections hints.
What should you do once you’ve finished? Why, play some more word games of course. I’ve also got daily Strands hints and answers and Quordle hints and answers articles if you need help for those too, while Marc’s Wordle today page covers the original viral word game.
SPOILER WARNING: Information about NYT Connections today is below, so don’t read on if you don’t want to know the answers.
NYT Connections today (game #1044) – today’s words
Today’s NYT Connections words are…
CYBER
HOURGLASS
BLUE
CLOUD
ROD
WEB
MANIC
NET
PUFF
VENOM
HOOK
BILLOW
BAIT
PLUME
MEATLESS
CANNIBALISM
NYT Connections today (game #1044) – hint #1 – group hints
What are some clues for today’s NYT Connections groups?
YELLOW: A bunch of fumes
GREEN: Used for angling
BLUE: Linked to an infamous arachnid
PURPLE: Start the week
Need more clues?
We’re firmly in spoiler territory now, but read on if you want to know what the four theme answers are for today’s NYT Connections puzzles…
NYT Connections today (game #1044) – hint #2 – group answers
What are the answers for today’s NYT Connections groups?
YELLOW: MASS OF SMOKE
GREEN: FISHING GEAR
BLUE: ASSOCIATED WITH BLACK WIDOW SPIDERS
PURPLE: _____ MONDAY
Right, the answers are below, so DO NOT SCROLL ANY FURTHER IF YOU DON’T WANT TO SEE THEM.
NYT Connections today (game #1044) – the answers
The answers to today’s Connections, game #1044, are…
YELLOW (MASS OF SMOKE): BILLOW, CLOUD, PLUME, PUFF
GREEN (FISHING GEAR): BAIT, HOOK, NET, ROD
BLUE (ASSOCIATED WITH BLACK WIDOW SPIDERS): CANNIBALISM, HOURGLASS, VENOM, WEB
PURPLE (_____ MONDAY): BLUE, CYBER, MANIC, MEATLESS
My rating: Easy
My score: Perfect
A bit of music knowledge got me to my 33rd “Purple First” thanks to Blue Monday by New Order and Prince’s Manic Monday, made famous by The Bangles. CYBER I was confident about, but MEATLESS I went with purely because of the alliteration.
This actually seemed the easiest group, not that I’m complaining.
Elsewhere, nature knowledge may have helped me get ASSOCIATED WITH BLACK WIDOW SPIDERS, but I spotted the more obvious yellow and green groups first.
Yesterday’s NYT Connections answers (Sunday, April 19, game #1043)
BLUE (CARDS IN TEXAS HOLD ‘EM): FLOP, HOLE, RIVER, TURN
PURPLE (LAST WORDS OF CANDY BRANDS IN THE SINGULAR): CAP, DUD, KID, MINT
What is NYT Connections?
NYT Connections is one of several increasingly popular word games made by the New York Times. It challenges you to find groups of four items that share something in common, and each group has a different difficulty level: yellow is the easiest, green a little harder, blue often quite tough and purple usually very difficult.
On the plus side, you don’t technically need to solve the final one, as you’ll be able to answer that one by a process of elimination. What’s more, you can make up to four mistakes, which gives you a little bit of breathing room.
It’s a little more involved than something like Wordle, however, and there are plenty of opportunities for the game to trip you up. For instance, watch out for homophones and other wordplay that could disguise the answers.
It’s playable for free via the NYT Games site on desktop or mobile.
Many drivers bemoan the very existence of traffic lights. But though traffic signals incur the daily ire of commuters running late for work, even the haters have to acknowledge their invaluable function in helping to keep our roadways safe.
Traffic signals have, of course, evolved considerably since they were first pressed into use in the late 1860s, with the first electric lights arriving around 1912. It wasn’t long before those signals adopted colored lights, eventually settling into the red, yellow, and green configuration we are all too familiar with today. Even as safety remains the primary purpose of the hundreds of thousands of traffic lights currently employed throughout the United States, some theorize that the life-saving devices may one day cease to exist.
Until that fateful day, getting stuck at red lights when you’re in a rush will remain a constant source of commuter frustration. On some occasions, however, a stream of greens opens up on the road ahead like the parting of the Red Sea. That stream of green has a name, with researchers dubbing it the “Green Wave.” While they may seem rare, the “Green Wave” is a common occurrence in certain parts of the world, and it serves a very important purpose.
What is the purpose of a traffic light Green Wave?
While it might seem like a weird sort of karmic intervention, that “Green Wave” of traffic lights was actually programmed for a specific purpose by whatever government organization is in charge of maintaining the traffic signals in your city, state or township. They are, however, far more commonly utilized on high-volume roads in urban areas. The purpose of a “Green Wave” is to improve the flow of traffic in those areas, particularly during times with increased traffic volume.
At its core, the concept is simple: keep traffic flowing during peak volume by reducing the number of stops at successive traffic signals. To create a “Green Wave,” planners and engineers coordinate the lights along a corridor so that each signal turns green just as a platoon of cars traveling at the design speed arrives, offsetting each light’s timing by the travel time from the previous intersection. The method is, naturally, easier to manage on one-way streets with no turning lanes, though some cities have attempted to aid traffic flow further by simply outlawing left turns in metropolitan areas. Some have even taken to banning right turns too.
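The timing arithmetic behind the wave is easy to sketch; the distances and design speed below are invented values, not from any real corridor:

```python
# Toy green-wave timing: offset each signal's green phase by the travel
# time from the first intersection at the corridor's design speed.
design_speed = 13.4                     # meters/second, roughly 30 mph
signal_positions = [0, 250, 520, 800]   # meters from the first signal

for i, distance in enumerate(signal_positions):
    offset = distance / design_speed
    print(f"Signal {i}: turns green {offset:.0f} s after signal 0")
```

A car holding the design speed then arrives at each intersection just as its light turns green.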
In any case, on top of aiding the flow of traffic in congested areas, “Green Wave” traffic patterns are also believed to have a positive effect on the environment. After all, the reduction in stop-and-go traffic also reduces a vehicle’s idling time, which, in turn, leads to reduced greenhouse gas emissions.
Digit is seen performing deadlifts with a 65-pound weight in the center of a lab. Agility Robotics shared the video a few days ago, and to be honest, the robot maintains a fairly steady balance and completes the task from beginning to end. Someone mentions that the new version can lift significantly more weight than the previous one, while another laughs about how it can run all day without stopping.
The engineers designed the test so that Digit had to work harder than usual. Every additional pound it lifts forces the robot to adjust its entire body simultaneously: arms, legs, torso, everything. The system must keep the weight centered and avoid tipping over, so the legs, arms, and the rest of the robot all have to work together, and the actuators and joints must withstand repeated loading without breaking down. The video simply shows Digit grasping the weight, standing up, then smoothly setting it down again and again in a standard indoor space built for people.
All of the training takes place in simulation: before Digit touches a real weight, engineers create a digital copy of the task in a virtual world and model what happens as the load shifts. The grip pressure stays constant, with no slipping or sagging, and any change to the robot’s equilibrium registers almost instantly. The policy learns the lift in the simulated environment before being transferred directly to the real robot. When you watch the real robot perform the lift, it looks natural because the policy has already handled every potential variation thousands of times in simulation.
Engineers chose deadlifts for the test because the movement requires complete body control. A simple arm raise would not put the hardware under the same level of stress. By incorporating weight into the simulation loop, the team is able to handle balancing changes that a pre-programmed script cannot handle alone. As a result, Digit lifts consistently, with no wobbling or resets. This method is easily adaptable to other objects or larger loads in future tests.
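A heavily simplified sketch of that randomize-in-simulation idea follows. The real controller is a learned whole-body policy; here the “policy” is a single counter-lean gain, the physics is one line, and every number is invented for illustration:

```python
import random

random.seed(0)

IDEAL_GAIN = 0.02  # pretend 0.02 rad of counter-lean per kg keeps the robot balanced

def balance_error(gain: float, payload_kg: float) -> float:
    # One-line stand-in for the simulator's physics.
    return abs(gain - IDEAL_GAIN) * payload_kg

def avg_error(gain: float, trials: int = 25) -> float:
    # Evaluate across randomized payloads so the policy works for any load.
    return sum(balance_error(gain, random.uniform(0, 35)) for _ in range(trials)) / trials

gain = 0.0
for _ in range(2000):  # crude random-search "training" loop
    candidate = gain + random.gauss(0, 0.002)
    if avg_error(candidate) < avg_error(gain):
        gain = candidate

print(f"learned counter-lean gain: {gain:.4f} rad/kg (target {IDEAL_GAIN})")
```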
Digit was built by Agility to manage long, repetitive jobs that wear people out, such as working in factories or warehouses where you must squeeze into tight spaces, pick up oddly shaped goods, and continue without taking a break. This deadlift test demonstrates Digit’s ability to lift weight on ordinary floors while remaining steady, which is ideal for picking up boxes, carrying tools, and stacking things in human-designed places.
It also illustrates how far they’ve come in teaching robots to perform physical tasks. Whole-body coordination was once a nightmare of hand-tuned code for each joint angle. Now they can simply train a policy in simulation that adapts on the go. Digit detects the weight with its sensors, corrects itself in real time, and completes the lift without assistance, and the hardware keeps up because the actuators and joints are built for this kind of repeated load.
In October and through November, America’s EV sales reached their lowest point since 2022 after government subsidies expired, remembers Time. “But first-quarter data for 2026 shows that used EV sales were 12% higher than the same time last year and 17% higher than the previous quarter.
“One factor likely helping push buyers toward these cars is high gas prices, which recently topped $4.00 a gallon for the first time in four years,” they write — but it’s not just in the U.S. Instead, they argue the conflict “is driving a global surge of interest in electric vehicles…”
In the U.K., electric car sales reached a record high, with 86,120 vehicles sold in March… The French online used-car retailer Aramisauto reported its share of EV sales nearly doubled from February 16 to March 9, rising to 12.7% from 6.5%, while sales of fueled models dropped to 28% of sales from 34%, and sales of diesel models dropped to 10% from 14%. Germany’s largest online car market, mobile.de, told Reuters that the share of EV searches on its website has tripled since the start of March — from 12% to 36%, with car dealers receiving 66% more enquiries for used EVs than in February.
South Korea reported that registrations for electric vehicles more than doubled in March compared to the prior year, due in part to rising fuel prices and government subsidies… In New Zealand, more than 1,000 EVs were registered in the week that ended on March 22, close to double the week before, making it the country’s biggest week for electric vehicle registrations since the end of 2023, according to the country’s Transport Minister, Chris Bishop.
In America, Bloomberg also reports 605 high-speed EV charging stations switched on in just the first three months of 2025, “a 34% increase over the year-earlier period,” according to their analysis of federal data. A data platform focused on EV infrastructure tells Bloomberg that speedier and more reliable chargers are convincing more drivers to go electric and use public plugs.
Most loudspeaker designers don’t spend much time debating open versus closed the way headphone enthusiasts do. Cabinets are part of the equation for a reason, offering control, efficiency, and predictable performance. That’s the accepted playbook. But like any good rule in audio, someone is always trying to break it.
At AXPONA 2026, La Dolce Audio showed what happens when you ignore that playbook and lean into experimentation. Founder Terry Gesualdo isn’t approaching amplification or speaker design from a traditional standpoint; he’s part of a growing group of builders exploring open designs and current drive amplification as an alternative to the usual voltage-driven norm.
I met Gesualdo on the shuttle ride over to the show, which feels about right. This isn’t a polished, corporate origin story; it’s the familiar path of someone who started by modifying gear, then built his own tube amps for himself, then for friends and family. The difference here is that he didn’t stop at tweaking circuits. He kept pushing until the results looked and sounded like something entirely his own.
Current Drive Tube Amplification: Why La Dolce Audio Isn’t Following the Script
Having built a few tube amps, I’m always curious to see what others are doing, and Terry Gesualdo is not following the usual path. Most of his designs are single-ended pentode circuits, not triodes, and not push-pull designs chasing more voltage swing. That choice alone puts him in a different lane than a lot of tube builders.
Where things really diverge is the move to current drive. Most amplifiers are voltage driven. That’s the standard approach across both solid state and tube designs. Current drive shows up more often inside DACs where signal levels are extremely small, and occasionally in headphone amplifiers, but rarely in loudspeaker systems where current demands are far higher.
The idea behind current drive is fairly straightforward. By controlling current instead of voltage, the amplifier reduces the impact of back EMF from the driver. That back EMF is the voice coil behaving like a generator as it moves through the magnetic field, feeding energy back into the amplifier. Reduce that interaction and, in theory, you reduce distortion and improve control over the driver.
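In a simplified lumped model of a moving-coil driver (a textbook sketch, not La Dolce Audio’s own math), the motor force is $F = BL\,I$ and the moving coil generates a back EMF $V_{\mathrm{emf}} = BL\,v$. The two schemes differ in how the coil current responds to that back EMF:

$$
I_{\text{voltage drive}} = \frac{V_{\text{amp}} - BL\,v}{Z_{\text{coil}}}, \qquad I_{\text{current drive}} = I_{\text{set}}.
$$

Under voltage drive, the cone’s own velocity modulates the current and therefore the force; under current drive, the amplifier regulates the current directly, and the back EMF drops out of the force equation.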
It’s not a new concept, but it’s one that almost nobody is applying to loudspeakers in this way, especially with tube amplification. That’s what makes what La Dolce Audio is doing worth paying attention to.
Control Over Harmonics Instead of Chasing Purity
Circling back to that idea of ignoring the usual playbook, another aspect that reinforces how La Dolce Audio is taking a different path is its near-exclusive use of pentode tubes instead of the more common triodes. Triodes are the simplest form of amplification, with three active elements: anode, cathode, and grid. Fewer parts in the signal path is why many listeners and designers gravitate toward them. The assumption is that less complexity means lower distortion and fewer unwanted artifacts.
But that’s only part of the story. Harmonic distortion doesn’t disappear just because the circuit is simpler. It just changes character. And not all harmonics are a problem. A lot of what people describe as tube warmth comes from second and third order harmonics, which many listeners actually prefer.
Terry Gesualdo leans into that reality rather than trying to avoid it. By using pentodes, which add additional control elements beyond what a triode offers, he can shape those harmonic structures instead of accepting whatever the circuit gives him. That includes adjusting the balance between second and third order harmonics and even their phase relationships.
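Conceptually, the tuning amounts to setting the relative levels and phases of the low-order harmonics. A toy sketch of that idea, with parameter names and values invented rather than taken from the amplifier’s actual controls:

```python
import math

def shaped_tone(t: float, f: float = 1000.0,
                h2: float = 0.05, p2: float = 0.0,
                h3: float = 0.02, p3: float = math.pi) -> float:
    """A fundamental plus adjustable second- and third-order harmonics:
    h2/h3 set the levels, p2/p3 the phase relationships."""
    w = 2 * math.pi * f
    return (math.sin(w * t)
            + h2 * math.sin(2 * w * t + p2)
            + h3 * math.sin(3 * w * t + p3))

# One millisecond of the shaped 1 kHz tone at a 48 kHz sample rate.
wave = [shaped_tone(n / 48_000) for n in range(48)]
print([round(s, 3) for s in wave[:6]])
```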
It’s a different mindset. Instead of chasing the lowest possible distortion number, the goal is control over how that distortion presents itself, and giving the listener a way to fine tune the result.
Some will find that approach a bit sacrilegious. There’s a large part of the hobby focused on removing as much of this behavior as possible, chasing lower distortion numbers and cleaner measurements. That’s not the goal here.
La Dolce Audio leans into a different philosophy. “If it sounds good, do it” is more than a slogan. It reflects the idea that listening is subjective and that not every system needs to be locked into a single interpretation of neutrality. By giving users control over harmonic structure, the design puts some of that decision making back in the listener’s hands.
UA2.5 and UA2.5M: Modular Power and User Tunability
La Dolce Audio UA2.5M monoblock
La Dolce Audio offers two amplifier paths built around the same core ideas but with different roles. The UA2.5 is a dual channel amplifier rated at roughly 3 to 5 watts depending on tube selection, and it’s where most of the flexibility lives. With 24 possible sound signatures, it gives the user direct control over how the amplifier presents harmonic content and overall character.
The UA2.5M monoblocks step things up in output, delivering around 9 watts per channel, but they take a more focused approach. They are designed to be paired with the UA2.5, which handles preamp duties and sound shaping. As a result, the monoblocks do not include the same tuning controls, focusing instead on providing additional power while maintaining the same underlying design philosophy.
HPA2.3 Headphone Adapter
La Dolce Audio UA2.5 Tube Amplifier (top) with HPA2.3 Headphone Adapter (bottom)
Alongside its amplifiers, La Dolce Audio offers the HPA2.3 headphone “amplifier,” although that label needs a bit of clarification. It’s not an amplifier in the traditional sense. The HPA2.3 is a passive device designed to work with the UA2.5, relying on it for signal processing and gain. In practice, it converts the UA2.5 into a headphone amplifier rather than operating as one on its own.
That means the HPA2.3 can drive a wide range of headphones depending on how the UA2.5 is configured, but it cannot function independently. No preamp, no sound.
Pricing reflects that modular approach. The UA2.5, which serves as the foundation of the system, runs between $1,799 and $2,499 depending on configuration and tube selection. The UA2.5M monoblocks are $1,999 each, and the HPA2.3 adds another $599. A full system lands in the $3,500 range, depending on how far you go down the rabbit hole.
The Bottom Line
La Dolce Audio isn’t trying to fit into the usual mold, and that’s the point. In a category where a lot of designs feel like small variations on the same theme, this is a reminder that there are still different ways to approach amplification and system building.
Beyond the amplifiers, the partnership with ABX Audiophiles on Discord to offer open baffle speaker kits adds another layer. It invites listeners to get involved, not just as buyers but as participants, with a community that shares ideas, solves problems, and pushes designs forward together. We’ll have more on that ABX side of things in a forthcoming article.
It won’t be for everyone. If you want plug and play simplicity, this isn’t it. But if you’re the type who likes to understand what your system is doing and shape it to your preferences, La Dolce offers something most companies don’t. A system you can actually interact with, not just listen to.
Unlike previous years in what TV nerds like me call the “brightness wars,” the U7SG doesn’t outblast its predecessor, but that’s not a problem. It gets around three times as bright as anything you can stream (streaming brightness is naturally capped due to compression), and has enough firepower for all but the flashiest 4K HDR Blu-rays. Its color processing shows a little more restraint than in previous models. It’s not quite what I’d call “accurate to the director’s intent,” like the best TVs I test, but it does keep itself from blasting your eyeballs most of the time.
The high brightness is matched by deep black levels, without much of the “blooming” or “haloing” around bright objects that can dilute the contrast of many budget-friendly TVs. It’s not as striking as OLED TVs, which can control each of their millions of pixels on demand, but it’ll wow you in deep space scenes just the same. I was pleased that the TV’s odd local dimming issue didn’t crop up in real-world content, but the picture does tend to flatten shadows in dark scenes more than expected, even as the matte-like screen does a good job keeping reflections at bay.
There are some other notable flaws. Moving off to the TV’s side in my easy chair led to dimmer colors, washed-out contrast between the brightest and darkest images, and uneven backlighting, aka the “dirty-screen effect.” That stood out most in the green backdrop of the Masters on Sunday as Rory McIlroy held on for the win. It wasn’t an issue when viewing head-on, but even then, I noticed some dingy yellow lines along the screen’s left and right sides with light backgrounds. (I may not have noticed them much if I hadn’t been bombarding this TV with test content first.)
The U7SG still doesn’t feel quite like a premium model. But it’s a very clear, bright TV, and will feel more like it’s worth the money once RGB shows up on other Hisense models and the price on this one drops. If you want something brighter than a similarly priced OLED like the LG B5, the U7 is a great buy and has a few good upgrades over last year’s U75QG.
We’ll know more about the 2026 TV landscape once the new RGB TVs have landed, but if you need a powerful, classy-looking TV before then, the U7SG should be on your list.
Feroze Motafram is an operations consultant based in Sammamish, Wash., and founder of Avestan LLC. This piece is adapted from a LinkedIn post.
Someone asked me recently what made me think about writing this. The trigger, I told them, was simpler than you might expect.
I live in Sammamish, in the shadow of Microsoft’s looming presence. Microsoft employees are my neighbors, my social circle, the people I run into at weekend gatherings. Over time I noticed that conversations with them had a distinctive gravitational pull — always inward, toward reorgs, internal politics, who reports to whom now, who’s ascendant, who’s out. Customers were rarely part of the conversation. This usually means navigating the organization has become more consuming than building anything within it.
Microsoft’s stock decline and the softening of real estate in this corridor (both affecting me personally) were the prompts to write it down. The material was already sitting in front of me.
I should be clear about what I am and am not. My formal training is in electrical engineering. The primary instruments of my early career were set squares and slide rules, which will tell you something about both my vintage and my domain. I have spent the intervening decades as a senior executive at Fortune 100 companies and, more recently, as an operations and supply chain consultant. I build and fix things: supply chains, organizations that have lost their way. What I can offer is not insider knowledge. It is 30 years of pattern recognition, applied to what is visible from where I stand.
This is the lens I am bringing. Take it for what it is worth.
The market is asking a question
Microsoft stock declined roughly 25% in Q1 2026, representing its worst quarterly performance since the 2008 financial crisis despite blockbuster results. The market may overreact, but it is not stupid. When the stock of a company of this scale underperforms that of its peer group by double digits, the question worth asking is not “is this a buying opportunity.” The question is: what does the market understand about this organization that the headlines don’t capture?
Part of the answer is visible in the financials. A striking portion of Microsoft’s forward revenue backlog is tied to a single counterparty, OpenAI, an unprofitable startup that has since signed a landmark cloud agreement with Amazon, directly challenging the Azure exclusivity Microsoft had treated as a cornerstone of its AI strategy. Meanwhile, Microsoft is building its own internal AI model as a hedge, an expensive bet layered on top of an already expensive bet.
But the part that does not show up in an earnings report may be the more consequential story. That is what I want to offer here.
The monopoly dividend, and its hidden cost
For the better part of three decades, Microsoft enjoyed something very few companies in history have had: a captive market. Enterprise customers did not use Office because they loved it. They used it because leaving was more painful than staying. That distinction between loyalty and lock-in matters enormously, and it is one that organizations rarely make honestly about themselves.
When your customers cannot leave, the feedback loops that drive genuine innovation go silent. The tendency is to stop asking “what does the customer need?” and start asking “what can we get away with?” Processes multiply. Committees proliferate. Bureaucracy thrives. The organization optimizes for defending territory rather than creating it.
This is not a character failing. It occurs insidiously and unconsciously. It is an entirely rational organizational response to a monopolistic competitive environment. But it leaves a mark. And that mark does not disappear simply because the competitive environment changes.
Satya Nadella earned his laurels, but the work isn’t finished
The Azure pivot was a genuine strategic achievement, and Microsoft CEO Satya Nadella’s cultural reset from “know-it-all” to “learn-it-all,” as he framed it, was real and necessary. The stack-ranking era that preceded him did generational damage to Microsoft’s ability to collaborate, retain talent, and take meaningful risks. He arrested that decline and deserves full credit for it.
But here one must tread carefully. Stack ranking was formally abolished in the final months of Steve Ballmer’s tenure. The announcement was celebrated, the headlines were laudatory. What is rather more interesting is what one hears in conversations since. Ask Microsoft employees about the performance review system that replaced it, and the response is rarely enthusiastic. Whether the underlying mechanics genuinely changed, or whether the organization simply learned to dress the same instincts in more palatable language, is a question I cannot answer from the outside. What I can observe is that the people doing the work don’t appear to believe the answer is reassuring.
Cultural transformation in a 220,000-person organization moves at a glacial pace. You can change the language in a decade. Changing the instincts takes considerably longer. One has to wonder how many of the engineers and managers who learned to survive the Ballmer years by navigating politics rather than building products have since moved on, and how many remain, in leadership positions, still oriented by instinct toward self-protection over bold action.
What I can observe is the output. Copilot (inarguably Microsoft’s most strategically critical product) has converted just 15 million paid subscribers from a captive base of 450 million Microsoft 365 users. That is 3.3%. When your own customers will not buy what you are selling at scale, it is worth asking whether the product is genuinely solving a problem or simply a feature in search of a use case.
Microsoft’s internal preoccupations do not stay inside the building. I have observed versions of this dynamic before, most vividly when I lived in Brookfield, Wis., in the orbit of GE Healthcare’s then-headquarters. But what I observe in this corridor is of a different magnitude. It is not just politics that dominates the conversation. It is the organization itself — its structure, its hierarchies, its shifting priorities — that has become the primary subject of intellectual energy.
The campus, in a very real sense, has become the product. When navigating the organization becomes more consuming than building anything within it, that is not a criticism of the individuals. It is a diagnosis of the system they are operating inside.
The human capital story no one is writing
There is a dimension to this that the financial press has largely missed, and I raise it because I see it in my community every day… including, in ways I did not anticipate, in my own backyard.
A significant proportion of Microsoft’s engineering talent (and the engineering talent of the broader Seattle tech corridor) consists of H-1B visa holders. These are exceptional professionals: highly educated, deeply skilled, often carrying decade-long career investments in the United States. They have built lives here. Many have children born here. They have been, in many cases, the intellectual engine of the products Microsoft is depending on to compete in the AI era.
That population is operating under a level of personal anxiety that is, in my observation, without modern precedent. Travel advisories from their own employers. A $100,000 petition fee for new visa applications. Proposed rule changes touching birthright citizenship. A policy environment that sends a clear and unambiguous message: your presence here is conditional, negotiable, and subject to revision without notice.
The behavioral consequence of that anxiety is not visible in a quarterly earnings report. But it is real and consequential. People operating under existential personal uncertainty do not take professional risks. They do not champion the bold new initiative. They do not volunteer for the high-visibility project that could fail. They execute reliably on what already exists and protect their position. In an organization that already has a cultural predisposition toward risk aversion, this compounds the pathology in ways that will show up — perhaps not this quarter, but in the product decisions made over the next eighteen months.
The effects are visible beyond the campus walls. Conversations with real estate professionals in this corridor tell a consistent story: demand from this community, which has historically been among the most financially capable buyers in the region, has softened measurably. Not because the finances have changed, but because the horizon has. When you are uncertain whether your visa will be renewed, or whether your children’s citizenship status may be revisited, you do not buy a house.
The softening of demand is not merely an abstraction for those of us who live here. But the more significant consequence is not measured in property values. It is measured in the quality of risk-taking inside those campuses. And risk-taking is precisely what Microsoft needs most right now.
The case for optimism, and why it requires more than patience
None of this is to suggest Microsoft is broken beyond repair. Betting against Microsoft has historically been an enterprise for the foolhardy. The balance sheet remains stellar. The enterprise relationships are genuinely extraordinary. Ripping out Azure, Teams, and the M365 stack is not a decision any CIO makes lightly. The installed-base moat is real, and should not be underestimated by anyone, least of all an operations consultant from the suburbs.
What I would offer, more modestly, is this: the bull case requires more than a great balance sheet and sticky products. It requires an organization capable of genuine innovation at speed. Which in turn requires a culture that rewards risk, retains its most creative talent, and executes with urgency. Whether Microsoft can summon those qualities at this particular moment is a question I cannot answer with conviction.
What I can say is that the market, which is considerably more qualified than I am, appears to be asking the same question. The valuation has compressed to levels not seen in a decade, briefly falling below the S&P 500 for the first time in a generation. That is not the posture of a market betting with conviction that the answer is yes.
Perhaps it should be. I honestly don’t know. What I do know is that the signals visible from outside the building — from the neighborhood, from weekend gatherings, from the casual conversations — are worth paying attention to. They usually are.