Standard RAG pipelines break when enterprises try to use them for long-term, multi-session LLM agent deployments. This is a critical limitation as demand for persistent AI assistants grows.
xMemory, a new technique developed by researchers at King’s College London and The Alan Turing Institute, solves this by organizing conversations into a searchable hierarchy of semantic themes.
Experiments show that xMemory improves answer quality and long-range reasoning across various LLMs while cutting inference costs. According to the researchers, on some tasks it cuts token usage per query from over 9,000 with existing systems to roughly 4,700.
For real-world enterprise applications like personalized AI assistants and multi-session decision support tools, this means organizations can deploy more reliable, context-aware agents capable of maintaining coherent long-term memory without blowing up computational expenses.
RAG wasn’t built for this
In many enterprise LLM applications, a critical expectation is that these systems will maintain coherence and personalization across long, multi-session interactions. To support this long-term reasoning, one common approach is to use standard RAG: store past dialogues and events, retrieve a fixed number of top matches based on embedding similarity, and concatenate them into a context window to generate answers.
However, traditional RAG is built for large databases where the retrieved documents are highly diverse. The main challenge is filtering out entirely irrelevant information. An AI agent’s memory, by contrast, is a bounded and continuous stream of conversation, meaning the stored data chunks are highly correlated and frequently contain near-duplicates.
To understand why simply increasing the context window doesn’t work, consider how standard RAG handles a concept like citrus fruit.
Imagine a user has had many conversations saying things like “I love oranges,” “I like mandarins,” and separately, other conversations about what counts as a citrus fruit. Traditional RAG may treat all of these as semantically close and keep retrieving similar “citrus-like” snippets.
“If retrieval collapses onto whichever cluster is densest in embedding space, the agent may get many highly similar passages about preference, while missing the category facts needed to answer the actual query,” Lin Gui, co-author of the paper, told VentureBeat.
A common fix for engineering teams is to apply post-retrieval pruning or compression to filter out the noise. These methods assume that the retrieved passages are highly diverse and that irrelevant noise patterns can be cleanly separated from useful facts.
This approach falls short in conversational agent memory because human dialogue is “temporally entangled,” the researchers write. Conversational memory relies heavily on co-references, ellipsis, and strict timeline dependencies. Because of this interconnectedness, traditional pruning tools often accidentally delete important bits of a conversation, leaving the AI without vital context needed to reason accurately.
Naive RAG vs structured memory (source: arXiv)
Decoupling the stream, aggregating the themes
To overcome these limitations, the researchers propose a shift in how agent memory is built and searched, which they describe as “decoupling to aggregation.”
Instead of matching user queries directly against raw, overlapping chat logs, the system organizes the conversation into a hierarchical structure. First it decouples the conversation stream into distinct, standalone semantic components. These individual facts are then aggregated into a higher-level structural hierarchy of themes.
When the AI needs to recall information, it searches top-down through the hierarchy, going from themes to semantics and finally to raw snippets. This approach avoids redundancy. If two dialogue snippets have similar embeddings, the system is unlikely to retrieve them together if they have been assigned to different semantic components.
For this architecture to succeed, it must balance two vital structural properties. The semantic components must be sufficiently differentiated to prevent the AI from retrieving redundant data. At the same time, the higher-level aggregations must remain semantically faithful to the original context to ensure the model can craft accurate answers.
A four-level hierarchy that shrinks the context window
The researchers developed xMemory, a framework that combines structured memory management with an adaptive, top-down search strategy.
xMemory continuously organizes the raw stream of conversation into a structured, four-level hierarchy. At the base are the raw messages, which are first summarized into contiguous blocks called “episodes.” From these episodes, the system distills reusable facts as semantics that disentangle the core, long-term knowledge from repetitive chat logs. Finally, related semantics are grouped together into high-level themes to make them easily searchable.
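As a purely illustrative sketch, the four levels could be modeled as nested records with provenance links between layers. The class and field names below are assumptions for illustration, not xMemory's actual schema:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a four-level memory hierarchy; the class and
# field names are illustrative, not xMemory's actual schema.

@dataclass
class Message:              # level 1: a raw conversation turn
    role: str
    text: str

@dataclass
class Episode:              # level 2: summary of a contiguous message block
    summary: str
    messages: list          # provenance: the raw turns it covers

@dataclass
class Semantic:             # level 3: a distilled, reusable fact
    fact: str
    episodes: list          # provenance: where the fact was learned

@dataclass
class Theme:                # level 4: a searchable group of related facts
    label: str
    semantics: list = field(default_factory=list)
```

Each level keeps links to the level below it, which is what lets a retriever drill down from a theme all the way to the raw messages behind a fact.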
xMemory architecture (source: arXiv)
xMemory uses a special objective function to constantly optimize how it groups these items. This prevents categories from becoming too bloated, which slows down search, or too fragmented, which weakens the model’s ability to aggregate evidence and answer questions.
When it receives a prompt, xMemory performs a top-down retrieval across this hierarchy. It starts at the theme and semantic levels, selecting a diverse, compact set of relevant facts. This is crucial for real-world applications where user queries often require gathering descriptions across multiple topics or chaining connected facts together for complex, multi-hop reasoning.
Once it has this high-level skeleton of facts, the system controls redundancy through what the researchers call “Uncertainty Gating.” It only drills down to pull the finer, raw evidence at the episode or message level if that specific detail measurably decreases the model’s uncertainty.
“Semantic similarity is a candidate-generation signal; uncertainty is a decision signal,” Gui said. “Similarity tells you what is nearby. Uncertainty tells you what is actually worth paying for in the prompt budget.” It stops expanding when it detects that adding more detail no longer helps answer the question.
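A minimal sketch of what this uncertainty-gated, top-down search might look like. The similarity and uncertainty functions, the data layout, and the gate threshold are all toy stand-ins invented here, not the paper's implementation:

```python
# Toy sketch of top-down retrieval with "Uncertainty Gating".
# similarity() and uncertainty() are illustrative stand-ins.

def similarity(query, text):
    # crude word-overlap score in place of embedding similarity
    q, t = set(query.lower().split()), set(text.lower().split())
    return len(q & t) / max(len(q), 1)

def uncertainty(context):
    # stand-in signal: the fewer facts gathered, the higher the uncertainty
    return 1.0 / (1 + len(context))

def retrieve(query, themes, top_k=2, gate=0.04):
    # Step 1: rank themes, then take the best facts from each relevant theme
    ranked = sorted(themes, key=lambda t: similarity(query, t["label"]), reverse=True)
    context = []
    for theme in ranked[:top_k]:
        facts = sorted(theme["semantics"], key=lambda s: similarity(query, s["fact"]), reverse=True)
        context.extend(f["fact"] for f in facts[:top_k])
    # Step 2: drill down to raw episodes only while doing so still
    # reduces uncertainty by more than the gate threshold
    for theme in ranked[:top_k]:
        for sem in theme["semantics"]:
            for episode in sem.get("episodes", []):
                if uncertainty(context) - uncertainty(context + [episode]) > gate:
                    context.append(episode)
    return context
```

Given themes such as "citrus fruit preferences" and "travel plans," a query about citrus pulls the high-level facts first, then expands into raw dialogue only until the uncertainty gain drops below the gate.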
What are the alternatives?
Existing agent memory systems generally fall into two structural categories: flat designs and structured designs. Both suffer from fundamental limitations.
Flat approaches such as MemGPT log raw dialogue or minimally processed traces. This captures the conversation but accumulates massive redundancy and increases retrieval costs as the history grows longer.
Structured systems such as A-MEM and MemoryOS try to solve this by organizing memories into hierarchies or graphs. However, they still rely on raw or minimally processed text as their primary retrieval unit, often pulling in extensive, bloated contexts. These systems also depend heavily on LLM-generated memory records that have strict schema constraints. If the AI deviates slightly in its formatting, it can cause memory failure.
xMemory addresses these limitations through its optimized memory construction scheme, hierarchical retrieval, and dynamic restructuring of its memory as it grows larger.
When to use xMemory
For enterprise architects, knowing when to adopt this architecture over standard RAG is critical. According to Gui, “xMemory is most compelling where the system needs to stay coherent across weeks or months of interaction.”
Customer support agents, for instance, benefit greatly from this approach because they must remember stable user preferences, past incidents, and account-specific context without repeatedly pulling up near-duplicate support tickets. Personalized coaching is another ideal use case, requiring the AI to separate enduring user traits from episodic, day-to-day details.
Conversely, if an enterprise is building an AI to chat with a repository of files, such as policy manuals or technical documentation, “a simpler RAG stack is still the better engineering choice,” Gui said. In those static, document-centric scenarios, the corpus is diverse enough that standard nearest-neighbor retrieval works perfectly well without the operational overhead of hierarchical memory.
The write tax is worth it
xMemory cuts the latency bottleneck associated with the LLM’s final answer generation. In standard RAG systems, the LLM is forced to read and process a bloated context window full of redundant dialogue. Because xMemory’s precise, top-down retrieval builds a much smaller, highly targeted context window, the reader LLM spends far less compute time analyzing the prompt and generating the final output.
In their experiments on long-context tasks, both open and closed models equipped with xMemory outperformed other baselines, using considerably fewer tokens while increasing task accuracy.
xMemory increases performance on different tasks while reducing token costs (source: arXiv)
However, this efficient retrieval comes with an upfront cost. For an enterprise deployment, the catch with xMemory is that it trades a massive read tax for an upfront write tax. While it ultimately makes answering user queries faster and cheaper, maintaining its sophisticated architecture requires substantial background processing.
Unlike standard RAG pipelines, which cheaply dump raw text embeddings into a database, xMemory must execute multiple auxiliary LLM calls to detect conversation boundaries, summarize episodes, extract long-term semantic facts, and synthesize overarching themes.
Furthermore, xMemory’s restructuring process adds additional computational requirements as the AI must curate, link, and update its own internal filing system. To manage this operational complexity in production, teams can execute this heavy restructuring asynchronously or in micro-batches rather than synchronously blocking the user’s query.
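One way to sketch that micro-batch pattern: buffer raw messages on the cheap synchronous path and run the expensive processing in batches. The `MicroBatchWriter` class and its stand-in `summarize` function are hypothetical, not part of the xMemory codebase:

```python
# Hedged sketch of deferring write-side memory maintenance into
# micro-batches so it never blocks the query path. MicroBatchWriter and
# its stand-in summarize() are hypothetical, not xMemory's actual API.

class MicroBatchWriter:
    def __init__(self, batch_size=4, summarize=None):
        self.batch_size = batch_size
        # summarize() stands in for the auxiliary LLM calls (boundary
        # detection, episode summarization, fact extraction)
        self.summarize = summarize or (lambda msgs: " | ".join(msgs))
        self.pending = []       # raw messages awaiting processing
        self.episodes = []      # processed write-side output

    def append(self, message):
        # cheap synchronous path: just buffer the raw text
        self.pending.append(message)
        if len(self.pending) >= self.batch_size:
            self.flush()

    def flush(self):
        # expensive path: in production this would run on a background
        # worker, so a user query never waits on it
        if self.pending:
            self.episodes.append(self.summarize(self.pending))
            self.pending = []
```

The design choice is the same one the researchers describe: the user-facing read path stays fast because the write tax is paid in the background, a batch at a time.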
For developers eager to prototype, the xMemory code is publicly available on GitHub under an MIT license, making it viable for commercial uses. If you are trying to implement this in existing orchestration tools like LangChain, Gui advises focusing on the core innovation first: “The most important thing to build first is not a fancier retriever prompt. It is the memory decomposition layer. If you get only one thing right first, make it the indexing and decomposition logic.”
Retrieval isn’t the last bottleneck
While xMemory offers a powerful solution to today’s context-window limitations, it clears the path for the next generation of challenges in agentic workflows. As AI agents collaborate over longer horizons, simply finding the right information won’t be enough.
“Retrieval is a bottleneck, but once retrieval improves, these systems quickly run into lifecycle management and memory governance as the next bottlenecks,” Gui said. Navigating how data should decay, handling user privacy, and maintaining shared memory across multiple agents is exactly “where I expect a lot of the next wave of work to happen,” he said.
Brave "has introduced Brave Origin, a stripped-down version of its browser that removes built-in monetization features like Rewards and other extras tied to its business model," writes Slashdot reader BrianFagioli.
The stripped-down browser is available either as a separate browser download or as an upgrade to the existing Brave install, unlocked through a one-time purchase that can be activated across multiple devices. The idea is simple on paper: pay once, and you get a cleaner, more minimal browsing experience without the add-ons that fund Brave’s ecosystem. What makes the move unusual is the pricing model itself. While paying to support a browser is not controversial, charging users specifically to remove features raises questions about whether those additions are seen as value or clutter.
The situation gets even stranger on Linux, where Brave Origin is reportedly available at no cost, creating an uneven experience across platforms and leaving some users wondering why they are being asked to pay for something others get for free.
A new Quordle puzzle appears at midnight each day for your time zone – which means that some people are always playing ‘today’s game’ while others are playing ‘yesterday’s’. If you’re looking for Sunday’s puzzle instead then click here: Quordle hints and answers for Sunday, April 19 (game #1546).
Quordle was one of the original Wordle alternatives and is still going strong now more than 1,400 games later. It offers a genuine challenge, though, so read on if you need some Quordle hints today – or scroll down further for the answers.
Enjoy playing word games? You can also check out my NYT Connections today and NYT Strands today pages for hints and answers for those puzzles, while Marc’s Wordle today column covers the original viral word game.
SPOILER WARNING: Information about Quordle today is below, so don’t read on if you don’t want to know the answers.
Quordle today (game #1547) – hint #1 – Vowels
How many different vowels are in Quordle today?
• The number of different vowels in Quordle today is 4*.
* Note that by vowel we mean the five standard vowels (A, E, I, O, U), not Y (which is sometimes counted as a vowel too).
Quordle today (game #1547) – hint #2 – repeated letters
Do any of today’s Quordle answers contain repeated letters?
• The number of Quordle answers containing a repeated letter today is 2.
Quordle today (game #1547) – hint #3 – uncommon letters
Do the letters Q, Z, X or J appear in Quordle today?
• Yes. One of Q, Z, X or J appears among today’s Quordle answers.
What letters do today’s Quordle answers start with?
• Q
• T
• S
• E
Right, the answers are below, so DO NOT SCROLL ANY FURTHER IF YOU DON’T WANT TO SEE THEM.
Quordle today (game #1547) – the answers
(Image credit: Merriam-Webster)
The answers to today’s Quordle, game #1547, are…
A really tough game this one, and it took me a good deal longer to complete than usual.
Getting the rare letter Q ended up being the easy part as the remaining words had a few possibilities. Fortunately I got away with it, making just one mistake — but it felt lucky.
Daily Sequence today (game #1547) – the answers
(Image credit: Merriam-Webster)
The answers to today’s Quordle Daily Sequence, game #1547, are…
A new NYT Strands puzzle appears at midnight each day for your time zone – which means that some people are always playing ‘today’s game’ while others are playing ‘yesterday’s’. If you’re looking for Sunday’s puzzle instead then click here: NYT Strands hints and answers for Sunday, April 19 (game #777).
Strands is the NYT’s latest word game after the likes of Wordle, Spelling Bee and Connections – and it’s great fun. It can be difficult, though, so read on for my Strands hints.
Want more word-based fun? Then check out my NYT Connections today and Quordle today pages for hints and answers for those games, and Marc’s Wordle today page for the original viral word game.
SPOILER WARNING: Information about NYT Strands today is below, so don’t read on if you don’t want to know the answers.
NYT Strands today (game #778) – hint #1 – today’s theme
What is the theme of today’s NYT Strands?
• Today’s NYT Strands theme is… Gloriously glaring!
NYT Strands today (game #778) – hint #2 – clue words
Play any of these words to unlock the in-game hints system.
SHEATH
CRAG
CHEAT
TOTAL
LINT
MILE
NYT Strands today (game #778) – hint #3 – spangram letters
How many letters are in today’s spangram?
• The spangram has 13 letters.
NYT Strands today (game #778) – hint #4 – spangram position
What are two sides of the board that today’s spangram touches?
First side: bottom, 3rd column
Last side: top, 4th column
Right, the answers are below, so DO NOT SCROLL ANY FURTHER IF YOU DON’T WANT TO SEE THEM.
NYT Strands today (game #778) – the answers
(Image credit: New York Times)
The answers to today’s Strands, game #778, are…
GLINT
GLITTER
GLISTEN
GLEAM
GLOW
GLIMMER
SPANGRAM: CATCHTHELIGHT
My rating: Easy
My score: Perfect
The theme was a little bit confusing initially, but after spotting GLINT and GLITTER I understood that every light-associated word we were searching for began with the letter G.
There was no place then for shimmer or sparkle in this gaggle of G words.
The spangram was harder to spot, but with “light” not featured among the game words I worked backwards to find “catch” and then CATCHTHELIGHT.
Yesterday’s NYT Strands answers (Sunday, April 19, game #777)
ADJUST
MODIFY
TWEAK
REFINE
IMPROVE
ALTER
SPANGRAM: THEREIFIXEDIT
What is NYT Strands?
Strands is the NYT’s not-so-new-any-more word game, following Wordle and Connections. It’s now a fully fledged member of the NYT’s games stable, has been running for over a year, and can be played on the NYT Games site on desktop or mobile.
I’ve got a full guide to how to play NYT Strands, complete with tips for solving it, so check that out if you’re struggling to beat it each day.
European Commission President Ursula von der Leyen says the app is technically ready and will be available to citizens soon.
The European Commission yesterday (15 April) unveiled a digital age verification app aimed at shielding children from harmful content online, with European Commission president Ursula von der Leyen declaring there are “no more excuses” for platforms that fail to act.
Announcing the tool in Brussels on Wednesday (15 April), von der Leyen painted a stark picture of the risks children face in the digital world. “One child in six is bullied online. One child in eight is bullying another child online,” she said, warning that social media platforms use “highly addictive designs” that damage young minds and leave children vulnerable to predators.
Users set up the app using a passport or ID card, after which they can confirm their age anonymously. The free app, which the Commission says is technically ready and will soon be available to citizens, allows users to verify their age when accessing online platforms “without revealing any other personal data”, according to von der Leyen. “Users cannot be tracked,” von der Leyen stressed, adding that the app is fully open source and compatible with any device.
Drawing a comparison with the EU’s Covid certificate – adopted in record time and used across 78 countries – von der Leyen said the age verification tool follows “the same principles, the same model.” Seven member states, including France, Italy, Spain and Ireland, are already planning to integrate the app into their national digital wallets.
The announcement comes ahead of the second meeting of the Commission’s Special Panel on Children’s Safety Online, which is due to deliver its recommendations by summer. Von der Leyen was unambiguous about the Commission’s direction of travel on enforcement. “Children’s rights in the European Union come before commercial interest. And we will make sure they do.”
Platforms were put on notice that voluntary compliance alone will not suffice. “We will have zero tolerance for companies that do not respect our children’s rights,” she said, adding that the Commission is “moving ahead with full speed and determination on the enforcement of our European rules”.
Consumer Intelligence Research Partners estimates the Mac Mini accounted for roughly 3% of Apple’s US Mac unit sales last year. That position has shifted quickly.
Jeff Bezos’ space company Blue Origin successfully re-used one of its New Glenn rockets for the first time ever on Sunday, but the company failed at its primary mission: delivering a communications satellite to orbit for customer AST SpaceMobile.
AST SpaceMobile issued a statement Sunday afternoon saying the upper stage of the New Glenn rocket placed the BlueBird 7 satellite into an orbit that was “lower than planned.” The satellite successfully separated from the rocket and powered on, the company said, but the altitude is too low “to sustain operations,” and the satellite will now have to be de-orbited, left to burn up in Earth’s atmosphere.
The cost of the lost satellite is covered by AST SpaceMobile’s insurance policy, according to the company, and successor BlueBird satellites will be completed in around a month. AST SpaceMobile has launch contracts beyond Blue Origin, and the company said it expects to launch 45 more satellites by the end of 2026.
But this represents the first major failure for Blue Origin’s New Glenn program, which only made its first flight in January 2025 after more than a decade in development. This was the second mission where New Glenn carried a customer payload to space, after launching twin spacecraft bound for Mars on behalf of NASA last November. The company did not immediately respond to a request for comment.
The apparent failure of New Glenn’s second stage could have wider implications beyond Blue Origin’s near-term commercial ambitions. The company is pushing hard to become one of the main launch providers for NASA’s Artemis missions to the moon and beyond. The space agency — and the Trump administration — has put pressure on Blue Origin and SpaceX to be able to put landers on the moon by the end of President Donald Trump’s second term, before advancing to returning humans to the lunar surface.
Blue Origin CEO Dave Limp has even said his company “will move heaven and Earth” to help NASA get back to the moon faster.
Blue Origin recently completed testing its first version of its own lunar lander, which the company is expected to try and launch at some point this year (without any crew). Blue Origin had suggested last year that it was considering launching this lander on New Glenn’s third mission, but ultimately decided to launch the AST SpaceMobile satellite instead.
The third New Glenn launch seemed to start just fine on Sunday, with the mega-rocket lifting off at 7:35 a.m. local time from Cape Canaveral, Florida. It was the first time Blue Origin re-used a previously-flown New Glenn booster — the same one that flew during New Glenn’s second mission. Roughly 10 minutes after liftoff, the booster came back down and landed on a drone ship in the ocean, just like it had last November. Jeff Bezos even shared drone footage of the booster’s landing on X, the social media site owned by his rival Elon Musk. (Musk offered congratulations.)
Roughly two hours after the launch, though, Blue Origin announced in its own post that the New Glenn upper stage placed the AST SpaceMobile satellite in an “off-nominal orbit.” The company has not released any more information since that post.
Blue Origin spent a long time developing New Glenn, and it has been taken as a sign of confidence in that process that the company decided to start launching commercial payloads during these early missions. By comparison, SpaceX has spent the last few years flying test versions of its massive Starship, but has stuck with using dummy payloads as it works out the rocket’s kinks.
SpaceX did lose payloads deeper into its Falcon 9 program. In 2015, on the 19th Falcon 9 mission, the rocket blew up mid-flight and lost an entire International Space Station cargo spacecraft. In 2016, a Falcon 9 exploded on the launch pad during testing, causing the loss of an internet satellite for Meta.
A new NYT Connections puzzle appears at midnight each day for your time zone – which means that some people are always playing ‘today’s game’ while others are playing ‘yesterday’s’. If you’re looking for Sunday’s puzzle instead then click here: NYT Connections hints and answers for Sunday, April 19 (game #1043).
Good morning! Let’s play Connections, the NYT’s clever word game that challenges you to group answers in various categories. It can be tough, so read on if you need Connections hints.
What should you do once you’ve finished? Why, play some more word games of course. I’ve also got daily Strands hints and answers and Quordle hints and answers articles if you need help for those too, while Marc’s Wordle today page covers the original viral word game.
SPOILER WARNING: Information about NYT Connections today is below, so don’t read on if you don’t want to know the answers.
NYT Connections today (game #1044) – today’s words
(Image credit: New York Times)
Today’s NYT Connections words are…
CYBER
HOURGLASS
BLUE
CLOUD
ROD
WEB
MANIC
NET
PUFF
VENOM
HOOK
BILLOW
BAIT
PLUME
MEATLESS
CANNIBALISM
NYT Connections today (game #1044) – hint #1 – group hints
What are some clues for today’s NYT Connections groups?
YELLOW: A bunch of fumes
GREEN: Used for angling
BLUE: Linked to an infamous arachnid
PURPLE: Start the week
Need more clues?
We’re firmly in spoiler territory now, but read on if you want to know what the four theme answers are for today’s NYT Connections puzzles…
NYT Connections today (game #1044) – hint #2 – group answers
What are the answers for today’s NYT Connections groups?
YELLOW: MASS OF SMOKE
GREEN: FISHING GEAR
BLUE: ASSOCIATED WITH BLACK WIDOW SPIDERS
PURPLE: _____ MONDAY
Right, the answers are below, so DO NOT SCROLL ANY FURTHER IF YOU DON’T WANT TO SEE THEM.
NYT Connections today (game #1044) – the answers
(Image credit: New York Times)
The answers to today’s Connections, game #1044, are…
YELLOW: MASS OF SMOKE BILLOW, CLOUD, PLUME, PUFF
GREEN: FISHING GEAR BAIT, HOOK, NET, ROD
BLUE: ASSOCIATED WITH BLACK WIDOW SPIDERS CANNIBALISM, HOURGLASS, VENOM, WEB
PURPLE: _____ MONDAY BLUE, CYBER, MANIC, MEATLESS
My rating: Easy
My score: Perfect
A bit of music knowledge got me to my 33rd “Purple First” thanks to Blue Monday by New Order and Prince’s Manic Monday, made famous by The Bangles. CYBER I was confident about, but MEATLESS I went with purely because of the alliteration.
This actually seemed the easiest group, not that I’m complaining.
Elsewhere, nature knowledge may have helped me get ASSOCIATED WITH BLACK WIDOW SPIDERS, but I spotted the more obvious yellow and green groups first.
Yesterday’s NYT Connections answers (Sunday, April 19, game #1043)
BLUE: CARDS IN TEXAS HOLD ‘EM FLOP, HOLE, RIVER, TURN
PURPLE: LAST WORDS OF CANDY BRANDS IN THE SINGULAR CAP, DUD, KID, MINT
What is NYT Connections?
NYT Connections is one of several increasingly popular word games made by the New York Times. It challenges you to find groups of four items that share something in common, and each group has a different difficulty level: green is easy, yellow a little harder, blue often quite tough and purple usually very difficult.
On the plus side, you don’t technically need to solve the final one, as you’ll be able to answer that one by a process of elimination. What’s more, you can make up to four mistakes, which gives you a little bit of breathing room.
It’s a little more involved than something like Wordle, however, and there are plenty of opportunities for the game to trip you up. For instance, watch out for homophones and other wordplay that could disguise the answers.
It’s playable for free via the NYT Games site on desktop or mobile.
There are many drivers who often bemoan the very existence of traffic lights. Despite incurring the daily ire of commuters who are running late for work, even those haters have to acknowledge the traffic signal’s invaluable function in helping to keep our roadways safe.
Traffic signals have, of course, evolved considerably since they were first pressed into use in the late-1860s, with the first electric lights coming into play sometime around 1912. It wasn’t long before those signals started using colored lights, which have since evolved into the red, yellow, and green configuration we are all too familiar with today. Even as safety remains the primary purpose of the hundreds of thousands of traffic lights currently employed throughout the United States, some theorize that the life-saving devices may one day cease to exist.
Until that fateful day, getting stuck at red lights when you’re in a rush will remain a constant source of commuter frustration. On some occasions, however, a stream of greens opens up on the road ahead like the parting of the Red Sea. That stream of green has a name, with researchers dubbing it the “Green Wave.” While they may seem rare, the “Green Wave” is a common occurrence in certain parts of the world, and it serves a very important purpose.
What is the purpose of a traffic light Green Wave?
Brasil2/Getty Images
While it might seem like a weird sort of karmic intervention, that “Green Wave” of traffic lights was actually programmed for a specific purpose by whatever government organization is in charge of maintaining the traffic signals in your city, state or township. They are, however, far more commonly utilized on high-volume roads in urban areas. The purpose of a “Green Wave” is to improve the flow of traffic in those areas, particularly during times with increased traffic volume.
At its core, the concept is simple: keep traffic flowing during peak volume times by reducing the number of stops at consecutive traffic signals. To enact a “Green Wave,” planners and engineers synchronize the traffic lights in a congested area so they turn green in a coordinated sequence, timed to the traffic’s travel speed, and stay green for a period that ensures a steady flow in one direction. The method is, naturally, easier to manage on one-way streets with no turning lanes, though some cities have attempted to aid traffic flow further by outlawing left turns in metropolitan areas. Some have even taken to banning right turns too.
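The timing behind a green wave reduces to simple arithmetic: each signal starts its green phase after the expected travel time from the previous intersection. A toy calculation, with made-up spacing and speed values:

```python
# Back-of-the-envelope green-wave timing: each signal starts its green
# phase after the travel time from the previous intersection. The
# distances and progression speed below are made-up example values.

def green_offsets(distances_m, speed_kmh):
    """Green-start offsets (seconds) for each signal along a corridor,
    relative to the first signal, for traffic moving at speed_kmh."""
    speed_ms = speed_kmh / 3.6          # km/h -> m/s
    offsets, position = [0.0], 0.0
    for gap in distances_m:             # distance to the next signal
        position += gap
        offsets.append(position / speed_ms)
    return offsets

# Four signals spaced 300 m apart with a 50 km/h progression speed:
# each light goes green about 21.6 s after the one before it.
print(green_offsets([300, 300, 300], 50))
```

A driver holding the progression speed then meets every light just as it turns green, which is exactly the “parting of the Red Sea” effect described above.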
In any case, on top of aiding the flow of traffic in congested areas, “Green Wave” traffic patterns are also believed to have a positive effect on the environment. After all, the reduction in stop-and-go traffic also reduces a vehicle’s idling time, which, in turn, leads to reduced greenhouse gas emissions.
Digit is seen performing deadlifts with a 65-pound weight in the center of a lab. Agility Robotics shared the video a few days ago, and the robot maintains a fairly steady balance, completing the task from beginning to end. One commenter mentions that the new version can lift significantly more weight than the previous one, while another jokes about how it can run all day without stopping.
The engineers designed the test so that Digit had to work harder than usual. Every additional pound it lifts forces the robot to adjust its entire body simultaneously: arms, legs, torso, everything. The system must keep the weight centered and avoid tipping over, so all of its joints have to work together, and the actuators must withstand repeated load without breaking down. The video shows the robot grasping the weight, rising up, then placing it down again, over and over, in a standard indoor space built for people.
All of the training happens in simulation: before Digit touches a real weight, engineers create a digital copy of the task in a virtual world and rehearse how the weight might shift. The policy learns to keep grip pressure constant, with no slipping or lowering, and to register any change in the robot’s balance almost instantly. Only after it has mastered the lift in simulation, without complications, is the policy transferred to the real robot. That is why the real lift looks natural: the robot has already handled every potential variation thousands of times.
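As a loose illustration of that simulate-then-transfer loop (a toy stand-in, not Agility's actual pipeline; every name and number here is invented), one can imagine searching in simulation for the feedback gain that best keeps the center of mass centered when the weight shifts:

```python
import random

def train_balance_policy(episodes=1000, gain_grid=(0.5, 1.0, 2.0, 4.0)):
    """Toy sketch: pick the feedback gain that best keeps a simulated
    center-of-mass (CoM) offset near zero under random weight shifts."""
    def episode_error(gain):
        com = 0.0       # CoM offset from the support center (meters)
        total = 0.0
        for _ in range(100):
            com += random.uniform(-0.02, 0.02)  # weight shifts the CoM
            com -= gain * com * 0.1             # whole-body correction step
            total += abs(com)                   # accumulate balance error
        return total
    # evaluate each candidate gain over many simulated episodes
    scores = {g: sum(episode_error(g) for _ in range(episodes // len(gain_grid)))
              for g in gain_grid}
    return min(scores, key=scores.get)  # gain with the lowest total error
```

Only the winning policy (here, just a gain) would then be deployed on the real robot, which is the essence of the sim-to-real approach the article describes.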
Engineers chose deadlifts for the test because the movement requires complete body control; a simple arm raise would not put the hardware under the same level of stress. By incorporating the weight into the simulation loop, the team can handle balance disturbances that a pre-programmed script could not. As a result, Digit lifts consistently, with no wobbling or resets, and the method can be adapted to other objects or heavier loads in future tests.
Agility built Digit to manage long, repetitive jobs that wear people out, such as factory and warehouse work that involves squeezing into tight spaces, picking up oddly shaped goods, and continuing without a break. The deadlift test demonstrates Digit’s ability to lift weight on ordinary floors while remaining steady, which suits picking up boxes, carrying tools, and stacking things in human-designed spaces.
It also illustrates how far the field has come in teaching robots to perform physical tasks. Whole-body synchronization was once a nightmare of hand-tuned code for each joint angle; now engineers can train a policy in simulation that adapts on the fly. Digit detects the weight with its sensors, corrects itself in real time, and completes the lift without assistance, and the hardware keeps up because the actuators and joints were built to endure repeated loading.
In October and through November, America’s EV sales reached their lowest point since 2022 after government subsidies expired, Time recalls. “But first-quarter data for 2026 shows that used EV sales were 12% higher than the same time last year and 17% higher than the previous quarter.”
“One factor likely helping push buyers toward these cars is high gas prices, which recently topped $4.00 a gallon for the first time in four years,” they write — but it’s not just in the U.S. Instead, they argue the conflict “is driving a global surge of interest in electric vehicles…”
In the U.K., electric car sales reached a record high, with 86,120 vehicles sold in March… The French online used-car retailer Aramisauto reported its share of EV sales nearly doubled from February 16 to March 9, rising to 12.7% from 6.5%, while gasoline-powered models dropped to 28% of sales from 34%, and diesel models dropped to 10% from 14%. Germany’s largest online car market, mobile.de, told Reuters that the share of EV searches on its website has tripled since the start of March, from 12% to 36%, with car dealers receiving 66% more enquiries for used EVs than in February.
South Korea reported that registrations for electric vehicles more than doubled in March compared to the prior year, due in part to rising fuel prices and government subsidies… In New Zealand, more than 1,000 EVs were registered in the week that ended on March 22, close to double the week before, making it the country’s biggest week for electric vehicle registrations since the end of 2023, according to the country’s Transport Minister, Chris Bishop.
In America, Bloomberg also reports 605 high-speed EV charging stations switched on in just the first three months of 2025, “a 34% increase over the year-earlier period,” according to their analysis of federal data. A data platform focused on EV infrastructure tells Bloomberg that speedier and more reliable chargers are convincing more drivers to go electric and use public plugs.