My Yamaha XSR900 is a real hoot. It's powerful, it sounds amazing, and it's properly fast. But it's getting old. I bought my 2017 XSR from the first owner a few years ago, and it's got over 20,000 miles on the odometer. In the next few years I'll likely see some serious value drop out of the bike if I continue to tack on the miles, and maintenance costs will continue to rise.
So, it’s time to commit to one of two ideas: Buy a new bike, or keep riding the old one for the foreseeable future. And with so many excellent café-styled bikes on the market now, there’s a lot to choose from. In the last few years, there’s been a pretty significant expansion of the café bike trend. Many manufacturers have leaned into the idea that people like classic, round-headlight styling, but they want it paired with legit performance and modern features.
How I tested these four bikes
To see what was out there, and what stood a chance of replacing my XSR, I hit up a few motorcycle manufacturers and asked what they had in the fleet that fit my needs: Café motorcycle looks, but with modern tech. Fun to ride, but reasonably priced.
The bikes that fit my needs (and were available for testing) are as follows: BMW R 12, Suzuki GSX-8TT, and Kawasaki Z900RS. So, it was a four-bike test to see what could potentially replace a bike I love.
Back-to-back-to-back-to-back. I rode the BMW, Kawasaki, Suzuki, and Yamaha bikes you see here for a few weeks, rotating between each model and familiarizing myself with the controls, quirks, and features. Then, I spent a weekend riding them all on the same canyon routes, about 100 miles at a time, to see how they stacked up on my local roads. I wanted to see what they were like to live with and what sort of fun I could have on each bike — and what it would cost me to upgrade.
Pricing out the rivals
I bought my XSR used, so the price I paid for it isn’t really a fair yardstick by which to judge the other bikes. The current XSR900 is a better starting point, coming in at $11,299 (including $700 destination fee). The modern but classically-styled Kawasaki Z900RS SE has an MSRP of $15,439, while the base trim non-SE model will set you back $13,739.
The BMW R 12 has some serious heritage, and it has a base MSRP of $13,640, but the options on the model you see here brought it up to $17,359. That doesn’t put it completely out of range as a rival of the XSR, but it makes it a reach — still, it was definitely on my list of potential replacements.
The Suzuki GSX-8TT has an MSRP of $11,849. There's also a version called the GSX-8T that comes in slightly lower at $11,349, though it's missing the headlight cowl and the gold wheels.
What I liked about the BMW R 12
The BMW felt extremely well built, using excellent materials everywhere. Every time I swung my leg over the seat, I felt like I was riding an ultra-premium product. Up close, it’s one of the best-looking bikes on the road today. The upfront pricing might scare some people off, but it’s worth the extra cash.
The R 12 is powered by a 1,170cc two-cylinder boxer engine that makes 95 horsepower and 81 lb-ft of torque. The back-and-forth rumble of the flat-twin boxer is utterly unique. At stoplights, the bike rocked back and forth at idle like a child pumping a swing to build momentum before jumping off.
The thick-sidewall tires gave the BMW some small-pothole-absorption capabilities, but the ride was rough over larger road imperfections. Thankfully, the seat is made of thick and forgiving materials, so a long ride doesn’t wear you down much. The Brembo brakes felt excellent — quick to respond, even if the BMW’s weight added some stopping distance.
Stable at speed, and maneuverable for its size, the BMW felt good stitching a few corners together. It’s also plenty low enough that I can flat-foot it while stopped (the seat height is just 29.7 inches), but the low ride height meant it was the only bike of the bunch to scrape during my test.
Things about the R 12 that weren’t so impressive
The R 12’s engine felt so wide that I had to double-check to make sure it didn’t outsize the handlebars. Splitting lanes and fitting into tight spaces felt particularly precarious, based solely on the engine’s large footprint. The BMW is also missing a temp gauge and a fuel gauge, both of which could easily be displayed on the digital readout, but they simply aren’t. Really, there isn’t much in the way of information on the small display, other than RPMs and riding mode.
The single-sided swingarm is an excellent aesthetic, and the paralever brace is a unique suspension setup, but with just 3.5 inches of suspension travel, those large imperfections mentioned earlier can bounce you around a bit. The BMW is also the heaviest of the bunch, with a 500-pound curb weight to throw around.
The BMW's quick shifter is a bit delayed sometimes, too. From the moment my foot requested a shift at the lever to the moment the transmission physically completed it, there was often a one-Mississippi delay.
The Kawasaki Z900RS makes a strong case for itself
With four-cylinder power, the Kawasaki has the smoothest powerband of all the bikes assembled here. Power comes on in a linear and predictable fashion when you twist the throttle. The 948cc inline four-cylinder makes 115 horsepower and 73 lb-ft of torque, which is a match for my XSR, but no matter what scenario I was in, it never felt snappy or scary. The resonance of the Kawasaki’s four-cylinder engine is excellent, too. Aside from my modified Yamaha, the Kawi is the best-sounding bike of the bunch.
The Z900RS’ highly-adjustable Öhlins suspension was also a highlight of the test. The bike turned in with ease, tracked well through corners, and absorbed mid-corner bumps without any detectable disturbance from the seat. The seat was excellent too — perfectly shaped and well-padded for long rides.
Styled after the classic Z1, the Z900RS really looks the part of a café bike. Of the three new bikes in this test, it’s the only one without a proper TFT screen. Instead, it gets a pair of gauges with a small digital readout between them, so it feels a bit more nostalgic, but that does introduce a small issue.
The Z900RS has very few drawbacks
There's not a lot to complain about with the Z900RS. It's mostly competent in the areas where it isn't masterful. The tall mirrors look a bit silly, but that's an easy aftermarket fix. The biggest gripe I have is with the mismatched look and feel of some of the controls. The cruise control buttons and various other handlebar controls feel out of place on such a classic-looking bike.
The small digital readout between the two analog gauges feels squeezed in, with a completely different style than the rest of the bike. The big cruise-control buttons feel the same way. I get it – Kawi has to put some modern tech on this bike, but I’d almost prefer a stripped-down version without those features to make the view forward a bit better. That said, the cruise control did work well during my test, taking away some riding fatigue on open stretches of highway.
A strong entry from the Suzuki GSX-8TT
At just 445 lbs, the GSX-8TT is nimble and light on its feet. It moves extremely well through corners, though some of that is likely attributable to the aftermarket tires fitted to the test bike Suzuki let me ride. The 776cc parallel-twin engine puts out just 82 hp and 57 lb-ft, but it's an excellent fit for this bike, providing torque low in the rev range and enough top-end power for faster maneuvers on the highway. On the highway and between lanes, the GSX-8TT feels narrow. It's thin enough to slice and dice traffic with no issues.
On the highway and over rougher city streets, the Suzuki was unbothered by bumps and cracks in the pavement. The seat, while basic, is comfortable enough for long rides. Even with mid-corner bumps, the GSX-8TT felt stable.
Aesthetically, Suzuki nailed it with the GSX-8TT. The Pearl Matte Shadow Green paint contrasted with the gold wheels is a timeless combination. The small stripes give it a bit of extra flair without looking gaudy, and the lower cowl rounds out the look of a bike that feels modded straight from the factory. For less than $12k, this is one hell of a bike.
The Suzuki GSX-8TT is good, but not perfect
The GSX-8TT was probably my favorite of the three competitors I lined up to potentially replace my Yamaha. Like the XSR, the GSX felt playful and eager to perform. It had a nice combination of modern and classic vibes, without feeling like it was faking its aesthetic. The 5-inch TFT screen was the best of the bunch, with high contrast graphics and a display that didn’t wash out in heavy sunlight (helped by the headlight cowl, no doubt).
Unfortunately, the brakes on the GSX-8TT were the least confidence-inspiring of the bunch. Both the front and rear levers felt a bit spongy, with poor feedback for a bike that felt so impressive otherwise. The bike required much more brake pressure than any of the other three to bring it to a similar stop. It’s possible that this was an issue of boiled brake fluid from a previous rider (these media-loan bikes see some serious abuse), but if that’s the way the bike rides from the factory, it’s something I’d need to address right away.
The joys of a modified bike
It’s always hard to let go of a vehicle you’ve modified, so I figured listing what I like about my bike would help me be a bit more objective. Even before I started messing with it, this Yamaha XSR900 had a rowdy character. The 847cc three-cylinder engine feels like it wants you to wheelie every time you set off. The most aggressive ride modes are twitchy.
Yamaha doesn't list horsepower numbers for the XSR, but according to most sources, it's around 115 hp: still enough to keep up with all the modern bikes on this list (and the new version is only up to 117 hp, according to UK specs). And even before I started doing things like removing the passenger pegs, the XSR900 was light for its class, with a weight of just 430 lbs when stock (the 2026 model weighs just 425 lbs). It makes its way between corners with an urgency that none of the other bikes quite matched. Turn-in is light and immediate with the XSR, which is part of what makes it such a versatile bike.
My XSR could use some updates
Being an older version of the XSR, my bike is missing some modern features. It doesn’t have the modern bike’s TFT screen (it’s just a simple digital readout), nor does it have the larger-displacement version of the CP3 engine, so it’s down on power a bit. Plus, my older XSR is missing a quick shifter. I might eventually install one, but the newest Yamaha has a quick shifter as part of the package.
Having sat on a new XSR, I can confirm it also has a better seat. I love the comfort that the Corbin seat provides on long rides, but it's a bit too wide. Even though it's set at a proper height, the width of the seat makes it harder for me to place my feet flat at a stoplight.
The aftermarket mirrors are great for splitting lanes. I can quickly fold them in, making the bike instantly a few inches narrower for fitting in tight spaces, but they’re small and sometimes hard to see — ah, the trade-offs we make for aesthetics.
The verdict: Best big café bike
Every bike here was extremely good in its own unique way, and all three of the rival bikes gave me inspiration for modifying my own Yamaha. I'll probably be powder-coating my wheels gold and adding some suspension upgrades very soon. But none of the other bikes were so earth-shatteringly good that I wanted to get rid of mine, which probably means the latest XSR900 would win this test too. Of the four, the Kawasaki was the most enjoyable to ride, and the Suzuki presented the best value for money; the BMW felt special, but the riding experience didn't justify its big price tag.
These aren't the only bikes in the class, though. There are all sorts of café-styled bikes available from Triumph and Royal Enfield, and even a few Hondas that could land in the U.S. soon. Maybe it's worth repeating this test with a few of those bikes in the near future (my DMs are open to Triumph and Royal Enfield loans). Did somebody say annual café bike round-up?
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.
The functional replication and actuation of complex structures inspired by nature is a longstanding goal for humanity. Creating such complex structures, combining soft and rigid features, and actuating them with artificial muscles would further our understanding of natural kinematic structures. We printed a biomimetic hand in a single print process, comprising a rigid skeleton, soft joint capsules, tendons, and printed touch sensors.
This is our latest work on a trajectory planning method for floating-base articulated robots, enabling global path searching in complex and cluttered environments.
OmniPlanner is a unified solution for exploration and inspection path planning (as well as target reach) across aerial, ground, and underwater robots. It has been verified through extensive simulations and a multitude of field tests, including in underground mines, ballast water tanks, forests, university buildings, and submarine bunkers.
In the ARISE project, the FZI Research Center for Information Technology and its international partners ETH Zurich, University of Zurich, University of Bern, and University of Basel took a major step toward future lunar missions by testing cooperative autonomous multi-robot teams under outdoor conditions.
'Gugusse and the Automaton' is an 1897 French film by Georges Méliès featuring a humanoid robot nearly as realistically as some of the humanoid promo videos we've seen lately.
Anca Dragan is no stranger to Waymo. She worked with us for six years while also at UC Berkeley and, now, Google DeepMind. Her focus on making AI safer helped Waymo as it launched commercially. In this final episode of our season, Anca describes how her work enables AI agents to work fluently with people, based on human goals and values.
This UPenn GRASP SFI Seminar is by Junyao Shi, on “Unlocking Generalist Robots with Human Data and Foundation Models.”
Building general-purpose robots remains fundamentally constrained by data scarcity and labor-intensive engineering. Unlike vision and language, robotics lacks large, diverse datasets spanning tasks, environments, and embodiments, limiting both scalability and generalization. This talk explores how human data and foundation models trained at scale can help overcome these bottlenecks.
If you’ve spent any time following gaming news in early 2026, you might think the end of Xbox is right around the corner. Between reports of a 32% year-over-year drop in hardware revenue, the sudden departure of longtime Xbox boss Phil Spencer, and wild speculation that Microsoft might pivot the entire gaming division toward AI, the internet has been flooded with dramatic takes about the “death of Xbox.”
But the eulogies are premature. Despite the noise, Xbox still sits on one of the most powerful portfolios in gaming, including Halo, Forza, Gears of War, Call of Duty, Minecraft, and more. Microsoft also has the financial backing, infrastructure, and studio network to remain a major player for decades. The real issue isn’t survival, but identity.
You see, for several years, Xbox leadership pushed an ambitious idea that “every screen is an Xbox.” The strategy expanded the brand through cloud gaming, PC integration, and Game Pass across multiple platforms. While that approach broadened reach, it also created confusion about what Xbox actually is. Now, under the new leadership of Microsoft Gaming CEO Asha Sharma, the company appears to be acknowledging that confusion and attempting a course correction.
Sharma recently confirmed Project Helix, the codename for Xbox’s next-generation hardware, promising a device that will “lead in performance and play your Xbox and PC games.” That announcement alone signals a shift in direction. Xbox isn’t ending, but it is entering a critical rebuilding phase. And if the company wants to return to its former glory, experts and players alike largely agree that three major changes are essential.
1. Nail the execution of Project Helix
One of the biggest challenges Xbox faces today is simple: many players aren’t sure why they should buy an Xbox console anymore.
If the same games appear on PC, and sometimes even on rival platforms, what makes the Xbox console special? That’s where Project Helix could become the most important product Microsoft has released in years. Rumored for a 2027 launch, Helix is expected to be a hybrid system, essentially a powerful AMD-powered console running a “console-ized” version of Windows. The promise is compelling: the simplicity of a traditional console combined with the flexibility of a gaming PC.
Imagine a device that boots straight into a controller-friendly interface but also lets players access platforms like Steam or Epic from the living room. If done right, Helix could blur the line between PC and console in a way no competitor currently offers. But execution will determine everything. Helix must never feel like a desktop computer awkwardly connected to a TV. Instead, it needs to launch into a seamless controller-first experience, like the "Xbox Full Screen Experience" we saw on the ROG Xbox Ally, preserving the plug-and-play simplicity that console players expect.
If Microsoft can successfully merge the PC and console ecosystems without sacrificing ease of use, Helix won't just save Xbox hardware; it could redefine what a console is. Yes, it's likely going to be expensive, with rumors suggesting a price tag that could cross the $1,000 mark. But Xbox could still justify that premium if it delivers on the other two pillars that matter just as much.
2. Let the studios deliver the games
The second major fix is both obvious and unavoidable: Xbox needs more great games, more consistently.
Over the past decade, Microsoft has spent nearly $100 billion acquiring studios, including Bethesda and Activision Blizzard. On paper, that gives Xbox one of the strongest first-party lineups in gaming history. Yet the results have been uneven. Franchises like Halo, Gears of War, and Forza, once the backbone of the platform, have seen long development gaps. Meanwhile, studio closures, layoffs, and shifting corporate priorities have created uncertainty inside Microsoft’s gaming division.
To add insult to injury, when Sharma took over, some players worried that her background in AI-driven tech companies might push Xbox toward algorithm-generated content. Thankfully, she has quickly pushed back on that idea, stating that Microsoft will not "chase short-term efficiency or flood our ecosystem with soulless AI slop." Now the company needs to prove it.
Microsoft now owns some of the most talented developers in the world. What they need most is stability. Fewer shifting mandates, fewer corporate interruptions, and enough time to create the kind of system-defining games that drive entire console generations. Because ultimately, subscriptions and hardware don’t sell themselves. Great games do. The upcoming Forza Horizon 6 is already generating plenty of buzz and appears well on track to be a major success. However, Microsoft will need a steady stream of titles, especially strong exclusives, if it hopes to match the kind of consistent first-party momentum Sony has built on the PlayStation side.
3. Rebuild the culture around Xbox
Finally, there’s one part of the Xbox experience that often gets overlooked: the community culture. For many fans, the Xbox 360 era still feels like the golden age of the platform. Profiles felt personal, avatars actually mattered, and the dashboard felt like a social space where gamers could hang out. It wasn’t just a storefront pushing subscriptions and ads.
Over time, much of that personality has disappeared. Today, the Xbox dashboard is often criticized for feeling cluttered with Game Pass promotions and advertisements. Across communities like Reddit, ResetEra, and Xbox Insider forums, the message from players is clear: bring back the personality. Fans want things like dynamic themes, meaningful achievement rewards, deeper avatar integration, and more ways to personalize the UI so the console feels like their space again.
Players are also asking Xbox to double down on something it once did better than anyone else: game preservation. The Backward Compatibility program was hugely popular, and with Activision Blizzard now under Microsoft’s umbrella, fans want to see classic titles return. If Xbox can become the place where decades of gaming history remain playable on modern hardware, it could turn preservation into one of its biggest strengths.
The road back
Long story short, Xbox isn’t going anywhere anytime soon. The brand still holds enormous influence in the gaming industry, backed by Microsoft’s resources and a massive network of studios and services. However, the platform is at a turning point.
For Xbox to truly thrive again, the solution isn’t chasing every new trend. It’s about focusing on the basics: delivering great games consistently, launching a strong next-generation hardware platform, and reconnecting with the community that built the brand. If Microsoft gets these fundamentals right, the “Xbox is dying” narrative could quickly fade, and the next chapter of Xbox might end up being its most exciting yet.
MSI MEG Vision X AI 13.3-inch touchscreen doubles as a monitoring hub for creatives and professionals
GPU selection dictates performance for gaming, rendering, and professional workloads alike
Lobster-like chassis combines expandability with unconventional aesthetics
MSI has launched the MEG Vision X AI series, a barebones all-in-one PC that combines high-end gaming hardware with a strikingly unconventional design.
The system features a full-size tower measuring 299.3mm wide, 502.7mm deep, and 423.4mm tall, weighing approximately 18.3kg, with a PS3-esque appendage and protrusions that suggest both function and a distinctive aesthetic.
The device includes a 13.3-inch touchscreen intended for system monitoring, quick toggles, or dedicated status displays, allowing creatives to access software shortcuts, monitor rendering progress, or adjust project settings without switching focus from their primary display.
Interactive touchscreen enhances workflow and monitoring
The unique look of this device prompted TechRadar Pro editor Desire Athow to quip that the casing resembled "a lobster that hadn't completely shed its hard exoskeleton to grow." The comparison captures the layered, almost organic appearance of the chassis: a device that is both protective and expandable, housing high-end components while presenting a unique surface.
MSI appears to have embraced this aesthetic to showcase the interactive touchscreen while accommodating a full-size tower structure capable of housing top-tier components.
The device is larger than regular compact all-in-one PCs, suggesting the company prioritizes cooling, power delivery, and expandability over minimalism.
Performance is anchored by Intel’s Core Ultra 7 265K CPU on a Z890 platform, paired with 64GB of DDR5 memory.
GPU options split the series into two clear tiers: a GeForce RTX 5080X configuration at $4,640 and a GeForce RTX 5070 Ti model at $4,082.
MSI indicates that CPU and RAM are consistent across models, meaning buyers make performance choices largely through GPU selection.
This ensures that professional applications like 3D rendering, video editing, and simulation software benefit from dedicated GPU acceleration alongside gaming performance.
The MEG Vision X AI supports both wired and wireless connections, with Intel Killer E5000 5GbE for the former and Wi-Fi 7 plus Bluetooth 5.4 for the latter.
It also includes two Thunderbolt 4 ports, which support fast external storage, docking, or display expansion.
This connectivity allows professionals to attach high-speed NVMe drives or multi-monitor setups, which can streamline workflows for designers, animators, and video editors.
Power is supplied by an 850W 80 PLUS Gold PSU, providing adequate headroom for sustained GPU loads.
Although the primary audience for the device is gamers, its hardware and expandability suggest it could also serve as a versatile platform for creators who require both raw performance and reliable workstation capabilities.
Enterprise AI applications that handle large documents or long-horizon tasks face a severe memory bottleneck. As the context grows longer, so does the KV cache, the area where the model’s working memory is stored.
A new technique developed by researchers at MIT addresses this challenge with a fast compression method for the KV cache. The technique, called Attention Matching, manages to compact the context by up to 50x with very little loss in quality.
While it is not the only memory compaction technique available, Attention Matching stands out for its execution speed and impressive information-preserving capabilities.
The memory bottleneck of the KV cache
Large language models generate their responses sequentially, one token at a time. To avoid recalculating the entire conversation history from scratch for every predicted word, the model stores a mathematical representation of every previous token it has processed, also known as the key and value pairs. This critical working memory is known as the KV cache.
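To make that growth concrete, here is a minimal single-head decoding sketch in Python/NumPy. It is illustrative only, with toy dimensions, random weights, and no batching or multi-head logic, but it shows why the cache gains exactly one key row and one value row for every token generated.

```python
import numpy as np

d = 64                       # head dimension (toy value)
k_cache = np.zeros((0, d))   # cached keys for all previous tokens
v_cache = np.zeros((0, d))   # cached values for all previous tokens

def decode_step(x, Wq, Wk, Wv):
    """Process one new token embedding x, reusing the cached keys/values."""
    global k_cache, v_cache
    q = x @ Wq                                # query for the new token only
    k_cache = np.vstack([k_cache, x @ Wk])    # cache grows by one row...
    v_cache = np.vstack([v_cache, x @ Wv])    # ...per generated token
    scores = q @ k_cache.T / np.sqrt(d)       # attend over the full history
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                  # softmax over cached positions
    return weights @ v_cache                  # attention output

rng = np.random.default_rng(0)
Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
for _ in range(1000):
    decode_step(rng.standard_normal(d), Wq, Wk, Wv)
print(k_cache.shape)   # (1000, 64): linear growth, and it never shrinks
```

In a real model this happens for every layer and every attention head, which is why multi-gigabyte caches appear quickly at long context.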
The KV cache scales with conversation length because the model is forced to retain these keys and values for all previous tokens in a given interaction. This consumes expensive hardware resources. “In practice, KV cache memory is the biggest bottleneck to serving models at ultra-long context,” Adam Zweiger, co-author of the paper, told VentureBeat. “It caps concurrency, forces smaller batches, and/or requires more aggressive offloading.”
In modern enterprise use cases, such as analyzing massive legal contracts, maintaining multi-session customer dialogues, or running autonomous coding agents, the KV cache can balloon to many gigabytes of memory for a single user request.
To solve this massive bottleneck, the AI industry has tried several strategies, but these methods fall short when deployed in enterprise environments where extreme compression is necessary. A class of technical fixes includes optimizing the KV cache by either evicting tokens the model deems less important or merging similar tokens into a single representation. These techniques work for mild compression but “degrade rapidly at high reduction ratios,” according to the authors.
Real-world applications often rely on simpler techniques, with the most common approach being to simply drop the older context once the memory limit is reached. But this approach causes the model to lose older information as the context grows long. Another alternative is context summarization, where the system pauses, writes a short text summary of the older context, and replaces the original memory with that summary. While this is an industry standard, summarization is highly lossy and heavily damages downstream performance because it might remove pertinent information from the context.
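A drop-the-oldest policy of this kind is only a few lines of code, which is part of its appeal. The sketch below (an illustrative budget and NumPy arrays, not any particular serving framework) shows both why it is cheap and why it forgets: evicted rows are simply gone.

```python
import numpy as np

BUDGET = 1024   # illustrative cap on cached tokens

def append_with_eviction(k_cache, v_cache, k_new, v_new):
    """Append a token's key/value, dropping the oldest rows past the budget."""
    k_cache = np.vstack([k_cache, k_new])
    v_cache = np.vstack([v_cache, v_new])
    if len(k_cache) > BUDGET:
        k_cache, v_cache = k_cache[-BUDGET:], v_cache[-BUDGET:]  # oldest gone
    return k_cache, v_cache

k, v = np.zeros((0, 8)), np.zeros((0, 8))
for _ in range(BUDGET + 100):
    k, v = append_with_eviction(k, v, np.random.rand(8), np.random.rand(8))
print(len(k))   # 1024: the first 100 tokens can never be recalled
```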
Recent research has proven that it is technically possible to highly compress this memory using a method called Cartridges. However, this approach requires training latent KV cache models through slow, end-to-end mathematical optimization. This gradient-based training can take several hours on expensive GPUs just to compress a single context, making it completely unviable for real-time enterprise applications.
How attention matching compresses without the cost
Attention Matching achieves high-level compaction ratios and quality while being orders of magnitude faster than gradient-based optimization. It bypasses the slow training process through clever mathematical tricks.
The researchers realized that to perfectly mimic how an AI interacts with its memory, they need to preserve two mathematical properties when compressing the original key and value vectors into a smaller footprint. The first is the “attention output,” which is the actual information the AI extracts when it queries its memory. The second is the “attention mass,” which acts as the mathematical weight that a token has relative to everything else in the model’s working memory. If the compressed memory can match these two properties, it will behave exactly like the massive, original memory, even when new, unpredictable user prompts are added later.
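Here is a rough sketch of those two quantities under our reading of the article; the paper's exact formulation may differ. For any query, a faithful compacted cache should return nearly the same attention output, and carry nearly the same total attention mass, as the original.

```python
import numpy as np

def attention_stats(q, K, V):
    """Return (attention output, attention mass) for query q over cache (K, V)."""
    d = K.shape[1]
    w = np.exp(q @ K.T / np.sqrt(d))   # unnormalized attention weights
    mass = w.sum()                     # total weight the cache attracts
    output = (w / mass) @ V            # information the query extracts
    return output, mass

# A compaction is considered faithful if, for reference queries q,
# attention_stats(q, K_small, V_small) ~= attention_stats(q, K_full, V_full).
```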
“Attention Matching is, in some ways, the ‘correct’ objective for doing latent context compaction in that it directly targets preserving the behavior of each attention head after compaction,” Zweiger said. While token-dropping and related heuristics can work, explicitly matching attention behavior simply leads to better results.
Before compressing the memory, the system generates a small set of “reference queries” that act as a proxy for the types of internal searches the model is likely to perform when reasoning about the specific context. If the compressed memory can accurately answer these reference queries, it will very likely succeed at answering the user’s actual questions later. The authors suggest various methods for generating these reference queries, including appending a hidden prompt to the document telling the model to repeat the previous context, known as the “repeat-prefill” technique. They also suggest a “self-study” approach where the model is prompted to perform a few quick synthetic tasks on the document, such as aggregating all key facts or structuring dates and numbers into a JSON format.
With these queries in hand, the system picks a set of keys to preserve in the compacted KV cache based on signals like the highest attention value. It then uses the keys and reference queries to calculate the matching values along with a scalar bias term. This bias ensures that pertinent information is preserved, allowing each retained key to represent the mass of many removed keys.
This formulation makes it possible to fit the values with simple algebraic techniques, such as ordinary least squares and nonnegative least squares, entirely avoiding compute-heavy gradient-based optimization. This is what makes Attention Matching super fast in comparison to optimization-heavy compaction methods. The researchers also apply chunked compaction, processing contiguous chunks of the input independently and concatenating them, to further improve performance on long contexts.
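Below is a hedged sketch of that value-fitting step using ordinary least squares. The variable names and details are ours, and the scalar bias term the authors describe is omitted for brevity; the point is that once the kept keys are fixed, the replacement values come from a single linear solve rather than gradient descent.

```python
import numpy as np

def fit_compacted_values(Q_ref, K_full, V_full, keep_idx):
    """Solve for values of the kept keys so compacted attention matches full attention."""
    d = K_full.shape[1]
    K_keep = K_full[keep_idx]

    # Targets: attention outputs of the FULL cache for each reference query.
    W_full = np.exp(Q_ref @ K_full.T / np.sqrt(d))
    targets = (W_full / W_full.sum(axis=1, keepdims=True)) @ V_full

    # Design matrix: the same queries' attention over only the kept keys.
    W_keep = np.exp(Q_ref @ K_keep.T / np.sqrt(d))
    A = W_keep / W_keep.sum(axis=1, keepdims=True)

    # One least-squares solve: A @ V_new ~= targets. No gradients, no hours on GPUs.
    V_new, *_ = np.linalg.lstsq(A, targets, rcond=None)
    return K_keep, V_new

rng = np.random.default_rng(1)
K, V = rng.standard_normal((500, 64)), rng.standard_normal((500, 64))
Q = rng.standard_normal((200, 64))                 # reference queries
K_s, V_s = fit_compacted_values(Q, K, V, np.arange(0, 500, 10))  # keep 1 key in 10
print(K_s.shape, V_s.shape)                        # (50, 64) (50, 64)
```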
Attention matching in action
To understand how this method performs in the real world, the researchers ran a series of stress tests using popular open-source models like Llama 3.1 and Qwen-3 on two distinct types of enterprise datasets. The first was QuALITY, a standard reading comprehension benchmark using 5,000 to 8,000-word documents. The second, representing a true enterprise challenge, was LongHealth, a highly dense, 60,000-token dataset containing the complex medical records of multiple patients.
The key finding was the ability of Attention Matching to compact the model’s KV cache by 50x without reducing the accuracy, while taking only seconds to process the documents. To achieve that same level of quality previously, Cartridges required hours of intensive GPU computation per context.
[Figure: Attention Matching results with Qwen-3 (source: arXiv)]
When dealing with the dense medical records, standard industry workarounds completely collapsed. The researchers noted that when they tried to use standard text summarization on these patient records, the model’s accuracy dropped so low that it matched the “no-context” baseline, meaning the AI performed as if it had not read the document at all.
Attention Matching drastically outperforms summarization, but enterprise architects will need to dial down the compression ratio for dense tasks compared to simpler reading comprehension tests. As Zweiger explains, “The main practical tradeoff is that if you are trying to preserve nearly everything in-context on highly information-dense tasks, you generally need a milder compaction ratio to retain strong accuracy.”
The researchers also explored what happens in cases where absolute precision isn’t necessary but extreme memory savings are. They ran Attention Matching on top of a standard text summary. This combined approach achieved 200x compression. It successfully matched the accuracy of standard summarization alone, but with a very small memory footprint.
One of the interesting experiments for enterprise workflows was testing online compaction, though they note that this is a proof of concept and has not been tested rigorously in production environments. The researchers tested the model on the advanced AIME math reasoning test. They forced the AI to solve a problem with a strictly capped physical memory limit. Whenever the model’s memory filled up, the system paused, instantly compressed its working memory by 50 percent using Attention Matching, and let it continue thinking. Even after hitting the memory wall and having its KV cache shrunk up to six consecutive times mid-thought, the model successfully solved the math problems. Its performance matched a model that had been given massive, unlimited memory.
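The control flow of that experiment is simple to picture. In this toy loop, compact() is a stand-in that just keeps the most recent half of the cache so the example runs; the real system would run Attention Matching at that point instead. Generation continues past the memory wall because the cache is shrunk in place whenever it fills.

```python
MAX_TOKENS = 8   # toy cap standing in for a hard physical memory limit

def compact(cache: list[int]) -> list[int]:
    # Placeholder 50% shrink; Attention Matching would instead fit new
    # keys/values that preserve the evicted entries' attention behavior.
    return cache[len(cache) // 2:]

cache: list[int] = []
for token in range(30):          # "generate" 30 tokens under the cap
    if len(cache) >= MAX_TOKENS:
        cache = compact(cache)   # pause, halve the working memory, continue
    cache.append(token)
print(cache)                     # decoding completed despite repeated shrinks
```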
There are caveats to consider. At a 50x compression ratio, Attention Matching is the clear winner in balancing speed and quality. However, if an enterprise attempts to push compression to extreme 100x limits on highly complex data, the slower, gradient-based Cartridges method actually outperforms it.
The researchers have released the code for Attention Matching. However, they note that this is not currently a simple plug-and-play software update. “I think latent compaction is best considered a model-layer technique,” Zweiger notes. “While it can be applied on top of any existing model, it requires access to model weights.” This means enterprises relying entirely on closed APIs cannot implement this themselves; they need open-weight models.
The authors note that integrating this latent-space KV compaction into existing, highly optimized commercial inference engines still requires significant effort. Modern AI infrastructure uses complex tricks like prefix caching and variable-length memory packing to keep servers running efficiently, and seamlessly weaving this new compaction technique into those existing systems will take dedicated engineering work. However, there are immediate enterprise applications. “We believe compaction after ingestion is a promising use case, where large tool call outputs or long documents are compacted right after being processed,” Zweiger said.
Ultimately, the shift toward mechanical, latent-space compaction aligns with the future product roadmaps of major AI players, Zweiger argues. “We are seeing compaction to shift from something enterprises implement themselves into something model providers ship,” Zweiger said. “This is even more true for latent compaction, where access to model weights is needed. For example, OpenAI now exposes a black-box compaction endpoint that returns an opaque object rather than a plain-text summary.”
Citizens and law enforcement officials alike would probably be quick to tell you that speeding drivers rank among the most dangerous issues they face on the roadways every day. While the onus of obeying speed limits on the road ultimately rests on the person in the driver’s seat, authorities are expected to help control excessive speeding by catching those drivers in the act and issuing citations as punishment.
That job is particularly tricky, as the number of officers on patrol is typically outnumbered greatly by the number of citizens at the wheel of their own vehicles. Some municipalities have, however, sought to tilt the situation in their favor by setting up speed traps. Similarly, traffic light cameras have become regular fixtures in helping monitor and control traffic patterns. Some local forces are taking matters a step further by using so-called “Speed Jeeps,” which are stationary, unmanned cruisers equipped with cameras to catch and ticket speeding drivers.
Commerce City, Colorado, started rolling out such vehicles in March, with authorities in the Denver suburb looking to use them to help enforce speed limits in school zones, residential areas, and work zones. It remains to be seen how effective the move will be, as speed cameras have sometimes caused controversy over alleged overreach. Still, according to Denver 7 News, some Commerce City residents are fully behind the use of Speed Jeeps if they help make their streets safer.
Here’s how Speed Jeeps actually work
Speed cameras are, of course, not legal in every city and state in the U.S. However, areas such as Montgomery County, Maryland, have effectively used them to control speeding in areas of concern. Commerce City has now joined the list of municipalities hoping to use tech to increase community safety, with its Speed Jeeps rotating between locations and adding mobility to the mix.
The fact that Speed Jeeps are designed to look like real police cruisers may make them even more effective than their cameras alone, as few things will get a speeder to tap the brakes faster than the sight of a cop. The unmanned vehicles are, obviously, not designed to chase after speeders as a normal officer might. Instead, the cameras activate when a speeding vehicle enters the range of the on-board radar gun. Once activated, one camera snaps a shot of the vehicle's front end and driver, and a separate camera then takes a shot of the rear license plate once the speeding car passes.
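Put as code, that capture sequence is a simple radar-gated pipeline. This sketch is purely illustrative: the Camera class, the threshold check, and the returned bundle are invented for the example, not taken from any vendor's system.

```python
from dataclasses import dataclass

@dataclass
class Snapshot:
    label: str   # stands in for real image data

class Camera:
    def __init__(self, position: str):
        self.position = position

    def snap(self) -> Snapshot:
        return Snapshot(f"{self.position} photo")   # stub for a real capture

def maybe_capture(measured_mph: float, limit_mph: float,
                  front_cam: Camera, rear_cam: Camera):
    """Radar gate: nothing is recorded unless the measured speed exceeds the limit."""
    if measured_mph <= limit_mph:
        return None
    front = front_cam.snap()   # front end and driver, taken immediately
    rear = rear_cam.snap()     # rear plate, taken as the car passes
    return front, rear, measured_mph   # bundled for human review, not auto-ticketed

print(maybe_capture(47.0, 25.0, Camera("front"), Camera("rear")))
```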
From there, local law enforcement will collect additional information about the alleged infraction and then decide whether to issue a citation. If deemed necessary, the citation will be mailed to the vehicle’s registered address. Upon receipt, the recipient will have a chance to either pay the fine or challenge the ruling in court.
Garry Duffy says that entrepreneurship should be taught at an undergraduate level.
Ireland is doubling down on building a strong research-to-market pipeline in the hopes of creating innovative global companies with homegrown roots.
To do this, Research Ireland has tapped leading universities across the country to deliver what its CEO, Diarmuid O’Brien, calls “one of the most proactive, imaginative and potentially disruptive programmes” in its history.
Last year, the Government announced three hubs to act as a funding mechanism, support system and testing ground for researchers attempting to commercialise their ideas.
Academics need this kind of support, says Garry Duffy, the director of the ARC Hub for Healthtech at the University of Galway, which officially launched just last month.
“Commercialisation is generally new to people – particularly researchers. And it’s a new language and it’s a new acumen, and you have to try and build that. And that’s what we’re really trying to do with the ARC Hub,” Duffy says. ARC, quite aptly, stands for ‘Accelerating Research to Commercialisation’.
With a backing of €34.3m from the Irish Government and the EU, the ARC health-tech hub is co-run by Atlantic Technological University and RCSI University of Medicine and Health Sciences, with other major institutions also taking part.
The Government announced two other hubs last year as well, one for therapeutics and one for ICT, boasting a combined funding that exceeded €60m.
The idea behind the hubs is to create a nurturing environment for entrepreneurial scientists and engineers to carry out research that will lead to commercial impact.
Duffy cites Dublin start-up ProVerum as a success story he would like to replicate in the health-tech hub he leads.
The 2016-founded Trinity College Dublin spin-out is the creator behind ‘ProVee’, a minimally invasive solution for treating benign prostatic hyperplasia.
ProVerum raised $80m in a Series B round last August. The start-up’s co-founder Ríona Ní Ghriallais is on the ARC health-tech advisory board.
The ARC Hub for Healthtech launched last month, with 23 projects across major areas – including sensors, implantables and AI – already in the pipeline.
Researchers, with the help of industry professionals, are creating commercial solutions for health issues such as hypertension management, ovarian cancer and falls among the elderly, Duffy says. Some projects have already generated clinical evidence to support the future impact of the various technologies.
The health-tech hub is also inviting around 22 new projects in its second call, which would give a total of around 45 projects under its remit.
Peter Power, the head of the European Commission Representation in Ireland, called the ARC Hub for Healthtech an “operation of strategic importance”, while Minister for Further and Higher Education, Research, Innovation and Science James Lawless, TD said that he believes the hub “has the potential to deliver game changing acceleration of research commercialisation”.
Duffy believes entrepreneurship should be taught to students early on in their higher education. Hackathons and labs that nurture students to think commercially have had a positive impact, he notes.
“I feel like we’re evolving into a nice ecosystem in Ireland where it’s becoming a bit of a norm to think of a spin-out company as an outcome for university education.”
Duffy is a professor at the University of Galway, and head of the anatomy and regenerative medicine department at RCSI University of Medicine and Health Sciences.
BYD showcased the new platform during its Disruptive Technology event on Thursday. According to the automaker, the Flash charger is able to take the battery from 10% to 70% in just five minutes and from 10% to 97% in only nine minutes when at room temperature. In extreme cold (-30…
Processors today pack billions of transistors onto a single chip, and while that enables incredible performance, it also creates one persistent problem: heat. Rising temperatures can slow down a processor or force performance throttling. Now, researchers may have found a solution in something incredibly tiny: a new microscopic temperature sensor that's nearly impossible to see with the naked eye.
A thermometer smaller than a human hair
Researchers at Penn State have developed an ultra-miniature thermometer that can be built directly onto computer chips. The sensor is tiny, measuring just one square micrometer, several thousand times smaller than the width of a human hair. That size lets engineers place thousands of these sensors across a processor, allowing for precise temperature monitoring across different parts of the chip.
Chips often heat unevenly during heavy workloads, and traditional temperature sensors placed outside the processor can struggle to capture those rapid changes accurately. So these microscopic sensors could be a big deal for modern processors.
Built with ultra-thin 2D materials
What's impressive is that the researchers built the sensor using two-dimensional materials that are only a few atoms thick. These materials allow the sensor to react quickly to temperature changes: the device can detect subtle fluctuations in about 100 nanoseconds, millions of times faster than the blink of an eye. Owing to its unique structure, the tech also uses less power than traditional silicon-based thermal monitoring systems.
Why this matters for modern processors
Thermal management is one of the biggest challenges in chip design today. When transistors overheat during heavy workloads, processors reduce clock speeds to protect themselves, which leads to drops in performance. With embedded sensors like these, engineers could monitor temperature changes across the chip in real time and respond more effectively. That means we might see smarter thermal management, better efficiency, and peak performance that is maintained for longer. With chips nearing 1-nanometer gate sizes, tech like this could be crucial.
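As a rough illustration of what that responsiveness enables, here is a hypothetical firmware-style polling loop. Every name and threshold in it is invented, and read_sensor() simulates readings rather than touching hardware; it only sketches the idea of throttling based on the hottest zone instead of a single chip-wide worst-case estimate.

```python
import random
import time

THROTTLE_C, RECOVER_C = 95.0, 85.0        # hypothetical thresholds in °C

def read_sensor(zone_id: int) -> float:
    """Stand-in for a driver call; real firmware would read an on-die sensor."""
    return 70.0 + random.random() * 30.0  # simulated zone temperature

def thermal_loop(num_zones: int, iterations: int, step_mhz: int = 100) -> int:
    clock_offset = 0
    for _ in range(iterations):
        hottest = max(read_sensor(z) for z in range(num_zones))
        if hottest > THROTTLE_C:
            clock_offset -= step_mhz      # back off before the hotspot damages the part
        elif hottest < RECOVER_C and clock_offset < 0:
            clock_offset += step_mhz      # restore clocks once the hotspot cools
        time.sleep(0.0001)                # the fast polling that ~100 ns sensors permit
    return clock_offset

print(thermal_loop(num_zones=1000, iterations=50))   # net MHz adjustment
```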
Data removal services have emerged as an answer to growing unease about the privacy of our personal information, especially online. They help people manage the visibility of their data and reduce what's circulating in the hands of data brokers, the companies that collect and sell personal data.
However, for many users, this sounds too good to be true, and so they ask themselves: Does the service actually work? Is it legitimate? How effective is it? And which is the best among so many data removal services?
Two popular options are Incogni and DeleteMe. Both promise to reduce your online footprint, but they approach the task differently. This comparison breaks down how each service works, how it actually performs, how easy it is to use, and how their coverage differs, to help you decide which provider fits your privacy needs.
Incogni vs DeleteMe: Quick Comparison (2026)
| Category | Incogni | DeleteMe |
| --- | --- | --- |
| Prices | From $7.99/month when billed annually | From $6.97/month when billed biennially |
| Removal model | Fully automated, recurring opt-out requests | Manual and automated, team-assisted removals |
| Broker coverage | 420+ data brokers, both private and public listings; custom removals and additional sites on higher plans | 85-100+ automated, 850+ with custom removals (depending on the plan) |
| Recurring follow-ups | Every 60-90 days | Quarterly |
| Dashboard & reports | Ongoing status dashboard | Quarterly reports, updates |
| Independent verification | Deloitte Limited Assurance Report | No third-party verification |
| Customer support | Live chat, email, tickets; phone with Unlimited plans | Phone, live chat, email, web form |
| Best for | Hands-off ongoing process | More user involvement, people-search focus |

Data as of February 2026.
DeleteMe vs Incogni: How They Work
Incogni is an automated data removal system. After signing up, you verify your identity, and the platform starts sending deletion requests to hundreds of data brokers, also covering the "invisible" layer, i.e., information shared for marketing or profiling. Requests are repeated every 60 days for public brokers and every 90 days for private ones. The service also tracks responses, shows progress on a dashboard, and sends follow-ups when required.
DeleteMe combines automated and manual removal processes. Its team finds public listings with your information and manually sends opt-out requests periodically. DeleteMe’s focus is mainly on people-search sites that can be easily found. Users receive detailed reports on what was sent and what the answers were.
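The recurring-cycle model these services rely on is easy to picture in code. This sketch follows Incogni's stated cadence (60 days for public brokers, 90 for private ones) as described above; the broker names and the flat send-history dict are hypothetical, for illustration only.

```python
from datetime import date, timedelta

RESEND_AFTER = {"public": timedelta(days=60), "private": timedelta(days=90)}

def brokers_due(history: dict[str, tuple[str, date]], today: date) -> list[str]:
    """Return brokers whose last opt-out request is old enough to re-send."""
    return [
        broker
        for broker, (kind, sent_on) in history.items()
        if today - sent_on >= RESEND_AFTER[kind]
    ]

history = {
    "examplepeoplesearch.com": ("public", date(2026, 1, 2)),    # hypothetical
    "example-ad-broker.com": ("private", date(2025, 11, 20)),   # hypothetical
}
print(brokers_due(history, date(2026, 3, 10)))   # both due: 67 and 110 days old
```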
Coverage and Scope
Data removal involves two layers:
Visible public listings (people-search sites)
Commercial broker databases (behind the scenes)
DeleteMe emphasizes visible listings and provides reports on those results, which can feel more tangible for users who want to be involved in the process. 85-100+ of these sites are reached automatically, and an additional 850+ listings are available for custom removals (depending on the plan).
Incogni’s automated system reaches 420+ data brokers, including private databases that don’t appear in search results. Higher plans allow custom removals for 2,000+ sites, broadening the reach.
To sum up, Incogni focuses on broad, diverse, recurring suppression, while DeleteMe is more about systematic, manual management.
Availability and Compliance
Both providers operate in multiple countries, but their regional reach and legal frameworks differ.
DeleteMe has been assisting users since 2010 and advertises the removal of 100+ million listings over the years. The company emphasizes its longevity in the industry. It primarily serves users in the US, with some international capability depending on listing type and broker location. Its international locations include Australia, Belgium, Brazil, Canada, France, Germany, Ireland, Italy, the Netherlands, Singapore, and the UK.
The provider maintains compliance with AICPA SOC 2 Type 2 (an audit standard by the American Institute of Certified Public Accountants), the EU’s GDPR, and various privacy laws passed in the US.
Incogni, on the other hand, is more widely available. As of now, it can be used by users in the US, the UK, all EU countries, Canada, Switzerland, Norway, Iceland, Liechtenstein, and the Isle of Man.
It operates under privacy laws like GDPR (EU), CCPA/CPRA (California, US), and similar, where applicable. Deloitte has verified that Incogni has processed 245+ million removal requests since 2022, up to mid-2025.
Transparency and Credibility
Incogni
Through its limited assurance report, Deloitte confirmed that Incogni’s removal process works as described by the provider. Deloitte verified key operational claims around recurring cycles, sending requests, and reaching brokers.
What’s more, PCMag and PCWorld awarded Incogni their Editors’ Choice awards, highlighting its automation, coverage, and ease of use. At the same time, user reviews on Trustpilot mainly show positive results with a gradual reduction in broker listings. Incogni’s overall rating stands at 4.4.
DeleteMe
DeleteMe is an established removal service with generally warm feedback from many users over the years. It holds a rating of 4.0 on Trustpilot, the average of almost 200 reviews, with around ¾ of them positive. Users praise the data removal and ongoing checks, though some describe slow results or incomplete removals.
When it comes to industry reviews, DeleteMe is appreciated for removing personal data from aggregators, allowing custom removal requests, and providing instructions for free DIY removal. However, reviewers often mention that its coverage is quite narrow and that reports are too infrequent (they come out quarterly).
User Experience
Incogni
Users love Incogni for its simplicity and automation. After setup, which takes no more than 10 minutes, the dashboard displays all the necessary information without unnecessary, complicated details. You can see which brokers have been contacted, which requests are pending, and which sites have already deleted your data.
For the most part, users don’t have to manually intervene in the process; everything happens in the background. Weekly or periodic progress summaries will come straight to your email and keep you informed without logging in daily. Removal cycles are also automated and handled every 60-90 days.
DeleteMe
DeleteMe’s user experience reflects its human-assisted model. After selecting a plan and uploading your personal details, the team will run periodic scans of dozens to hundreds of listing sites. Instead of following a live dashboard, users get regular reports, typically quarterly, that outline what was found and removed. These reports are more detailed, including context and explanations, such as site name, removal date, status, etc.
Some users may appreciate the clear narrative, while others prefer dashboards and less engagement.
Reviews of both services point to a clear pattern: Incogni is praised for automation and ongoing protection with minimal effort from the user, while DeleteMe's biggest strengths are its human-handled processes and detailed reports.
Customer Support
Both providers offer structured support, but their channels differ.
Incogni offers:
Email/ticket-based support
Help Center documentation
Live chat support through its website
Phone support only for Unlimited plans
Users often describe Incogni’s support team’s responses as clear and process-oriented. They also say that the process is quite quick.
DeleteMe offers:
Phone support
Live chat
Email support
Web contact forms
User feedback praises DeleteMe’s helpful human interaction and explanations in its reports. However, many reviews mention slower response times, especially during peak periods.
Pricing
Incogni Pricing
| Plan | Billing frequency | Monthly cost | Features |
| --- | --- | --- | --- |
| Standard | Annually | $7.99 | Automated broker removals, dashboard tracking |
| Standard | Monthly | $15.98 | Everything above for a higher overall cost |
| Unlimited | Annually | $14.99 | Everything above, plus unlimited custom removal requests, 2,000 additional sites covered, and live phone support |
| Unlimited | Monthly | $29.98 | Everything above for a higher overall cost |
| Family Standard | Annually | $15.99 | Standard for up to 5 members, plus family account management |
| Family Standard | Monthly | $31.98 | Everything above for a higher overall cost |
| Family Unlimited | Annually | $22.99 | Unlimited for up to 5 members, plus family account management |
| Family Unlimited | Monthly | $45.98 | Everything above for a higher overall cost |
For American users, there’s an additional option – the Protect Plan, which is all-in-one data removal and identity-theft protection. For $41.48/month, you can get everything in Incogni Unlimited plus identity theft protection with NordProtect. Incogni is also included with Surfshark’s One+ plan from $4.19/month.
Both DeleteMe and Incogni have custom offers for businesses. Neither includes a free trial, but Incogni comes with a 30-day money-back guarantee for risk-free testing, while DeleteMe can be canceled with a full refund only before the first privacy report. However, you can get a free scan with DeleteMe.
Final Verdict: Two Solid Approaches for 2026
Incogni and DeleteMe are among the most popular data removal service providers, but they fit different privacy needs:
Incogni is an excellent choice for users who don’t want to get too engaged with the data removal process. It also offers wider broker coverage, with recurring cycles that keep sending removal requests. Incogni has also been independently assessed and received editorial recognition.
DeleteMe will suit users who prefer a human-assisted removal process and detailed reports about contacted brokers and their responses.
In 2026, both services can be effective, but when you prioritize automation, scale, and long-term work, Incogni’s approach will suit you better.
FAQ
Which platform provides broader data broker coverage?
Incogni covers over 420 brokers on its standard plan. DeleteMe claims coverage for 750+ sites, but its standard tier often defaults to around 100 high-impact brokers, with the remainder requiring manual custom requests.
Should I prioritize human-assisted removals over AI automation?
DeleteMe employs human privacy experts to handle complex manual opt-outs, which may offer higher precision for difficult cases. Incogni uses advanced AI to maintain a set-and-forget system that is generally faster and more affordable.
Which service is more effective for users living outside the United States?
Incogni is the stronger choice for global privacy, with a single subscription covering the US, UK, Canada, and 30+ European countries. DeleteMe historically focuses on the American market, though it has expanded to select international regions.
How frequently will I receive updates on my removal status?
Incogni sends progress updates every week. DeleteMe typically provides more comprehensive deep-dive reports, but issues them every quarter.
Nintendo of America is suing the US government, including the Department of Treasury, Department of Homeland Security and US Customs and Border Protection, over its tariff policy, Aftermath reports. The video game giant already raised prices on the Nintendo Switch in August 2025 in response to “market conditions,” but has so far left the price of its newer Switch 2 console unchanged.
Nintendo's lawsuit, filed in the US Court of International Trade, cites a Supreme Court ruling from February that confirmed lower courts' opinions that the Trump administration's global tariffs were illegal. Nintendo's lawyers claim that the video game company has been "substantially harmed by the unlawful execution and imposition" of "unauthorized Executive Orders," and by the fees Nintendo has already paid to import products into the country. In response, the company is seeking a "prompt refund, with interest" of the tariffs it has paid.
“We can confirm we filed a request,” Nintendo of America said in a statement. “We have nothing else to share on this topic.”
While taxes and other trade policies are supposed to be set by Congress, President Donald Trump implemented a collection of global tariffs over the course of his first year in office using executive orders and the International Emergency Economic Powers Act (IEEPA), a law that gives the president expanded control over trade during a global emergency. The Trump administration has positioned tariffs as a way to punish enemies and bargain with trade partners, but many companies have passed the increased price of importing goods onto customers.
In upholding opinions from the US District Court of the District of Columbia and the US Court of International Trade, the Supreme Court removed the Trump administration’s ability to collect tariffs using IEEPA, but didn’t clarify how the tariffs the government had illegally collected should be returned to companies. Like Nintendo, other companies have decided filing a lawsuit is the best way to get refunded.
The Guardian reports that US Customs and Border Protection is already preparing a system to process refunds for affected companies, but that might not mark the end of Trump’s tariff regime. In a press conference held after the Supreme Court released its decision, the President announced plans to introduce tariffs using other, more constrained methods. Tariffs aren’t the only obstacle Nintendo faces, either. The company could also be forced to raise the price of its consoles in response to the current RAM shortage.