Data removal services claim to remove your details from data broker databases, thereby limiting how often your information is bought and sold. They have become increasingly popular as more people realize how widely their personal data is shared online. Still, uncertainty is common. Do they work? Are they legitimate or just another online scam?
People want to know whether these tools are actually effective and how they handle personal data, especially since results aren’t instant and not every provider offers the same level of automation, coverage, or follow-up.
That brings us to Incogni, one of the most talked-about names in this industry. This in-depth review examines whether Incogni is good, how well it actually works, whether it’s trustworthy, and how it positions itself against the competition in 2026.
Incogni Overview (2026)
Pricing: From $7.99/month when billed annually, or from $15.98/month
Service type: Automated personal data removal
Coverage: 420+ data brokers (public and private listings)
Removal model: Legal opt-out and deletion requests
Follow-ups: Recurring cycles (60 days for public and 90 days for private listings)
Availability: The US, the UK, Canada, the EU, Switzerland, Norway, Iceland, Liechtenstein, the Isle of Man
Verification and recognition: Limited assurance assessment by Deloitte; Editors’ Choice Awards from PCMag and PCWorld
Free plan: No, but there is a 30-day money-back guarantee
Strengths: Full automation, broad data broker coverage, recurring removals, third-party verification (Deloitte), affordability
Limitations: No screenshots of removals, no exposure scan details, no free trial
What Is Incogni?
Incogni is an automated personal data removal service. It’s designed to reduce your online exposure by contacting data brokers and requesting the deletion of your personal information from their databases. This way, you don’t have to chase numerous companies individually. Incogni also keeps track of responses to these requests and sends follow-ups when needed.
Instead of tiresome and never-ending manual opt-outs, Incogni centralizes the process, operating under privacy laws like the GDPR and CCPA.
The provider offers the following features:
Automated data removals: Incogni sends removal requests on your behalf to over 420 brokers (additional sites in higher-tier plans).
Custom data removals: You can submit specific sites or data sources for additional takedown attempts (plan-dependent feature).
Progress reports and tracking dashboard: The service offers real-time tracking of which brokers were contacted and how they responded.
Family coverage: Specific plans allow adding multiple household members under one account for wider protection.
Ongoing monitoring: Incogni doesn’t stop after one round of requests, maintaining broker coverage over time.
Its focus isn’t a one-time cleanup but ongoing data exposure management.
How Incogni Actually Works
Incogni’s process is built around automation and legal rights.
Step 1: Authorization
After you create an account, you need to verify your identity. This will allow Incogni to legally act on your behalf when contacting data brokers.
Step 2: Broker Outreach
Incogni starts sending deletion requests immediately. Its data broker list includes hundreds of brokers, both public and private listings.
Step 3: Tracking
Incogni’s straightforward dashboard logs all responses, confirmations, and pending cases, so you can oversee the process if you want to, though you don’t have to.
Step 4: Recurring Removals
As data can easily reappear some time after its removal, Incogni submits requests on a cycle. Usually, it’s every 60 days for public brokers and 90 days for private brokers. This step is essential if you want to achieve long-term effectiveness.
This recurring system is what sets Incogni apart from one-time opt-outs.
How Long Do Removals Take?
Unfortunately, there’s no fixed rule: timing depends on the legal response window and each broker’s processes, both of which can be rather lengthy.
Under privacy regulations, e.g., GDPR and CCPA, companies usually have weeks to respond to data removal requests. Some respond quickly; others require follow-ups; some even try to ignore requests. That’s why Incogni uses recurring cycles.
Broker databases refresh on a regular basis, which makes removals an ongoing process. Over time, repeated requests reduce reappearance. According to Deloitte, since 2022, Incogni has processed 245+ million requests.
Above, plus priority processing and unlimited custom removals
Family: $15.99/month billed annually or $31.98/month, standard coverage for multiple household members
Family Unlimited: $22.99/month billed annually or $45.98/month, all features listed above plus family coverage
Incogni is also included in broader privacy bundles: the Surfshark One+ plan includes Incogni alongside Surfshark’s VPN and other security tools. There’s also an all-in-one data removal and identity-theft protection bundle with Nord Protect, but it’s available to US users only.
Customer Support
Incogni’s customer support is handled by its Help Center and ticket-based system. Users can easily access helpful guides and FAQs as well as submit requests for assistance. Incogni’s customer support team is known to respond quickly and to the point.
Email-style case support is the main channel. Live chat is also available, while phone support comes with Unlimited plans only, giving higher-tier subscribers a more direct contact option.
User Experience
Incogni is designed to be low-maintenance, which suits people who don’t want to handle removals manually, a task that can easily become a full-time job. Setup typically takes only minutes, and the service then operates in the background. The dashboard is the main control center: it shows contacted brokers, confirmations, and pending requests or required follow-ups.
There are no spreadsheets to manage, no legal templates to send manually, and no deadlines or reappearances to track. Because Incogni was built to operate mostly in the background, the interface focuses on visibility rather than complex technical details.
In short, Incogni provides transparency and visibility without requiring users to actively manage each step of the process.
Is Incogni Legit or a Scam?
Incogni is not a scam. Several verifiable signals from reputable sources confirm Incogni’s legitimacy.
Independent Limited Assurance Assessment
A limited assurance report from Deloitte examined Incogni’s removal process and concluded that it works as promised. This kind of third-party assessment is still rare in the data removal industry, and external validation matters when you’re entrusting a provider with sensitive information.
Scale and Transparency
Deloitte also verified Incogni’s claim that it has already processed hundreds of millions of removal requests for its customers. The provider keeps documentation of how each request is sent and tracked. Moreover, its removal model relies on formal privacy-law opt-out requests, not informal takedown attempts.
Expert Recognition
Incogni has received Editors’ Choice Awards from both PCMag and PCWorld. Accompanying reviews praise the provider’s reliable performance, strong automation, broad data broker coverage, and transparency. They also highlight the handling of ongoing removal requests that require minimal effort from the user.
Public User Feedback
Incogni holds a rating of 4.4 on Trustpilot based on over 2,400 reviews. Most frequent positive mentions include:
easy setup and automation,
clear visibility and transparency about the removal progress,
noticeable reductions in unsolicited messages and calls over time.
Some critical notes refer to how long it takes to see results and the fact that no service can truly guarantee complete data disappearance. However, that can be expected in this industry, and no provider, including Incogni, promises 100% successful removals.
Overall sentiment confirms that the service works as described, given realistic expectations.
Final Verdict: Incogni Is a Practical Tool for Ongoing Data Exposure Reduction
Incogni is an ongoing data removal service, not a one-time fix. It automates legally backed opt-out requests to 420+ data brokers, repeats them on a schedule as needed, and provides users with clear reports.
Independent assessment, editorial recognition, and positive user feedback confirm that the service works in a structured and reliable way.
Users should know, though, that results develop gradually: legal responses take time, and many broker databases refresh constantly. In that context, Incogni is an excellent choice as a long-term privacy management tool focused on steadily reducing how broadly your personal data circulates online.
Advertisement
FAQ
Will using Incogni lead to a measurable drop in unwanted spam?
Yes. Users typically report a significant decrease in marketing calls and emails, as the service uses legal erasure requests to force data brokers to delete your information.
What is the expected timeframe for completing data deletions?
While initial requests are generally dispatched within 24 hours, brokers often take 30 to 45 days to comply. Incogni continues to monitor these brokers and resends requests if they do not respond.
Which personal details do I need to provide during account setup?
To identify your records accurately, Incogni requires your full name, email address, and physical address. You can provide multiple variations of these details to ensure older or alternate profiles are caught.
Can I manage my data removal via a dedicated mobile application?
Yes, Incogni offers an Android app for mobile management, though its full suite of features is primarily accessible through a standard web browser.
What should I know before trying to cancel a bundled subscription?
If your Incogni service is part of a package like Surfshark One+, you must manage the cancellation through that specific provider’s billing department.
TriZetto Provider Solutions, a healthcare IT company that develops software and services used by health insurers and healthcare providers, has suffered a data breach that exposed the sensitive information of over 3.4 million people.
The firm, which has been operating under the Cognizant umbrella since 2014, disclosed that it detected suspicious activity on a web portal on October 2, 2025, and launched an investigation with the help of external cybersecurity experts.
The investigation revealed that unauthorized access began nearly a year before, on November 19, 2024.
During the exposure period, the threat actors accessed records relating to insurance eligibility verification transactions, which are part of the process providers use to confirm a patient’s insurance coverage before treatment.
The types of data that have been exposed vary per individual, and may include one or more of the following:
Full names
Physical address
Date of birth
Social Security number
Health insurance member number
Medicare beneficiary identifier
Provider name
Health insurer name
Demographic, health, and insurance information
Affected providers were alerted on December 9, 2025, but customer notifications started in early February 2026. According to a filing submitted to Maine’s Attorney General today, the number of exposed individuals is 3,433,965.
TriZetto says that payment card, bank account, or other financial information was not exposed in this incident.
Also, the company is not aware of any cases where cybercriminals have attempted to misuse this information.
TriZetto says it has taken steps to strengthen cybersecurity on its systems and informed law enforcement authorities of the incident.
Notification recipients are offered 12 months of free credit monitoring and identity protection services from Kroll to help mitigate risks arising from the compromised data.
BleepingComputer has contacted TriZetto to learn more about the nature of the security breach and why the firm delayed notifications to consumers for several months, but we have not received a response by publication time.
No ransomware groups have taken responsibility for the attack yet, and no data leaks linked to TriZetto have appeared on underground forums.
Cognizant itself was rumored to have suffered a Maze ransomware breach in 2020. In June 2025, Clorox sued the IT firm for gross negligence after it allegedly let Scattered Spider operatives into its network following a social engineering attack in September 2023.
Ubisoft has finally confirmed what Assassin’s Creed fans have suspected for years: a remake of Assassin’s Creed IV: Black Flag is officially in the works.
The company revealed the project, titled Assassin’s Creed: Black Flag Resynced, in a new blog post outlining the future of the long-running series.
We don’t know much about the game yet, but initial reports suggest that Resynced will be a full remake rather than a simple remaster, with upgraded visuals and gameplay improvements, bringing one of the best AC games into the modern age.
It’s also suggested that new story content will be added to flesh out the world around Edward Kenway’s life, at the expense of the modern-day gameplay, which has apparently been removed from the remake altogether. It’ll be interesting to see how this all works, given how the original game weaved parts of both storylines into the ending.
We’ve known for quite some time that Ubisoft has been thinking about breathing life into the 2013 game, but this was more or less confirmed when the name surfaced on a European ratings board listing late last year.
We don’t yet have a release date for the game, but we know that an unannounced game was due to arrive before the end of the current financial year. Of course, Ubisoft delayed seven games earlier this year – and Black Flag is expected to be one of them.
Whether or not we see the game before the end of 2026 remains to be seen, but for now we’ll keep our “spyglass on the horizon”.
Grayson Shor, far right, at a recent Pacific Northwest Battery Collaborative meet up at a Seattle brewery on Capitol Hill. Shor launched the organization to help the sector build connections. (PNWBC Photo)
The collaborative’s launch in October 2024 was so popular it ran out of chairs and the group now caps RSVPs because venues keep maxing out. The nonprofit has hosted 1,400 attendees at 17 different events in Washington, Oregon and online. Shor’s latest project is helping create a battery-focused mini-series he describes as a hybrid between Anthony Bourdain’s “Parts Unknown” and “Cosmos.”
Who knew that energy storage devices could generate so much enthusiasm?
“Batteries are sexy right now,” Shor said.
Batteries are making electric vehicle adoption more attractive as they’ve become increasingly powerful and quicker to recharge. They’re ubiquitous given the pervasive use of phones and consumer electronics. And as electricity demand is spiking thanks to data centers and other energy users, they’re a relatively quick, affordable way to add more power to the grid.
“We are installing more grid batteries in 2025 than the total amount that existed globally just two years ago,” Shor said. “This isn’t just growth, it’s a total reimagining of how our economy is powered.”
A battery ecosystem emerges
Part of the crowd at the Pacific Northwest Battery Collaborative launch party, with founder Grayson Shor in the front row in a tie. (PNWBC Photo)
Shor has spent nearly a decade working on sustainability, circular economy and battery-related issues for organizations ranging from the U.S. Department of State to Amazon to startups. When the former diplomat landed in Seattle from the other Washington more than two years ago, he was impressed by the region’s battery sector.
That included startups in electric aviation, alternative chemistries such as sodium batteries, and next-generation silicon battery materials, plus R&D resources and support at the University of Washington’s Clean Energy Institute.
But he realized the industry lacked the connections to bring together companies, academics, entrepreneurs and investors, and he set out to address that gap. The sector has welcomed his efforts.
“I’ve paid attention to folks trying to knit together community, and for the Northwest battery innovation and application ecosystem, Grayson Shor has been an unrelenting force seeking to build and amplify our unique strengths,” said Dan Schwartz, founding director of the Clean Energy Institute.
Tom Gurski, founder of the plug-in hybrid vehicle startup Blue Dot Motorworks, has attended the group’s functions. “In a region famous for introverted personalities their events and happy hours are invaluable for breaking down silos and getting people to connect,” Gurski said.
Beyond building community, Shor is lobbying for support for local and state policies that promote the industry and get more batteries deployed in the state. The energy storage devices have important societal benefits, he said, including better electrical grid performance and helping meet power needs during peak demand.
‘The Battery Life’
Shor speaking at a Pacific Northwest Battery Collaborative event in Seattle during 2025 PNW Climate Week. (PNWBC Photo)
Shor is also the co-founder and chief product officer for Buckstop, an “urban mining” startup helping recover critical minerals from waste electronics. He also volunteers as the policy and government affairs director for the Volta Foundation, the world’s largest battery industry association.
And there’s the TV series, called “The Battery Life.” Crews recently spent three days in the Seattle area filming the first episode, visiting the battery materials company Group14 Technologies and interviewing startups at the UW’s Clean Energy Test Beds.
“We’re doing walks through factories. We’re meeting with the CEOs and the inventors, diving deep into their technology,” Shor said. But the series also has “the ‘Carl Sagan vibe,’” he added, explaining “how does this technology actually impact humanity, and why does it matter to the average person?”
Additional episodes will be shot in Portland and Vancouver, B.C. The plan is to air the series later this year at energy events in Oregon and Las Vegas, plus other area venues.
Future Pacific Northwest Battery Collaborative plans include a job fair and fundraising gala. Shor also envisions a convention where the entrepreneurs and innovators could set up booths to show off their technologies. The ideas keep coming.
“This is playing my little role in trying to tackle climate change, to try to advance the energy transition,” he said. “It helps with equity, it helps with economic opportunity …. It makes me happy.”
Clocks come in many styles and sizes, with perhaps the most visually pleasing ones involving marbles. Watching these little spheres obey gravity and form clearly readable numbers on a clock has strong mesmerizing qualities. If you’re not into really big marble clocks, or cannot quite find the space for a desk-sized clock, then the tiny marble clock by [Jens] may be an option.
While he totally loved the massive marble clock that [Ivan Miranda] built, it’s a huge contraption that’s hard to justify as a permanent installation. His take on the concept thus makes it as small as possible by using a pick-and-place style arm to place the marbles instead. Although the marbles don’t do a lot of rolling this way, it’s decidedly quieter, replacing the rumbling and click-clacking of marbles with the smooth motion of a robotic arm.
Another benefit of this clock is that it’s cheap to make, with a price tag of less than $23. A big part of this is the use of cheap SG90 micro servos, and a permanent magnet along with a mechanism that pushes the marble off said magnet. Perhaps the biggest issue with this clock is that the arm somewhat obscures the time while it’s moving around, but it’s definitely another interesting addition to the gallery of marble clocks.
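For a sense of how such a clock might decide where to drop each marble, here’s a minimal Python sketch. It is not [Jens]’s actual firmware: the 3x5 dot-matrix font and the `placements_for_time` helper are purely illustrative, showing how a time string could be mapped to the grid cells the arm would visit.

```python
# Minimal sketch (not the actual firmware): map a time string to the grid
# coordinates where a pick-and-place arm would drop marbles.
# Each digit is a 3x5 dot-matrix pattern; a "1" means "place a marble here".

DIGIT_FONT = {
    "0": ["111", "101", "101", "101", "111"],
    "1": ["010", "110", "010", "010", "111"],
    "2": ["111", "001", "111", "100", "111"],
    # ... remaining digits omitted for brevity
}

def placements_for_time(hhmm: str, cell_w: int = 4) -> list[tuple[int, int]]:
    """Return (column, row) cells that should receive a marble."""
    coords = []
    for i, ch in enumerate(hhmm):
        if ch == ":":
            continue  # the colon position simply stays empty
        pattern = DIGIT_FONT[ch]
        x_offset = i * cell_w  # leave a one-cell gap between digits
        for row, bits in enumerate(pattern):
            for col, bit in enumerate(bits):
                if bit == "1":
                    coords.append((x_offset + col, row))
    return coords

print(placements_for_time("10:21"))  # cells the arm would visit, in order
```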
We have previously seen such clocks built out of wood and brass as well as 3D-printed using pendulum mechanisms, which can be made pretty compact as well, albeit with a more analog vibe.
Scenario Modeling and Array Design for Non-Terrestrial Networks (NTNs)
Non-terrestrial networks (NTNs) using low earth orbit (LEO) satellites present unique technical challenges, from managing large satellite constellations to ensuring reliable communication links. In this webinar, we’ll explore how to address these complexities using comprehensive modeling and simulation techniques. Discover how to model and analyze satellite orbits, onboard antennas and arrays, transmitter power amplifiers (PAs), signal propagation channels, and the RF and digital receiver segments—all within an integrated workflow. Learn the importance of including every link component to achieve accurate, reliable system performance.
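As a taste of the link-level arithmetic such a workflow chains together, here is a minimal Python sketch of one downlink budget using the standard free-space path loss formula. The altitude, frequency, EIRP, and antenna gain figures are illustrative assumptions, not values from the webinar.

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB (standard formula)."""
    c = 299_792_458.0
    return (20 * math.log10(distance_m)
            + 20 * math.log10(freq_hz)
            + 20 * math.log10(4 * math.pi / c))

# Illustrative LEO downlink numbers (assumptions, not webinar values):
slant_range_m = 550e3   # satellite roughly overhead at 550 km altitude
freq_hz = 20e9          # Ka-band downlink
eirp_dbw = 40.0         # satellite EIRP
rx_gain_dbi = 35.0      # ground terminal antenna gain

loss = fspl_db(slant_range_m, freq_hz)
rx_power_dbw = eirp_dbw + rx_gain_dbi - loss
print(f"FSPL: {loss:.1f} dB, received carrier power: {rx_power_dbw:.1f} dBW")
```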
Highlights include:
Modeling large satellite constellations
Analyzing and visualizing time-varying visibility and link closure
Using graphical apps for antenna analysis and RF component design
Modeling PAs and digital predistortion
Simulating interference effects in communication links
Republican lawmakers in Utah have long been on the cutting edge of shitty policymaking when it comes to regulating the internet. The latest chapter in that legacy is a proposed tax on porn and adult content purchased in the state’s digital space.
Originally proposed by a pair of Republican lawmakers in the Utah state legislature earlier this year, Senate Bill (SB) 73 would levy a so-called “material harmful to minors” tax at 2 percent on revenues generated by the sale of online porn (it was originally 7 percent). Having been amended and passed through the state Senate with considerable support, SB 73 is on track to clear the hurdles of the House of Representatives and be signed into law by Gov. Spencer Cox, a Republican and staunch anti-pornography activist like the bill’s sponsors.
This activism from Gov. Cox and the sponsors of porn tax bill—Republican state Sen. Calvin R. Musselman and state Rep. Steve Eliason—could presage a far more corrosive and expansive campaign against civil liberties and key freedom of expression protections that cover sexually-related speech.
First off, SB 73 would fund a variety of efforts for Utah’s state government. Such efforts benefiting from the funds under the proposal would include enforcement efforts for the state’s social media and pornography age verification laws.
But the bill goes further, especially after several rounds of amendments in the Senate and the House added language covering web traffic sourced from virtual private networks (VPNs) and other proxies. The bill would make it illegal to circumvent content blocks implemented by platforms due to local age verification laws, punishable by a bevy of civil penalties. More extreme still is a provision that would also make it illegal for websites covered by age verification laws (e.g., a porn site) to offer Utah-based users information about securely using VPNs to get around any content blocks.
Consider the following language in the current form of Senate Bill 73 regarding VPN “facilitation”:
“A commercial entity that operates a website that contains a substantial portion of material harmful to minors may not facilitate or encourage the use of a virtual private network, proxy server, or other means to circumvent age verification requirements, including by providing: (a) instructions on how to use a virtual private network or proxy server to access the website; or (b) means for individuals in this state to circumvent geofencing or blocking.”
Utah’s bill doesn’t go that far on the concerns of records, but it certainly conjures up civil liberties concerns. Aside from the glaring privacy concerns related to age verification tech, Utah has no right to restrict the communications of a private company to its customers. This goes double for attempts to supersede interstate commerce on a category of products and services that are lawful. And don’t forget the dimensions of the porn tax. SB 73’s approach is expansive and blatantly violates the First Amendment rights of millions of people, not just those who live within the state boundaries of Utah.
The tax is a textbook “sin tax” of the kind a jurisdiction might levy on alcohol, tobacco, or gambling. But the difference between buying a six-pack of beer and wanking off alone in your home is that buying that beer from the liquor store isn’t considered expressive in nature. Producing, selling, and consuming pornography are matters of protected sexual speech so long as nothing illegal or criminal occurs. Porn taxes like the one proposed in SB 73 explicitly define “covered entities” to include all entities that sell adult content through clip sales, subscriptions, and fan sites. Revenues from total Utah sales are then taxed at the 2 percent levy and paid to the state each year.
This might be an incidental bump in the road for many of the larger platforms, like Pornhub or OnlyFans, but this type of policymaking is a vindictive ploy to make operating a small and medium business in this space excruciatingly harder. I do see the Utah bill passing this legislative session, which would lead to a potential legal standoff in a federal courthouse. But I am not holding my breath for anything more beyond that.
Michael McGrady covers the tech and legal sides of the online porn business.
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.
The functional replication and actuation of complex structures inspired by nature is a longstanding goal for humanity. Creating such complex structures combining soft and rigid features and actuating them with artificial muscles would further our understanding of natural kinematic structures. We printed a biomimetic hand in a single print process comprised of a rigid skeleton, soft joint capsules, tendons, and printed touch sensors.
This is our latest work on the trajectory planning method for floating-based articulated robots, enabling the global path searching in complex and cluttered environments.
OmniPlanner is a unified solution for exploration and inspection path planning (as well as target reach) across aerial, ground, and underwater robots. It has been verified through extensive simulations and a multitude of field tests, including in underground mines, ballast water tanks, forests, university buildings, and submarine bunkers.
In the ARISE project, the FZI Research Center for Information Technology and its international partners ETH Zurich, University of Zurich, University of Bern, and University of Basel took a major step toward future lunar missions by testing cooperative autonomous multi-robot teams under outdoor conditions.
‘Gugusse and the Automaton’ is an 1897 French film by Georges Méliès featuring a humanoid robot in nearly as realistic a way as some of the humanoid promo videos we’ve seen lately.
Anca Dragan is no stranger to Waymo. She worked with us for six years while also at UC Berkeley and, now, Google DeepMind. Her focus on making AI safer helped Waymo as it launched commercially. In this final episode of our season, Anca describes how her work enables AI agents to work fluently with people, based on human goals and values.
This UPenn GRASP SFI Seminar is by Junyao Shi, on “Unlocking Generalist Robots with Human Data and Foundation Models.”
Building general-purpose robots remains fundamentally constrained by data scarcity and labor-intensive engineering. Unlike vision and language, robotics lacks large, diverse datasets spanning tasks, environments, and embodiments, limiting both scalability and generalization. This talk explores how human data and foundation models trained at scale can help overcome these bottlenecks.
If you’ve spent any time following gaming news in early 2026, you might think the end of Xbox is right around the corner. Between reports of a 32% year-over-year drop in hardware revenue, the sudden departure of longtime Xbox boss Phil Spencer, and wild speculation that Microsoft might pivot the entire gaming division toward AI, the internet has been flooded with dramatic takes about the “death of Xbox.”
But the eulogies are premature. Despite the noise, Xbox still sits on one of the most powerful portfolios in gaming, including Halo, Forza, Gears of War, Call of Duty, Minecraft, and more. Microsoft also has the financial backing, infrastructure, and studio network to remain a major player for decades. The real issue isn’t survival, but identity.
You see, for several years, Xbox leadership pushed an ambitious idea that “every screen is an Xbox.” The strategy expanded the brand through cloud gaming, PC integration, and Game Pass across multiple platforms. While that approach broadened reach, it also created confusion about what Xbox actually is. Now, under the new leadership of Microsoft Gaming CEO Asha Sharma, the company appears to be acknowledging that confusion and attempting a course correction.
Sharma recently confirmed Project Helix, the codename for Xbox’s next-generation hardware, promising a device that will “lead in performance and play your Xbox and PC games.” That announcement alone signals a shift in direction. Xbox isn’t ending, but it is entering a critical rebuilding phase. And if the company wants to return to its former glory, experts and players alike largely agree that three major changes are essential.
1. Nail the execution of Project Helix
One of the biggest challenges Xbox faces today is simple: many players aren’t sure why they should buy an Xbox console anymore.
If the same games appear on PC, and sometimes even on rival platforms, what makes the Xbox console special? That’s where Project Helix could become the most important product Microsoft has released in years. Rumored for a 2027 launch, Helix is expected to be a hybrid system, essentially a powerful AMD-powered console running a “console-ized” version of Windows. The promise is compelling: the simplicity of a traditional console combined with the flexibility of a gaming PC.
Imagine a device that boots straight into a controller-friendly interface but also lets players access platforms like Steam or Epic from the living room. If done right, Helix could blur the line between PC and console in a way no competitor currently offers. But execution will determine everything. Helix must never feel like a desktop computer awkwardly connected to a TV. Instead, it needs to launch into a seamless controller-first experience, like the “Xbox Full Screen Experience” we saw on the ROG Xbox Ally, preserving the plug-and-play simplicity that console players expect.
If Microsoft can successfully merge the PC and console ecosystems without sacrificing ease of use, Helix won’t just save Xbox hardware; it could redefine what a console is. Yes, it’s likely going to be expensive, with rumors suggesting a price tag that could cross the $1,000 mark. But Xbox could still justify that premium if it delivers on the other two pillars that matter just as much.
2. Let the studios deliver the games
The second major fix is both obvious and unavoidable: Xbox needs more great games, more consistently.
Over the past decade, Microsoft has spent nearly $100 billion acquiring studios, including Bethesda and Activision Blizzard. On paper, that gives Xbox one of the strongest first-party lineups in gaming history. Yet the results have been uneven. Franchises like Halo, Gears of War, and Forza, once the backbone of the platform, have seen long development gaps. Meanwhile, studio closures, layoffs, and shifting corporate priorities have created uncertainty inside Microsoft’s gaming division.
Adding to the concern, when Sharma took over, some players worried that her background in AI-driven tech companies might push Xbox toward algorithm-generated content. Thankfully, she quickly pushed back on that idea, stating that Microsoft will not “chase short-term efficiency or flood our ecosystem with soulless AI slop.” Now the company needs to prove it.
Microsoft now owns some of the most talented developers in the world. What they need most is stability. Fewer shifting mandates, fewer corporate interruptions, and enough time to create the kind of system-defining games that drive entire console generations. Because ultimately, subscriptions and hardware don’t sell themselves. Great games do. The upcoming Forza Horizon 6 is already generating plenty of buzz and appears well on track to be a major success. However, Microsoft will need a steady stream of titles, especially strong exclusives, if it hopes to match the kind of consistent first-party momentum Sony has built on the PlayStation side.
3. Rebuild the culture around Xbox
Finally, there’s one part of the Xbox experience that often gets overlooked: the community culture. For many fans, the Xbox 360 era still feels like the golden age of the platform. Profiles felt personal, avatars actually mattered, and the dashboard felt like a social space where gamers could hang out. It wasn’t just a storefront pushing subscriptions and ads.
Over time, much of that personality has disappeared. Today, the Xbox dashboard is often criticized for feeling cluttered with Game Pass promotions and advertisements. Across communities like Reddit, ResetEra, and Xbox Insider forums, the message from players is clear: bring back the personality. Fans want things like dynamic themes, meaningful achievement rewards, deeper avatar integration, and more ways to personalize the UI so the console feels like their space again.
Players are also asking Xbox to double down on something it once did better than anyone else: game preservation. The Backward Compatibility program was hugely popular, and with Activision Blizzard now under Microsoft’s umbrella, fans want to see classic titles return. If Xbox can become the place where decades of gaming history remain playable on modern hardware, it could turn preservation into one of its biggest strengths.
The road back
Long story short, Xbox isn’t going anywhere anytime soon. The brand still holds enormous influence in the gaming industry, backed by Microsoft’s resources and a massive network of studios and services. However, the platform is at a turning point.
For Xbox to truly thrive again, the solution isn’t chasing every new trend. It’s about focusing on the basics: delivering great games consistently, launching a strong next-generation hardware platform, and reconnecting with the community that built the brand. If Microsoft gets these fundamentals right, the “Xbox is dying” narrative could quickly fade, and the next chapter of Xbox might end up being its most exciting yet.
MSI MEG Vision X AI 13.3-inch touchscreen doubles as a monitoring hub for creatives and professionals
GPU selection dictates performance for gaming, rendering, and professional workloads alike
Lobster-like chassis combines expandability with unconventional aesthetics
MSI has launched the MEG Vision X AI series, a barebones all-in-one PC which combines high-end gaming hardware with a strikingly unconventional design.
The system features a full-size tower measuring 299.3mm wide, 502.7mm deep, and 423.4mm tall, weighing approximately 18.3kg, and a PS3-esque appendage and protrusions that suggest both function and a distinctive aesthetic.
The device includes a 13.3-inch touchscreen intended for system monitoring, quick toggles, or dedicated status displays, allowing creatives to access software shortcuts, monitor rendering progress, or adjust project settings without switching focus from their primary display.
Interactive touchscreen enhances workflow and monitoring
The unique look of this device prompted TechRadar Pro editor Desire Athow to quip that the casing resembled “a lobster that hadn’t completely shed its hard exoskeleton to grow.” The comparison captures the layered, almost organic appearance of the chassis and the sense of a device that is both protective and expandable, housing high-end components while presenting a unique surface.
MSI appears to have embraced this aesthetic to showcase the interactive touchscreen while accommodating a full-size tower structure capable of housing top-tier components.
The device is larger than regular compact all-in-one PCs, suggesting the company prioritizes cooling, power delivery, and expandability over minimalism.
Performance is anchored by Intel’s Core Ultra 7 265K CPU on a Z890 platform, paired with 64GB of DDR5 memory.
GPU options split the series into two clear tiers, a GeForce RTX 5080X configuration at $4,640 and a GeForce RTX 5070 Ti model at $4,082.
MSI indicates that CPU and RAM are consistent across models, meaning buyers make performance choices largely through GPU selection.
This ensures that professional applications like 3D rendering, video editing, and simulation software benefit from dedicated GPU acceleration alongside gaming performance.
The MEG Vision X AI supports both wired and wireless connections, with Intel Killer E5000 5GbE for the former and Wi-Fi 7 or Bluetooth 5.4 for the latter.
It also includes two Thunderbolt 4 ports, which support fast external storage, docking, or display expansion.
This connectivity allows professionals to attach high-speed NVMe drives or multi-monitor setups, which can streamline workflows for designers, animators, and video editors.
Power is supplied by an 850W 80 PLUS Gold PSU, providing adequate headroom for sustained GPU loads.
Although the primary audience for the device is gamers, its hardware and expandability suggest it could also serve as a versatile platform for creators who require both raw performance and reliable workstation capabilities.
Enterprise AI applications that handle large documents or long-horizon tasks face a severe memory bottleneck. As the context grows longer, so does the KV cache, the area where the model’s working memory is stored.
A new technique developed by researchers at MIT addresses this challenge with a fast compression method for the KV cache. The technique, called Attention Matching, manages to compact the context by up to 50x with very little loss in quality.
While it is not the only memory compaction technique available, Attention Matching stands out for its execution speed and impressive information-preserving capabilities.
The memory bottleneck of the KV cache
Large language models generate their responses sequentially, one token at a time. To avoid recalculating the entire conversation history from scratch for every predicted word, the model stores a mathematical representation of every previous token it has processed, also known as the key and value pairs. This critical working memory is known as the KV cache.
The KV cache scales with conversation length because the model is forced to retain these keys and values for all previous tokens in a given interaction. This consumes expensive hardware resources. “In practice, KV cache memory is the biggest bottleneck to serving models at ultra-long context,” Adam Zweiger, co-author of the paper, told VentureBeat. “It caps concurrency, forces smaller batches, and/or requires more aggressive offloading.”
In modern enterprise use cases, such as analyzing massive legal contracts, maintaining multi-session customer dialogues, or running autonomous coding agents, the KV cache can balloon to many gigabytes of memory for a single user request.
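To make that scale concrete, here is a rough back-of-the-envelope calculation. It is a sketch assuming fp16 storage and parameters loosely resembling a Llama-3.1-8B-style model with grouped-query attention; exact figures vary by model and implementation.

```python
def kv_cache_bytes(seq_len, n_layers, n_kv_heads, head_dim, dtype_bytes=2, batch=1):
    """Rough KV cache size: keys + values stored for every layer and token."""
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * dtype_bytes * batch

# Assumed parameters approximating a Llama-3.1-8B-style model (GQA, fp16).
size = kv_cache_bytes(seq_len=128_000, n_layers=32, n_kv_heads=8, head_dim=128)
print(f"~{size / 1e9:.1f} GB of KV cache for a single 128k-token request")
```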
To solve this massive bottleneck, the AI industry has tried several strategies, but these methods fall short when deployed in enterprise environments where extreme compression is necessary. A class of technical fixes includes optimizing the KV cache by either evicting tokens the model deems less important or merging similar tokens into a single representation. These techniques work for mild compression but “degrade rapidly at high reduction ratios,” according to the authors.
Real-world applications often rely on simpler techniques, with the most common approach being to simply drop the older context once the memory limit is reached. But this approach causes the model to lose older information as the context grows long. Another alternative is context summarization, where the system pauses, writes a short text summary of the older context, and replaces the original memory with that summary. While this is an industry standard, summarization is highly lossy and heavily damages downstream performance because it might remove pertinent information from the context.
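The simplest of those workarounds, dropping the oldest context, amounts to little more than a sliding window over the cache. The following toy NumPy snippet illustrates that baseline in the abstract; it is not any particular framework's API.

```python
import numpy as np

def truncate_kv(keys: np.ndarray, values: np.ndarray, max_tokens: int):
    """Naive sliding-window baseline: once the cache exceeds the budget,
    drop the oldest entries. Cheap, but older information is lost entirely."""
    if keys.shape[0] <= max_tokens:
        return keys, values
    return keys[-max_tokens:], values[-max_tokens:]

# Toy usage: a 5,000-token cache trimmed to a 4,096-token budget.
keys, values = np.zeros((5000, 128)), np.zeros((5000, 128))
keys, values = truncate_kv(keys, values, max_tokens=4096)
```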
Recent research has proven that it is technically possible to highly compress this memory using a method called Cartridges. However, this approach requires training latent KV cache models through slow, end-to-end mathematical optimization. This gradient-based training can take several hours on expensive GPUs just to compress a single context, making it completely unviable for real-time enterprise applications.
How attention matching compresses without the cost
Attention Matching achieves high-level compaction ratios and quality while being orders of magnitude faster than gradient-based optimization. It bypasses the slow training process through clever mathematical tricks.
The researchers realized that to perfectly mimic how an AI interacts with its memory, they need to preserve two mathematical properties when compressing the original key and value vectors into a smaller footprint. The first is the “attention output,” which is the actual information the AI extracts when it queries its memory. The second is the “attention mass,” which acts as the mathematical weight that a token has relative to everything else in the model’s working memory. If the compressed memory can match these two properties, it will behave exactly like the massive, original memory, even when new, unpredictable user prompts are added later.
“Attention Matching is, in some ways, the ‘correct’ objective for doing latent context compaction in that it directly targets preserving the behavior of each attention head after compaction,” Zweiger said. While token-dropping and related heuristics can work, explicitly matching attention behavior simply leads to better results.
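Concretely, for a single attention head those two quantities can be written down in a few lines. The NumPy sketch below is a toy, single-head illustration of what "attention output" and "attention mass" refer to; it is not the paper's implementation.

```python
import numpy as np

def attention_stats(q, K, V):
    """For one query q: the attention output (what the head reads from the
    cache) and the attention mass (softmax weight) each cached token gets.
    Attention Matching aims to preserve both after compaction."""
    scores = K @ q / np.sqrt(q.shape[0])   # scaled dot-product scores
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()               # attention mass per cached token
    output = weights @ V                   # attention output read from the cache
    return output, weights

# Toy usage with a random 100-token, 64-dimensional cache.
rng = np.random.default_rng(0)
K, V = rng.normal(size=(100, 64)), rng.normal(size=(100, 64))
q = rng.normal(size=64)
out, mass = attention_stats(q, K, V)
```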
Before compressing the memory, the system generates a small set of “reference queries” that act as a proxy for the types of internal searches the model is likely to perform when reasoning about the specific context. If the compressed memory can accurately answer these reference queries, it will very likely succeed at answering the user’s actual questions later. The authors suggest various methods for generating these reference queries, including appending a hidden prompt to the document telling the model to repeat the previous context, known as the “repeat-prefill” technique. They also suggest a “self-study” approach where the model is prompted to perform a few quick synthetic tasks on the document, such as aggregating all key facts or structuring dates and numbers into a JSON format.
With these queries in hand, the system picks a set of keys to preserve in the compacted KV cache based on signals like the highest attention value. It then uses the keys and reference queries to calculate the matching values along with a scalar bias term. This bias ensures that pertinent information is preserved, allowing each retained key to represent the mass of many removed keys.
This formulation makes it possible to fit the values with simple algebraic techniques, such as ordinary least squares and nonnegative least squares, entirely avoiding compute-heavy gradient-based optimization. This is what makes Attention Matching super fast in comparison to optimization-heavy compaction methods. The researchers also apply chunked compaction, processing contiguous chunks of the input independently and concatenating them, to further improve performance on long contexts.
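A simplified sketch of that fitting step is shown below, using ordinary least squares over a set of reference queries. It omits the attention-mass bias term and the chunked compaction described above, so it illustrates the idea rather than reproducing the authors' exact formulation.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def compact_values(Q_ref, K, V, keep_idx):
    """Fit new values for the kept keys so that, on the reference queries,
    the compacted cache reproduces the original attention outputs.
    Simplified: single head, no attention-mass bias term, no chunking."""
    d = K.shape[1]
    W_full = softmax(Q_ref @ K.T / np.sqrt(d), axis=-1)   # (m, n) full attention
    targets = W_full @ V                                   # original attention outputs
    K_c = K[keep_idx]
    A = softmax(Q_ref @ K_c.T / np.sqrt(d), axis=-1)       # (m, n_kept) compacted attention
    V_c, *_ = np.linalg.lstsq(A, targets, rcond=None)      # ordinary least squares
    return K_c, V_c

# Toy usage: keep the 20 keys that receive the most total attention mass
# from the reference queries, then fit values for them.
rng = np.random.default_rng(0)
K, V = rng.normal(size=(1000, 64)), rng.normal(size=(1000, 64))
Q_ref = rng.normal(size=(32, 64))
mass = softmax(Q_ref @ K.T / np.sqrt(64), axis=-1).sum(axis=0)
keep_idx = np.argsort(mass)[-20:]
K_c, V_c = compact_values(Q_ref, K, V, keep_idx)
```

Because the fit is a closed-form linear solve rather than gradient descent, it takes seconds per context rather than the hours Cartridges needs, which is the speed advantage the researchers emphasize.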
Attention matching in action
To understand how this method performs in the real world, the researchers ran a series of stress tests using popular open-source models like Llama 3.1 and Qwen-3 on two distinct types of enterprise datasets. The first was QuALITY, a standard reading comprehension benchmark using 5,000 to 8,000-word documents. The second, representing a true enterprise challenge, was LongHealth, a highly dense, 60,000-token dataset containing the complex medical records of multiple patients.
The key finding was the ability of Attention Matching to compact the model’s KV cache by 50x without reducing the accuracy, while taking only seconds to process the documents. To achieve that same level of quality previously, Cartridges required hours of intensive GPU computation per context.
Attention Matching with Qwen-3 (source: arXiv)
When dealing with the dense medical records, standard industry workarounds completely collapsed. The researchers noted that when they tried to use standard text summarization on these patient records, the model’s accuracy dropped so low that it matched the “no-context” baseline, meaning the AI performed as if it had not read the document at all.
Attention Matching drastically outperforms summarization, but enterprise architects will need to dial down the compression ratio for dense tasks compared to simpler reading comprehension tests. As Zweiger explains, “The main practical tradeoff is that if you are trying to preserve nearly everything in-context on highly information-dense tasks, you generally need a milder compaction ratio to retain strong accuracy.”
The researchers also explored what happens in cases where absolute precision isn’t necessary but extreme memory savings are. They ran Attention Matching on top of a standard text summary. This combined approach achieved 200x compression. It successfully matched the accuracy of standard summarization alone, but with a very small memory footprint.
One of the interesting experiments for enterprise workflows was testing online compaction, though they note that this is a proof of concept and has not been tested rigorously in production environments. The researchers tested the model on the advanced AIME math reasoning test. They forced the AI to solve a problem with a strictly capped physical memory limit. Whenever the model’s memory filled up, the system paused, instantly compressed its working memory by 50 percent using Attention Matching, and let it continue thinking. Even after hitting the memory wall and having its KV cache shrunk up to six consecutive times mid-thought, the model successfully solved the math problems. Its performance matched a model that had been given massive, unlimited memory.
There are caveats to consider. At a 50x compression ratio, Attention Matching is the clear winner in balancing speed and quality. However, if an enterprise attempts to push compression to extreme 100x limits on highly complex data, the slower, gradient-based Cartridges method actually outperforms it.
The researchers have released the code for Attention Matching. However, they note that this is not currently a simple plug-and-play software update. “I think latent compaction is best considered a model-layer technique,” Zweiger notes. “While it can be applied on top of any existing model, it requires access to model weights.” This means enterprises relying entirely on closed APIs cannot implement this themselves; they need open-weight models.
The authors note that integrating this latent-space KV compaction into existing, highly optimized commercial inference engines still requires significant effort. Modern AI infrastructure uses complex tricks like prefix caching and variable-length memory packing to keep servers running efficiently, and seamlessly weaving this new compaction technique into those existing systems will take dedicated engineering work. However, there are immediate enterprise applications. “We believe compaction after ingestion is a promising use case, where large tool call outputs or long documents are compacted right after being processed,” Zweiger said.
Ultimately, the shift toward mechanical, latent-space compaction aligns with the future product roadmaps of major AI players, Zweiger argues. “We are seeing compaction to shift from something enterprises implement themselves into something model providers ship,” Zweiger said. “This is even more true for latent compaction, where access to model weights is needed. For example, OpenAI now exposes a black-box compaction endpoint that returns an opaque object rather than a plain-text summary.”