Tech

Why This Is the Worst Crypto Winter Ever


Bitcoin has fallen roughly 44% from its October peak, and while the drawdown isn’t crypto’s deepest ever on a percentage basis, Bloomberg’s Odd Lots newsletter lays out a case that this is the industry’s worst winter yet. The macro backdrop was supposed to favor Bitcoin: public confidence in the dollar is shaky, the Trump administration has been crypto-friendly, and fiat currencies are under perceived stress globally. Yet gold, not Bitcoin, has been the safe haven of choice.

The “we’re so early” narrative is dead — crypto ETFs exist, barriers to entry are zero, and the online community that once rallied holders through downturns has largely hollowed out. Institutional adoption arrived but hasn’t lifted existing tokens like ETH or SOL; Wall Street cares about stablecoins and tokenization, not the coins themselves. AI is pulling both talent and miners toward data centers. Quantum computing advances threaten Bitcoin’s encryption. And MicroStrategy and other Bitcoin treasury companies, once steady buyers during the bull run, are now large holders who may eventually become forced sellers.



Artificial Muscles, Boston Dynamics, and More Videos


Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICRA 2026: 1–5 June 2026, VIENNA

Enjoy today’s videos!

The functional replication and actuation of complex structures inspired by nature is a longstanding goal for humanity. Creating such complex structures combining soft and rigid features and actuating them with artificial muscles would further our understanding of natural kinematic structures. We printed a biomimetic hand in a single print process, comprising a rigid skeleton, soft joint capsules, tendons, and printed touch sensors.

[ Paper ] via [ SRL ]


Two Boston Dynamics product managers talk about their favorite classic BD robots, and then I talk about mine.

And this is Boston Dynamics’ LittleDog, doing legged locomotion research 16 or so years ago in what I’m pretty sure is Katie Byl’s lab at UCSB.


[ Boston Dynamics ]

This is our latest work on a trajectory planning method for floating-base articulated robots, enabling global path searching in complex and cluttered environments.

[ DRAGON Lab ]

Thanks, Moju!


OmniPlanner is a unified solution for exploration and inspection path planning (as well as target reach) across aerial, ground, and underwater robots. It has been verified through extensive simulations and a multitude of field tests, including in underground mines, ballast water tanks, forests, university buildings, and submarine bunkers.

[ NTNU ]

Thanks, Kostas!

In the ARISE project, the FZI Research Center for Information Technology and its international partners ETH Zurich, University of Zurich, University of Bern, and University of Basel took a major step toward future lunar missions by testing cooperative autonomous multi-robot teams under outdoor conditions.


[ FZI ]

Welcome to the future, where there are no other humans.

[ Zhejiang Humanoid ]


This is our latest work on robotic fish, and is also the first underwater robot of DRAGON Lab.

[ DRAGON Lab ]

Thanks, Moju!

Watch this one simple trick to make humanoid robots cheaper and safer!


[ Zhejiang Humanoid ]

‘Gugusse and the Automaton’ is an 1897 French film by Georges Méliès featuring a humanoid robot nearly as realistically as some of the humanoid promo videos we’ve seen lately.


[ Library of Congress ] via [ Gizmodo ]

At Agility, we create automated solutions for the hardest work. We’re incredibly proud of how far we’ve come, and can’t wait to show you what’s next.

[ Agility ]


[ Humanoids Summit ]

Anca Dragan is no stranger to Waymo. She worked with us for six years while also at UC Berkeley and now, Google DeepMind. Her focus on making AI safer helped Waymo as it launched commercially. In this final episode of our season, Anca describes how her work enables AI agents to work fluently with people, based on human goals and values.

[ Waymo Podcast ]

This UPenn GRASP SFI Seminar is by Junyao Shi, on “Unlocking Generalist Robots with Human Data and Foundation Models.”


Building general-purpose robots remains fundamentally constrained by data scarcity and labor-intensive engineering. Unlike vision and language, robotics lacks large, diverse datasets spanning tasks, environments, and embodiments, limiting both scalability and generalization. This talk explores how human data and foundation models trained at scale can help overcome these bottlenecks.

[ UPenn ]



The Xbox isn’t ending, but it needs these 3 changes to return to glory


If you’ve spent any time following gaming news in early 2026, you might think the end of Xbox is right around the corner. Between reports of a 32% year-over-year drop in hardware revenue, the sudden departure of longtime Xbox boss Phil Spencer, and wild speculation that Microsoft might pivot the entire gaming division toward AI, the internet has been flooded with dramatic takes about the “death of Xbox.”

But the eulogies are premature. Despite the noise, Xbox still sits on one of the most powerful portfolios in gaming, including Halo, Forza, Gears of War, Call of Duty, Minecraft, and more. Microsoft also has the financial backing, infrastructure, and studio network to remain a major player for decades. The real issue isn’t survival, but identity.

You see, for several years, Xbox leadership pushed an ambitious idea that “every screen is an Xbox.” The strategy expanded the brand through cloud gaming, PC integration, and Game Pass across multiple platforms. While that approach broadened reach, it also created confusion about what Xbox actually is. Now, under the new leadership of Microsoft Gaming CEO Asha Sharma, the company appears to be acknowledging that confusion and attempting a course correction.

Sharma recently confirmed Project Helix, the codename for Xbox’s next-generation hardware, promising a device that will “lead in performance and play your Xbox and PC games.” That announcement alone signals a shift in direction. Xbox isn’t ending, but it is entering a critical rebuilding phase. And if the company wants to return to its former glory, experts and players alike largely agree that three major changes are essential.

1. Nail the execution of Project Helix

One of the biggest challenges Xbox faces today is simple: many players aren’t sure why they should buy an Xbox console anymore.


If the same games appear on PC, and sometimes even on rival platforms, what makes the Xbox console special? That’s where Project Helix could become the most important product Microsoft has released in years. Rumored for a 2027 launch, Helix is expected to be a hybrid system, essentially a powerful AMD-powered console running a “console-ized” version of Windows. The promise is compelling: the simplicity of a traditional console combined with the flexibility of a gaming PC.

Imagine a device that boots straight into a controller-friendly interface but also lets players access platforms like Steam or Epic from the living room. If done right, Helix could blur the line between PC and console in a way no competitor currently offers. But execution will determine everything. Helix must never feel like a desktop computer awkwardly connected to a TV. Instead, it needs to launch into a seamless controller-first experience, like the “Xbox Full Screen Experience” we saw on the ROG Xbox Ally, preserving the plug-and-play simplicity that console players expect.

If Microsoft can successfully merge the PC and console ecosystems without sacrificing ease of use, Helix won’t just save Xbox hardware; it could redefine what a console is. Yes, it’s likely going to be expensive, with rumors suggesting a price tag that could cross the $1,000 mark. But Xbox could still justify that premium if it delivers on the other two pillars that matter just as much.

2. Let the studios deliver the games

The second major fix is both obvious and unavoidable: Xbox needs more great games, more consistently.


Over the past decade, Microsoft has spent nearly $100 billion acquiring studios, including Bethesda and Activision Blizzard. On paper, that gives Xbox one of the strongest first-party lineups in gaming history. Yet the results have been uneven. Franchises like Halo, Gears of War, and Forza, once the backbone of the platform, have seen long development gaps. Meanwhile, studio closures, layoffs, and shifting corporate priorities have created uncertainty inside Microsoft’s gaming division.

Adding to the unease, when Sharma took over, some players worried that her background in AI-driven tech companies might push Xbox toward algorithm-generated content. Thankfully, she quickly pushed back on that idea, stating that Microsoft will not “chase short-term efficiency or flood our ecosystem with soulless AI slop.” Now the company needs to prove it.

Microsoft now owns some of the most talented developers in the world. What those studios need most is stability: fewer shifting mandates, fewer corporate interruptions, and enough time to create the kind of system-defining games that drive entire console generations. Because ultimately, subscriptions and hardware don’t sell themselves. Great games do. The upcoming Forza Horizon 6 is already generating plenty of buzz and appears well on track to be a major success. However, Microsoft will need a steady stream of titles, especially strong exclusives, if it hopes to match the kind of consistent first-party momentum Sony has built on the PlayStation side.

3. Rebuild the culture around Xbox

Finally, there’s one part of the Xbox experience that often gets overlooked: the community culture. For many fans, the Xbox 360 era still feels like the golden age of the platform. Profiles felt personal, avatars actually mattered, and the dashboard felt like a social space where gamers could hang out. It wasn’t just a storefront pushing subscriptions and ads.

Over time, much of that personality has disappeared. Today, the Xbox dashboard is often criticized for feeling cluttered with Game Pass promotions and advertisements. Across communities like Reddit, ResetEra, and Xbox Insider forums, the message from players is clear: bring back the personality. Fans want things like dynamic themes, meaningful achievement rewards, deeper avatar integration, and more ways to personalize the UI so the console feels like their space again.

Players are also asking Xbox to double down on something it once did better than anyone else: game preservation. The Backward Compatibility program was hugely popular, and with Activision Blizzard now under Microsoft’s umbrella, fans want to see classic titles return. If Xbox can become the place where decades of gaming history remain playable on modern hardware, it could turn preservation into one of its biggest strengths.

The road back

Long story short, Xbox isn’t going anywhere anytime soon. The brand still holds enormous influence in the gaming industry, backed by Microsoft’s resources and a massive network of studios and services. However, the platform is at a turning point.

For Xbox to truly thrive again, the solution isn’t chasing every new trend. It’s about focusing on the basics: delivering great games consistently, launching a strong next-generation hardware platform, and reconnecting with the community that built the brand. If Microsoft gets these fundamentals right, the “Xbox is dying” narrative could quickly fade, and the next chapter of Xbox might end up being its most exciting yet.



MSI unveils a lobster-like PC with a 13.3-inch touchscreen, RTX 5080X, and a quirky design that defies all conventions



  • MSI MEG Vision X AI 13.3-inch touchscreen doubles as a monitoring hub for creatives and professionals
  • GPU selection dictates performance for gaming, rendering, and professional workloads alike
  • Lobster-like chassis combines expandability with unconventional aesthetics

MSI has launched the MEG Vision X AI series, a barebones all-in-one PC that combines high-end gaming hardware with a strikingly unconventional design.

The system is a full-size tower measuring 299.3mm wide, 502.7mm deep, and 423.4mm tall, weighing approximately 18.3kg, with a PS3-esque appendage and protrusions that suggest both function and a distinctive aesthetic.



New KV cache compaction technique cuts LLM memory 50x without accuracy loss


Enterprise AI applications that handle large documents or long-horizon tasks face a severe memory bottleneck. As the context grows longer, so does the KV cache, the area where the model’s working memory is stored.

A new technique developed by researchers at MIT addresses this challenge with a fast compression method for the KV cache. The technique, called Attention Matching, manages to compact the context by up to 50x with very little loss in quality.

While it is not the only memory compaction technique available, Attention Matching stands out for its execution speed and impressive information-preserving capabilities.

The memory bottleneck of the KV cache

Large language models generate their responses sequentially, one token at a time. To avoid recalculating the entire conversation history from scratch for every predicted word, the model stores a mathematical representation of every previous token it has processed, also known as the key and value pairs. This critical working memory is known as the KV cache.
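As a toy illustration of this mechanism, here is a minimal numpy sketch (a single attention head, no batching, and not any production implementation): each decoding step appends one key/value row to the cache and attends over the whole cached history.

```python
import numpy as np

def decode_step(q, K_cache, V_cache, k_new, v_new):
    """One decoding step for a single attention head: append the new
    token's key/value, then attend over the entire cached history."""
    K = np.vstack([K_cache, k_new])           # cache grows by one row per token
    V = np.vstack([V_cache, v_new])
    scores = K @ q / np.sqrt(q.shape[0])
    w = np.exp(scores - scores.max())
    w /= w.sum()                              # softmax attention weights
    return w @ V, K, V                        # output plus the updated cache

rng = np.random.default_rng(0)
d = 8
K, V = rng.normal(size=(4, d)), rng.normal(size=(4, d))   # 4 tokens already cached
out, K, V = decode_step(rng.normal(size=d), K, V,
                        rng.normal(size=(1, d)), rng.normal(size=(1, d)))
assert K.shape == (5, d)   # one more cached row than before the step
```

The point of the cache is visible in the sketch: nothing about earlier tokens is recomputed, but every earlier token must stay resident in memory.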


The KV cache scales with conversation length because the model is forced to retain these keys and values for all previous tokens in a given interaction. This consumes expensive hardware resources. “In practice, KV cache memory is the biggest bottleneck to serving models at ultra-long context,” Adam Zweiger, co-author of the paper, told VentureBeat. “It caps concurrency, forces smaller batches, and/or requires more aggressive offloading.”

In modern enterprise use cases, such as analyzing massive legal contracts, maintaining multi-session customer dialogues, or running autonomous coding agents, the KV cache can balloon to many gigabytes of memory for a single user request.
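A back-of-envelope sizing sketch shows why. The model shape below is an assumption roughly in the ballpark of a 70B-class open-weight model, not a figure from the article:

```python
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, seq_len, bytes_per_elem=2):
    """KV cache size: keys and values (the leading factor of 2) for every
    layer, KV head, and token, at fp16/bf16 (2 bytes per element)."""
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_elem

# Illustrative shape: 80 layers, 8 KV heads (grouped-query attention),
# head dimension 128, one request holding a 128k-token context.
gb = kv_cache_bytes(80, 8, 128, seq_len=128_000) / 1e9
print(f"{gb:.1f} GB")  # ~42 GB for a single 128k-token request
```

Even with grouped-query attention shrinking the KV head count, a single long-context request can claim tens of gigabytes, which is exactly the concurrency cap Zweiger describes.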

To solve this massive bottleneck, the AI industry has tried several strategies, but these methods fall short when deployed in enterprise environments where extreme compression is necessary. A class of technical fixes includes optimizing the KV cache by either evicting tokens the model deems less important or merging similar tokens into a single representation. These techniques work for mild compression but “degrade rapidly at high reduction ratios,” according to the authors.

Real-world applications often rely on simpler techniques, with the most common approach being to simply drop the older context once the memory limit is reached. But this approach causes the model to lose older information as the context grows long. Another alternative is context summarization, where the system pauses, writes a short text summary of the older context, and replaces the original memory with that summary. While this is an industry standard, summarization is highly lossy and heavily damages downstream performance because it might remove pertinent information from the context.
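The "drop the oldest context" fallback can be sketched in a few lines. This is a simplification: real serving stacks also handle position ids and attention sinks, which are omitted here.

```python
import numpy as np

def truncate_cache(K, V, max_tokens, keep_prefix=0):
    """Sliding-window fallback: when the cache exceeds its budget, keep
    an optional prefix (e.g. the system prompt) plus the most recent
    tokens, and drop everything in between."""
    if len(K) <= max_tokens:
        return K, V
    tail = max_tokens - keep_prefix
    idx = list(range(keep_prefix)) + list(range(len(K) - tail, len(K)))
    return K[idx], V[idx]

K = np.arange(10, dtype=float)[:, None].repeat(2, axis=1)  # 10 cached tokens
V = K.copy()
K2, V2 = truncate_cache(K, V, max_tokens=5, keep_prefix=2)
print(K2[:, 0])  # [0. 1. 7. 8. 9.] -> tokens 2..6 are simply forgotten
```

The dropped middle is unrecoverable, which is the lossiness the article contrasts with compaction approaches that try to preserve the information instead.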


Recent research has proven that it is technically possible to highly compress this memory using a method called Cartridges. However, this approach requires training latent KV cache models through slow, end-to-end mathematical optimization. This gradient-based training can take several hours on expensive GPUs just to compress a single context, making it completely unviable for real-time enterprise applications.

How attention matching compresses without the cost

Attention Matching achieves high-level compaction ratios and quality while being orders of magnitude faster than gradient-based optimization. It bypasses the slow training process through clever mathematical tricks.

The researchers realized that to perfectly mimic how an AI interacts with its memory, they need to preserve two mathematical properties when compressing the original key and value vectors into a smaller footprint. The first is the “attention output,” which is the actual information the AI extracts when it queries its memory. The second is the “attention mass,” which acts as the mathematical weight that a token has relative to everything else in the model’s working memory. If the compressed memory can match these two properties, it will behave exactly like the massive, original memory, even when new, unpredictable user prompts are added later. 
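Informally, the two quantities can be sketched as follows. This is a loose numpy illustration of the idea, assuming a single head and standard dot-product attention; the paper's exact formulation may differ.

```python
import numpy as np

def attn_stats(q, K, V, bias=None):
    """Readout and mass of one attention head over a (possibly
    compacted) KV cache, for a single query vector q."""
    logits = K @ q / np.sqrt(q.shape[0])
    if bias is not None:            # per-key scalar bias of a compacted cache
        logits = logits + bias
    w = np.exp(logits)              # unnormalized attention weights
    output = (w / w.sum()) @ V      # "attention output": what the model reads
    mass = w.sum()                  # "attention mass": weight vs. other tokens
    return output, mass

rng = np.random.default_rng(0)
d = 8
q = rng.normal(size=d)
K, V = rng.normal(size=(100, d)), rng.normal(size=(100, d))
out_full, mass_full = attn_stats(q, K, V)
# A good compaction picks a much smaller (K', V', bias) such that
# attn_stats(q, K', V', bias) stays close to (out_full, mass_full)
# for the queries the model is likely to issue later.
```

Preserving the mass, not just the output, is what lets the small cache keep its proper influence once new tokens join the softmax later.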

“Attention Matching is, in some ways, the ‘correct’ objective for doing latent context compaction in that it directly targets preserving the behavior of each attention head after compaction,” Zweiger said. While token-dropping and related heuristics can work, explicitly matching attention behavior simply leads to better results.


Before compressing the memory, the system generates a small set of “reference queries” that act as a proxy for the types of internal searches the model is likely to perform when reasoning about the specific context. If the compressed memory can accurately answer these reference queries, it will very likely succeed at answering the user’s actual questions later. The authors suggest various methods for generating these reference queries, including appending a hidden prompt to the document telling the model to repeat the previous context, known as the “repeat-prefill” technique. They also suggest a “self-study” approach where the model is prompted to perform a few quick synthetic tasks on the document, such as aggregating all key facts or structuring dates and numbers into a JSON format.

With these queries in hand, the system picks a set of keys to preserve in the compacted KV cache based on signals like the highest attention value. It then uses the keys and reference queries to calculate the matching values along with a scalar bias term. This bias ensures that pertinent information is preserved, allowing each retained key to represent the mass of many removed keys.

This formulation makes it possible to fit the values with simple algebraic techniques, such as ordinary least squares and nonnegative least squares, entirely avoiding compute-heavy gradient-based optimization. This is what makes Attention Matching super fast in comparison to optimization-heavy compaction methods. The researchers also apply chunked compaction, processing contiguous chunks of the input independently and concatenating them, to further improve performance on long contexts.
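A rough sketch of that closed-form fit, using ordinary least squares only and a deliberately simplified key-selection rule (the bias term and the nonnegative variant described above are omitted, and all names here are illustrative, not from the released code):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
d, n, r, m = 8, 200, 20, 64        # head dim, original keys, retained keys, queries
K, V = rng.normal(size=(n, d)), rng.normal(size=(n, d))
Q = rng.normal(size=(m, d))        # reference queries (proxies for future lookups)

A = softmax(Q @ K.T / np.sqrt(d))  # attention of each reference query over all keys
O = A @ V                          # target attention outputs to preserve (m, d)

keep = np.argsort(-A.sum(axis=0))[:r]          # keep keys drawing the most attention
W = softmax(Q @ K[keep].T / np.sqrt(d))        # weights over retained keys (m, r)

# Closed-form fit: choose compacted values V_new so that W @ V_new ~ O.
V_new, *_ = np.linalg.lstsq(W, O, rcond=None)
err = np.linalg.norm(W @ V_new - O) / np.linalg.norm(O)  # relative residual
```

Because the fit is a linear solve rather than gradient descent over the whole cache, it runs in seconds even for long contexts, which is the speed advantage the article emphasizes.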

Attention matching in action

To understand how this method performs in the real world, the researchers ran a series of stress tests using popular open-source models like Llama 3.1 and Qwen-3 on two distinct types of enterprise datasets. The first was QuALITY, a standard reading comprehension benchmark using 5,000 to 8,000-word documents. The second, representing a true enterprise challenge, was LongHealth, a highly dense, 60,000-token dataset containing the complex medical records of multiple patients.

The key finding was the ability of Attention Matching to compact the model’s KV cache by 50x without reducing the accuracy, while taking only seconds to process the documents. To achieve that same level of quality previously, Cartridges required hours of intensive GPU computation per context.


Attention Matching with Qwen-3 (source: arXiv)

When dealing with the dense medical records, standard industry workarounds completely collapsed. The researchers noted that when they tried to use standard text summarization on these patient records, the model’s accuracy dropped so low that it matched the “no-context” baseline, meaning the AI performed as if it had not read the document at all. 

Attention Matching drastically outperforms summarization, but enterprise architects will need to dial down the compression ratio for dense tasks compared to simpler reading comprehension tests. As Zweiger explains, “The main practical tradeoff is that if you are trying to preserve nearly everything in-context on highly information-dense tasks, you generally need a milder compaction ratio to retain strong accuracy.”

The researchers also explored what happens in cases where absolute precision isn’t necessary but extreme memory savings are. They ran Attention Matching on top of a standard text summary. This combined approach achieved 200x compression. It successfully matched the accuracy of standard summarization alone, but with a very small memory footprint.


One of the interesting experiments for enterprise workflows was testing online compaction, though they note that this is a proof of concept and has not been tested rigorously in production environments. The researchers tested the model on the advanced AIME math reasoning test. They forced the AI to solve a problem with a strictly capped physical memory limit. Whenever the model’s memory filled up, the system paused, instantly compressed its working memory by 50 percent using Attention Matching, and let it continue thinking. Even after hitting the memory wall and having its KV cache shrunk up to six consecutive times mid-thought, the model successfully solved the math problems. Its performance matched a model that had been given massive, unlimited memory.

There are caveats to consider. At a 50x compression ratio, Attention Matching is the clear winner in balancing speed and quality. However, if an enterprise attempts to push compression to extreme 100x limits on highly complex data, the slower, gradient-based Cartridges method actually outperforms it.

The researchers have released the code for Attention Matching. However, they note that this is not currently a simple plug-and-play software update. “I think latent compaction is best considered a model-layer technique,” Zweiger notes. “While it can be applied on top of any existing model, it requires access to model weights.” This means enterprises relying entirely on closed APIs cannot implement this themselves; they need open-weight models. 

The authors note that integrating this latent-space KV compaction into existing, highly optimized commercial inference engines still requires significant effort. Modern AI infrastructure uses complex tricks like prefix caching and variable-length memory packing to keep servers running efficiently, and seamlessly weaving this new compaction technique into those existing systems will take dedicated engineering work. However, there are immediate enterprise applications. “We believe compaction after ingestion is a promising use case, where large tool call outputs or long documents are compacted right after being processed,” Zweiger said.


Ultimately, the shift toward mechanical, latent-space compaction aligns with the future product roadmaps of major AI players, Zweiger argues. “We are seeing compaction to shift from something enterprises implement themselves into something model providers ship,” Zweiger said. “This is even more true for latent compaction, where access to model weights is needed. For example, OpenAI now exposes a black-box compaction endpoint that returns an opaque object rather than a plain-text summary.”



Unmanned ‘Speed Jeep’ Dishes Out Tickets With No Officer Behind The Wheel






Citizens and law enforcement officials alike would probably be quick to tell you that speeding drivers rank among the most dangerous issues they face on the roadways every day. While the onus of obeying speed limits on the road ultimately rests on the person in the driver’s seat, authorities are expected to help control excessive speeding by catching those drivers in the act and issuing citations as punishment.

That job is particularly tricky, as the number of officers on patrol is typically outnumbered greatly by the number of citizens at the wheel of their own vehicles. Some municipalities have, however, sought to tilt the situation in their favor by setting up speed traps. Similarly, traffic light cameras have become regular fixtures in helping monitor and control traffic patterns. Some local forces are taking matters a step further by using so-called “Speed Jeeps,” which are stationary, unmanned cruisers equipped with cameras to catch and ticket speeding drivers. 


Commerce City, Colorado, began rolling out such vehicles in March, with authorities in the Denver suburb looking to use them to help enforce speed limits in school zones, residential areas, and work zones. It remains to be seen how effective the move will be, as speed cameras have sometimes caused controversy over alleged overreach. Still, according to Denver 7 News, some Commerce City residents are fully behind the use of Speed Jeeps if they help make their streets safer.


Here’s how Speed Jeeps actually work

Speed cameras are, of course, not legal in every city and state in the U.S. However, areas such as Montgomery County, Maryland, have effectively used them to control speeding in areas of concern. Commerce City has now joined the list of municipalities hoping to use tech to increase community safety, with its Speed Jeeps rotating between locations and adding mobility to the mix.

The fact that Speed Jeeps are designed to look like real police cruisers may make them even more effective than their cameras alone, as few things will get a speeder to tap the brakes faster than the sight of a cop. The unmanned vehicles are, obviously, not designed to chase after speeders as an officer would. Instead, their cameras are activated when a speeding vehicle enters the range of an on-board radar gun. Once activated, the camera snaps a shot of the vehicle’s front end and driver. A separate camera then takes a shot of the rear license plate once the speeding car passes.

From there, local law enforcement will collect additional information about the alleged infraction and then decide whether to issue a citation. If deemed necessary, the citation will be mailed to the vehicle’s registered address. Upon receipt, the recipient will have a chance to either pay the fine or challenge the ruling in court.



Researchers should learn to be entrepreneurial, says ARC hub lead

Published

on

Garry Duffy says that entrepreneurship should be taught at an undergraduate level.

Ireland is doubling down on building a strong research-to-market pipeline in the hopes of creating innovative global companies with homegrown roots.

To do this, Research Ireland has tapped leading universities across the country to deliver what its CEO, Diarmuid O’Brien, calls “one of the most proactive, imaginative and potentially disruptive programmes” in its history.

Last year, the Government announced three hubs to act as a funding mechanism, support system and testing ground for researchers attempting to commercialise their ideas.


Academics need this kind of support, says Garry Duffy, the director of the ARC Hub for Healthtech at the University of Galway, which officially launched just last month.

“Commercialisation is generally new to people – particularly researchers. And it’s a new language and it’s a new acumen, and you have to try and build that. And that’s what we’re really trying to do with the ARC Hub,” Duffy says. ARC, quite aptly, stands for ‘Accelerating Research to Commercialisation’.

With a backing of €34.3m from the Irish Government and the EU, the ARC health-tech hub is co-run by Atlantic Technological University and RCSI University of Medicine and Health Sciences, with other major institutions also taking part.

The Government announced two other hubs last year as well, one for therapeutics and one for ICT, boasting a combined funding that exceeded €60m.


The idea behind the hubs is to create a nurturing environment for entrepreneurial scientists and engineers to carry out research that will lead to commercial impact.

Duffy cites Dublin start-up ProVerum as a success story he would like to replicate in the health-tech hub he leads.

The 2016-founded Trinity College Dublin spin-out is the creator behind ‘ProVee’, a minimally invasive solution for treating benign prostatic hyperplasia.

ProVerum raised $80m in a Series B round last August. The start-up’s co-founder Ríona Ní Ghriallais is on the ARC health-tech advisory board.


The ARC Hub for Healthtech launched last month, with 23 projects across major areas – including sensors, implantables and AI – already in the pipeline.

Researchers, with the help of industry professionals, are creating commercial solutions for health issues such as hypertension management, ovarian cancer and falls among the elderly, Duffy says. Some projects have already generated clinical evidence to support the future impact of the various technologies.

The health-tech hub is also inviting around 22 new projects in its second call, which would give a total of around 45 projects under its remit.

Peter Power, the head of the European Commission Representation in Ireland, called the ARC Hub for Healthtech an “operation of strategic importance”, while the Minister for Further and Higher Education, Research, Innovation and Science, James Lawless TD, said that he believes the hub “has the potential to deliver game changing acceleration of research commercialisation”.


Duffy believes entrepreneurship should be taught to students early on in their higher education. Hackathons and labs that nurture students to think commercially have had a positive impact, he notes.

“I feel like we’re evolving into a nice ecosystem in Ireland where it’s becoming a bit of a norm to think of a spin-out company as an outcome for university education.”

Duffy is a professor at the University of Galway, and head of the anatomy and regenerative medicine department at RCSI University of Medicine and Health Sciences.



BYD showcases Blade EV battery with ultra-fast 9-minute recharging



BYD showcased the new platform during its Disruptive Technology event on Thursday. According to the automaker, the Flash charger is able to take the battery from 10% to 70% in just five minutes and from 10% to 97% in only nine minutes when at room temperature. In extreme cold (-30…


You can’t see this tiny sensor with your eyes, but it can solve processor heating woes


Processors today pack billions of transistors onto a single chip, and while that enables incredible performance, it also creates one persistent problem: heat. Rising temperatures can slow down a processor or force performance throttling. Now, researchers may have found a solution in something incredibly tiny: a new microscopic temperature sensor that’s nearly impossible to see with the naked eye.

A thermometer smaller than a human hair

Researchers at Penn State have developed an ultra-miniature thermometer that can be built directly onto computer chips. The sensor measures just one square micrometer, several thousand times smaller than the width of a human hair. That tiny size lets engineers place thousands of these sensors across a processor, allowing precise temperature monitoring in different parts of the chip.

Chips often heat unevenly during heavy workloads, and traditional temperature sensors placed outside the processor can struggle to capture those rapid changes accurately. So these microscopic sensors could be a big deal for modern processors.

Built with ultra-thin 2D materials

What’s impressive is that the researchers built the sensor using two-dimensional materials that are only a few atoms thick. These materials let the sensor react quickly to temperature changes: it can detect subtle fluctuations in about 100 nanoseconds, millions of times faster than the blink of an eye. Owing to its unique structure, the sensor also uses less power than traditional silicon-based thermal monitoring systems.

Why this matters for modern processors

Thermal management is one of the biggest challenges in chip design today. When transistors overheat during heavy workloads, processors reduce clock speeds to protect themselves, which in turn leads to drops in performance. With embedded sensors like these, engineers could monitor temperature changes across the chip in real time and respond more effectively. That could mean smarter thermal management, better efficiency, and peak performance sustained for longer. With chips approaching 1-nanometer gate sizes, technology like this could be crucial.
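The real-time response described here amounts to a simple control loop. As an illustration only, here is a minimal sketch in Python; the region names, thresholds, and throttling policy are hypothetical, not the Penn State design: poll per-region temperatures and scale the clock down once the hottest region passes a throttle point.

```python
# Illustrative sketch of sensor-driven thermal throttling.
# Region names, thresholds, and readings are hypothetical.

def clock_multiplier(region_temps_c, throttle_at=90.0, critical=105.0):
    """Scale the clock down linearly once the hottest region
    passes `throttle_at`, bottoming out at half speed at `critical`."""
    hottest = max(region_temps_c.values())
    if hottest <= throttle_at:
        return 1.0          # full speed
    if hottest >= critical:
        return 0.5          # floor: half speed
    # Linear interpolation between the two thresholds
    span = critical - throttle_at
    return 1.0 - 0.5 * (hottest - throttle_at) / span

# Example: one hot core among several sensed regions
temps = {"core0": 72.4, "core1": 95.0, "cache": 68.1}
print(clock_multiplier(temps))
```

With thousands of on-chip sensors, the same idea could throttle per region rather than chip-wide, which is what makes fine-grained sensing attractive.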


Tech

Incogni vs DeleteMe (2026): Which Data Removal Service Works Better?

Data removal services have appeared as an answer to a growing unease about the privacy of our personal information, especially online. They help people manage the visibility of their data and reduce what’s circulating in the hands of data brokers, companies that collect and sell personal data.

However, for many users, this sounds too good to be true, and so they ask themselves: Does the service actually work? Is it legitimate? How effective is it? And which is the best among so many data removal services?

Two popular options are Incogni and DeleteMe. Both promise to reduce your online footprint, but they approach the task differently. This comparison breaks down how each service works, how it actually performs, how easy it is to use, and how their coverage differs, to help you decide which provider fits your privacy needs.

Incogni vs DeleteMe: Quick Comparison (2026)

  • Prices: Incogni from $7.99/month billed annually; DeleteMe from $6.97/month billed biennially
  • Removal model: Incogni is fully automated with recurring opt-out requests; DeleteMe combines manual and automated, team-assisted removals
  • Broker coverage: Incogni reaches 420+ data brokers (both private and public listings), with custom removals and additional sites on higher plans; DeleteMe automates 85-100+ sites, with 850+ via custom removals (depending on the plan)
  • Recurring follow-ups: Incogni every 60-90 days; DeleteMe quarterly
  • Dashboard & reports: Incogni offers an ongoing status dashboard; DeleteMe sends quarterly reports and updates
  • Independent verification: Incogni has a Deloitte Limited Assurance Report; DeleteMe has no third-party verification
  • Customer support: Incogni offers live chat, email, tickets, and phone (Unlimited plans); DeleteMe offers phone, live chat, email, and a web form
  • Best for: Incogni suits a hands-off, ongoing process; DeleteMe suits more user involvement and a people-search focus

Data as of February 2026

DeleteMe vs Incogni: How They Work

Incogni is an automated data removal system. After signing up, you verify your identity, and the platform starts sending deletion requests to hundreds of data brokers, also covering the “invisible” layer: information shared for marketing or profiling. Requests are repeated every 60 days for public brokers and every 90 days for private ones. The service also tracks responses, shows progress on a dashboard, and sends follow-ups when required.
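The recurring cadence described above is straightforward to model. A minimal sketch, assuming only the 60- and 90-day intervals stated in the article; the broker types and function names are illustrative:

```python
from datetime import date, timedelta

# Follow-up intervals as described: 60 days for public brokers,
# 90 days for private ones. Everything else here is hypothetical.
CYCLE_DAYS = {"public": 60, "private": 90}

def next_followup(last_sent: date, broker_type: str) -> date:
    """Return the date the next opt-out request is due."""
    return last_sent + timedelta(days=CYCLE_DAYS[broker_type])

print(next_followup(date(2026, 1, 1), "public"))   # 60 days after Jan 1
print(next_followup(date(2026, 1, 1), "private"))  # 90 days after Jan 1
```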

DeleteMe combines automated and manual removal processes. Its team finds public listings with your information and manually sends opt-out requests periodically. DeleteMe’s focus is mainly on people-search sites that can be easily found. Users receive detailed reports on what was sent and what the answers were.

Coverage and Scope

Data removal involves two layers:

  1. Visible public listings (people-search sites)
  2. Commercial broker databases (behind the scenes)

DeleteMe emphasizes visible listings and provides reports on those results, which can feel more tangible for users who want to be involved in the process. 85-100+ of these sites are reached automatically, and an additional 850+ listings are available for custom removals (depending on the plan).

Incogni’s automated system reaches 420+ data brokers, including private databases that don’t appear in search results. Higher plans allow custom removals for 2,000+ sites, broadening the reach.

To sum up, Incogni focuses on broad, diverse, recurring suppression, while DeleteMe is more about systematic, manual management.

Availability and Compliance

Both providers operate in multiple countries, but their regional reach and legal frameworks differ.

DeleteMe has been assisting users since 2010 and advertises the removal of 100+ million listings over the years. The company emphasizes its longevity in the industry. It primarily serves users in the US, with some international capability depending on listing type and broker location. Its international locations include Australia, Belgium, Brazil, Canada, France, Germany, Ireland, Italy, the Netherlands, Singapore, and the UK.

The provider maintains compliance with AICPA SOC 2 Type 2 (an audit standard by the American Institute of Certified Public Accountants), the EU’s GDPR, and various privacy laws passed in the US.

Incogni, on the other hand, is more widely available. As of now, it can be used by users in the US, the UK, all EU countries, Canada, Switzerland, Norway, Iceland, Liechtenstein, and the Isle of Man.

It operates under privacy laws like the GDPR (EU), CCPA/CPRA (California, US), and similar frameworks where applicable. Deloitte has verified that Incogni processed 245+ million removal requests between 2022 and mid-2025.

Transparency and Credibility

Incogni

Through its limited assurance report, Deloitte confirmed that Incogni’s removal process works as described by the provider. Deloitte verified key operational claims around recurring cycles, sending requests, and reaching brokers. 

What’s more, PCMag and PCWorld awarded Incogni their Editors’ Choice awards, highlighting its automation, coverage, and ease of use. At the same time, user reviews on Trustpilot mainly show positive results with a gradual reduction in broker listings. Incogni’s overall rating stands at 4.4.

DeleteMe

DeleteMe is an established removal service with generally warm feedback from many users over the years. It holds a 4.0 rating on Trustpilot across almost 200 reviews, with around three-quarters being positive. Users praise its data removal and ongoing checks, though some describe slow results or incomplete removals.

When it comes to industry reviews, DeleteMe is appreciated for removing personal data from aggregators, allowing custom removal requests, and providing instructions for free DIY removal. However, reviewers often note that its coverage is relatively narrow and that reports arrive only quarterly.

User Experience

Incogni

Users love Incogni for its simplicity and automation. After setup, which takes no more than 10 minutes, the dashboard will display all the necessary information without unnecessary, complicated details. You will be able to determine which brokers have been contacted, which requests are pending, and which sites already deleted your data. 

For the most part, users don’t have to intervene manually; everything happens in the background. Weekly or periodic progress summaries arrive by email and keep you informed without logging in daily. Removal cycles are also automated, running every 60-90 days.

DeleteMe

DeleteMe’s user experience reflects its human-assisted model. After selecting a plan and uploading your personal details, the team will run periodic scans of dozens to hundreds of listing sites. Instead of following a live dashboard, users get regular reports, typically quarterly, that outline what was found and removed. These reports are more detailed, including context and explanations, such as site name, removal date, status, etc.

Some users may appreciate the clear narrative, while others prefer dashboards and less engagement.

Across both services, reviews point to a clear trade-off: Incogni is praised for automation and ongoing protection with minimal effort from the user, while DeleteMe’s biggest strengths are its human-handled process and detailed reports.

Customer Support

Both providers offer structured support, but their channels differ.

Incogni offers:

  • Email/ticket-based support
  • Help Center documentation
  • Live chat support through its website
  • Phone support only for Unlimited plans

Users often describe Incogni’s support responses as clear, process-oriented, and quick.

DeleteMe offers:

  • Phone support
  • Live chat
  • Email support
  • Web contact forms

User feedback praises DeleteMe’s helpful human interaction and explanations in its reports. However, many reviews mention slower response times, especially during peak periods.

Pricing

Incogni Pricing

  • Standard, billed annually: $7.99/month. Automated broker removals and dashboard tracking.
  • Standard, billed monthly: $15.98/month. The same features at a higher overall cost.
  • Unlimited, billed annually: $14.99/month. Everything in Standard plus unlimited custom removal requests, coverage of 2,000 additional sites, and live phone support.
  • Unlimited, billed monthly: $29.98/month. The same features at a higher overall cost.
  • Family Standard, billed annually: $15.99/month. Standard for up to 5 members with family account management.
  • Family Standard, billed monthly: $31.98/month. The same features at a higher overall cost.
  • Family Unlimited, billed annually: $22.99/month. Unlimited for up to 5 members with family account management.
  • Family Unlimited, billed monthly: $45.98/month. The same features at a higher overall cost.

For American users, there’s an additional option – the Protect Plan, which is all-in-one data removal and identity-theft protection. For $41.48/month, you can get everything in Incogni Unlimited plus identity theft protection with NordProtect. Incogni is also included with Surfshark’s One+ plan from $4.19/month.

DeleteMe Pricing

  • 1 person: $6.97/month billed biennially, or $8.60/month billed annually. Includes quarterly privacy reports; email, chat, and phone support; and custom removal requests (DeleteMe holds an A+ BBB rating).
  • 2 people: $13.93/month billed biennially, or $17.20/month billed annually. The same features as above.
  • Family (4 people): $27.87/month billed biennially, or $34.40/month billed annually. The same features as above.

Both DeleteMe and Incogni have custom offers for businesses. Neither includes a free trial, but Incogni comes with a 30-day money-back guarantee for risk-free testing, while DeleteMe can be canceled with a full refund only before the first privacy report. However, you can get a free scan with DeleteMe.
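To compare the headline prices on an equal footing, it helps to annualize them. A quick sanity check in Python, using only the single-user rates listed in the pricing tables above (the rounding is ours):

```python
# Annualized cost comparison using the listed monthly rates.
plans = {
    "Incogni Standard (annual billing)":  7.99,
    "Incogni Standard (monthly billing)": 15.98,
    "DeleteMe 1 person (biennial)":       6.97,
    "DeleteMe 1 person (annual)":         8.60,
}

for name, per_month in plans.items():
    # 12 months at the quoted effective monthly rate
    print(f"{name}: ${per_month * 12:.2f}/year")
```

The gap between annual and monthly billing is roughly a factor of two for Incogni, which is worth factoring into any head-to-head price comparison.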

Final Verdict: Two Solid Approaches for 2026

Incogni and DeleteMe are among the most popular data removal service providers, but they fit different privacy needs:

  • Incogni is an excellent choice for users who don’t want to get too engaged with the data removal process. It also offers wider broker coverage, with recurring cycles that keep sending removal requests. Incogni has also been independently assessed and received editorial recognition.
  • DeleteMe will suit users who prefer a human-assisted removal process and detailed reports about contacted brokers and their responses.

In 2026, both services can be effective, but if you prioritize automation, scale, and hands-off long-term coverage, Incogni’s approach will suit you better.

FAQ

Which platform provides broader data broker coverage?

Incogni covers over 420 brokers on its standard plan. DeleteMe advertises 850+ sites in total, but its standard tier automates only around 85-100 high-impact brokers, with the remainder requiring manual custom requests.

Should I prioritize human-assisted removals over AI automation?

DeleteMe employs human privacy experts to handle complex manual opt-outs, which may offer higher precision for difficult cases. Incogni uses advanced AI to maintain a set-and-forget system that is generally faster and more affordable.

Which service is more effective for users living outside the United States?

Incogni is the stronger choice for global privacy, with a single subscription covering the US, UK, Canada, and 30+ European countries. DeleteMe historically focuses on the American market, though it has expanded to select international regions.

How frequently will I receive updates on my removal status?

Incogni sends progress updates every week. DeleteMe typically provides more comprehensive deep-dive reports, but issues them every quarter.

Tech

Nintendo is suing the US government over Trump’s tariffs

Nintendo of America is suing the US government, including the Department of Treasury, Department of Homeland Security and US Customs and Border Protection, over its tariff policy, Aftermath reports. The video game giant already raised prices on the Nintendo Switch in August 2025 in response to “market conditions,” but has so far left the price of its newer Switch 2 console unchanged.

Nintendo’s lawsuit, filed in the US Court of International Trade, cites a Supreme Court ruling from February that upheld lower courts’ opinions that the Trump administration’s global tariffs were illegal. Nintendo’s lawyers claim that the company has been “substantially harmed by the unlawful execution and imposition” of “unauthorized Executive Orders,” and by the fees Nintendo has already paid to import products into the country. In response, the company is seeking a “prompt refund, with interest” of the tariffs it has paid.

“We can confirm we filed a request,” Nintendo of America said in a statement. “We have nothing else to share on this topic.”

While taxes and other trade policies are supposed to be set by Congress, President Donald Trump implemented a collection of global tariffs over the course of his first year in office using executive orders and the International Emergency Economic Powers Act (IEEPA), a law that gives the president expanded control over trade during a national emergency. The Trump administration has positioned tariffs as a way to punish enemies and bargain with trade partners, but many companies have passed the increased cost of importing goods on to customers.

In upholding opinions from the US District Court of the District of Columbia and the US Court of International Trade, the Supreme Court removed the Trump administration’s ability to collect tariffs using IEEPA, but didn’t clarify how the tariffs the government had illegally collected should be returned to companies. Like Nintendo, other companies have decided filing a lawsuit is the best way to get refunded.

The Guardian reports that US Customs and Border Protection is already preparing a system to process refunds for affected companies, but that might not mark the end of Trump’s tariff regime. In a press conference held after the Supreme Court released its decision, the President announced plans to introduce tariffs using other, more constrained methods. Tariffs aren’t the only obstacle Nintendo faces, either. The company could also be forced to raise the price of its consoles in response to the current RAM shortage.

Copyright © 2025