Ebike Charges At Car Charging Stations

Electric vehicles are everywhere these days, and with them comes a whole slew of charging infrastructure. The fastest are high-power machines that can deliver enough energy to charge a car in well under an hour, but there are plenty of slower chargers that take much longer. These don’t tend to require any specialized equipment, which makes them easier to install in homes and other places where there isn’t as much power available. In fact, these chargers generally amount to fancy extension cords, and [Matt Gray] realized he could use them for other things, like charging his electric bicycle.

To begin the build, [Matt] started with an electric car charging socket and designed a housing for it in CAD software. The housing also holds the actual battery charger for his VanMoof bicycle, wired internally to the car charging socket. These lower-powered chargers don’t require any communication from the vehicle, which simplifies the process considerably. They do still need to be switched on via a smartphone app so the energy can be metered and billed, but with all that out of the way [Matt] was able to take his test rig to a lamppost charger and boil a kettle of water.

After the kettle experiment, he worked on miniaturizing his project so it fits more conveniently inside the 3D-printed enclosure on the rear rack of his bicycle. The only real inconvenience of this project, though, is that since these chargers are meant for passenger vehicles they’re a bit bulky for smaller vehicles like e-bikes. But this will greatly expand [Matt]’s ability to use his ebike for longer trips, and car charging infrastructure like this has started being used in all kinds of other novel ways as well.


MSI unveils a lobster-like PC with a 13.3-inch touchscreen, RTX 5080X, and a quirky design that defies all conventions


  • MSI MEG Vision X AI 13.3-inch touchscreen doubles as a monitoring hub for creatives and professionals
  • GPU selection dictates performance for gaming, rendering, and professional workloads alike
  • Lobster-like chassis combines expandability with unconventional aesthetics

MSI has launched the MEG Vision X AI series, a barebones all-in-one PC that combines high-end gaming hardware with a strikingly unconventional design.

The system is a full-size tower measuring 299.3mm wide, 502.7mm deep, and 423.4mm tall, weighing approximately 18.3kg, with a PS3-esque appendage and protrusions that suggest both function and a distinctive aesthetic.


New KV cache compaction technique cuts LLM memory 50x without accuracy loss


Enterprise AI applications that handle large documents or long-horizon tasks face a severe memory bottleneck. As the context grows longer, so does the KV cache, the area where the model’s working memory is stored.

A new technique developed by researchers at MIT addresses this challenge with a fast compression method for the KV cache. The technique, called Attention Matching, manages to compact the context by up to 50x with very little loss in quality.

While it is not the only memory compaction technique available, Attention Matching stands out for its execution speed and impressive information-preserving capabilities.

The memory bottleneck of the KV cache

Large language models generate their responses sequentially, one token at a time. To avoid recalculating the entire conversation history from scratch for every predicted word, the model stores a mathematical representation of every previous token it has processed, known as key and value pairs. This critical working memory is the KV cache.
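
To put numbers on this, the KV cache footprint can be estimated from a model's shape. A back-of-the-envelope sketch (the layer and head counts below are illustrative, loosely 70B-class, not figures from the article):

```python
def kv_cache_bytes(num_layers, num_kv_heads, head_dim, seq_len, bytes_per_value=2):
    """Estimate KV cache size: one key and one value vector per token,
    per layer, each of shape [num_kv_heads, head_dim], stored in fp16."""
    return 2 * num_layers * num_kv_heads * head_dim * seq_len * bytes_per_value

# Illustrative 70B-class shape: 80 layers, 8 KV heads, head_dim 128.
gb = kv_cache_bytes(80, 8, 128, seq_len=128_000) / 1e9
print(f"{gb:.1f} GB for one 128k-token request")  # 41.9 GB for one 128k-token request
```

Tens of gigabytes per sequence is exactly the concurrency cap Zweiger describes: a few long-context users can exhaust a GPU's memory on cache alone.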

The KV cache scales with conversation length because the model is forced to retain these keys and values for all previous tokens in a given interaction. This consumes expensive hardware resources. “In practice, KV cache memory is the biggest bottleneck to serving models at ultra-long context,” Adam Zweiger, co-author of the paper, told VentureBeat. “It caps concurrency, forces smaller batches, and/or requires more aggressive offloading.”

In modern enterprise use cases, such as analyzing massive legal contracts, maintaining multi-session customer dialogues, or running autonomous coding agents, the KV cache can balloon to many gigabytes of memory for a single user request.

To solve this massive bottleneck, the AI industry has tried several strategies, but these methods fall short when deployed in enterprise environments where extreme compression is necessary. A class of technical fixes includes optimizing the KV cache by either evicting tokens the model deems less important or merging similar tokens into a single representation. These techniques work for mild compression but “degrade rapidly at high reduction ratios,” according to the authors.
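
A minimal sketch of the eviction idea described above, keeping only the most-attended tokens (the function name and the scoring rule are illustrative stand-ins, not the method of any particular paper):

```python
import numpy as np

def evict_by_attention(keys, values, attn_scores, keep_ratio=0.25):
    """Drop the least-attended tokens from the KV cache, keeping the
    highest-scoring ones in their original order (a simple eviction heuristic)."""
    n_keep = max(1, int(len(keys) * keep_ratio))
    keep = np.sort(np.argsort(attn_scores)[-n_keep:])
    return keys[keep], values[keep]

rng = np.random.default_rng(0)
keys = rng.normal(size=(100, 64))
values = rng.normal(size=(100, 64))
scores = rng.random(100)          # stand-in for accumulated attention per token
k_small, v_small = evict_by_attention(keys, values, scores)
print(k_small.shape)              # (25, 64): a 4x smaller cache
```

The weakness the authors point to is visible here: every evicted row is simply gone, so at aggressive ratios the model loses whatever those tokens carried.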

Real-world applications often rely on simpler techniques, with the most common approach being to simply drop the older context once the memory limit is reached. But this approach causes the model to lose older information as the context grows long. Another alternative is context summarization, where the system pauses, writes a short text summary of the older context, and replaces the original memory with that summary. While this is an industry standard, summarization is highly lossy and heavily damages downstream performance because it might remove pertinent information from the context.

Recent research has proven that it is technically possible to highly compress this memory using a method called Cartridges. However, this approach requires training latent KV cache models through slow, end-to-end mathematical optimization. This gradient-based training can take several hours on expensive GPUs just to compress a single context, making it completely unviable for real-time enterprise applications.

How attention matching compresses without the cost

Attention Matching achieves high-level compaction ratios and quality while being orders of magnitude faster than gradient-based optimization. It bypasses the slow training process through clever mathematical tricks.

The researchers realized that to perfectly mimic how an AI interacts with its memory, they need to preserve two mathematical properties when compressing the original key and value vectors into a smaller footprint. The first is the “attention output,” which is the actual information the AI extracts when it queries its memory. The second is the “attention mass,” which acts as the mathematical weight that a token has relative to everything else in the model’s working memory. If the compressed memory can match these two properties, it will behave exactly like the massive, original memory, even when new, unpredictable user prompts are added later. 
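
For standard softmax attention, both quantities are easy to write down. A small numpy sketch of what "attention output" and "attention mass" mean for one query (notation assumed for illustration, not taken from the paper):

```python
import numpy as np

def attention_stats(q, K, V):
    """For one query q against cache (K, V): return the attention output
    (the information actually read from memory) and the attention mass
    each cached token receives (its softmax weight)."""
    logits = K @ q / np.sqrt(q.size)
    w = np.exp(logits - logits.max())
    w /= w.sum()          # per-token attention mass; sums to 1
    return w @ V, w

rng = np.random.default_rng(1)
q = rng.normal(size=16)
K = rng.normal(size=(8, 16))
V = rng.normal(size=(8, 16))
out, mass = attention_stats(q, K, V)
# A compacted cache behaves like the original if it reproduces `out` for
# likely queries and accounts for the total mass of the tokens it replaced.
print(out.shape, round(mass.sum(), 6))  # (16,) 1.0
```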

“Attention Matching is, in some ways, the ‘correct’ objective for doing latent context compaction in that it directly targets preserving the behavior of each attention head after compaction,” Zweiger said. While token-dropping and related heuristics can work, explicitly matching attention behavior simply leads to better results.


Before compressing the memory, the system generates a small set of “reference queries” that act as a proxy for the types of internal searches the model is likely to perform when reasoning about the specific context. If the compressed memory can accurately answer these reference queries, it will very likely succeed at answering the user’s actual questions later. The authors suggest various methods for generating these reference queries, including appending a hidden prompt to the document telling the model to repeat the previous context, known as the “repeat-prefill” technique. They also suggest a “self-study” approach where the model is prompted to perform a few quick synthetic tasks on the document, such as aggregating all key facts or structuring dates and numbers into a JSON format.

With these queries in hand, the system picks a set of keys to preserve in the compacted KV cache based on signals like the highest attention value. It then uses the keys and reference queries to calculate the matching values along with a scalar bias term. This bias ensures that pertinent information is preserved, allowing each retained key to represent the mass of many removed keys.

This formulation makes it possible to fit the values with simple algebraic techniques, such as ordinary least squares and nonnegative least squares, entirely avoiding compute-heavy gradient-based optimization. This is what makes Attention Matching super fast in comparison to optimization-heavy compaction methods. The researchers also apply chunked compaction, processing contiguous chunks of the input independently and concatenating them, to further improve performance on long contexts.
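
The value-fitting step can be sketched with `numpy.linalg.lstsq`. This is an illustrative reconstruction of the least-squares idea only, omitting the scalar bias term and the chunked compaction the paper describes:

```python
import numpy as np

def softmax_weights(Q, K):
    logits = Q @ K.T / np.sqrt(Q.shape[1])
    w = np.exp(logits - logits.max(axis=1, keepdims=True))
    return w / w.sum(axis=1, keepdims=True)

def fit_compacted_values(queries, K_full, V_full, K_kept):
    """Solve for V_kept so that attention over (K_kept, V_kept) reproduces,
    as closely as possible, the attention outputs of the full cache across
    the reference queries. Plain least squares; no gradients needed."""
    targets = softmax_weights(queries, K_full) @ V_full  # outputs to match
    W = softmax_weights(queries, K_kept)                 # weights over kept keys
    V_kept, *_ = np.linalg.lstsq(W, targets, rcond=None)
    return V_kept

rng = np.random.default_rng(2)
Q = rng.normal(size=(64, 16))                # reference queries
K, V = rng.normal(size=(100, 16)), rng.normal(size=(100, 16))
K_kept = K[::4]                              # keep every 4th key: 4x compaction
V_kept = fit_compacted_values(Q, K, V, K_kept)
print(V_kept.shape)                          # (25, 16)
```

One linear solve per head replaces hours of gradient descent, which is where the claimed speedup over Cartridges comes from.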

Attention matching in action

To understand how this method performs in the real world, the researchers ran a series of stress tests using popular open-source models like Llama 3.1 and Qwen-3 on two distinct types of enterprise datasets. The first was QuALITY, a standard reading comprehension benchmark using 5,000 to 8,000-word documents. The second, representing a true enterprise challenge, was LongHealth, a highly dense, 60,000-token dataset containing the complex medical records of multiple patients.

The key finding was the ability of Attention Matching to compact the model’s KV cache by 50x without reducing the accuracy, while taking only seconds to process the documents. To achieve that same level of quality previously, Cartridges required hours of intensive GPU computation per context.


Attention Matching with Qwen-3 (source: arXiv)

When dealing with the dense medical records, standard industry workarounds completely collapsed. The researchers noted that when they tried to use standard text summarization on these patient records, the model’s accuracy dropped so low that it matched the “no-context” baseline, meaning the AI performed as if it had not read the document at all. 

Attention Matching drastically outperforms summarization, but enterprise architects will need to dial down the compression ratio for dense tasks compared to simpler reading comprehension tests. As Zweiger explains, “The main practical tradeoff is that if you are trying to preserve nearly everything in-context on highly information-dense tasks, you generally need a milder compaction ratio to retain strong accuracy.”

The researchers also explored cases where absolute precision isn’t necessary but extreme memory savings are. Running Attention Matching on top of a standard text summary, this combined approach achieved 200x compression while matching the accuracy of summarization alone, at a fraction of the memory footprint.

One of the interesting experiments for enterprise workflows was testing online compaction, though they note that this is a proof of concept and has not been tested rigorously in production environments. The researchers tested the model on the advanced AIME math reasoning test. They forced the AI to solve a problem with a strictly capped physical memory limit. Whenever the model’s memory filled up, the system paused, instantly compressed its working memory by 50 percent using Attention Matching, and let it continue thinking. Even after hitting the memory wall and having its KV cache shrunk up to six consecutive times mid-thought, the model successfully solved the math problems. Its performance matched a model that had been given massive, unlimited memory.
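
The online-compaction experiment amounts to a generate-and-compress loop: whenever the cache hits its budget, compress it in place by half and keep going. A schematic sketch with toy stand-ins (real compaction would call Attention Matching rather than the recency-based `compact` below; all names are illustrative):

```python
def generate_with_budget(step, compact, cache, max_cache, n_tokens):
    """Generate n_tokens; whenever the KV cache hits the budget, compress
    it in place by half and resume generation mid-thought."""
    for _ in range(n_tokens):
        if len(cache) >= max_cache:
            cache = compact(cache, keep=len(cache) // 2)  # 2x compaction
        cache = step(cache)  # appends one new token's K/V entry
    return cache

# Toy stand-ins: the cache is a list and compact() keeps the most recent
# half. Real online compaction would use Attention Matching here instead.
step = lambda c: c + [len(c)]
compact = lambda c, keep: c[-keep:]
final = generate_with_budget(step, compact, cache=[0] * 90, max_cache=100, n_tokens=50)
print(len(final))  # 90: generation continued past the budget without growing the cache
```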

There are caveats to consider. At a 50x compression ratio, Attention Matching is the clear winner in balancing speed and quality. However, if an enterprise attempts to push compression to extreme 100x limits on highly complex data, the slower, gradient-based Cartridges method actually outperforms it.

The researchers have released the code for Attention Matching. However, they note that this is not currently a simple plug-and-play software update. “I think latent compaction is best considered a model-layer technique,” Zweiger notes. “While it can be applied on top of any existing model, it requires access to model weights.” This means enterprises relying entirely on closed APIs cannot implement this themselves; they need open-weight models. 

The authors note that integrating this latent-space KV compaction into existing, highly optimized commercial inference engines still requires significant effort. Modern AI infrastructure uses complex tricks like prefix caching and variable-length memory packing to keep servers running efficiently, and seamlessly weaving this new compaction technique into those existing systems will take dedicated engineering work. However, there are immediate enterprise applications. “We believe compaction after ingestion is a promising use case, where large tool call outputs or long documents are compacted right after being processed,” Zweiger said.

Ultimately, the shift toward mechanical, latent-space compaction aligns with the future product roadmaps of major AI players, Zweiger argues. “We are seeing compaction to shift from something enterprises implement themselves into something model providers ship,” Zweiger said. “This is even more true for latent compaction, where access to model weights is needed. For example, OpenAI now exposes a black-box compaction endpoint that returns an opaque object rather than a plain-text summary.”


Unmanned ‘Speed Jeep’ Dishes Out Tickets With No Officer Behind The Wheel


Citizens and law enforcement officials alike would probably be quick to tell you that speeding drivers rank among the most dangerous issues they face on the roadways every day. While the onus of obeying speed limits on the road ultimately rests on the person in the driver’s seat, authorities are expected to help control excessive speeding by catching those drivers in the act and issuing citations as punishment.

That job is particularly tricky, as officers on patrol are typically far outnumbered by citizens at the wheel of their own vehicles. Some municipalities have, however, sought to tilt the situation in their favor by setting up speed traps. Similarly, traffic light cameras have become regular fixtures for monitoring and controlling traffic patterns. Some local forces are taking matters a step further with so-called “Speed Jeeps”: stationary, unmanned cruisers equipped with cameras to catch and ticket speeding drivers.

Commerce City, Colorado, started rolling out such vehicles in March, with authorities in the Denver suburb looking to use them to enforce speed limits in school zones, residential areas, and work zones. It remains to be seen how effective the move will be, as speed cameras have sometimes drawn controversy over alleged overreach. Still, according to Denver 7 News, some Commerce City residents are fully behind the use of Speed Jeeps if they help make their streets safer.

Here’s how Speed Jeeps actually work

Speed cameras are, of course, not legal in every city and state in the U.S. However, areas such as Montgomery County, Maryland, have effectively used them to control speeding in areas of concern. Commerce City has now joined the list of municipalities hoping to use tech to increase community safety, with its Speed Jeeps rotating between locations and adding mobility to the mix.

The fact that Speed Jeeps are designed to look like real police cruisers may make them even more effective than their cameras alone, as few things will get a speeder to tap the brakes faster than the sight of a cop. The unmanned vehicles are, obviously, not designed to chase after speeders as a normal officer might. Instead, their cameras are activated when a speeding vehicle enters the range of the on-board radar gun. Once activated, the camera snaps a shot of the vehicle’s front end and driver. A separate camera then captures the rear license plate once the speeding car passes.

From there, local law enforcement will collect additional information about the alleged infraction and then decide whether to issue a citation. If deemed necessary, the citation will be mailed to the vehicle’s registered address. Upon receipt, the recipient will have a chance to either pay the fine or challenge the ruling in court.


Researchers should learn to be entrepreneurial, says ARC hub lead


Garry Duffy says that entrepreneurship should be taught at an undergraduate level.

Ireland is doubling down on building a strong research-to-market pipeline in the hopes of creating innovative global companies with homegrown roots.

To do this, Research Ireland has tapped leading universities across the country to deliver what its CEO, Diarmuid O’Brien, calls “one of the most proactive, imaginative and potentially disruptive programmes” in its history.

Last year, the Government announced three hubs to act as a funding mechanism, support system and testing ground for researchers attempting to commercialise their ideas.

Academics need this kind of support, says Garry Duffy, the director of the ARC Hub for Healthtech at the University of Galway, which officially launched just last month.

“Commercialisation is generally new to people – particularly researchers. And it’s a new language and it’s a new acumen, and you have to try and build that. And that’s what we’re really trying to do with the ARC Hub,” Duffy says. ARC, quite aptly, stands for ‘Accelerating Research to Commercialisation’.

With a backing of €34.3m from the Irish Government and the EU, the ARC health-tech hub is co-run by Atlantic Technological University and RCSI University of Medicine and Health Sciences, with other major institutions also taking part.

The Government announced two other hubs last year as well, one for therapeutics and one for ICT, boasting a combined funding that exceeded €60m.

The idea behind the hubs is to create a nurturing environment for entrepreneurial scientists and engineers to carry out research that will lead to commercial impact.

Duffy cites Dublin start-up ProVerum as a success story he would like to replicate in the health-tech hub he leads.

The Trinity College Dublin spin-out, founded in 2016, is the creator of ‘ProVee’, a minimally invasive solution for treating benign prostatic hyperplasia.

ProVerum raised $80m in a Series B round last August. The start-up’s co-founder Ríona Ní Ghriallais is on the ARC health-tech advisory board.

The ARC Hub for Healthtech launched last month, with 23 projects across major areas – including sensors, implantables and AI – already in the pipeline.

Researchers, with the help of industry professionals, are creating commercial solutions for health issues such as hypertension management, ovarian cancer and falls among the elderly, Duffy says. Some projects have already generated clinical evidence to support the future impact of the various technologies.

The health-tech hub is also inviting around 22 new projects in its second call, which would give a total of around 45 projects under its remit.

Peter Power, the head of the European Commission Representation in Ireland, called the ARC Hub for Healthtech an “operation of strategic importance”, while Minister for Further and Higher Education, Research, Innovation and Science James Lawless, TD said that he believes the hub “has the potential to deliver game changing acceleration of research commercialisation”.

Duffy believes entrepreneurship should be taught to students early on in their higher education. Hackathons and labs that nurture students to think commercially have had a positive impact, he notes.

“I feel like we’re evolving into a nice ecosystem in Ireland where it’s becoming a bit of a norm to think of a spin-out company as an outcome for university education.”

Duffy is a professor at the University of Galway, and head of the anatomy and regenerative medicine department at RCSI University of Medicine and Health Sciences.


BYD showcases Blade EV battery with ultra-fast 9-minute recharging

BYD showcased the new platform during its Disruptive Technology event on Thursday. According to the automaker, the Flash charger is able to take the battery from 10% to 70% in just five minutes and from 10% to 97% in only nine minutes when at room temperature. In extreme cold (-30…

You can’t see this tiny sensor with your eyes, but it can solve processor heating woes


Processors today pack billions of transistors onto a single chip, and while that enables incredible performance, it also creates one persistent problem: heat. Rising temperatures can slow a processor down or force performance throttling. Now, researchers may have found a solution in something incredibly tiny: a new microscopic temperature sensor that’s nearly impossible to see with the naked eye.

A thermometer smaller than a human hair

Researchers at Penn State have developed an ultra-miniature thermometer that can be built directly onto computer chips. The sensor is super small, measuring just one square micrometer, which is several thousand times smaller than the width of a human hair. That tiny size lets engineers place thousands of these sensors across a processor, allowing for precise temperature monitoring across different parts of the chipset.

Chips often heat unevenly during heavy workloads, and traditional temperature sensors placed outside the processor can struggle to capture those rapid changes accurately. So these microscopic sensors could be a big deal for modern processors.

Built with ultra-thin 2D materials

What’s impressive is that the researchers built the sensor using two-dimensional materials that are only a few atoms thick. These materials allow the sensor to quickly react to any temperature changes. Additionally, the device can detect subtle fluctuations in about 100 nanoseconds, which is millions of times faster than blinking your eye. Owing to its unique structure, the tech also uses less power than traditional silicon-based thermal monitoring systems.

Why this matters for modern processors

Thermal management is one of the biggest challenges in chip design today. Transistors overheating during heavy workloads cause processors to reduce clock speeds to protect themselves, which in turn leads to drops in performance. But with embedded sensors like these, engineers could monitor temperature changes across the chip in real time and respond more effectively. That could mean smarter thermal management, better efficiency, and peak performance sustained for longer. With chips approaching the 1-nanometer process node, tech like this could be crucial.


Incogni vs DeleteMe (2026): Which Data Removal Service Works Better?


Data removal services have appeared as an answer to a growing unease about the privacy of our personal information, especially online. They help people manage the visibility of their data and reduce what’s circulating in the hands of data brokers, companies that collect and sell personal data.

However, for many users, this sounds too good to be true, and so they ask themselves: Does the service actually work? Is it legitimate? How effective is it? And which is the best among so many data removal services?

Two popular options are Incogni and DeleteMe. Both promise to reduce your online footprint, but they approach the task differently. This comparison breaks down how each service works, how it actually performs, how easy it is to use, and how their coverage differs, to help you decide which provider fits your privacy needs.

Incogni vs DeleteMe: Quick Comparison (2026)

Category | Incogni | DeleteMe
Prices | From $7.99/month when billed annually | From $6.97/month when billed biennially
Removal model | Fully automated, recurring opt-out requests | Manual and automated, team-assisted removals
Broker coverage | 420+ data brokers, both private and public listings, custom removal, and additional sites on higher plans | 85-100+ automated, 850+ with custom removals (depending on the plan)
Recurring follow-ups | Every 60-90 days | Quarterly
Dashboard & reports | Ongoing status dashboard | Quarterly reports, updates
Independent verification | Deloitte Limited Assurance Report | No third-party verification
Customer support | Live chat, email, tickets, phone with Unlimited plans | Phone, live chat, email, web form
Best for | Hands-off ongoing process | More user involvement, people-search focus

Data as of February 2026

Incogni vs DeleteMe: How They Work

Incogni is an automated data removal system. After signing up, you verify your identity, and the platform starts sending deletion requests to hundreds of data brokers, also covering the “invisible” layer: information shared for marketing or profiling. Requests are repeated every 60 days for public brokers and every 90 days for private ones. The service also tracks responses, shows progress on a dashboard, and sends follow-ups when required.

DeleteMe combines automated and manual removal processes. Its team finds public listings with your information and manually sends opt-out requests periodically. DeleteMe’s focus is mainly on people-search sites that can be easily found. Users receive detailed reports on what was sent and what the answers were.

Coverage and Scope

A data removal service involves two layers:

  1. Visible public listings (people-search sites)
  2. Commercial broker databases (behind the scenes)

DeleteMe emphasizes visible listings and provides reports on those results, which can feel more tangible for users who want to be involved in the process. 85-100+ of these sites are reached automatically, but there are also an additional 850+ listings for custom removals (depending on the plan).

Incogni’s automated system reaches 420+ data brokers, including private databases that don’t appear in search results. Higher plans allow custom removals for 2,000+ sites, broadening the reach.

To sum up, Incogni focuses on broad, diverse, recurring suppression, while DeleteMe is more about systematic, manual management.

Availability and Compliance

Both providers operate in multiple countries, but their regional reach and legal frameworks differ.

DeleteMe has been assisting users since 2010 and advertises the removal of 100+ million listings over the years. The company emphasizes its longevity in the industry. It primarily serves users in the US, with some international capability depending on listing type and broker location. Its international locations include Australia, Belgium, Brazil, Canada, France, Germany, Ireland, Italy, the Netherlands, Singapore, and the UK.

The provider maintains compliance with AICPA SOC 2 Type 2 (an audit standard by the American Institute of Certified Public Accountants), the EU’s GDPR, and various privacy laws passed in the US.

Incogni, on the other hand, is more widely available. As of now, it can be used by users in the US, the UK, all EU countries, Canada, Switzerland, Norway, Iceland, Liechtenstein, and the Isle of Man.

It operates under privacy laws like GDPR (EU), CCPA/CPRA (California, US), and similar, where applicable. Deloitte has verified that Incogni has processed 245+ million removal requests since 2022, up to mid-2025.

Transparency and Credibility

Incogni

Through its limited assurance report, Deloitte confirmed that Incogni’s removal process works as described by the provider. Deloitte verified key operational claims around recurring cycles, sending requests, and reaching brokers. 

What’s more, PCMag and PCWorld awarded Incogni their Editors’ Choice awards, highlighting its automation, coverage, and ease of use. At the same time, user reviews on Trustpilot mainly show positive results with a gradual reduction in broker listings. Incogni’s overall rating stands at 4.4.

DeleteMe

DeleteMe is an established removal service with generally warm feedback from many users over the years. It holds a 4.0 rating on Trustpilot, averaged over almost 200 reviews, around three quarters of them positive. Users praise its data removal and ongoing checks, though some describe slow results or incomplete removals.

When it comes to industry reviews, DeleteMe is appreciated for removing personal data from aggregators, allowing custom removal requests, and providing instructions for free DIY removal. However, reviewers often mention that its coverage is quite narrow and that reports, which come out quarterly, are too infrequent.

User Experience

Incogni

Users love Incogni for its simplicity and automation. After setup, which takes no more than 10 minutes, the dashboard displays all the necessary information without unnecessary, complicated details. You can see which brokers have been contacted, which requests are pending, and which sites have already deleted your data.

For the most part, users don’t have to intervene manually; everything happens in the background. Weekly or periodic progress summaries come straight to your email and keep you informed without logging in daily. Removal cycles are also automated, handled every 60-90 days.

DeleteMe

DeleteMe’s user experience reflects its human-assisted model. After selecting a plan and uploading your personal details, the team will run periodic scans of dozens to hundreds of listing sites. Instead of following a live dashboard, users get regular reports, typically quarterly, that outline what was found and removed. These reports are more detailed, including context and explanations, such as site name, removal date, status, etc.

Some users may appreciate the clear narrative, while others prefer dashboards and less engagement.

Across both services, reviews point to a clear split: Incogni is praised for automation and ongoing protection with minimal effort from the user, while DeleteMe’s biggest strengths are its human-handled processes and detailed reports.

Customer Support

Both providers offer structured support, but their channels differ.

Incogni offers:

  • Email/ticket-based support
  • Help Center documentation
  • Live chat support through its website
  • Phone support only for Unlimited plans

Users often describe Incogni’s support team’s responses as clear and process-oriented. They also say that the process is quite quick.

DeleteMe offers:

  • Phone support
  • Live chat
  • Email support
  • Web contact forms

User feedback praises DeleteMe’s helpful human interaction and explanations in its reports. However, many reviews mention slower response times, especially during peak periods.

Pricing

Incogni Pricing

Plan | Billing frequency | Monthly cost | Features
Standard | Annually | $7.99 | Automated broker removals, dashboard tracking
Standard | Monthly | $15.98 | Everything above at a higher overall cost
Unlimited | Annually | $14.99 | Everything above, plus unlimited custom removal requests, coverage of 2,000 additional sites, and live phone support
Unlimited | Monthly | $29.98 | Everything above at a higher overall cost
Family Standard | Annually | $15.99 | Standard for up to 5 members, plus family account management
Family Standard | Monthly | $31.98 | Everything above at a higher overall cost
Family Unlimited | Annually | $22.99 | Unlimited for up to 5 members, plus family account management
Family Unlimited | Monthly | $45.98 | Everything above at a higher overall cost

For American users, there’s an additional option: the Protect Plan, an all-in-one bundle of data removal and identity-theft protection. For $41.48/month, you get everything in Incogni Unlimited plus identity-theft protection through NordProtect. Incogni is also included with Surfshark’s One+ plan from $4.19/month.

DeleteMe Pricing

| Plan | Monthly cost (billed biannually) | Monthly cost (billed annually) | Features |
|------|----------------------------------|--------------------------------|----------|
| 1 person | $6.97 | $8.60 | Quarterly privacy reports; A+ BBB rating; email, chat, and phone support; custom removal requests |
| 2 people | $13.93 | $17.20 | Same as above |
| Family | $27.87 | $34.40 | Same as above, for 4 people |

Both DeleteMe and Incogni have custom offers for businesses. Neither includes a free trial, but Incogni comes with a 30-day money-back guarantee for risk-free testing, while DeleteMe offers a full refund only if you cancel before the first privacy report. You can, however, get a free scan with DeleteMe.
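For readers comparing plans, the per-month prices above are easy to turn into 12-month totals. A minimal sketch, using only figures from the pricing tables (the plan names in the dictionaries are labels for this example, not official SKU names):

```python
# Rough annual-cost comparison from the advertised per-month prices above.
# Every figure comes from the pricing tables; each plan is compared over
# a 12-month horizon at its quoted per-month rate.

INCOGNI = {
    "Incogni Standard (annual billing)": 7.99,
    "Incogni Standard (monthly billing)": 15.98,
    "Incogni Unlimited (annual billing)": 14.99,
}
DELETEME = {
    "DeleteMe 1 person (biannual billing)": 6.97,
    "DeleteMe 1 person (annual billing)": 8.60,
}

def yearly_total(monthly_price: float, months: int = 12) -> float:
    """Total spend over `months` at the quoted per-month rate."""
    return round(monthly_price * months, 2)

for name, price in {**INCOGNI, **DELETEME}.items():
    print(f"{name}: ${yearly_total(price):.2f}/year")
```

Run against the table figures, this shows, for example, that Incogni Standard costs $95.88/year on annual billing versus $191.76 when billed monthly, and that DeleteMe’s single-person biannual rate works out to $83.64 per 12 months.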

Final Verdict: Two Solid Approaches for 2026

Incogni and DeleteMe are among the most popular data removal service providers, but they fit different privacy needs:

  • Incogni is an excellent choice for users who don’t want to get too involved in the data removal process. It also offers wider broker coverage, with recurring cycles that keep sending removal requests. Incogni has also been independently assessed and has received editorial recognition.
  • DeleteMe will suit users who prefer a human-assisted removal process and detailed reports about contacted brokers and their responses.

In 2026, both services can be effective, but if you prioritize automation, scale, and long-term coverage, Incogni’s approach will suit you better.

FAQ

Which platform provides broader data broker coverage?

Incogni covers over 420 brokers on its standard plan. DeleteMe claims coverage for 750+ sites, but its standard tier often defaults to around 100 high-impact brokers, with the remainder requiring manual custom requests.

Should I prioritize human-assisted removals over AI automation?

DeleteMe employs human privacy experts to handle complex manual opt-outs, which may offer higher precision for difficult cases. Incogni uses advanced AI to maintain a set-and-forget system that is generally faster and more affordable.

Which service is more effective for users living outside the United States?

Incogni is the stronger choice for global privacy, with a single subscription covering the US, UK, Canada, and 30+ European countries. DeleteMe historically focuses on the American market, though it has expanded to select international regions.

How frequently will I receive updates on my removal status?

Incogni sends progress updates every week. DeleteMe typically provides more comprehensive deep-dive reports, but issues them every quarter.



Nintendo is suing the US government over Trump’s tariffs


Nintendo of America is suing the US government, including the Department of Treasury, Department of Homeland Security and US Customs and Border Protection, over its tariff policy, Aftermath reports. The video game giant already raised prices on the Nintendo Switch in August 2025 in response to “market conditions,” but has so far left the price of its newer Switch 2 console unchanged.

Nintendo’s lawsuit, filed in the US Court of International Trade, cites a Supreme Court ruling from February that confirmed lower courts’ opinions that the Trump administration’s global tariffs were illegal. Nintendo’s lawyers claim that the video game company has been “substantially harmed by the unlawful execution and imposition” of “unauthorized Executive Orders” and by the fees Nintendo has already paid to import products into the country. In response, the company is seeking a “prompt refund, with interest” of the tariffs it has paid.

“We can confirm we filed a request,” Nintendo of America said in a statement. “We have nothing else to share on this topic.”

While taxes and other trade policies are supposed to be set by Congress, President Donald Trump implemented a collection of global tariffs over the course of his first year in office using executive orders and the International Emergency Economic Powers Act (IEEPA), a law that gives the president expanded control over trade during a global emergency. The Trump administration has positioned tariffs as a way to punish enemies and bargain with trade partners, but many companies have passed the increased price of importing goods onto customers.


In upholding opinions from the US District Court of the District of Columbia and the US Court of International Trade, the Supreme Court removed the Trump administration’s ability to collect tariffs using IEEPA, but didn’t clarify how the tariffs the government had illegally collected should be returned to companies. Like Nintendo, other companies have decided filing a lawsuit is the best way to get refunded.

The Guardian reports that US Customs and Border Protection is already preparing a system to process refunds for affected companies, but that might not mark the end of Trump’s tariff regime. In a press conference held after the Supreme Court released its decision, the President announced plans to introduce tariffs using other, more constrained methods. Tariffs aren’t the only obstacle Nintendo faces, either. The company could also be forced to raise the price of its consoles in response to the current RAM shortage.


Microsoft, Google, Amazon say Anthropic Claude remains available to non-defense customers


Enterprises and startups that use Anthropic Claude through Microsoft and Google products need not fear that the model will be ripped from their reach, Microsoft and Google confirmed to TechCrunch. AWS customers and partners can also reportedly continue to use Claude for their non-defense associated workloads.

Microsoft was the first big tech company to offer assurance that Anthropic’s models will remain available to its customers even though the Trump administration’s Department of War — formerly known as the Department of Defense — has escalated its feud with Anthropic.

The Defense Department officially designated the American AI startup as a supply-chain risk on Thursday after the AI company refused to give it unrestricted access to its tech for applications the company said its AI could not safely support, such as mass surveillance and fully autonomous weapons.

The supply-chain risk designation is typically reserved for foreign adversaries. For Anthropic, the designation means that the Pentagon won’t be able to use the company’s products once it transitions Claude off its systems. It also requires any company or agency that works with the Pentagon to certify that they don’t use Anthropic’s models, either. Anthropic has vowed to fight the designation in court.


Microsoft sells an array of products, from Office to its cloud, to many federal agencies, including the Defense Department. A Microsoft spokesperson said that the company will continue making Anthropic’s models available within its own products and to Microsoft customers.

“Our lawyers have studied the designation and have concluded that Anthropic products, including Claude, can remain available to our customers — other than the Department of War — through platforms such as M365, GitHub, and Microsoft’s AI Foundry, and that we can continue to work with Anthropic on non-defense related projects,” the spokesperson said in an email. CNBC first reported on the comment.

Google, which sells cloud computing, AI, and productivity tools to federal agencies, has also confirmed that it will continue to make Claude available to its customers.


“We understand that the Determination does not preclude us from working with Anthropic on non-defense related projects, and their products remain available through our platforms, like Google Cloud,” a Google spokesperson said.


CNBC also reported that AWS customers and partners can keep using Claude for their non-defense workloads.

This echoes what Anthropic CEO Dario Amodei said in his statement vowing to fight the designation.

“With respect to our customers, it plainly applies only to the use of Claude by customers as a direct part of contracts with the Department of War, not all use of Claude by customers who have such contracts,” Amodei said, adding, “Even for Department of War contractors, the supply chain risk designation doesn’t (and can’t) limit uses of Claude or business relationships with Anthropic if those are unrelated to their specific Department of War contracts.”

In the meantime, Claude’s consumer growth surge has continued after Anthropic refused to give in to the department’s demands.



Portable Tow Rope Batman Would Be Proud Of


Out of all the tools in Batman’s massive arsenal that turn a relatively ordinary person into a superhero, perhaps the most utilitarian is his grappling gun, which lets him soar around his city like Spider-Man or Superman. [John Boss] isn’t typically fighting crime, but he did develop a grappling gun of sorts that gives him another superpower: the ability to quickly haul himself back up snowy hills.

The grappling gun takes inspiration from a commonly used tool called a power ascender, often found in industrial applications where climbing is required. This one is held in the hand and uses a brushless motor with a belt-driven 3:1 reduction for increased torque. The pulley system, bearings, and motor are all housed in a 3D-printed enclosure and powered by rechargeable Milwaukee power tool batteries. During prototyping, the rope intake and output feed locations had to be moved to improve the pulley’s grip on the rope, and once he had a working prototype, [John] swapped many of the plastic 3D-printed parts for metal to make the device sturdier.
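To see what a 3:1 belt reduction buys in a winch like this, here is a back-of-the-envelope sketch. The motor torque, motor speed, drum diameter, and belt efficiency are illustrative guesses, not specs from the build; only the 3:1 ratio comes from the article.

```python
import math

# Back-of-the-envelope numbers for a belt-driven rope winch.
# MOTOR_TORQUE_NM, MOTOR_RPM, DRUM_DIAMETER_M, and BELT_EFFICIENCY are
# assumed values for illustration, NOT specs from the actual device.
MOTOR_TORQUE_NM = 1.2      # assumed brushless motor torque
MOTOR_RPM = 3000           # assumed motor speed under load
REDUCTION = 3.0            # the 3:1 belt reduction from the article
BELT_EFFICIENCY = 0.95     # typical for a synchronous belt stage
DRUM_DIAMETER_M = 0.05     # assumed rope drum diameter

# A reduction trades speed for torque: output torque scales up by the
# ratio (minus belt losses), while output speed scales down by the ratio.
drum_torque = MOTOR_TORQUE_NM * REDUCTION * BELT_EFFICIENCY
drum_rpm = MOTOR_RPM / REDUCTION

# Rope pull force and line speed at the drum surface.
pull_force_n = drum_torque / (DRUM_DIAMETER_M / 2)
line_speed_ms = drum_rpm / 60 * math.pi * DRUM_DIAMETER_M

print(f"pull force: {pull_force_n:.0f} N, line speed: {line_speed_ms:.2f} m/s")
```

With these assumed numbers, the drum sees roughly triple the motor torque while the rope still pays out at a brisk walking-pace line speed, which is the basic trade the 3:1 reduction makes.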

The grappling gun was originally designed to hoist a small child up a hill on a sled, but when stress testing the device [John] found that it has more than enough capability to haul even an adult up a hill on skis. As an added bonus, the rope’s outfeed can be pointed into a bag to automatically coil the rope when he’s done at the hill. Although this is a great solution for a portable rope tow, for something more permanent and more powerful take a look at this backyard rope tow that was built from spare parts.



Copyright © 2025