
Tech

Gene Amdahl gambled on wafer-scale silicon decades before the AI era made it viable

Long before wafer-scale processors became associated with AI accelerators and ultra-large chips, Gene Amdahl was already trying to turn an entire silicon wafer into a single processor.



Is Age Verification a Trap?

Social media is going the way of alcohol, gambling, and other social sins: Societies are deciding it’s no longer kid stuff. Lawmakers point to compulsive use, exposure to harmful content, and mounting concerns about adolescent mental health. So, many propose to set a minimum age, usually 13 or 16.

In cases when regulators demand real enforcement rather than symbolic rules, platforms run into a basic technical problem. The only way to prove that someone is old enough to use a site is to collect personal data about who they are. And the only way to prove that you checked is to keep the data indefinitely. Age-restriction laws push platforms toward intrusive verification systems that often directly conflict with modern data-privacy law.

This is the age-verification trap. Strong enforcement of age rules undermines data privacy.

How Does Age Enforcement Actually Work?

Most age-restriction laws follow a familiar pattern. They set a minimum age and require platforms to take “reasonable steps” or “effective measures” to prevent underage access. What these laws rarely spell out is how platforms are supposed to tell who is actually over the line. At the technical level, companies have only two tools.

The first is identity-based verification. Companies ask users to upload a government ID, link a digital identity, or provide documents that prove their age. Yet in many jurisdictions, 16-year-olds do not have IDs. In others, IDs exist but are not digital, not widely held, or not trustworthy. Storing copies of identity documents also creates security and misuse risks.

The second option is inference. Platforms try to guess age based on behavior, device signals, or biometric analysis, most commonly facial age estimation from selfies or videos. This avoids formal ID collection, but it replaces certainty with probability and error.

In practice, companies combine both. Self-declared ages are backed by inference systems. When confidence drops, or regulators ask for proof of effort, inference escalates to ID checks. What starts as a light-touch checkpoint turns into layered verification that follows users over time.
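The escalation pattern described above can be sketched in code. This is a hypothetical illustration, not any platform's actual implementation: the tier names, confidence thresholds, and signal fields are all assumptions made for the example.

```python
# Hypothetical sketch of layered age verification: self-declaration backed by
# inference, escalating to ID checks when confidence drops. Thresholds and
# field names are illustrative assumptions, not any platform's real logic.

from dataclasses import dataclass
from typing import Optional

@dataclass
class AgeSignal:
    declared_age: int                # what the user typed at sign-up
    inferred_age: Optional[float]    # e.g. a facial age estimate, or None
    confidence: float                # model confidence in the inference, 0..1

def verification_step(signal: AgeSignal, min_age: int = 16) -> str:
    """Return the next enforcement action for one user."""
    # Tier 1: accept the declared age when inference confidently agrees.
    if signal.inferred_age is not None and signal.confidence >= 0.9:
        if signal.inferred_age >= min_age and signal.declared_age >= min_age:
            return "allow"
        return "restrict"            # confident the user is underage
    # Tier 2: no inference yet -> request a selfie-based age estimate.
    if signal.inferred_age is None:
        return "request_inference"
    # Tier 3: inference exists but is uncertain -> escalate to an ID check,
    # which is where the data-collection and retention problem begins.
    return "request_id"
```

Note that every escalation in this sketch collects more personal data than the tier before it, which is the structural trap the article describes.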

What Are Platforms Doing Now?

This pattern is already visible on major platforms.

Meta has deployed facial age estimation on Instagram in multiple markets, using video-selfie checks through third-party partners. When the system flags users as possibly underage, it prompts them to record a short selfie video. An AI system estimates their age and, if it decides they are under the threshold, restricts or locks the account. Appeals often trigger additional checks, and misclassifications are common.

TikTok has confirmed that it also scans public videos to infer users’ ages. Google and YouTube rely heavily on behavioral signals tied to viewing history and account activity to infer age, then ask for government ID or a credit card when the system is unsure. A credit card functions as a proxy for adulthood, even though it says nothing about who is actually using the account. Roblox, the gaming platform that recently launched a new age-estimation system, is already seeing users sell child-aged accounts to adult predators seeking entry to age-restricted areas, Wired reports.

For a typical user, age is no longer a one-time declaration. It becomes a recurring test. A new phone, a change in behavior, or a false signal can trigger another check. Passing once does not end the process.

How Do Age-Verification Systems Fail?

These systems fail in predictable ways.

False positives are common. Platforms flag adults as minors when they have youthful faces, share family devices, or show otherwise unusual usage patterns, and lock their accounts, sometimes for days. False negatives persist as well: teenagers quickly learn to evade checks by borrowing IDs, cycling through accounts, or using VPNs.

The appeal process itself creates new privacy risks. Platforms must store biometric data, ID images, and verification logs long enough to defend their decisions to regulators. So if an adult who is tired of submitting selfies to verify their age finally uploads an ID, the system must now secure that stored ID. Each retained record becomes a potential breach target.

Scale that experience across millions of users, and you bake the privacy risk into how platforms work.

Is Age Verification Compatible With Privacy Law?

This is where emerging age-restriction policy collides with existing privacy law.

Modern data-protection regimes all rest on similar ideas: Collect only what you need, use it only for a defined purpose, and keep it only as long as necessary.

Age enforcement undermines all three.

To prove they are following age-verification rules, platforms must log verification attempts, retain evidence, and monitor users over time. When regulators or courts ask whether a platform took reasonable steps, “We collected less data” is rarely persuasive. For companies, defending against accusations of lax age verification takes precedence over defending against accusations of excessive data collection.

It is not an explicit choice by voters or policymakers, but instead a reaction to enforcement pressure and how companies perceive their litigation risk.

Less Developed Countries, Deeper Surveillance

Outside wealthy democracies, the trade-off is even starker.

Brazil’s Statute of the Child and Adolescent (ECA in Portuguese) imposes strong child-protection duties online, while its data-protection law restricts data collection and processing. Now providers operating in Brazil must adopt effective age-verification mechanisms and can no longer rely on self-declaration alone for high-risk services. Yet they also face uneven identity infrastructure and widespread device sharing. To compensate, they rely more heavily on facial estimation and third-party verification vendors.

In Nigeria many users lack formal IDs. Digital service providers fill the gap with behavioral analysis, biometric inference, and offshore verification services, often with limited oversight. Audit logs grow, data flows expand, and the practical ability of users to understand or contest how companies infer their age shrinks accordingly. Where identity systems are weak, companies do not protect privacy. They bypass it.

The paradox is clear. In countries with less administrative capacity, age enforcement often produces more surveillance, not less, because inference fills the void of missing documents.

How Do Enforcement Priorities Change Expectations?

Some policymakers assume that vague standards preserve flexibility. In the U.K., then–Digital Secretary Michelle Donelan argued in 2023 that requiring certain online safety outcomes without specifying the means would avoid mandating particular technologies. Experience suggests the opposite.

When disputes reach regulators or courts, the question is simple: Can minors still access the platform easily? If the answer is yes, authorities tell companies to do more. Over time, “reasonable steps” become more invasive.

Repeated facial scans, escalating ID checks, and long-term logging become the norm. Platforms that collect less data start to look reckless by comparison. Privacy-preserving designs lose out to defensible ones.

This pattern is familiar from other domains, including online sales-tax enforcement. After courts settled that large platforms had an obligation to collect and remit sales taxes, companies began continuously tracking and storing transaction destinations and customer location signals. That tracking is not abusive, but once enforcement requires proof over time, companies build systems to log, retain, and correlate more data. Age verification is moving the same way. What begins as a one-time check becomes an ongoing evidentiary system, with pressure to monitor, retain, and justify user-level data.

The Choice We Are Avoiding

None of this is an argument against protecting children online. It is an argument against pretending there is no trade-off.

Some observers present privacy-preserving age proofs involving a third party, such as the government, as a solution, but they inherit the same structural flaw: Many users who are legally old enough to use a platform do not have government ID. In countries where the minimum age for social media is lower than the age at which ID is issued, platforms face a choice between excluding lawful users and monitoring everyone. Right now, companies are making that choice quietly, after building systems and normalizing behavior that protects them from the greater legal risks. Age-restriction laws are not just about kids and screens. They are reshaping how identity, privacy, and access work on the Internet for everyone.

The age-verification trap is not a glitch. It is what you get when regulators treat age enforcement as mandatory and privacy as optional.


UK data hoarder flies to America to buy hard drives, saves $2,000

In sharing the story, Redditor cgtechuk said he was running out of space after upgrading to four 16TB drives purchased from Amazon UK in 2020. Unfortunately, the 28TB replacement drives he had been eyeing were skyrocketing in price locally. Having purchased hardware in the US before, he decided to…

Google clamps down on Antigravity ‘malicious usage’, cutting off OpenClaw users in sweeping ToS enforcement move

Google caused controversy among some developers this weekend and today, Monday, February 23rd, after restricting their usage of its new Antigravity “vibe coding” platform, alleging “malicious usage.”

Some users who had been using the open source autonomous AI agent OpenClaw in conjunction with agents built on Antigravity, as well as those who had connected OpenClaw agents to their Gmail accounts, claimed on social media that they lost access to their Google accounts.

According to Google, said users had been using Antigravity to access a larger number of Gemini tokens via third-party platforms like OpenClaw, which overwhelmed the system for other Antigravity customers. 

This move has cut off several users, underscoring the architectural and trust issues that can arise with OpenClaw. The timing of Google’s crackdown is particularly pointed. Just one week ago, on February 15, OpenAI CEO Sam Altman announced that OpenClaw creator Peter Steinberger had joined OpenAI to lead its “next generation of personal agents.” While OpenClaw remains an open-source project under an independent foundation, it is now financially backed and strategically guided by Google’s primary rival.

By cutting off OpenClaw’s access to Antigravity, Google isn’t just protecting its server load; it is effectively severing a pipeline that allows an OpenAI-adjacent tool to leverage Google’s most advanced Gemini models.

Varun Mohan, a Google DeepMind engineer and the founder and former CEO of Windsurf, said in an X post that the company noticed “malicious usage” that led to service degradation.

“We’ve been seeing a massive increase in malicious usage of the Antigravity backend that has tremendously degraded the quality of service for our users. We needed to find a path to quickly shut off access to these users that are not using the product as intended. We understand that a subset of these users were not aware that this was against our ToS [Terms of Service] and will get a path for them to come back on but we have limited capacity and want to be fair to our actual users,” the post said. 

A Google DeepMind spokesperson told VentureBeat that the move is not to permanently ban the use of Antigravity to access third-party platforms, but to align its use with the platform’s terms of service.   

Unsurprisingly, Google’s move has caused a furor among OpenClaw users, including from OpenClaw creator Peter Steinberger, who announced that OpenClaw will remove Google support as a result. 

Infrastructure and connection uncertainty

OpenClaw emerged as a way for individual users to run shell commands and access local files, fulfilling a major promise of AI agents: efficiently running workflows for users.

But, as VentureBeat has frequently pointed out, it can often run into security and guardrail issues. There are companies building ways for enterprise customers to access OpenClaw securely and with a governance layer, though OpenClaw is so new that we should expect more announcements soon.

However, Google’s move was not framed as a security issue but rather as one of access and runtime, further showing that there is still significant uncertainty when users want to bring something like OpenClaw into their workflow.

This is not the first time developers and power users of agentic AI found their access curtailed. Last year, Anthropic throttled access to Claude Code after the company claimed some users were abusing the system by running it 24/7. 

What this does highlight is the disconnect between companies like Google and OpenClaw users. OpenClaw offered many interesting possibilities for creating workflows with agents. However, because it is continually evolving, users may inadvertently run afoul of ToS or rate limits. 

Mohan said Google is working to bring the banned users back, but whether this means the company will amend its ToS or figure out a secure connection between OpenClaw agents and Antigravity models remains to be seen. 

For developers, the message is clear: the era of “bring your own agent” to a frontier model is ending. Providers are now prioritizing vertically integrated experiences where they can capture 100% of the telemetry and subscription revenue, often at the expense of the open-source interoperability that defined the early days of the LLM boom.

Affected users

Several users said on both the Y Combinator chat boards and X that they no longer had access to their Google accounts after running OpenClaw instances for certain Google products.

Google’s move mirrors a broader industry shift toward “walled garden” agent ecosystems. Earlier this year, Anthropic introduced “client fingerprinting” to ensure that its Claude Code environment remains the exclusive interface for its models, effectively locking out third-party wrappers like OpenClaw.

Some have said they will no longer use Google or Gemini for their projects. Right now, people who still want to keep using Antigravity will need to wait until Google figures out a way for them to use OpenClaw and access Gemini tokens in a manner Google deems “fair.” 

Google DeepMind reiterated that it had only cut access to Antigravity, not to other Google applications. 

Conclusion: the enterprise takeaway

For enterprise technical decision-makers, the “Antigravity Ban” serves as a definitive case study in the risks of agentic dependency. As the industry moves from chatbots to autonomous agents, the following realities must now dictate strategy:

  • Platform fragility is the new normal: The sudden lockout of $250/month “Ultra” users proves that even high-paying enterprise customers have little leverage when a provider decides to change its “fair use” definitions. Relying on OAuth-based third-party wrappers for core business logic is now a high-risk gamble.

  • The rise of local-first governance: With OpenClaw moving toward an OpenAI-backed foundation and Google/Anthropic tightening their clouds, enterprises should prioritize agent frameworks that can run “local-first” or within VPCs. The “token loophole” that OpenClaw exploited is being closed; future agentic scale will require direct, high-cost API contracts rather than subsidized consumer seats.

  • Account portability as a requirement: The fact that users “lost access to their Google accounts” underscores the danger of bundling development environments with primary identity providers. Decision-makers should decouple AI development from core corporate identity (SSO) where possible to avoid a single ToS violation paralyzing an entire team’s communications.

Ultimately, the Antigravity incident marks the end of the “Wild West” for AI agents. As Google and OpenAI stake their claims, the enterprise must choose between the stability of the walled garden and the complexity (and cost) of truly independent, self-hosted infrastructure.


Viral Doomsday Report Lays Bare Wall Street’s Deep Anxiety About AI Future

A 7,000-word “doomsday” thought experiment from Citrini Research helped trigger an 800-point drop in the Dow, “painting a dark portrait of a future in which technological change inspires a race to the bottom in white-collar knowledge work,” reports the Wall Street Journal. From the report: Concerns of hyperscalers overspending are out. Worries of software-industry disruption don’t go far enough. The “global intelligence crisis” is about to hit. The new, broader question: What if AI is so bullish for the economy that it is actually bearish? “For the entirety of modern economic history, human intelligence has been the scarce input,” Citrini wrote in a post it described as a scenario dated June 2028, not a prediction. “We are now experiencing the unwind of that premium.”

Many of Monday’s moves roughly aligned with the situation outlined by Citrini, in which fast-advancing AI tools allow spending cuts across industries, sparking mass white-collar unemployment and in turn leading to financial contagion. Software firms DataDog, CrowdStrike and Zscaler each plunged more than 9%. International Business Machines’ 13% decline was its worst one-day performance since 2000. American Express, KKR and Blackstone — all name-checked by Citrini — tumbled. That anxiety, coupled with renewed uncertainty about trade policy from Washington, weighed down major indexes Monday. The Dow Jones Industrial Average led declines, falling 1.7%, or 822 points. The S&P 500 shed 1%, while the Nasdaq composite retreated 1.1%.

[…] Monday’s market swings extended a run of AI-linked volatility. A small research outfit that has garnered a huge Substack following for macro and thematic stock research, Citrini said in its new post that software firms, payment processors and other companies formed “one long daisy chain of correlated bets on white-collar productivity growth” that AI is poised to disrupt. […] Shares in DoorDash also veered 6.6% lower Monday after Citrini’s Substack note called the delivery app a “poster child” for how new tools would upend companies that monetize interpersonal friction. In the research firm’s scenario, AI agents would help both drivers and customers navigate food deliveries at much lower costs.


Group alleges fake sign-ins used to pad apparent opposition to Washington state ‘millionaires tax’

Washington state Sen. Victoria Hunt, a co-sponsor of SB 6346, speaks during a virtual news conference on Monday about how she learned that her name had been fraudulently signed in as “con” over the weekend on a public comment page ahead of a House Committee on Finance hearing on the millionaires tax. (Screen grab via Invest in Washington Now)

Invest in Washington Now, a Washington state-based advocacy group focused on progressive revenue reform, is alleging that widespread fraud in the Legislature’s public comment system has been used to pad apparent opposition to the so-called “millionaires tax.”

In a news release and virtual press conference on Monday, Invest in Washington Now said there have been tens of thousands of duplicate names used as sign-ins for hearings on Senate Bill 6346 and House Bill 2724. The group said more than 100 sign-ins marked “con” were confirmed as fraudulent over the weekend and ahead of Tuesday’s public hearing in the House Committee on Finance.

The Seattle Times reported on the allegations on Monday.

Among those who were allegedly impersonated: Sen. Victoria Hunt (D-Issaquah), a co-sponsor of the millionaires tax; former Rep. Derek Kilmer; SEIU 775 Secretary Treasurer Adam Glickman; and WEA President Larry Delaney.

Invest In Washington Now shared a letter it sent to Attorney General Nick Brown and House Chief Clerk Bernard Dean calling for an investigation into the scale of the alleged fraud and who is behind it.

“This is a clearly fraudulent effort to mislead legislators and the public about the level of opposition to the millionaires tax, and the ability to commit this type of fraud could undermine the integrity of legislative process on this and other issues,” the letter said.

The millionaires tax, which was approved by the Senate last week, would create a 9.9% tax applied to taxable, personal annual income that exceeds $1 million. The legislation marks the first time in decades that state lawmakers have pursued a personal income tax aimed at high‑income residents.

The bill has drawn opposition from some tech leaders and entrepreneurs who worry it could undermine the sector by souring Washington’s relatively favorable tax laws for startup founders, investors and high-wage earners.

Opponents of the tax have been pointing to what they call the “most unpopular bill in state history,” citing the many thousands of Washington residents who have signed on in opposition.

“More than 60,000 people signed in against SB 6346 when it received a rushed hearing in the Senate,” Sen. John Braun (R-Centralia) said in a news release last week. “That is so impressive that Democrats have tried to say bots are responsible, even though the Legislature blocks bots. We know better.”

The legislative sign-in page does require CAPTCHA, a security mechanism used to prevent bots from abusing websites. But Invest in Washington Now pointed to the frequency and high number of duplicate names, many signed in within seconds of each other, as evidence suggesting the use of automated sign-in tools.
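The kind of analysis the group describes, flagging names that repeat with sign-ins only seconds apart, can be sketched as follows. The data format, field names, and thresholds here are assumptions for illustration, not the Legislature's actual records.

```python
# Illustrative duplicate-sign-in detector: flag names that appear repeatedly
# with consecutive sign-ins only seconds apart, a pattern suggesting
# automation. Thresholds are made-up assumptions, not real audit criteria.

from collections import defaultdict

def flag_suspicious(signins, max_gap_seconds=10, min_repeats=3):
    """signins: list of (name, unix_timestamp) tuples, in any order.
    Returns names with at least min_repeats sign-ins whose consecutive
    timestamps are all within max_gap_seconds of each other."""
    by_name = defaultdict(list)
    for name, ts in signins:
        by_name[name].append(ts)
    flagged = []
    for name, stamps in by_name.items():
        stamps.sort()
        if len(stamps) < min_repeats:
            continue
        gaps = [b - a for a, b in zip(stamps, stamps[1:])]
        if all(g <= max_gap_seconds for g in gaps):
            flagged.append(name)
    return sorted(flagged)
```

A real investigation would combine this with IP addresses and session metadata, but even timestamp clustering alone can separate organic sign-ins from scripted bursts.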

Hunt, who represents the 5th Legislative District, said she was signed in fraudulently twice.

“I did not sign in ‘con,’ I’m not sure who is doing this,” Hunt said. “I don’t know why a senator would sign into a House hearing in any event. It was not me.”

SEIU’s Glickman said he strongly supports the millionaires tax, so he was surprised to learn of his own apparent opposition to the bill.

“I was shocked to say the least, to learn that at 4:32 a.m. Thursday morning while I was home fast asleep, somebody apparently put my name and organization into the official testimony record as against the millionaires tax,” Glickman said. “I was even more appalled to learn that I wasn’t the only one that happened to over the weekend.”


Anthropic accuses three Chinese AI labs of abusing Claude to improve their own models

Anthropic is issuing a call to action against AI “distillation attacks,” after accusing three AI companies of misusing its Claude chatbot. On its website, Anthropic claimed that DeepSeek, Moonshot and MiniMax have been conducting “industrial-scale campaigns…to illicitly extract Claude’s capabilities to improve their own models.”

Distillation in the AI world refers to when less capable models lean on the responses of more powerful ones to train themselves. While distillation isn’t a bad thing across the board, Anthropic said that these types of attacks can be used in a more nefarious way. According to Anthropic, these three Chinese AI firms were responsible for more than “16 million exchanges with Claude through approximately 24,000 fraudulent accounts.” From Anthropic’s perspective, these competing companies were using Claude as a shortcut to develop more advanced AI models, which could also lead to circumventing certain safeguards.
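Mechanically, distillation trains a student model to match the teacher's output distribution ("soft labels") rather than hard ground-truth labels, typically by minimizing a KL-divergence loss. A toy sketch of that objective, with made-up numbers and no connection to Anthropic's or any lab's actual pipeline:

```python
# Toy illustration of the distillation objective: the student minimizes the
# KL divergence between its output distribution and the teacher's soft
# labels. Real pipelines use temperature-scaled logits over large
# vocabularies; these probabilities are invented for the example.

import math

def kl_divergence(teacher_probs, student_probs, eps=1e-12):
    """KL(teacher || student), the standard distillation loss term."""
    return sum(t * math.log((t + eps) / (s + eps))
               for t, s in zip(teacher_probs, student_probs) if t > 0)

teacher = [0.7, 0.2, 0.1]          # teacher's distribution over 3 tokens
student_bad = [0.34, 0.33, 0.33]   # untrained student: near-uniform
student_good = [0.69, 0.21, 0.10]  # student after imitating the teacher

# Training drives this loss toward zero as the student copies the teacher.
assert kl_divergence(teacher, student_good) < kl_divergence(teacher, student_bad)
```

Run at scale across millions of prompts, this is how a weaker model can absorb a stronger one's behavior, which is why labs treat mass querying as extraction rather than ordinary use.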

Anthropic said in its post that it was able to link each of these distillation-attack campaigns to the specific companies with “high confidence” thanks to IP address correlation, request metadata and infrastructure indicators, along with corroboration from others in the AI industry who have noticed similar behavior.

Early last year, OpenAI made similar claims of rival firms distilling its models and banned suspected accounts in response. As for Anthropic, the company behind Claude said it would upgrade its system to make distillation attacks harder to do and easier to identify. While Anthropic is pointing fingers at these other firms, it’s also facing a lawsuit from music publishers who accused the AI company of using illegal copies of songs to train its Claude chatbot.


A cash bounty is daring hackers to stop Ring cameras from sharing data with Amazon

The Fulu Foundation is offering a cash bounty to anyone who can break Ring cameras free from Amazon’s data ecosystem. The goal isn’t breaking into devices for misuse or surveillance.

It is about giving owners control over devices already installed in their homes, without forcing those cameras to constantly send data back to Amazon.

The @Ring Super Bowl Ad highlighted the inescapable reality that true privacy requires ownership.

Consumers should be able to modify their @Ring devices to maintain that privacy, which is why our newest bounty works to ensure consumer control over Ring cameras and to allow…

— FULU (@FuluFoundation) February 20, 2026

The bounty targets Ring’s video doorbell cameras, which are deeply tied to Amazon’s cloud services. Participants are being asked to find a way to prevent those devices from sending data to Amazon servers, without disabling the cameras themselves.

For many involved, the project is a response to growing discomfort with how Ring devices can be used beyond simple home security.

Inside the bounty and what hackers are being asked to do

The bounty is being offered by Fulu, which is a privacy-focused non-profit organization. Fulu cofounder Kevin O’Reilly told Wired, “People who install security cameras are looking for more security, not less. At the end of the day, control is at the heart of security. If we don’t control our data, we don’t control our devices.”

The challenge pays at least $10,000, with more pledged, to anyone who can modify a Ring camera so it works locally, blocks Amazon data sharing, and keeps features like motion detection and night vision intact.

The solution must rely on readily available and inexpensive tools, and the steps must be clear enough that a moderately technical user could complete the modification in under an hour. The winner will not be required to publish their methods.

Doing so could expose them to legal risk under Section 1201 of the Digital Millennium Copyright Act, which restricts the circumvention of digital locks. O’Reilly says that, as with other Fulu bounties, the decision to publish or keep the work private will be left to the winner.

Why Ring cameras are under scrutiny

Concern has intensified after Ring expanded its Search Party feature, which lets anyone using the Neighbors app help locate lost pets and items through nearby cameras. However, critics argue that personal devices are quietly becoming part of a surveillance network.

That unease has only grown as Ring’s ambitions have become clearer. CEO Jamie Siminoff has spoken about using Ring’s massive camera network to “zero out crime,” positioning the platform as a tool for large-scale crime prevention rather than just personal safety.

These concerns exist against a longer backdrop of skepticism toward Amazon’s handling of user data. A previous Wired investigation revealed internal warnings about weak data safeguards, deepening public concern over potential data misuse.

Recent reports have added to those concerns, including findings that Ring’s Android app allows undisclosed third parties to track users, and that a walk past a Ring camera could turn into a biometric scan.

Whether the bounty succeeds or not, it highlights a growing demand for transparency and autonomy in connected home devices. Meanwhile, users who are not interested in sharing data can opt out by disabling the Search Party feature.


Razer Blackshark V3 X review: a barebones but sensible budget version of the best gaming headset on the market


Razer BlackShark V3 X review: One-minute review

The bells-and-whistles version of Razer’s latest BlackShark V3, the V3 Pro, is one of the best wireless gaming headsets on the market. This model might share the name and the basic chassis design, but it’s available at a very different price, and that means the feature spec sheet looks wildly different too.

Razer is positioning this as an esports model, based on the low-latency wireless connection its Hyperspeed 2.4GHz wireless dongle offers and its impressively svelte 9.6oz / 270g weight. In reality, as welcome as those attributes are, they’re probably more relevant to a non-professional gamer who wants to save some cash, stay comfortable while they play, and avoid connection dropouts than to a professional player in a stadium.


Whoop vs Garmin: Comparing the wearable brands

Deciding between a Whoop strap and a Garmin smartwatch can feel like a tough job, as both promise accurate tracking capabilities. That’s where we at Trusted Reviews come in.

We’ve reviewed countless Garmin smartwatches alongside many Whoop straps, and we’ve compared our experiences and answered key questions about both brands below.

Think an Apple Watch might suit you better? Our Whoop vs Apple Watch and Garmin vs Apple Watch guides can help iPhone users decide which wearable to go for. Otherwise, our list of the best Garmin watches and best fitness trackers is bound to have an option for you.

Price and Subscription

Buying a Whoop is a totally different experience from buying a Garmin tracker. Whoop operates as a subscription model, whereby to access the app and supporting features, you’ll need to pay an annual membership fee. 

There are three memberships to choose from: One, Peak and Life. We’ll discuss the overall differences here, but for a more in-depth look, visit our Life vs Peak vs One guide.

One is the cheapest with a starting price of £169/$169 for a 12-month subscription, and includes a Whoop 5.0 device, charger and a Jet Black CoreKnit band. 

Peak is the mid-range offering, with a starting RRP of £229/$229 for 12 months. This plan also comes equipped with a Whoop 5.0 device but also includes a wireless PowerPack and an Obsidian SuperKnit band.

The most expensive of the three subscriptions is Life, which starts at £349/$349 for 12 months and comes with an upgraded Whoop MG device, a wireless PowerPack, and a Titanium SuperKnit Luxe band. We’ll mention both straps throughout this article, but for more information, check out Whoop 5 vs Whoop MG.

Finally, at the time of writing, Whoop is offering a one-month trial for anyone who wants to try the service before committing to a full 12 months. This trial includes a new or certified pre-owned Whoop 4.0 device, wireless battery pack and a new SuperKnit band. 

Garmin does offer a subscription, Connect Plus, but it isn’t required to use any of its smartwatches. Otherwise, Garmin offers a huge range of different smartwatches and fitness trackers, starting from £129.99/$149.99 and going up to £1099.99/$1199.99.


What does Whoop give you that Garmin doesn’t?

Whoop bands lack one significant feature found in all Garmin smartwatches and trackers: a screen. While this may seem like a surprising omission, Whoop says the design lets you focus on your health without getting distracted by constant notifications.

Whoop MG. Image Credit (Trusted Reviews)

Garmin smartwatches offer some of the best battery life found in wearables, with the likes of the Fenix 8 lasting up to 29 days in smartwatch mode, while others like the Instinct 3 come equipped with a Solar display that keeps the device topped up. However, once it does come time to fully recharge, you’ll need to take the watch off, which means losing some data tracking.

Garmin Forerunner 970. Image Credit (Trusted Reviews)

Whoop is different. Instead, you can use the wireless Power Pack (which is either sold separately or comes with both Peak and Life subscriptions) to recharge your device without removing the strap. This means you won’t miss a minute of data collection, giving you a truly uninterrupted tracking experience. 


Is a Garmin better than a Whoop?

Whether a Garmin is better than a Whoop, or vice versa, boils down to your individual needs and wants from a wearable. If you don’t want to be distracted by a screen showing endless notifications while working out, then the Whoop is an easy recommendation as it’s completely screen-less. On the other hand, if you want a smartwatch that’s almost an extension of your smartphone, then you’ll definitely prefer a Garmin. 


Aside from design, there are many factors that could determine whether a Garmin or Whoop strap is better for you. Firstly, Whoop is designed to provide a more in-depth look at your health and fitness, offering personalised insights into your data. However, Whoop isn’t equipped with built-in GPS, which could be an issue for those who want to accurately track their routes without needing their phone to hand. That’s where Garmin shines.

Garmin route tracking. Image Credit (Trusted Reviews)

Many of the latest Garmin wearables are equipped with multi-band GPS which offers accurate route tracking, regardless of whether you’re surrounded by skyscrapers or in an open field. With this in mind, Garmin is likely the better choice for runners, hikers, mountain bikers and the like. 

Otherwise, remember that you can track most sports and workouts with either a Garmin or a Whoop. It’s worth noting that although Whoop is generally an accurate tracker and provides useful insights, it doesn’t always pick up when you’ve done a lighter workout. While you can manually start an activity, or add one after the fact, it would be better if it were more reliable for lighter exercises.

You can manually add or start an activity via the Whoop smartphone app. Image Credit (Trusted Reviews)


Is Whoop or Garmin more accurate?

It’s worth noting that we’ve found both Whoop and Garmin trackers to be impressively accurate. However, the Whoop’s lack of a screen might be an issue for some, as you can’t see your real-time data without looking at your phone.

We also found with the Whoop MG in particular that, while it does offer automatic exercise tracking, it can be rather hit-and-miss, often missing periods of low- to mid-effort exercise.


Plus, as we touched upon earlier, remember that Whoop doesn’t have built-in GPS, so all location tracking relies on your paired smartphone. Considering Garmin’s latest multi-band GPS, found in the likes of the Forerunner 970 and Instinct 3, was hailed by us as the best and most accurate tracking performance available on a smartwatch, the lack of GPS on a Whoop seems a shame.

Garmin Fenix E map. Image Credit (Trusted Reviews)

Otherwise, when it comes to receiving general health, wellbeing and sleep metrics, both Whoop and Garmin trackers do a great job at providing accurate measurements. 

Is Whoop the most accurate tracker?

In terms of accuracy, both Whoop and Garmin have proved themselves to offer reliable tracking results across the board. However, each takes a different approach to that tracking.


Whoop offers three scores: sleep, recovery and strain, all of which assess your metrics and give you a score that correlates with how you’ve performed during the day. For example, with the recovery score, factors such as HRV, body temperature and even your daily habits can all contribute to your score.

Whoop Sleep, Recovery and Strain data. Image Credit (Trusted Reviews)

This is similar to Garmin’s morning and newly introduced evening report, which provides wearers with a general yet reliable overview of their sleep, recovery and HRV status while advising whether they should prioritise a workout or rest. 

We found Whoop’s sleep tracking capabilities to be among the most accurate, as the score directly correlated with how we felt the following morning. Plus, unlike other trackers, it automatically detects when we’ve actually fallen asleep, rather than simply when we get into bed.

Specifically with the Whoop MG, there’s also the ability to take blood pressure readings from the device; however, it infers its readings based on heart rate data, which means it isn’t quite as accurate as a traditional blood pressure monitor.

Whoop Blood Pressure reading. Image Credit (Trusted Reviews)


Where Whoop isn’t quite as accurate or reliable is with exercise tracking. As mentioned earlier, we found the automatic exercise tracker was hit-and-miss, while overall functionality is pretty basic for such a pricey tracker. 

In fact, many Whoop users, including us during our review, wear Whoop alongside another smartwatch which offers more exercise functionality and advanced metric tracking.

Garmin Vivoactive 6. Image Credit (Trusted Reviews)

With all of this in mind, it would be too simplistic to hail Whoop as the most accurate tracker, as there are undoubtedly shortcomings to keep in mind. If you’d prefer both health and fitness tracking tools, then we’d suggest a Garmin smartwatch, even one of the cheaper options like the Vivoactive 6, which is “capable of delivering reliable continuous data”.

Having said that, Whoop is still a solid health tracker, so if this is more of a concern for you, then a Whoop band remains a good choice.


How accurate is Whoop’s VO2 max?

VO2 Max is a measure of the maximum amount of oxygen your body can take in and move through your bloodstream during exercise; the higher the number, the better your cardiovascular fitness.

Whoop estimates VO2 Max levels through a “proprietary algorithm” that draws on a wide range of data points, from physiological metrics and activity to demographic information. Whoop explains that an individual’s result is a “highly personalised estimate that is tailored to your unique physiology and lifestyle”.


While it’s difficult to determine just how accurate Whoop is, the company says it has developed its algorithm to ensure VO2 Max readings meet “stringent accuracy requirements”.

Similarly, many more premium Garmin smartwatches, like the Instinct 2, also offer VO2 Max readings, which provide an estimate by analysing performance data during activities like running and walking.

Garmin Instinct 2 VO2 Max reading. Image Credit (Trusted Reviews)

Verdict

Essentially, we’d advise that before you splurge on either a Whoop or one of the best Garmin watches, you should seriously consider what you actually want from a wearable. If you’re looking for a smartwatch that keeps you on top of your notifications and has built-in (and extremely accurate) GPS, then a Garmin watch is the one for you. Plus, as Garmin has such a varied range of devices, there’s bound to be one that suits your needs. For example, if you don’t like big, bulky watches on your wrist, opt for a sleek Venu 4 rather than the rugged Fenix 8 Pro.


On the other hand, if you want a simple yet seriously clever wearable that may not sport the bells and whistles of some of the best smartwatches but is easily one of the best fitness trackers you can buy, then a Whoop has our vote.


The lack of a screen allows you to quietly track and monitor your health and fitness without getting distracted or bogged down by notifications. However, if you do want to check how you’re performing in real time, you can simply check the smartphone app instead. Personally, I think it’s better not to constantly watch your movement and metrics while exercising, though I know that comes down to individual preference.



Dell, Lenovo, and others will launch Copilot+ laptops with Nvidia Arm CPU in H1 2026



According to The Wall Street Journal, Nvidia is collaborating with MediaTek to develop its N1 and N1X PC SoCs, which integrate CPU, GPU, and NPU components into a single chip. Major PC manufacturers such as Dell and Lenovo are reportedly working on several laptops powered by the new processors, with…
