Tech

A cash bounty is daring hackers to stop Ring cameras from sharing data with Amazon

The Fulu Foundation is offering a cash bounty to anyone who can break Ring cameras free from Amazon’s data ecosystem. The goal isn’t breaking into devices for misuse or surveillance.

It is about giving owners control over devices already installed in their homes, without forcing those cameras to constantly send data back to Amazon.

The @Ring Super Bowl Ad highlighted the inescapable reality that true privacy requires ownership.

Consumers should be able to modify their @Ring devices to maintain that privacy, which is why our newest bounty works to ensure consumer control over Ring cameras and to allow…

— FULU (@FuluFoundation) February 20, 2026

The bounty targets Ring’s video doorbell cameras, which are deeply tied to Amazon’s cloud services. Participants are being asked to find a way to prevent those devices from sending data to Amazon servers, without disabling the cameras themselves.
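As a rough illustration of the network-layer half of that goal, here is a hypothetical sketch of how a device's outbound Internet traffic could be cut off at a Linux router while local connectivity is preserved. The addresses are assumptions for illustration, and this alone is not the bounty's solution: on a stock Ring camera, simply severing the cloud link would break most features, which is exactly why the bounty demands a modification that keeps the camera working locally.

```shell
#!/bin/sh
# Hypothetical sketch: isolate one device at a Linux router so it can reach
# the local network but nothing on the wider Internet.
# CAMERA_IP and LAN_SUBNET are assumed values for illustration.
CAMERA_IP=192.168.1.50
LAN_SUBNET=192.168.1.0/24

# Let the camera keep talking to local devices (e.g. a local recorder)...
iptables -A FORWARD -s "$CAMERA_IP" -d "$LAN_SUBNET" -j ACCEPT
# ...but drop everything else it tries to send out.
iptables -A FORWARD -s "$CAMERA_IP" -j DROP
```

A rule set like this shows why the bounty is hard: blocking the traffic is trivial, while keeping motion detection and night vision working without the cloud is the part that requires real reverse engineering.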

For many involved, the project is a response to growing discomfort with how Ring devices can be used beyond simple home security.

Inside the bounty and what hackers are being asked to do

The bounty is being offered by Fulu, a privacy-focused non-profit. Fulu cofounder Kevin O’Reilly told Wired, “People who install security cameras are looking for more security, not less. At the end of the day, control is at the heart of security. If we don’t control our data, we don’t control our devices.”

The challenge pays at least $10,000, with more pledged, to anyone who can modify a Ring camera so it works locally, blocks Amazon data sharing, and keeps features like motion detection and night vision intact.

The solution must rely on readily available and inexpensive tools, and the steps must be clear enough that a moderately technical user could complete the modification in under an hour. The winner will not be required to publish their methods.

Doing so could expose them to legal risk under Section 1201 of the Digital Millennium Copyright Act, which restricts the circumvention of digital locks. O’Reilly says that, as with other Fulu bounties, the decision to publish or keep the work private will be left to the winner.

Why Ring cameras are under scrutiny

Concern intensified after Ring expanded its Search Party feature, which lets anyone using the Neighbors app help locate lost pets and items through nearby cameras. Critics argue the feature quietly turns personal devices into part of a surveillance network.

That unease has only grown as Ring’s ambitions have become clearer. CEO Jamie Siminoff has spoken about using Ring’s massive camera network to “zero out crime,” positioning the platform as a tool for large-scale crime prevention rather than just personal safety.

These concerns exist against a longer backdrop of skepticism toward Amazon’s handling of user data. A previous Wired investigation revealed internal warnings about weak data safeguards, deepening public concern over potential data misuse.

Recent reports have added to those concerns, including findings that Ring’s Android app allows undisclosed third parties to track users, and warnings that your next walk past a Ring camera could turn into a biometric scan.

Whether the bounty succeeds or not, it highlights a growing demand for transparency and autonomy in connected home devices. Meanwhile, if you are not interested in sharing data, Ring does allow users to opt out, and here’s how to disable the Search Party feature.


X-Ray A PCB Virtually | Hackaday

If you want to reverse engineer a PC board, you could do worse than X-ray it. But thanks to [Philip Giacalone], you could just take a photo, load it into PCB Tracer, and annotate the images. You can see a few videos from a series about the system below.

The tracer runs in your browser. It can let you mark traces, vias, components, and pads. You can annotate everything as you document it, and it can even call an AI model to help generate a schematic from the net list.

This is one of those things that you could do without. Any photo editor could do the same thing. But having the tool aware of what the photo is showing makes life easier. The built-in features are free, but if you use the AI tool, he says it will cost you about a half-dollar per schematic (paid to the AI company).

Even if you don’t think you need to reverse-engineer anything, you may still find this useful if you are trying to understand a board for repair. We’ve had a good Supercon/Remoticon talk about PCB reverse engineering you can watch. If you want to see what a real X-ray of a board looks like, here you go.

Maynooth launches semiconductor master’s programme

The postgrad course in circuit design is the first of its kind in Europe, according to Maynooth.

A new master’s degree in circuit design at Maynooth University aims to deliver skilled workers in the semiconductor sector in alignment with the Irish Government’s ‘Silicon Island’ strategy.

The degree programme – designed in collaboration with MIDAS Ireland, an Irish innovation cluster – is the first dedicated course of its kind anywhere in Europe, according to the university and the Government.

The 15-month programme mixes nine months of classroom learning with a full-time, paid placement in industry for students to gain real-world experience.

Prof Eeva Leinonen, president of Maynooth University, said: “This innovative, new master’s programme reflects Maynooth University’s ongoing commitment to partnering with government and industry to deliver academic programmes that respond directly to Ireland’s strategic skills needs.

“Our graduates will be equipped to contribute immediately to Ireland’s and Europe’s semiconductor ambitions, from advanced chip design to innovation in emerging applications.”

Silicon Island is the Government’s national plan for the Irish semiconductor industry. It is geared towards generating skilled workers, design expertise and co-operation between third-level institutions and companies, in line with the European Chips Act – the EU initiative for the bloc’s semiconductor sovereignty and independence.

Minister for Enterprise, Tourism and Employment Peter Burke, TD said the new master’s programme would “help Irish-based companies recruit faster and grow smarter, while providing a top quality education and in-demand skills for our next generation of engineers”.

He added: “It strengthens Ireland’s hand as a place where both Irish and international companies can grow, innovate and hire the talent they need, cementing our reputation as a hub for semiconductor activity and innovation.”

Ireland is home to around 130 companies employing 20,000 people in the semiconductor sector. Last week, I-C3, Ireland’s National Competence Centre in Semiconductors, was unveiled as one of 30 such centres across 27 EU countries.

Minister for Further and Higher Education, Research, Innovation and Science James Lawless, TD said that the Chips Act aims to double semiconductor production in Europe by 2030 and to encourage upskilling across the industry, and that the Maynooth master’s course would “help ensure a supply of talented, highly skilled graduates who will strengthen Ireland’s competitiveness in the global semiconductor sector”.


Start Your Surround Sound Journey With $50 off This Klipsch Soundbar

If you’re tired of listening to the crackle from the speakers on the back of your TV but aren’t ready for the full subwoofer-boosted suite, I’ve got a good deal for you. The Klipsch Flexus Core 200 is currently marked down by $50 at Amazon, and it’s a great place to start if you’re looking for a soundbar that will give you options down the road.

Klipsch Flexus Core 200, a long black rectangular speaker in front of a large flat-screen tv, sitting on an entertainment system shelf

It has fewer channels built into the soundbar than some of our other favorite picks, notably lacking the side-firing drivers that help with surround effects. That doesn’t keep it from sounding excellent, thanks to its 44-inch-wide footprint and 2.25-inch drivers that reach all the way to either end. Our reviewer Ryan Waniata was impressed by the Core 200’s clarity and detail, and in particular called out the very punchy bass response.

While the bar has built-in controls for simple tasks like changing the volume and inputs, you can also use the mobile app to fine tune your audio experience. In addition to the stuff you’d expect, there’s also a three-band equalizer for those who like to fiddle and advanced settings for any extra speakers you add to the setup. With eARC to communicate with your TV, you shouldn’t need to touch the remote or app often anyway.

That’s right, one of the biggest selling points for the Klipsch Flexus Core 200 is the ability to add additional speakers to your setup. Both the Klipsch Flexus Surr 100 bookshelf speakers and Klipsch Flexus Sub 100 connect wirelessly to the Core 200 with a custom dongle, giving you a ton of freedom to stash the extra speakers wherever they’d sound best. If you have your own subwoofer that you like, there’s also an RCA jack on the bar to hook it up. That’s a lot of flexibility for any soundbar, let alone one at this price point.

If you’re ready to get the ball rolling on a proper sound system for your next movie night, you can save $50 on the Flexus Core 200, or meander over to our roundup of the best soundbars we’ve tested to find the best option for you.


I found the best tech at KBIS 2026 that you’ll want in your home this year

KBIS, or the Kitchen & Bath Industry Show, is a little like CES, but with a much narrower focus.

KBIS is a launchpad for all the latest and upcoming innovations in kitchen and bathroom technology, from smart fridges to washing machines and more.

I’ve been walking the floors at KBIS 2026 in Florida to find the very best launches, and I have rounded up my favourite tech reveals. Keep reading to see what I am most excited to get my hands on in 2026 and beyond.

GE Profile 27.9 Cu. Ft. Smart 4-Door French-Door Refrigerator with Kitchen Assistant

Ever wanted a fridge that can not only keep track of your shopping list, but also order food for you?

Well, this smart refrigerator with Kitchen Assistant can do just that. Thanks to the barcode scanner built into the front, just next to the water dispenser, this fridge can track everything you put inside it, logging it all in an app and building handy shopping lists and recipes.

GE Profile 27.9 Cu. Ft. Smart 4-Door French-Door Refrigerator with Kitchen Assistant

Apparently, over 4 million products are supported, and when you run out of an item, a partnership with Instacart means you can have a top-up delivered in as little as 30 minutes.

Elsewhere, there’s an 8-inch touch display, voice control and a selection of clever sensors that can auto-dispense water to perfectly fill your bottle while you do something else. Expect to see it hit stores in April for $4,899.

Whirlpool 36-inch True Counter Depth 3-Door/4-Door French Door Refrigerator with Nugget Ice

Nugget ice (those small, chewable balls of ice so common with fast-food outlets) is having a moment right now, with dedicated machines to pump out the stuff at rapid speed increasing in popularity all the time.

Whirlpool 36-inch True Counter Depth 3-Door/4-Door French Door Refrigerator with Nugget Ice

However, if you want to keep your counter free of clutter, Whirlpool has done the smart thing and built a nugget ice dispenser right into its latest fridges.

To make things even better, the water and ice dispenser has been made taller to accommodate larger bottles, and there’s a traditional ice maker inside for those who prefer cubed ice.

LG Signature Built-in Counter-Depth French 4-Door

LG’s Signature line continues to include some of the most desirable appliances on the market, and this latest model is packed with tech, from an internal filtered water dispenser and a 24-inch control panel to numerous AI modes that keep your food fresher for longer.

For me, though, the standout remains the Big Craft Ice system, which can make those glorious bar-worthy spheres of ice that are ideal for cocktails.

Midea STRAWash and SENSOR TruDry technology dishwasher

Dishwashers are always getting smarter, but often that’s with connected smarts rather than with the design. Midea is looking to change that by refining the interior of its latest dishwasher to better suit modern homes.

This dishwasher has internal spaces large enough to clean a reusable bottle, and there are dedicated jets to clean inside reusable straws and even those tall tumblers that can be a pain to clean properly. There’s even a dedicated zone for lids and a cycle that gets your dishes fully clean and dry in just one hour.

Midea STRAWash and SENSOR TruDry technology dishwasher

Fotile Fully Integrated Range Hood

This is one of the slickest range hoods we’ve ever laid our eyes on, thanks to some seriously futuristic looks and gesture control that lets you open it up and get to work with just a wave of your hand.

When it’s not in use, the hood is fully hidden, thanks to an aircraft-inspired folding system. It also has 48% wider coverage than traditional hoods and 24-hour smart monitoring.

LG Signature 24-inch Built Under Dishwasher

A dishwasher worthy of the Signature brand. This model is designed to blend into your kitchen, with a handle that slides away when not in use and automatically pops out when you need it. 

It’s quiet (around 38dB), has some seriously tasteful lighting inside and numerous features, from Auto Open and Dynamic Heat Dry to QuadWash Pro.

Samsung Bespoke AI 3-Door French Door Refrigerators with Zero Clearance Fit

Samsung’s Bespoke range is home to some of my favourite refrigerators, and this KBIS 2026 launch has the potential to be another winner.

This model has specially designed hinges and slim door profiles that allow the doors to open fully if there’s minimal space between the refrigerator and any surrounding walls.

Inside, you’ve got more control over your ice output, including an option for half spheres of ice that join the existing full sphere options. These are ideal for cocktails, as they melt slowly – and look great, too.

Samsung Bespoke AI 3-Door French Door Refrigerators with Zero Clearance Fit

On the outside, there’s an extra-tall filtered water dispenser, perfect for use with larger drinks bottles from the likes of Stanley.

Whirlpool Front Load Laundry Tower with FreshFlow Vent System & UV Clean

This tower-style laundry system not only saves space but also introduces a very clever industry-first UV Clean technology. 

Whirlpool Front Load Laundry Tower with FreshFlow Vent System & UV Clean

This easily accessible addition – which can be flipped on and off depending on the wash – can help reduce odour-causing bacteria without the traditional need for high temperatures that can damage certain fabrics. There’s smart connectivity too, plus a venting system to keep everything smelling fresh.


Is Age Verification a Trap?

Social media is going the way of alcohol, gambling, and other social sins: Societies are deciding it’s no longer kid stuff. Lawmakers point to compulsive use, exposure to harmful content, and mounting concerns about adolescent mental health. So, many propose to set a minimum age, usually 13 or 16.

When regulators demand real enforcement rather than symbolic rules, platforms run into a basic technical problem. The only way to prove that someone is old enough to use a site is to collect personal data about who they are. And the only way to prove that you checked is to keep that data indefinitely. Age-restriction laws push platforms toward intrusive verification systems that often directly conflict with modern data-privacy law.

This is the age-verification trap. Strong enforcement of age rules undermines data privacy.

How Does Age Enforcement Actually Work?

Most age-restriction laws follow a familiar pattern. They set a minimum age and require platforms to take “reasonable steps” or “effective measures” to prevent underage access. What these laws rarely spell out is how platforms are supposed to tell who is actually over the line. At the technical level, companies have only two tools.

The first is identity-based verification. Companies ask users to upload a government ID, link a digital identity, or provide documents that prove their age. Yet in many jurisdictions, 16-year-olds do not have IDs. In others, IDs exist but are not digital, not widely held, or not trustworthy. Storing copies of identity documents also creates security and misuse risks.

The second option is inference. Platforms try to guess age based on behavior, device signals, or biometric analysis, most commonly facial age estimation from selfies or videos. This avoids formal ID collection, but it replaces certainty with probability and error.

In practice, companies combine both. Self-declared ages are backed by inference systems. When confidence drops, or regulators ask for proof of effort, inference escalates to ID checks. What starts as a light-touch checkpoint turns into layered verification that follows users over time.
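The escalation pattern described above can be sketched in a few lines. Everything here — the function name, thresholds, and return values — is hypothetical, meant only to show the logic of self-declaration backed by inference that escalates to ID checks, not any platform's actual system:

```python
def next_action(declared_age: int, inferred_age: float, confidence: float,
                min_age: int = 16, confidence_floor: float = 0.8) -> str:
    """Decide the next enforcement step for one user (illustrative only)."""
    if declared_age < min_age:
        return "block"  # self-declaration already fails the age gate
    if confidence < confidence_floor:
        # The inference model is unsure: escalate to the intrusive path,
        # an ID upload that then has to be stored and secured.
        return "request_id"
    # Confident inference: trust it, but contradictions trigger an ID check.
    return "allow" if inferred_age >= min_age else "request_id"

print(next_action(18, inferred_age=22.0, confidence=0.95))  # allow
print(next_action(18, inferred_age=14.0, confidence=0.95))  # request_id
print(next_action(18, inferred_age=20.0, confidence=0.40))  # request_id
```

The middle branch is the source of the trap: whenever confidence drops, the system's safest legal move is to collect more data.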

What Are Platforms Doing Now?

This pattern is already visible on major platforms.

Meta has deployed facial age estimation on Instagram in multiple markets, using video-selfie checks through third-party partners. When the system flags users as possibly underaged, it prompts them to record a short selfie video. An AI system estimates their age and, if it decides they are under the threshold, restricts or locks the account. Appeals often trigger additional checks, and misclassifications are common.

TikTok has confirmed that it also scans public videos to infer users’ ages. Google and YouTube rely heavily on behavioral signals tied to viewing history and account activity to infer age, then ask for government ID or a credit card when the system is unsure. A credit card functions as a proxy for adulthood, even though it says nothing about who is actually using the account. Roblox, the games platform, which recently launched a new age-estimation system, is already seeing users sell child-aged accounts to adult predators seeking entry to age-restricted areas, Wired reports.

For a typical user, age is no longer a one-time declaration. It becomes a recurring test. A new phone, a change in behavior, or a false signal can trigger another check. Passing once does not end the process.

How Do Age-Verification Systems Fail?

These systems fail in predictable ways.

False positives are common. Platforms misidentify adults as minors: people with youthful faces, users sharing family devices, or accounts with otherwise unusual usage patterns. They lock accounts, sometimes for days. False negatives also persist. Teenagers learn quickly how to evade checks by borrowing IDs, cycling accounts, or using VPNs.

The appeal process itself creates new privacy risks. Platforms must store biometric data, ID images, and verification logs long enough to defend their decisions to regulators. So if an adult who is tired of submitting selfies to verify their age finally uploads an ID, the system must now secure that stored ID. Each retained record becomes a potential breach target.

Scale that experience across millions of users, and you bake the privacy risk into how platforms work.

Is Age Verification Compatible With Privacy Law?

This is where emerging age-restriction policy collides with existing privacy law.

Modern data-protection regimes all rest on similar ideas: Collect only what you need, use it only for a defined purpose, and keep it only as long as necessary.

Age enforcement undermines all three.

To prove they are following age-verification rules, platforms must log verification attempts, retain evidence, and monitor users over time. When regulators or courts ask whether a platform took reasonable steps, “We collected less data” is rarely persuasive. For companies, defending themselves against accusations of neglecting to properly verify age supersedes defending themselves against accusations of inappropriate data collection.

It is not an explicit choice by voters or policymakers, but instead a reaction to enforcement pressure and how companies perceive their litigation risk.

Less Developed Countries, Deeper Surveillance

Outside wealthy democracies, the trade-off is even starker.

Brazil’s Statute of the Child and Adolescent (ECA, from its initials in Portuguese) imposes strong child-protection duties online, while its data-protection law restricts data collection and processing. Now providers operating in Brazil must adopt effective age-verification mechanisms and can no longer rely on self-declaration alone for high-risk services. Yet they also face uneven identity infrastructure and widespread device sharing. To compensate, they rely more heavily on facial estimation and third-party verification vendors.

In Nigeria many users lack formal IDs. Digital service providers fill the gap with behavioral analysis, biometric inference, and offshore verification services, often with limited oversight. Audit logs grow, data flows expand, and the practical ability of users to understand or contest how companies infer their age shrinks accordingly. Where identity systems are weak, companies do not protect privacy. They bypass it.

The paradox is clear. In countries with less administrative capacity, age enforcement often produces more surveillance, not less, because inference fills the void of missing documents.

How Do Enforcement Priorities Change Expectations?

Some policymakers assume that vague standards preserve flexibility. In the U.K., then–Digital Secretary Michelle Donelan argued in 2023 that requiring certain online safety outcomes without specifying the means would avoid mandating particular technologies. Experience suggests the opposite.

When disputes reach regulators or courts, the question is simple: Can minors still access the platform easily? If the answer is yes, authorities tell companies to do more. Over time, “reasonable steps” become more invasive.

Repeated facial scans, escalating ID checks, and long-term logging become the norm. Platforms that collect less data start to look reckless by comparison. Privacy-preserving designs lose out to defensible ones.

This pattern is familiar from other domains, including online sales-tax enforcement. After courts settled that large platforms had an obligation to collect and remit sales taxes, companies began continuous tracking and storage of transaction destinations and customer location signals. That tracking is not abusive, but once enforcement requires proof over time, companies build systems to log, retain, and correlate more data. Age verification is moving the same way. What begins as a one-time check becomes an ongoing evidentiary system, with pressure to monitor, retain, and justify user-level data.

The Choice We Are Avoiding

None of this is an argument against protecting children online. It is an argument against pretending there is no trade-off.

Some observers present privacy-preserving age proofs involving a third party, such as the government, as a solution, but they inherit the same structural flaw: Many users who are legally old enough to use a platform do not have government ID. In countries where the minimum age for social media is lower than the age at which ID is issued, platforms face a choice between excluding lawful users and monitoring everyone. Right now, companies are making that choice quietly, after building systems and normalizing behavior that protects them from the greater legal risks. Age-restriction laws are not just about kids and screens. They are reshaping how identity, privacy, and access work on the Internet for everyone.

The age-verification trap is not a glitch. It is what you get when regulators treat age enforcement as mandatory and privacy as optional.


UK data hoarder flies to America to buy hard drives, saves $2,000


In sharing the story, Redditor cgtechuk said he was running out of space after upgrading to four 16TB drives purchased from Amazon UK in 2020. Unfortunately, the 28TB replacement drives he had been eyeing were skyrocketing in price locally. Having previously purchased hardware in the US, he decided to…

Google clamps down on Antigravity ‘malicious usage’, cutting off OpenClaw users in sweeping ToS enforcement move

Google caused controversy among some developers this weekend and into today, Monday, February 23rd, after restricting their usage of its new Antigravity “vibe coding” platform, alleging “malicious usage.”

Some users who had been using the open source autonomous AI agent OpenClaw in conjunction with agents built on Antigravity, as well as those who had connected OpenClaw agents to their Gmail accounts, claimed on social media that they lost access to their Google accounts.

According to Google, said users had been using Antigravity to access a larger number of Gemini tokens via third-party platforms like OpenClaw, which overwhelmed the system for other Antigravity customers. 

This move has cut off several users, underscoring the architectural and trust issues that can arise with OpenClaw. The timing of Google’s crackdown is particularly pointed. Just one week ago, on February 15, OpenAI CEO Sam Altman announced that OpenClaw creator Peter Steinberger had joined OpenAI to lead its “next generation of personal agents.” While OpenClaw remains an open-source project under an independent foundation, it is now financially backed and strategically guided by Google’s primary rival.

By cutting off OpenClaw’s access to Antigravity, Google isn’t just protecting its server load; it is effectively severing a pipeline that allows an OpenAI-adjacent tool to leverage Google’s most advanced Gemini models.

Varun Mohan, a Google DeepMind engineer and the former CEO and founder of Windsurf, said in an X post that the company noticed “malicious usage” that led to service degradation.

“We’ve been seeing a massive increase in malicious usage of the Antigravity backend that has tremendously degraded the quality of service for our users. We needed to find a path to quickly shut off access to these users that are not using the product as intended. We understand that a subset of these users were not aware that this was against our ToS [Terms of Service] and will get a path for them to come back on but we have limited capacity and want to be fair to our actual users,” the post said. 

A Google DeepMind spokesperson told VentureBeat that the move is not to permanently ban the use of Antigravity to access third-party platforms, but to align its use with the platform’s terms of service.   

Unsurprisingly, Google’s move has caused a furor among OpenClaw users, including from OpenClaw creator Peter Steinberger, who announced that OpenClaw will remove Google support as a result. 

Infrastructure and connection uncertainty

OpenClaw emerged as a way for individual users to run shell commands and access local files, fulfilling a major promise of AI agents: efficiently running workflows for users.

But, as VentureBeat has frequently pointed out, it can often run into security and guardrail issues. There are companies building ways for enterprise customers to access OpenClaw securely and with a governance layer, though OpenClaw is so new that we should expect more announcements soon.

However, Google’s move was not framed as a security issue but rather as one of access and runtime, further showing that there is still significant uncertainty when users want to bring in something like OpenClaw into their workflow. 

This is not the first time developers and power users of agentic AI have found their access curtailed. Last year, Anthropic throttled access to Claude Code after the company claimed some users were abusing the system by running it 24/7.

What this does highlight is the disconnect between companies like Google and OpenClaw users. OpenClaw offered many interesting possibilities for creating workflows with agents. However, because it is continually evolving, users may inadvertently run afoul of ToS or rate limits. 

Mohan said Google is working to bring the banned users back, but whether this means the company will amend its ToS or figure out a secure connection between OpenClaw agents and Antigravity models remains to be seen. 

For developers, the message is clear: the era of “bring your own agent” to a frontier model is ending. Providers are now prioritizing vertically integrated experiences where they can capture 100% of the telemetry and subscription revenue, often at the expense of the open-source interoperability that defined the early days of the LLM boom.

Affected users

Several users said on both the Y Combinator chat boards and X that they no longer had access to their Google accounts after running OpenClaw instances for certain Google products.

Google’s move mirrors a broader industry shift toward “walled garden” agent ecosystems. Earlier this year, Anthropic introduced “client fingerprinting” to ensure that its Claude Code environment remains the exclusive interface for its models, effectively locking out third-party wrappers like OpenClaw.

Some have said they will no longer use Google or Gemini for their projects. Right now, people who still want to keep using Antigravity will need to wait until Google figures out a way for them to use OpenClaw and access Gemini tokens in a manner Google deems “fair.” 

Google DeepMind reiterated that it had only cut access to Antigravity, not to other Google applications. 

Conclusion: the enterprise takeaway

For enterprise technical decision-makers, the “Antigravity Ban” serves as a definitive case study in the risks of agentic dependency. As the industry moves from chatbots to autonomous agents, the following realities must now dictate strategy:

  • Platform fragility is the new normal: The sudden lockout of $250/month “Ultra” users proves that even high-paying enterprise customers have little leverage when a provider decides to change its “fair use” definitions. Relying on OAuth-based third-party wrappers for core business logic is now a high-risk gamble.

  • The rise of local-first governance: With OpenClaw moving toward an OpenAI-backed foundation and Google/Anthropic tightening their clouds, enterprises should prioritize agent frameworks that can run “local-first” or within VPCs. The “token loophole” that OpenClaw exploited is being closed; future agentic scale will require direct, high-cost API contracts rather than subsidized consumer seats.

  • Account portability as a requirement: The fact that users “lost access to their Google accounts” underscores the danger of bundling development environments with primary identity providers. Decision-makers should decouple AI development from core corporate identity (SSO) where possible to avoid a single ToS violation paralyzing an entire team’s communications.

Ultimately, the Antigravity incident marks the end of the “Wild West” for AI agents. As Google and OpenAI stake their claims, the enterprise must choose between the stability of the walled garden or the complexity (and cost) of truly independent, self-hosted infrastructure.

Viral Doomsday Report Lays Bare Wall Street’s Deep Anxiety About AI Future

A 7,000-word “doomsday” thought experiment from Citrini Research helped trigger an 800-point drop in the Dow, “painting a dark portrait of a future in which technological change inspires a race to the bottom in white-collar knowledge work,” reports the Wall Street Journal. From the report: Concerns of hyperscalers overspending are out. Worries of software-industry disruption don’t go far enough. The “global intelligence crisis” is about to hit. The new, broader question: What if AI is so bullish for the economy that it is actually bearish? “For the entirety of modern economic history, human intelligence has been the scarce input,” Citrini wrote in a post it described as a scenario dated June 2028, not a prediction. “We are now experiencing the unwind of that premium.”

Many of Monday’s moves roughly aligned with the situation outlined by Citrini, in which fast-advancing AI tools allow spending cuts across industries, sparking mass white-collar unemployment and in turn leading to financial contagion. Software firms Datadog, CrowdStrike and Zscaler each plunged more than 9%. International Business Machines’ 13% decline was its worst one-day performance since 2000. American Express, KKR and Blackstone — all name-checked by Citrini — tumbled. That anxiety, coupled with renewed uncertainty about trade policy from Washington, weighed down major indexes Monday. The Dow Jones Industrial Average led declines, falling 1.7%, or 822 points. The S&P 500 shed 1%, while the Nasdaq composite retreated 1.1%.

[…] Monday’s market swings extended a run of AI-linked volatility. A small research outfit that has garnered a huge Substack following for macro and thematic stock research, Citrini said in its new post that software firms, payment processors and other companies formed “one long daisy chain of correlated bets on white-collar productivity growth” that AI is poised to disrupt. […] Shares in DoorDash also veered 6.6% lower Monday after Citrini’s Substack note called the delivery app a “poster child” for how new tools would upend companies that monetize interpersonal friction. In the research firm’s scenario, AI agents would help both drivers and customers navigate food deliveries at much lower costs.

Group alleges fake sign-ins used to pad apparent opposition to Washington state ‘millionaires tax’

Washington state Sen. Victoria Hunt, a co-sponsor of SB 6346, speaks during a virtual news conference on Monday about how she learned that her name had been fraudulently signed in as “con” over the weekend on a public comment page ahead of a House Committee on Finance hearing on the millionaires tax. (Screen grab via Invest in Washington Now)

Invest in Washington Now, a Washington state-based advocacy group focused on progressive revenue reform, is alleging that widespread fraud in the Legislature’s public comment system has been used to pad apparent opposition to the so-called “millionaires tax.”

In a news release and virtual press conference on Monday, Invest in Washington Now said there have been tens of thousands of duplicate names used as sign-ins for hearings on Senate Bill 6346 and House Bill 2724. The group said more than 100 sign-ins marked “con” were confirmed as fraudulent over the weekend and ahead of Tuesday’s public hearing in the House Committee on Finance.

The Seattle Times reported on the allegations on Monday.

Among those who were allegedly impersonated: Sen. Victoria Hunt (D-Issaquah), a co-sponsor of the millionaires tax; former Rep. Derek Kilmer; SEIU 775 Secretary Treasurer Adam Glickman; and WEA President Larry Delaney.

Invest in Washington Now shared a letter it sent to Attorney General Nick Brown and House Chief Clerk Bernard Dean calling for an investigation into the scale of the alleged fraud and who is behind it.

“This is a clearly fraudulent effort to mislead legislators and the public about the level of opposition to the millionaires tax, and the ability to commit this type of fraud could undermine the integrity of legislative process on this and other issues,” the letter said.

The millionaires tax, which the Senate approved last week, would impose a 9.9% tax on taxable personal annual income exceeding $1 million. The legislation marks the first time in decades that state lawmakers have pursued a personal income tax aimed at high-income residents.
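Assuming the 9.9% rate applies only to the portion of income above the $1 million threshold (a marginal reading of the bill as described here, not the bill's actual text), the calculation looks like this:

```python
def millionaires_tax(income: float, threshold: float = 1_000_000,
                     rate: float = 0.099) -> float:
    """Tax only the portion of taxable income above the threshold,
    rounded to cents. A simplified, assumed reading of SB 6346."""
    return round(max(0.0, income - threshold) * rate, 2)

assert millionaires_tax(900_000) == 0.0        # below threshold: no tax
assert millionaires_tax(1_500_000) == 49_500.0 # 9.9% of the $500k excess
```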

The bill has drawn opposition from some tech leaders and entrepreneurs who worry it could undermine the sector by souring Washington’s relatively favorable tax laws for startup founders, investors and high-wage earners.

Opponents of the tax have been pointing to what they call the “most unpopular bill in state history,” citing the many thousands of Washington residents who have signed on in opposition.

“More than 60,000 people signed in against SB 6346 when it received a rushed hearing in the Senate,” Sen. John Braun (R-Centralia) said in a news release last week. “That is so impressive that Democrats have tried to say bots are responsible, even though the Legislature blocks bots. We know better.”

The legislative sign-in page does require a CAPTCHA, a security mechanism designed to prevent bots from abusing websites. But Invest in Washington Now pointed to the frequency and sheer number of duplicate names, many signed in within seconds of one another, which it said suggests the use of automated sign-in tools.
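The pattern the group describes, identical names submitted seconds apart, is straightforward to screen for. A minimal sketch, using invented records and an assumed 10-second window (the Legislature's actual data format is not public):

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical sign-in records: (name, position, timestamp).
SIGNINS = [
    ("Jane Doe", "con", "2026-02-19 04:32:00"),
    ("Jane Doe", "con", "2026-02-19 04:32:04"),
    ("Sam Lee",  "pro", "2026-02-19 09:15:00"),
    ("Jane Doe", "con", "2026-02-20 11:00:00"),
]

def flag_suspicious(signins, window_seconds=10):
    """Flag names with repeat sign-ins inside a short window,
    a pattern consistent with automated submission."""
    by_name = defaultdict(list)
    for name, position, ts in signins:
        by_name[name].append(datetime.fromisoformat(ts))
    flagged = set()
    for name, times in by_name.items():
        times.sort()
        for t1, t2 in zip(times, times[1:]):
            if t2 - t1 <= timedelta(seconds=window_seconds):
                flagged.add(name)
    return flagged

print(flag_suspicious(SIGNINS))  # {'Jane Doe'}
```

A real audit would also correlate IP addresses and submission order, but even this timing check surfaces the bursts the group described.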

Hunt, who represents the 5th Legislative District, said she was signed in fraudulently twice.

“I did not sign in ‘con,’ I’m not sure who is doing this,” Hunt said. “I don’t know why a senator would sign into a House hearing in any event. It was not me.”

SEIU’s Glickman said he strongly supports the millionaires tax, so he was surprised to learn of his own apparent opposition to the bill.

“I was shocked to say the least, to learn that at 4:32 a.m. Thursday morning while I was home fast asleep, somebody apparently put my name and organization into the official testimony record as against the millionaires tax,” Glickman said. “I was even more appalled to learn that I wasn’t the only one that happened to over the weekend.”

Anthropic accuses three Chinese AI labs of abusing Claude to improve their own models

Anthropic is issuing a call to action against AI “distillation attacks,” after accusing three AI companies of misusing its Claude chatbot. On its website, Anthropic claimed that DeepSeek, Moonshot and MiniMax have been conducting “industrial-scale campaigns…to illicitly extract Claude’s capabilities to improve their own models.”

Distillation in the AI world is the practice of training a less capable model on the responses of a more powerful one. While distillation isn’t inherently bad, Anthropic said these types of attacks can be put to more nefarious use. According to Anthropic, the three Chinese AI firms were responsible for more than “16 million exchanges with Claude through approximately 24,000 fraudulent accounts.” From Anthropic’s perspective, the competing companies were using Claude as a shortcut to develop more advanced AI models, which could also let them circumvent certain safeguards.
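As a toy illustration of the underlying technique: in classic distillation the student is trained to match the teacher's softened output distribution rather than hard labels. All numbers below are illustrative; real distillation runs a loss like this over millions of teacher responses.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert logits to a probability distribution; higher temperature
    softens the distribution, exposing more of the teacher's 'dark knowledge'."""
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Cross-entropy between the teacher's softened distribution and the
    student's: minimizing it pushes the student to mimic the teacher."""
    teacher_probs = softmax(teacher_logits, temperature)
    student_probs = softmax(student_logits, temperature)
    return -sum(t * math.log(s) for t, s in zip(teacher_probs, student_probs))

teacher   = [4.0, 1.0, 0.2]  # confident teacher output
aligned   = [3.8, 1.1, 0.3]  # student that already mimics the teacher
divergent = [0.2, 1.0, 4.0]  # student that disagrees

# The loss is lower when the student's distribution matches the teacher's.
assert distillation_loss(aligned, teacher) < distillation_loss(divergent, teacher)
```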

Anthropic said in its post that it was able to link each distillation campaign to a specific company with “high confidence,” based on IP address correlation, request metadata and infrastructure indicators, along with corroboration from others in the AI industry who have observed similar behavior.
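Anthropic has not published its attribution method in detail, but the simplest of the signals it names, accounts sharing infrastructure, can be sketched. The account IDs, IP addresses, and threshold below are invented for illustration:

```python
from collections import defaultdict

# Hypothetical request log: (account_id, source_ip).
EVENTS = [
    ("acct_001", "203.0.113.7"),
    ("acct_002", "203.0.113.7"),
    ("acct_003", "203.0.113.7"),
    ("acct_004", "198.51.100.9"),
]

def cluster_by_ip(events, min_accounts=2):
    """Return IPs shared by at least `min_accounts` distinct accounts.
    On its own this is a weak signal; it becomes useful when correlated
    with request metadata and other infrastructure indicators."""
    accounts_per_ip = defaultdict(set)
    for account, ip in events:
        accounts_per_ip[ip].add(account)
    return {ip: sorted(accts) for ip, accts in accounts_per_ip.items()
            if len(accts) >= min_accounts}

print(cluster_by_ip(EVENTS))
# {'203.0.113.7': ['acct_001', 'acct_002', 'acct_003']}
```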

Early last year, OpenAI made similar claims about rival firms distilling its models and banned suspected accounts in response. As for Anthropic, the company said it would upgrade its systems to make distillation attacks harder to carry out and easier to identify. While Anthropic points fingers at other firms, it is also facing a lawsuit from music publishers who accuse it of using illegal copies of songs to train its Claude chatbot.
