Tech

Start Your Surround Sound Journey With $50 off This Klipsch Soundbar

If you’re tired of listening to the crackle from the speakers on the back of your TV but aren’t ready for the full subwoofer-boosted suite, I’ve got a good deal for you. The Klipsch Flexus Core 200 is currently marked down by $50 at Amazon, and it’s a great place to start if you’re looking for a soundbar that will give you options down the road.

Klipsch Flexus Core 200, a long black rectangular speaker in front of a large flat-screen tv, sitting on an entertainment system shelf

It has fewer channels built into the soundbar than some of our other favorite picks, notably lacking the side-firing drivers that help with surround effects. That doesn’t keep it from sounding excellent, thanks to its 44-inch-wide footprint and 2.25-inch drivers that reach all the way to either end. Our reviewer Ryan Waniata was impressed by the Core 200’s clarity and detail, and in particular called out its punchy bass response.

While the bar has built-in controls for simple tasks like changing the volume and inputs, you can also use the mobile app to fine-tune your audio experience. In addition to the settings you’d expect, there’s a three-band equalizer for those who like to fiddle, plus advanced settings for any extra speakers you add to the setup. With eARC to communicate with your TV, you shouldn’t need to touch the remote or app often anyway.

That’s right, one of the biggest selling points for the Klipsch Flexus Core 200 is the ability to add additional speakers to your setup. Both the Klipsch Flexus Surr 100 bookshelf speakers and Klipsch Flexus Sub 100 connect wirelessly to the Core 200 with a custom dongle, giving you a ton of freedom to stash the extra speakers wherever they’d sound best. If you have your own subwoofer that you like, there’s also an RCA jack on the bar to hook it up. That’s a lot of flexibility for any soundbar, let alone one at this price point.

If you’re ready to get the ball rolling on a proper sound system for your next movie night, you can save $50 on the Flexus Core 200, or meander over to our roundup of the best soundbars we’ve tested to find the best option for you.


OpenClaw should terrify anyone who thinks AI agents are ready for real responsibility

A Meta executive wanted help cleaning up her inbox and thought the new OpenClaw automated AI agent would be just the trick. For safety’s sake, she made sure to tell it to “confirm before acting” before doing the cleanup. That linguistic child lock failed.

Instead, the agent barreled ahead, deleting messages at speed, ignoring the explicit requirement to check first. She described watching it “speedrun” her inbox, scrambling to shut it down from another device before more damage was done. Hundreds of emails vanished. The agent later apologized.

This AI Tool Doesn’t Help With Homework. It Does It for You

A new AI tool called Einstein is pushing the boundaries of what automation in education looks like. Created by the startup Companion, Einstein does more than generate answers to homework questions. It logs directly into a student’s Canvas account and completes coursework on the student’s behalf.

According to its creators, Einstein operates through its own virtual computer. It can open a browser, navigate class pages, watch lecture videos, read PDFs and essays, write papers, complete quizzes and post replies in discussion boards. Once connected to a student’s account, the system can monitor deadlines and automatically submit assignments.

Unlike chatbots that respond when prompted, Einstein functions more like a digital stand-in for a human student. After setup, it can run in the background with little ongoing input.

“Students are already using AI. We’re just giving them a better version of it,” Companion CEO Advait Paliwal said in a statement. 

Read more: ‘Machines Can’t Think for You.’ How Learning Is Changing in the Age of AI

How Einstein works

Einstein connects to Canvas, a widely used learning-management system in colleges and high schools. From there, it reviews course materials and identifies assigned tasks. The AI can analyze lecture recordings, summarize readings and generate written work that matches the assignment requirements.

The company says the system produces original essays with citations and context-aware discussion posts. It can also track new announcements and upcoming deadlines. In practice, this means a student could enroll in an online course and let Einstein handle much — if not all — of the required work.

The technology builds on advances in generative AI, browser automation and so-called autonomous agents that can take multistep actions on behalf of their human counterpart. While many students already use AI tools to brainstorm ideas or check grammar, Einstein moves beyond assistance into complete automation.

“Our companions aren’t simple chatbots,” Paliwal said. “Each one has access to an entire virtual computer with a persistent file system and internet access, so they can actually do things on your behalf. This makes ChatGPT look like a toy.”

A crossroads for academic integrity?

The release of Einstein comes at a time when schools are still adapting to widespread AI use. Since the arrival of powerful language models, educators have debated how to distinguish legitimate support from academic dishonesty. Most policies focus on whether students are using AI to help draft or edit their work, or do it entirely for them. 

Einstein complicates that conversation. 

If an AI logs in as a student and completes assignments independently, the question shifts from assistance to substitution. Is the tool essentially taking the student’s place? 

Not all in education are sounding the alarm, though. 

“I think the Canvas method of teaching already has a proclivity for cheating. This change, I think, will ultimately be good because it will force educators to redesign classes to not rely on virtual assignments,” said Nicholas DiMaggio, a PhD student at The University of Chicago Booth School of Business and teaching assistant for a course in consumer behavior this quarter. 

DiMaggio said that this may prompt institutions to emphasize in-person work, oral exams or project-based learning instead. Beyond this one tool, schools will have to decide whether to ban such tools outright, integrate them under strict guidelines or rethink how learning is measured in the age of AI.

Read more: How to Use AI to Get Better Grades — Without Cheating

One engineer made a production SaaS product in an hour: here’s the governance system that made it possible

Every engineering leader watching the agentic coding wave is eventually going to face the same question: if AI can generate production-quality code faster than any team, what does governance look like when the human isn’t writing the code anymore?

Most teams don’t have a good answer yet. Treasure Data, a SoftBank-backed customer data platform serving more than 450 global brands, now has one, though they learned parts of it the hard way.

The company today officially announced Treasure Code, a new AI-native command-line interface that lets data engineers and platform teams operate its full CDP through natural language, with Claude Code handling creation and iteration underneath. It was built by a single engineer.

The company says the coding itself took roughly 60 minutes. But that number is almost beside the point. The more important story is what had to be true before those 60 minutes were possible, and what broke after.

“From a planning standpoint, we still have to plan to derisk the business, and that did take a couple of weeks,” Rafa Flores, Chief Product Officer at Treasure Data, told VentureBeat. “From an ideation and execution standpoint, that’s where you kind of just blend the two and you just go, go, go. And it’s not just prototyping, it’s rolling things out in production in a safe way.”

Build the governance layer first

Before even a single line of code was written, Treasure Data had to answer a harder question: what does the system need to be prohibited from doing, and how do you enforce that at the platform level rather than hoping the code respects it?

The guardrails Treasure Data built live upstream of the code itself. When any user connects to the CDP through Treasure Code, access control and permission management are inherited directly from the platform. Users can only reach resources they already have permission for. PII cannot be exposed. API keys cannot be surfaced. The system cannot speak disparagingly about a brand or competitor.
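
The pattern described here can be sketched in a few lines. This is a hypothetical illustration of platform-inherited permissioning, not Treasure Data’s actual implementation; the permission names and data shapes are my assumptions.

```python
# Sketch of a platform-level guardrail: every agent-initiated action is
# resolved against the permissions the user ALREADY holds in the platform,
# so natural-language access can never exceed existing authorization.
# Permission names and the lookup table are illustrative assumptions.

PLATFORM_PERMISSIONS = {
    "alice": {"segments:read", "reports:read"},
    "bob": {"segments:read", "segments:write", "pii:read"},
}

def authorize(user: str, required: set) -> bool:
    """Allow an agent action only if the user's existing platform
    permissions cover everything the action needs (fails closed for
    unknown users)."""
    granted = PLATFORM_PERMISSIONS.get(user, set())
    return required <= granted  # set subset test: all required perms granted
```

Because the check runs upstream of whatever code the agent generates, a natural-language request that expands into a PII query fails closed for any user who was never granted `pii:read`.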

“We had to get CISOs involved. I was involved. Our CTO, heads of engineering, just to make sure that this thing didn’t just go rogue,” Flores said.

This foundation made the next step possible: letting AI generate 100% of the codebase, with a three-tier quality pipeline enforcing production standards throughout.

The three-tier pipeline for AI code generation 

The first tier is an AI-based code reviewer also using Claude Code.

The code reviewer sits at the pull request stage and runs a structured review checklist against every proposed merge, checking for architectural alignment, security compliance, proper error handling, test coverage and documentation quality. When all criteria are satisfied it can merge automatically. When they aren’t, it flags for human intervention.
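
The merge decision described above reduces to a simple gate. A minimal sketch, assuming nothing about Treasure Data’s real tooling beyond the article: the checklist names come from the text, while the function shape and return values are illustrative.

```python
# Sketch of an automated PR gate: merge only when every checklist
# criterion passes; otherwise flag the pull request for a human.

CHECKLIST = (
    "architectural_alignment",
    "security_compliance",
    "error_handling",
    "test_coverage",
    "documentation_quality",
)

def review_gate(results: dict) -> str:
    """Return 'auto_merge' if all checklist items passed, else
    'human_review'. Missing criteria count as failures, so the
    gate fails closed rather than merging unreviewed changes."""
    failures = [c for c in CHECKLIST if not results.get(c, False)]
    return "auto_merge" if not failures else "human_review"
```

The fail-closed default matters: an AI reviewer that silently skips a criterion should escalate to a human, not merge.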

The fact that Treasure Data built the code reviewer in Claude Code is not incidental. It means the tool validating AI-generated code was itself AI-generated, a proof point that the workflow is self-reinforcing rather than dependent on a separate human-written quality layer.

The second tier is a standard CI/CD pipeline running automated unit, integration and end-to-end tests, static analysis, linting and security checks against every change. The third is human review, required wherever automated systems flag risk or enterprise policy demands sign-off.

The internal principle Treasure Data operates under: AI writes code, but AI does not ship code.

Why this isn’t just Cursor pointed at a database

The obvious question for any engineering team is why not just point an existing tool like Cursor at your data platform, or expose it as an MCP server and let Claude Code query it directly.

Flores argued the difference is governance depth. A generic connection gives you natural language access to data but inherits none of the platform’s existing permission structures, meaning every query runs with whatever access the API key allows. 

Treasure Code inherits Treasure Data’s full access control and permissioning layer, so what a user can do through natural language is bounded by what they’re already authorized to do in the platform. 

The second distinction is orchestration. Because Treasure Code connects directly to Treasure Data’s AI Agent Foundry, it can coordinate sub-agents and skills across the platform rather than executing single tasks in isolation: the difference between telling an AI to run an analysis and having it orchestrate that analysis across omni-channel activation, segmentation and reporting simultaneously.

What broke anyway

Even with the governance architecture in place, the launch didn’t go cleanly, and Flores was candid about it.

Treasure Data initially made Treasure Code available to customers without a go-to-market plan. The assumption was that it would stay quiet while the team figured out next steps. Customers found it anyway. More than 100 customers and close to 1,000 users adopted it within two weeks, entirely through organic discovery.

“We didn’t put any go-to-market motions behind it. We didn’t think people were going to find it. Well, they did,” Flores said. “We were left scrambling with, how do we actually do the go-to-market motions? Do we even do a beta, since technically it’s live?”

The unplanned adoption also created a compliance gap. Treasure Data is still in the process of formally certifying Treasure Code under its Trust AI compliance program, a certification it had not completed before the product reached customers.

A second problem emerged when Treasure Data opened skill development to non-engineering teams. CSMs and account directors began building and submitting skills without understanding what would get approved and merged, creating significant wasted effort and a backlog of submissions that couldn’t clear the repository’s access policies.

Enterprise validation and what’s still missing

Thomson Reuters is among the early adopters. Flores said that the company had been attempting to build an in-house AI agent platform and struggling to move fast enough. It connected with Treasure Data’s AI Agent Foundry to accelerate audience segmentation work, then extended into Treasure Code to customize and iterate more rapidly.

The feedback, Flores said, has centered on extensibility and flexibility, and the fact that procurement was already done, removing a significant enterprise barrier to adoption.

The gap Thomson Reuters has flagged, and that Flores acknowledges the product doesn’t yet address, is guidance on AI maturity. Treasure Code doesn’t tell users who should use it, what to tackle first, or how to structure access across different skill levels within an organization.

“AI that allows you to be leveraged, but also tells you how to leverage it, I think that’s very differentiated,” Flores said. He sees it as the next meaningful layer to build.

What engineering leaders should take from this

Flores has had time to reflect on what the experience actually taught him, and he was direct about what he’d change. Next time, he said, the release would stay internal first.

“We will release it internally only. I will not release it to anyone outside of the organization,” he said. “It will be more of a controlled release so we can actually learn what we’re actually being exposed to at lower risk.”

On skill development, the lesson was to establish clear criteria for what gets approved and merged before opening the process to teams outside engineering, not after.

The common thread in both lessons is the same one that shaped the governance architecture and the three-tier pipeline: speed is only an advantage if the structure around it holds. For engineering leaders evaluating whether agentic coding is ready for production, the Treasure Data experience translates into three practical conclusions.

  1. Governance infrastructure has to precede the code, not follow it. The platform-level access controls and permission inheritance were what made it safe to let AI generate freely. Without that foundation, the speed advantage disappears because every output requires exhaustive manual review.

  2. Build a quality gate that doesn’t depend entirely on humans. AI can review every pull request consistently, without fatigue, and check policy compliance systematically across the entire codebase. Human review remains essential, but as a final check rather than the primary quality mechanism.

  3. Plan for organic adoption. If the product works, people will find it before you’re ready. The compliance and go-to-market gaps Treasure Data is still closing are a direct result of underestimating that.

“Yes, vibe coding can work if done in a safe way and proper guardrails are in place,” Flores said. “Embrace it in a way to find means of not replacing the good work you do, but the tedious work that you can probably automate.”

X-Ray A PCB Virtually | Hackaday

If you want to reverse-engineer a PC board, you could do worse than to X-ray it. But thanks to [Philip Giacalone], you can just take a photo, load it into PCB Tracer, and annotate the images. You can see a few videos from a series about the system below.

The tracer runs in your browser. It can let you mark traces, vias, components, and pads. You can annotate everything as you document it, and it can even call an AI model to help generate a schematic from the net list.
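
Under the hood, a tool like this is essentially building connectivity data: pads joined by marked traces collapse into nets, and the resulting net list is what a schematic generator consumes. A hypothetical sketch of that grouping step (the pad naming convention and function shape are my assumptions, not PCB Tracer’s actual code):

```python
from collections import defaultdict

def build_netlist(traces):
    """Group pads connected by marked traces into nets, i.e. the
    connected components of the trace graph (simple union-find)."""
    parent = {}

    def find(pad):
        parent.setdefault(pad, pad)
        while parent[pad] != pad:
            parent[pad] = parent[parent[pad]]  # path halving
            pad = parent[pad]
        return pad

    for a, b in traces:
        parent[find(a)] = find(b)  # merge the two pads' nets

    nets = defaultdict(set)
    for pad in parent:
        nets[find(pad)].add(pad)
    return list(nets.values())
```

Each set in the result is one net; feeding those nets to a schematic generator (or an AI model, as the tool does) is then a labeling problem rather than an image problem.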

This is one of those things that you could do without. Any photo editor could do the same thing. But having the tool aware of what the photo is showing makes life easier. The built-in features are free, but if you use the AI tool, he says it will cost you about a half-dollar per schematic (paid to the AI company).

Even if you don’t think you need to reverse-engineer anything, you may still find this useful if you are trying to understand a board for repair. We’ve had a good Supercon/Remoticon talk about PCB reverse engineering you can watch. If you want to see what a real X-ray of a board looks like, here you go.

Maynooth launches semiconductor master’s programme

The postgrad course in circuit design is the first of its kind in Europe, according to Maynooth.

A new master’s degree in circuit design at Maynooth University aims to deliver skilled workers in the semiconductor sector in alignment with the Irish Government’s ‘Silicon Island’ strategy.

The degree programme – designed in collaboration with MIDAS Ireland, an Irish innovation cluster – is the first dedicated course of its kind anywhere in Europe, according to the university and the Government.

The 15-month programme mixes nine months of classroom learning with a full-time, paid placement in industry for students to gain real-world experience.

Prof Eeva Leinonen, president of Maynooth University, said: “This innovative, new master’s programme reflects Maynooth University’s ongoing commitment to partnering with government and industry to deliver academic programmes that respond directly to Ireland’s strategic skills needs.

“Our graduates will be equipped to contribute immediately to Ireland’s and Europe’s semiconductor ambitions, from advanced chip design to innovation in emerging applications.”

Silicon Island is the Government’s national plan for the Irish semiconductor industry, and is geared towards generating skilled workers, design expertise and co-operation between third-level institutions and companies in line with the European Chips Act – the EU initiative to secure the bloc’s semiconductor sovereignty and independence.

Minister for Enterprise, Tourism and Employment Peter Burke, TD said the new master’s programme would “help Irish-based companies recruit faster and grow smarter, while providing a top quality education and in-demand skills for our next generation of engineers”.

He added: “It strengthens Ireland’s hand as a place where both Irish and international companies can grow, innovate and hire the talent they need, cementing our reputation as a hub for semiconductor activity and innovation.”

Ireland is home to around 130 companies employing 20,000 people in the semiconductor sector. Last week, I-C3, Ireland’s National Competence Centre in Semiconductors, was unveiled as one of 30 such centres across 27 EU countries.

Minister for Further and Higher Education, Research, Innovation and Science James Lawless, TD said that the Chips Act aims to double semiconductor production in Europe by 2030 and to encourage upskilling across the industry, and that the Maynooth master’s course would “help ensure a supply of talented, highly skilled graduates who will strengthen Ireland’s competitiveness in the global semiconductor sector”.

Don’t miss out on the knowledge you need to succeed. Sign up for the Daily Brief, Silicon Republic’s digest of need-to-know sci-tech news.

I found the best tech at KBIS 2026 that you’ll want in your home this year

KBIS, or the Kitchen Bath Industry Show, is a little bit like CES, but with a much narrower focus.

KBIS is a launchpad for all the latest and upcoming innovations in kitchen and bathroom technology, from smart fridges to washing machines and more.

I’ve been walking the floors at KBIS 2026 in Florida to find the very best launches, and I have rounded up my favourite tech reveals. Keep reading to see what I am most excited to get my hands on in 2026 and beyond.

GE Profile 27.9 Cu. Ft. Smart 4-Door French-Door Refrigerator with Kitchen Assistant

Ever wanted a fridge that can not only keep track of your shopping list, but also order food for you?

Well, this smart refrigerator with Kitchen Assistant can do just that. Thanks to the barcode scanner built in the front, just next to the water dispenser, this fridge can track everything you put inside it, logging it all in an app and building handy shopping lists and recipes.

GE Profile 27.9 Cu. Ft. Smart 4-Door French-Door Refrigerator with Kitchen Assistant

Apparently, over 4 million products are supported, and when you run out of an item, a partnership with Instacart means you can have a top-up delivered in as little as 30 minutes.

Elsewhere, there’s an 8-inch touch display, voice control and a selection of clever sensors that can auto-dispense water to perfectly fill your bottle while you do something else. Expect to see it hit stores in April for $4,899.

Whirlpool 36-inch True Counter Depth 3-Door/4-Door French Door Refrigerator with Nugget Ice

Nugget ice (those small, chewable balls of ice so common with fast-food outlets) is having a moment right now, with dedicated machines to pump out the stuff at rapid speed increasing in popularity all the time.

Whirlpool 36-inch True Counter Depth 3-Door/4-Door French Door Refrigerator with Nugget Ice

However, if you want to keep your counter free of clutter, Whirlpool has done the smart thing and built a nugget ice dispenser right into its latest fridges.

To make things even better, the water and ice dispenser has been made taller to accommodate larger bottles, and there’s a traditional ice maker inside for those who prefer cubed ice.

LG Signature Built-in Depth French 4-Door

LG’s Signature line continues to include some of the most desirable appliances on the market, and this latest model is packed with tech, from an internal filtered water dispenser and a 24-inch control panel to numerous AI modes that keep your food fresher for longer.

For me though, I am still a massive fan of the Big Craft Ice system, which can make those glorious bar-worthy spheres of ice that are ideal for cocktails.

Midea STRAWash and SENSOR TruDry technology dishwasher

Dishwashers are always getting smarter, but often that’s with connected smarts rather than with the design. Midea is looking to change that by refining the interior of its latest dishwasher to better suit modern homes.

This dishwasher has internal spaces large enough to clean a reusable bottle, and there are dedicated jets to clean inside reusable straws and even those tall tumblers that can be a pain to clean properly. There’s even a dedicated zone for lids and a cycle that gets your dishes fully clean and dry in just one hour.

Midea STRAWash and SENSOR TruDry technology dishwasher

Fotile Fully Integrated Range Hood

This is one of the slickest range hoods we’ve ever laid our eyes on, thanks to some seriously futuristic looks and gesture control that lets you open it up and get to work with just a wave of your hand.

When it’s not in use, the hood is fully hidden, thanks to an aircraft-inspired folding system. It also has 48% wider coverage than traditional hoods and 24-hour smart monitoring.

LG Signature 24-inch Built Under Dishwasher

A dishwasher worthy of the Signature brand. This model is designed to blend into your kitchen, with a handle that slides away when not in use and automatically pops out when you need it. 

It’s quiet (around 38 dB), has some seriously tasteful lighting inside, and offers numerous features, from Auto Open and Dynamic Heat Dry to QuadWash Pro.

Samsung Bespoke AI 3-Door French Door Refrigerators with Zero Clearance Fit

Samsung’s Bespoke range is home to some of my favourite refrigerators, and this KBIS 2026 launch has the potential to be another winner.

This model has specially designed hinges and slim door profiles that allow the doors to open fully if there’s minimal space between the refrigerator and any surrounding walls.

Inside, you’ve got more control over your ice output, including an option for half spheres of ice that join the existing full sphere options. These are ideal for cocktails, as they melt slowly – and look great, too.

Samsung Bespoke AI 3-Door French Door Refrigerators with Zero Clearance Fit

On the outside, there’s an extra-tall filtered water dispenser, perfect for use with larger drinks bottles from the likes of Stanley.

Whirlpool Front Load Laundry Tower with FreshFlow Vent System & UV Clean

This tower-style laundry system not only saves space but also introduces a very clever industry-first UV Clean technology. 

Whirlpool Front Load Laundry Tower with FreshFlow Vent System & UV Clean

This easily accessible addition – which can be flipped on and off depending on the wash – can help reduce odour-causing bacteria without the traditional need for high temperatures that can damage certain fabrics. There’s smart connectivity too, plus a venting system to keep everything smelling fresh.

Is Age Verification a Trap?

Social media is going the way of alcohol, gambling, and other social sins: Societies are deciding it’s no longer kid stuff. Lawmakers point to compulsive use, exposure to harmful content, and mounting concerns about adolescent mental health. So, many propose to set a minimum age, usually 13 or 16.

In cases when regulators demand real enforcement rather than symbolic rules, platforms run into a basic technical problem. The only way to prove that someone is old enough to use a site is to collect personal data about who they are. And the only way to prove that you checked is to keep the data indefinitely. Age-restriction laws push platforms toward intrusive verification systems that often directly conflict with modern data-privacy law.

This is the age-verification trap. Strong enforcement of age rules undermines data privacy.

How Does Age Enforcement Actually Work?

Most age-restriction laws follow a familiar pattern. They set a minimum age and require platforms to take “reasonable steps” or “effective measures” to prevent underage access. What these laws rarely spell out is how platforms are supposed to tell who is actually over the line. At the technical level, companies have only two tools.

The first is identity-based verification. Companies ask users to upload a government ID, link a digital identity, or provide documents that prove their age. Yet in many jurisdictions, 16-year-olds do not have IDs. In others, IDs exist but are not digital, not widely held, or not trustworthy. Storing copies of identity documents also creates security and misuse risks.

The second option is inference. Platforms try to guess age based on behavior, device signals, or biometric analysis, most commonly facial age estimation from selfies or videos. This avoids formal ID collection, but it replaces certainty with probability and error.

In practice, companies combine both. Self-declared ages are backed by inference systems. When confidence drops, or regulators ask for proof of effort, inference escalates to ID checks. What starts as a light-touch checkpoint turns into layered verification that follows users over time.
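
The escalation ladder described here can be sketched as a simple decision function. This is an illustrative sketch only: the thresholds, field names, and return values are my assumptions, and real platforms combine far more signals.

```python
from dataclasses import dataclass

MIN_AGE = 16            # statutory minimum; varies by jurisdiction
CONFIDENCE_FLOOR = 0.8  # below this, the platform escalates the check

@dataclass
class AgeSignals:
    declared_age: int    # self-declared at sign-up
    inferred_age: float  # estimate from behavior or facial analysis
    confidence: float    # 0.0-1.0 confidence in that estimate

def next_step(s: AgeSignals) -> str:
    """Walk the layered checks: self-declaration first, then inference,
    then a hard ID check when the model is unsure or disagrees."""
    if s.declared_age < MIN_AGE:
        return "block"            # declared underage: deny outright
    if s.confidence < CONFIDENCE_FLOOR:
        return "request_selfie"   # weak signal: escalate to facial estimation
    if s.inferred_age < MIN_AGE:
        return "request_id"       # model contradicts declaration: demand ID
    return "allow"
```

Note how the ladder only ever ratchets upward: any drop in confidence, on any device, re-triggers a check, which is exactly why age stops being a one-time declaration.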

What Are Platforms Doing Now?

This pattern is already visible on major platforms.

Meta has deployed facial age estimation on Instagram in multiple markets, using video-selfie checks through third-party partners. When the system flags users as possibly underage, it prompts them to record a short selfie video. An AI system estimates their age and, if it decides they are under the threshold, restricts or locks the account. Appeals often trigger additional checks, and misclassifications are common.

TikTok has confirmed that it also scans public videos to infer users’ ages. Google and YouTube rely heavily on behavioral signals tied to viewing history and account activity to infer age, then ask for government ID or a credit card when the system is unsure. A credit card functions as a proxy for adulthood, even though it says nothing about who is actually using the account. The Roblox games site, which recently launched a new age-estimate system, is already suffering from users selling child-aged accounts to adult predators seeking entry to age-restricted areas, Wired reports.

For a typical user, age is no longer a one-time declaration. It becomes a recurring test. A new phone, a change in behavior, or a false signal can trigger another check. Passing once does not end the process.

How Do Age-Verification Systems Fail?

These systems fail in predictable ways.

False positives are common. Platforms misclassify as minors adults with youthful faces, adults sharing family devices, and accounts with otherwise unusual usage patterns. They lock accounts, sometimes for days. False negatives persist too: teenagers quickly learn to evade checks by borrowing IDs, cycling accounts, or using VPNs.

The appeal process itself creates new privacy risks. Platforms must store biometric data, ID images, and verification logs long enough to defend their decisions to regulators. So if an adult who is tired of submitting selfies to verify their age finally uploads an ID, the system must now secure that stored ID. Each retained record becomes a potential breach target.

Scale that experience across millions of users, and you bake the privacy risk into how platforms work.

Is Age Verification Compatible With Privacy Law?

This is where emerging age-restriction policy collides with existing privacy law.

Modern data-protection regimes all rest on similar ideas: Collect only what you need, use it only for a defined purpose, and keep it only as long as necessary.

Age enforcement undermines all three.

To prove they are following age-verification rules, platforms must log verification attempts, retain evidence, and monitor users over time. When regulators or courts ask whether a platform took reasonable steps, “We collected less data” is rarely persuasive. For companies, defending against accusations of failing to verify age properly takes precedence over defending against accusations of inappropriate data collection.

This shift is not an explicit choice by voters or policymakers, but a reaction to enforcement pressure and to how companies perceive their litigation risk.

Less Developed Countries, Deeper Surveillance

Outside wealthy democracies, the trade-off is even starker.

Brazil’s Statute of the Child and Adolescent (ECA in Portuguese) imposes strong child-protection duties online, while its data-protection law restricts data collection and processing. Now providers operating in Brazil must adopt effective age-verification mechanisms and can no longer rely on self-declaration alone for high-risk services. Yet they also face uneven identity infrastructure and widespread device sharing. To compensate, they rely more heavily on facial estimation and third-party verification vendors.

In Nigeria, many users lack formal IDs. Digital service providers fill the gap with behavioral analysis, biometric inference, and offshore verification services, often with limited oversight. Audit logs grow, data flows expand, and the practical ability of users to understand or contest how companies infer their age shrinks accordingly. Where identity systems are weak, companies do not protect privacy. They bypass it.

The paradox is clear. In countries with less administrative capacity, age enforcement often produces more surveillance, not less, because inference fills the void of missing documents.

How Do Enforcement Priorities Change Expectations?

Some policymakers assume that vague standards preserve flexibility. In the U.K., then–Digital Secretary Michelle Donelan argued in 2023 that requiring certain online safety outcomes without specifying the means would avoid mandating particular technologies. Experience suggests the opposite.

When disputes reach regulators or courts, the question is simple: Can minors still access the platform easily? If the answer is yes, authorities tell companies to do more. Over time, “reasonable steps” become more invasive.

Repeated facial scans, escalating ID checks, and long-term logging become the norm. Platforms that collect less data start to look reckless by comparison. Privacy-preserving designs lose out to defensible ones.

This pattern is familiar from other domains, such as online sales-tax enforcement. After courts settled that large platforms had an obligation to collect and remit sales taxes, companies began continuous tracking and storage of transaction destinations and customer location signals. That tracking is not abusive, but once enforcement requires proof over time, companies build systems to log, retain, and correlate more data. Age verification is moving the same way. What begins as a one-time check becomes an ongoing evidentiary system, with pressure to monitor, retain, and justify user-level data.

The Choice We Are Avoiding

None of this is an argument against protecting children online. It is an argument against pretending there is no trade-off.

Some observers present privacy-preserving age proofs involving a third party, such as the government, as a solution, but they inherit the same structural flaw: Many users who are legally old enough to use a platform do not have government ID. In countries where the minimum age for social media is lower than the age at which ID is issued, platforms face a choice between excluding lawful users and monitoring everyone. Right now, companies are making that choice quietly, building systems and normalizing behavior that protect them from the greater legal risk. Age-restriction laws are not just about kids and screens. They are reshaping how identity, privacy, and access work on the Internet for everyone.
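For readers curious what a privacy-preserving age proof looks like mechanically, here is a minimal sketch of the third-party attestation flow mentioned above. Real designs use asymmetric signatures or zero-knowledge proofs; an HMAC with a shared key stands in here so the example stays standard-library only, and every name and key is hypothetical.

```python
import hashlib
import hmac
import json

# Hypothetical issuer secret. In a real deployment the issuer would hold a
# private signing key and the platform only a public verification key.
ISSUER_KEY = b"demo-key-held-by-the-issuer"

def issue_attestation(over_threshold: bool) -> dict:
    """Issuer (e.g. a government service) asserts only 'over 18: yes/no'.

    Note what is absent: no name, no birthdate, no ID image. The platform
    learns a single bit. But users without government ID never get this
    far, which is the structural flaw discussed above.
    """
    claim = json.dumps({"over_18": over_threshold}).encode()
    tag = hmac.new(ISSUER_KEY, claim, hashlib.sha256).hexdigest()
    return {"claim": claim.decode(), "tag": tag}

def platform_verify(att: dict) -> bool:
    """Platform checks the issuer's tag, learning only the boolean claim."""
    expected = hmac.new(ISSUER_KEY, att["claim"].encode(),
                        hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, att["tag"])
            and json.loads(att["claim"])["over_18"])

att = issue_attestation(True)
print(platform_verify(att))  # True
```

The design choice worth noticing is that all identifying data stays with the issuer; the platform stores only a yes/no tag, so there is far less to breach. The scheme still fails for everyone the issuer cannot identify.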

The age-verification trap is not a glitch. It is what you get when regulators treat age enforcement as mandatory and privacy as optional.


UK data hoarder flies to America to buy hard drives, saves $2,000


In sharing the story, Redditor cgtechuk said he was running out of space after upgrading to four 16TB drives purchased from Amazon UK in 2020. Unfortunately, the 28TB replacement drives he had been eyeing were skyrocketing in price locally. Having purchased hardware in the US before, he decided to…

Google clamps down on Antigravity ‘malicious usage’, cutting off OpenClaw users in sweeping ToS enforcement move


Google caused controversy among some developers over the weekend and into Monday, February 23rd, after restricting their use of its new Antigravity “vibe coding” platform, alleging “malicious usage.”

Some users who had been using the open source autonomous AI agent OpenClaw in conjunction with agents built on Antigravity, as well as those who had connected OpenClaw agents to their Gmail accounts, claimed on social media that they lost access to their Google accounts.

According to Google, said users had been using Antigravity to access a larger number of Gemini tokens via third-party platforms like OpenClaw, which overwhelmed the system for other Antigravity customers. 

This move has cut off several users, underscoring the architectural and trust issues that can arise with OpenClaw. The timing of Google’s crackdown is particularly pointed. Just one week ago, on February 15, OpenAI CEO Sam Altman announced that OpenClaw creator Peter Steinberger had joined OpenAI to lead its “next generation of personal agents.” While OpenClaw remains an open-source project under an independent foundation, it is now financially backed and strategically guided by Google’s primary rival.

By cutting off OpenClaw’s access to Antigravity, Google isn’t just protecting its server load; it is effectively severing a pipeline that allows an OpenAI-adjacent tool to leverage Google’s most advanced Gemini models.

Varun Mohan, a Google DeepMind engineer and the founder and former CEO of Windsurf, said in an X post that the company noticed “malicious usage” that led to service degradation.

“We’ve been seeing a massive increase in malicious usage of the Antigravity backend that has tremendously degraded the quality of service for our users. We needed to find a path to quickly shut off access to these users that are not using the product as intended. We understand that a subset of these users were not aware that this was against our ToS [Terms of Service] and will get a path for them to come back on but we have limited capacity and want to be fair to our actual users,” the post said. 
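The kind of cutoff Mohan describes is conventionally implemented with per-client rate limiting, most often a token bucket. The sketch below is a generic illustration with invented parameters, not a description of Google's actual system.

```python
import time

class TokenBucket:
    """Minimal per-client token bucket: heavy automated callers drain
    their bucket quickly and get rejected until it refills."""

    def __init__(self, capacity: float, refill_per_sec: float):
        self.capacity = capacity
        self.refill_per_sec = refill_per_sec
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False  # request rejected: client is over its quota

# One bucket per API client. With an illustrative capacity of 5 and a slow
# refill, a burst of 8 back-to-back calls exhausts the quota partway through.
bucket = TokenBucket(capacity=5, refill_per_sec=0.5)
results = [bucket.allow() for _ in range(8)]
print(results)
```

A provider tuning `capacity` and `refill_per_sec` per subscription tier is one plausible way "fair" usage gets encoded in practice; abusive patterns then show up simply as clients that are almost always in the rejected state.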

A Google DeepMind spokesperson told VentureBeat that the move is not to permanently ban the use of Antigravity to access third-party platforms, but to align its use with the platform’s terms of service.   

Unsurprisingly, Google’s move has caused a furor among OpenClaw users, including creator Peter Steinberger, who announced that OpenClaw will remove Google support as a result.

Infrastructure and connection uncertainty

OpenClaw emerged as a way for individual users to run shell commands and access local files, fulfilling a major promise of AI agents: efficiently running workflows for users.

But, as VentureBeat has frequently pointed out, it can often run into security and guardrail issues. There are companies building ways for enterprise customers to access OpenClaw securely and with a governance layer, though OpenClaw is so new that we should expect more announcements soon.

However, Google framed its move not as a security issue but as one of access and runtime, further showing that there is still significant uncertainty when users want to bring something like OpenClaw into their workflows.

This is not the first time developers and power users of agentic AI have found their access curtailed. Last year, Anthropic throttled access to Claude Code after the company claimed some users were abusing the system by running it 24/7.

What this does highlight is the disconnect between companies like Google and OpenClaw users. OpenClaw offered many interesting possibilities for creating workflows with agents. However, because it is continually evolving, users may inadvertently run afoul of ToS or rate limits. 

Mohan said Google is working to bring the banned users back, but whether this means the company will amend its ToS or figure out a secure connection between OpenClaw agents and Antigravity models remains to be seen. 

For developers, the message is clear: the era of “bring your own agent” to a frontier model is ending. Providers are now prioritizing vertically integrated experiences where they can capture 100% of the telemetry and subscription revenue, often at the expense of the open-source interoperability that defined the early days of the LLM boom.

Affected users

Several users said on both Y Combinator’s Hacker News forum and X that they no longer had access to their Google accounts after running OpenClaw instances for certain Google products.

Google’s move mirrors a broader industry shift toward “walled garden” agent ecosystems. Earlier this year, Anthropic introduced “client fingerprinting” to ensure that its Claude Code environment remains the exclusive interface for its models, effectively locking out third-party wrappers like OpenClaw.
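As a rough illustration of how a provider might distinguish its first-party client from a wrapper, here is a toy header-based fingerprint check. Real fingerprinting also draws on TLS and timing signals; the header names and client strings below are invented and do not reflect Anthropic's or Google's actual mechanisms.

```python
import hashlib

def fingerprint(headers: dict) -> str:
    """Hash a few stable request traits into a client fingerprint."""
    traits = "|".join(f"{k.lower()}={headers.get(k, '')}"
                      for k in ("User-Agent", "X-Client-Version"))
    return hashlib.sha256(traits.encode()).hexdigest()

# Allowlist containing only the provider's own first-party client.
FIRST_PARTY = {fingerprint({"User-Agent": "OfficialClient/2.1",
                            "X-Client-Version": "2.1.0"})}

def is_first_party(headers: dict) -> bool:
    return fingerprint(headers) in FIRST_PARTY

official = {"User-Agent": "OfficialClient/2.1", "X-Client-Version": "2.1.0"}
wrapper = {"User-Agent": "third-party-wrapper/0.9"}
print(is_first_party(official), is_first_party(wrapper))  # True False
```

The fragility for third parties is visible even in this toy version: any trait the wrapper cannot perfectly imitate moves it out of the allowlist, and the allowlist is entirely under the provider's control.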

Some have said they will no longer use Google or Gemini for their projects. Right now, people who still want to keep using Antigravity will need to wait until Google figures out a way for them to use OpenClaw and access Gemini tokens in a manner Google deems “fair.” 

Google DeepMind reiterated that it had only cut access to Antigravity, not to other Google applications. 

Conclusion: the enterprise takeaway

For enterprise technical decision-makers, the “Antigravity Ban” serves as a definitive case study in the risks of agentic dependency. As the industry moves from chatbots to autonomous agents, the following realities must now dictate strategy:

  • Platform fragility is the new normal: The sudden lockout of $250/month “Ultra” users proves that even high-paying enterprise customers have little leverage when a provider decides to change its “fair use” definitions. Relying on OAuth-based third-party wrappers for core business logic is now a high-risk gamble.

  • The rise of local-first governance: With OpenClaw moving toward an OpenAI-backed foundation and Google/Anthropic tightening their clouds, enterprises should prioritize agent frameworks that can run “local-first” or within VPCs. The “token loophole” that OpenClaw exploited is being closed; future agentic scale will require direct, high-cost API contracts rather than subsidized consumer seats.

  • Account portability as a requirement: The fact that users “lost access to their Google accounts” underscores the danger of bundling development environments with primary identity providers. Decision-makers should decouple AI development from core corporate identity (SSO) where possible to avoid a single ToS violation paralyzing an entire team’s communications.

Ultimately, the Antigravity incident marks the end of the “Wild West” for AI agents. As Google and OpenAI stake their claims, the enterprise must choose between the stability of the walled garden and the complexity (and cost) of truly independent, self-hosted infrastructure.


Viral Doomsday Report Lays Bare Wall Street’s Deep Anxiety About AI Future

A 7,000-word “doomsday” thought experiment from Citrini Research helped trigger an 800-point drop in the Dow, “painting a dark portrait of a future in which technological change inspires a race to the bottom in white-collar knowledge work,” reports the Wall Street Journal. From the report: Concerns of hyperscalers overspending are out. Worries of software-industry disruption don’t go far enough. The “global intelligence crisis” is about to hit. The new, broader question: What if AI is so bullish for the economy that it is actually bearish? “For the entirety of modern economic history, human intelligence has been the scarce input,” Citrini wrote in a post it described as a scenario dated June 2028, not a prediction. “We are now experiencing the unwind of that premium.”

Many of Monday’s moves roughly aligned with the situation outlined by Citrini, in which fast-advancing AI tools allow spending cuts across industries, sparking mass white-collar unemployment and in turn leading to financial contagion. Software firms DataDog, CrowdStrike and Zscaler each plunged more than 9%. International Business Machines’ 13% decline was its worst one-day performance since 2000. American Express, KKR and Blackstone — all name-checked by Citrini — tumbled. That anxiety, coupled with renewed uncertainty about trade policy from Washington, weighed down major indexes Monday. The Dow Jones Industrial Average led declines, falling 1.7%, or 822 points. The S&P 500 shed 1%, while the Nasdaq composite retreated 1.1%.

[…] Monday’s market swings extended a run of AI-linked volatility. A small research outfit that has garnered a huge Substack following for macro and thematic stock research, Citrini said in its new post that software firms, payment processors and other companies formed “one long daisy chain of correlated bets on white-collar productivity growth” that AI is poised to disrupt. […] Shares in DoorDash also veered 6.6% lower Monday after Citrini’s Substack note called the delivery app a “poster child” for how new tools would upend companies that monetize interpersonal friction. In the research firm’s scenario, AI agents would help both drivers and customers navigate food deliveries at much lower costs.


Copyright © 2025