Google Chrome ships WebMCP in early preview, turning every website into a structured tool for AI agents

When an AI agent visits a website, it’s essentially a tourist who doesn’t speak the local language. Whether built on LangChain, Claude Code, or the increasingly popular OpenClaw framework, the agent is reduced to guessing which buttons to press: scraping raw HTML, firing off screenshots to multimodal models, and burning through thousands of tokens just to figure out where a search bar is.

That era may be ending. Earlier this week, the Google Chrome team launched WebMCP — Web Model Context Protocol — as an early preview in Chrome 146 Canary. WebMCP, which was developed jointly by engineers at Google and Microsoft and incubated through the W3C’s Web Machine Learning community group, is a proposed web standard that lets any website expose structured, callable tools directly to AI agents through a new browser API: navigator.modelContext.

The implications for enterprise IT are significant. Instead of building and maintaining separate back-end MCP servers in Python or Node.js to connect their web applications to AI platforms, development teams can now wrap their existing client-side JavaScript logic into agent-readable tools — without re-architecting a single page.

AI agents are expensive, fragile tourists on the web

The cost and reliability issues with current approaches to browser-agent interaction are well understood by anyone who has deployed agents at scale. The two dominant methods — visual screen-scraping and DOM parsing — both suffer from fundamental inefficiencies that directly affect enterprise budgets.


With screenshot-based approaches, agents pass images to multimodal models (such as Claude and Gemini) and hope the model can identify not only what is on the screen, but where buttons, form fields, and interactive elements are located. Each image consumes thousands of tokens and adds significant latency. With DOM-based approaches, agents ingest raw HTML and JavaScript: a foreign language full of tags, CSS rules, and structural markup that is irrelevant to the task at hand but still consumes context window space and inference cost.

In both cases, the agent is translating between what the website was designed for (human eyes) and what the model needs (structured data about available actions). A single product search that a human completes in seconds can require dozens of sequential agent interactions — clicking filters, scrolling pages, parsing results — each one an inference call that adds latency and cost.

How WebMCP works: Two APIs, one standard

WebMCP proposes two complementary APIs that serve as a bridge between websites and AI agents.

The Declarative API handles standard actions that can be defined directly in existing HTML forms. For organizations with well-structured forms already in production, this pathway requires minimal additional work; by adding tool names and descriptions to existing form markup, developers can make those forms callable by agents. If your HTML forms are already clean and well-structured, you are probably already 80% of the way there.
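To make the declarative pathway concrete, here is a sketch of what an annotated search form could look like. The attribute names below are hypothetical; the early preview has not finalized the declarative syntax, so treat this as an illustration of the approach rather than working markup:

```html
<!-- Hypothetical sketch of the Declarative API: an existing search form
     annotated so agents can discover and call it as a tool. The attribute
     names here are illustrative, not the finalized specification. -->
<form action="/search" method="get"
      toolname="searchProducts"
      tooldescription="Search the product catalog by keyword and category">
  <input type="search" name="q" required
         tooldescription="Free-text search query">
  <select name="category" tooldescription="Optional category filter">
    <option value="">All categories</option>
    <option value="dresses">Dresses</option>
  </select>
  <button type="submit">Search</button>
</form>
```

Because the form already encodes its fields, types, and required inputs, the incremental work is limited to naming the tool and describing what it does.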


The Imperative API handles more complex, dynamic interactions that require JavaScript execution. This is where developers define richer tool schemas — conceptually similar to the tool definitions sent to the OpenAI or Anthropic API endpoints, but running entirely client-side in the browser. Through registerTool(), a website can expose functions like searchProducts(query, filters) or orderPrints(copies, page_size) with full parameter schemas and natural language descriptions.

The key insight is that a single tool call through WebMCP can replace what might have been dozens of browser-use interactions. An e-commerce site that registers a searchProducts tool lets the agent make one structured function call and receive structured JSON results, rather than having the agent click through filter dropdowns, scroll through paginated results, and screenshot each page.
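In client-side JavaScript, the searchProducts example might be registered roughly as follows. This is a sketch based on the early-preview proposal: the exact registerTool() signature and schema field names may differ, and a small stub stands in for navigator.modelContext so the snippet is self-contained outside Chrome 146 Canary:

```javascript
// Illustrative sketch of the Imperative API. The registerTool() shape and
// schema fields approximate the early-preview proposal and may change.
// A minimal stub stands in for navigator.modelContext so the sketch can
// run outside Chrome 146 Canary.
const modelContext = globalThis.navigator?.modelContext ?? {
  tools: [],
  registerTool(tool) { this.tools.push(tool); return tool; },
};

// Wrap existing client-side search logic in an agent-callable tool:
// one structured call replaces clicking filters and paging through results.
const searchProducts = modelContext.registerTool({
  name: "searchProducts",
  description: "Search the product catalog and return structured results.",
  inputSchema: {
    type: "object",
    properties: {
      query: { type: "string", description: "Free-text search query" },
      filters: { type: "object", description: "Optional facet filters" },
    },
    required: ["query"],
  },
  async execute({ query, filters = {} }) {
    // In a real page this would reuse the site's existing search function.
    return { query, filters, results: [] };
  },
});
```

An agent that discovers this tool can issue a single structured call, conceptually await searchProducts.execute({ query: "eco-friendly dress" }), and receive JSON back instead of driving the UI click by click.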

The enterprise case: Cost, reliability, and the end of fragile scraping

For IT decision makers evaluating agentic AI deployments, WebMCP addresses three persistent pain points simultaneously.

Cost reduction is the most immediately quantifiable benefit. By replacing sequences of screenshot captures, multimodal inference calls, and iterative DOM parsing with single structured tool calls, organizations can expect significant reductions in token consumption. 


Reliability improves because agents are no longer guessing about page structure. When a website explicitly publishes a tool contract — “here are the functions I support, here are their parameters, here is what they return” — the agent operates with certainty rather than inference. Failed interactions due to UI changes, dynamic content loading, or ambiguous element identification are largely eliminated for any interaction covered by a registered tool.

Development velocity accelerates because web teams can leverage their existing front-end JavaScript rather than standing up separate backend infrastructure. The specification emphasizes that any task a user can accomplish through a page’s UI can be made into a tool by reusing much of the page’s existing JavaScript code. Teams do not need to learn new server frameworks or maintain separate API surfaces for agent consumers.

Human-in-the-loop by design, not an afterthought

A critical architectural decision separates WebMCP from the fully autonomous agent paradigm that has dominated recent headlines. The standard is explicitly designed around cooperative, human-in-the-loop workflows — not unsupervised automation.

According to Khushal Sagar, a staff software engineer for Chrome, the WebMCP specification identifies three pillars that underpin this philosophy. 

  1. Context: All the data agents need to understand what the user is doing, including content that is often not currently visible on screen. 

  2. Capabilities: Actions the agent can take on the user’s behalf, from answering questions to filling out forms. 

  3. Coordination: Controlling the handoff between user and agent when the agent encounters situations it cannot resolve autonomously.

The specification’s authors at Google and Microsoft illustrate this with a shopping scenario: a user named Maya asks her AI assistant to help find an eco-friendly dress for a wedding. The agent suggests vendors, opens a browser to a dress site, and discovers the page exposes WebMCP tools like getDresses() and showDresses(). When Maya’s criteria go beyond the site’s basic filters, the agent calls those tools to fetch product data, uses its own reasoning to filter for “cocktail-attire appropriate,” and then calls showDresses() to update the page with only the relevant results. It’s a fluid loop of human taste and agent capability, exactly the kind of collaborative browsing that WebMCP is designed to enable.
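From the agent's side, that loop can be sketched in a few lines. Everything here is hypothetical: callTool() and the tool result shapes are illustrative stand-ins for however a given agent framework invokes WebMCP tools, and a stub page object lets the sketch run without a browser:

```javascript
// Hypothetical agent-side sketch of the Maya scenario. callTool(),
// getDresses, and showDresses are illustrative names, not the spec.
async function findWeddingDress(page, agentFilter) {
  // One structured call fetches product data instead of scraping the page.
  const { dresses } = await page.callTool("getDresses", { style: "eco-friendly" });

  // The agent applies its own reasoning to criteria the site cannot filter by.
  const picks = dresses.filter(agentFilter);

  // A second call updates the page so the user sees only relevant results.
  await page.callTool("showDresses", { ids: picks.map((d) => d.id) });
  return picks;
}

// A stub "page" so the sketch runs outside a browser:
const demoPage = {
  shown: [],
  async callTool(name, args) {
    if (name === "getDresses") {
      return { dresses: [
        { id: 1, style: "eco-friendly", occasion: "cocktail" },
        { id: 2, style: "eco-friendly", occasion: "casual" },
      ] };
    }
    if (name === "showDresses") { this.shown = args.ids; return {}; }
  },
};
```

The user stays in the loop throughout: the agent narrows the page, and Maya makes the final call.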

This is not a headless browsing standard. The specification explicitly states that headless and fully autonomous scenarios are non-goals. For those use cases, the authors point to existing protocols like Google’s Agent-to-Agent (A2A) protocol. WebMCP is about the browser — where the user is present, watching, and collaborating.

Not a replacement for MCP, but a complement

WebMCP is not a replacement for Anthropic’s Model Context Protocol, despite sharing a conceptual lineage and a portion of its name. It does not follow the JSON-RPC specification that MCP uses for client-server communication. Where MCP operates as a back-end protocol connecting AI platforms to service providers through hosted servers, WebMCP operates entirely client-side within the browser.

The relationship is complementary. A travel company might maintain a back-end MCP server for direct API integrations with AI platforms like ChatGPT or Claude, while simultaneously implementing WebMCP tools on its consumer-facing website so that browser-based agents can interact with its booking flow in the context of a user’s active session. The two standards serve different interaction patterns without conflict.


The distinction matters for enterprise architects. Back-end MCP integrations are appropriate for service-to-service automation where no browser UI is needed. WebMCP is appropriate when the user is present and the interaction benefits from shared visual context — which describes the majority of consumer-facing web interactions that enterprises care about.

What comes next: From flag to standard

WebMCP is currently available in Chrome 146 Canary behind the “WebMCP for testing” flag at chrome://flags. Developers can join the Chrome Early Preview Program for access to documentation and demos. Other browsers have not yet announced implementation timelines, though Microsoft’s active co-authorship of the specification suggests Edge support is likely.

Industry observers expect formal browser announcements by mid-to-late 2026, with Google Cloud Next and Google I/O as probable venues for broader rollout announcements. The specification is transitioning from community incubation within the W3C to a formal draft — a process that historically takes months but signals serious institutional commitment.

The comparison that Sagar has drawn is instructive: WebMCP aims to become the USB-C of AI agent interactions with the web. A single, standardized interface that any agent can plug into, replacing the current tangle of bespoke scraping strategies and fragile automation scripts.


Whether that vision is realized depends on adoption — by both browser vendors and web developers. But with Google and Microsoft jointly shipping code, the W3C providing institutional scaffolding, and Chrome 146 already running the implementation behind a flag, WebMCP has cleared the most difficult hurdle any web standard faces: getting from proposal to working software.

Attackers prompted Gemini over 100,000 times while trying to clone it, Google says

On Thursday, Google announced that “commercially motivated” actors have attempted to clone knowledge from its Gemini AI chatbot by simply prompting it. One adversarial session reportedly prompted the model more than 100,000 times across various non-English languages, collecting responses ostensibly to train a cheaper copycat.

Google published the findings in what amounts to a quarterly self-assessment of threats to its own products that frames the company as the victim and the hero, which is not unusual in these self-authored assessments. Google calls the illicit activity “model extraction” and considers it intellectual property theft, which is a somewhat loaded position, given that Google’s LLM was built from materials scraped from the Internet without permission.

Google is also no stranger to the copycat practice. In 2023, The Information reported that Google’s Bard team had been accused of using ChatGPT outputs from ShareGPT, a public site where users share chatbot conversations, to help train its own chatbot. Senior Google AI researcher Jacob Devlin, who created the influential BERT language model, warned leadership that this violated OpenAI’s terms of service, then resigned and joined OpenAI. Google denied the claim but reportedly stopped using the data.

Even so, Google’s terms of service forbid people from extracting data from its AI models this way, and the report is a window into the world of somewhat shady AI model-cloning tactics. The company believes the culprits are mostly private companies and researchers looking for a competitive edge, and said the attacks have come from around the world. Google declined to name suspects.


The deal with distillation

Typically, the industry calls this practice of training a new model on a previous model’s outputs “distillation,” and it works like this: If you want to build your own large language model (LLM) but lack the billions of dollars and years of work that Google spent training Gemini, you can use a previously trained LLM as a shortcut.

Stop talking to AI, let them talk to each other: The A2A protocol

Have you ever asked Alexa to remind you to send a WhatsApp message at a specific time, and then wondered, ‘Why can’t Alexa just send the message herself?’ Or felt the frustration of using an app to plan a trip, only to have to jump to your calendar, booking website, tour operator, and bank account yourself instead of your AI assistant doing it all? Exactly this gap between AI automation and human action is what the agent-to-agent (A2A) protocol aims to address. With the introduction of AI agents, the next step of evolution seemed to be communication. But when communication between machines…
This story continues at The Next Web


Storing Image Data As Analog Audio


Ham radio operators may be familiar with slow-scan television (SSTV) where an image is sent out over the airwaves to be received, decoded, and displayed on a computer monitor by other radio operators. It’s a niche mode that isn’t as popular as modern digital modes like FT8, but it still has its proponents. SSTV isn’t only confined to the radio, though. [BLANCHARD Jordan] used this encoding method to store digital images on a cassette tape in a custom-built tape deck for future playback and viewing.

The self-contained device first uses an ESP32 and its associated camera module to take a picture, with a screen that shows the current view of the camera as the picture is being taken. In this way it’s fairly similar to any semi-modern digital camera. From there, though, it starts to diverge from a typical digital camera. The digital image is converted first to analog and then stored as audio on a standard cassette tape, which is included in the module in lieu of something like an SD card.

To view the saved images, the tape is played back and the audio signal captured by an RP2040. It employs a number of methods to ensure that the reconstructed image is faithful to the original, but the final image displays the classic SSTV look that these images tend to have as a result of the analog media. As a bonus feature, the camera can use a serial connection to another computer to offload this final processing step.


We’ve been seeing a number of digital-to-analog projects lately, and whether that’s as a result of nostalgia for the 80s and 90s, as pushback against an increasingly invasive digital world, or simply an ongoing trend in the maker space, we’re here for it. Some of our favorites are this tape deck that streams from a Bluetooth source, applying that classic cassette sound, and this musical instrument which uses a cassette tape to generate all of its sounds.


Critical BeyondTrust RCE flaw now exploited in attacks, patch now



A critical pre-authentication remote code execution vulnerability in BeyondTrust Remote Support and Privileged Remote Access appliances is now being exploited in attacks after a PoC was published online.

Tracked as CVE-2026-1731 and assigned a near-maximum CVSS score of 9.9, the flaw affects BeyondTrust Remote Support versions 25.3.1 and earlier and Privileged Remote Access versions 24.3.4 and earlier.

BeyondTrust disclosed the vulnerability on February 6, warning that unauthenticated attackers could exploit it by sending specially crafted client requests.


“BeyondTrust Remote Support and older versions of Privileged Remote Access contain a critical pre-authentication remote code execution vulnerability that may be triggered through specially crafted client requests,” explained BeyondTrust.

“Successful exploitation could allow an unauthenticated remote attacker to execute operating system commands in the context of the site user. Successful exploitation requires no authentication or user interaction and may lead to system compromise, including unauthorized access, data exfiltration, and service disruption.”


BeyondTrust automatically patched all Remote Support and Privileged Remote Access SaaS instances on February 2, 2026, but on-premise customers must install patches manually.

CVE-2026-1731 is now exploited in the wild

Hacktron discovered the vulnerability and responsibly disclosed it to BeyondTrust on January 31.

Hacktron says approximately 11,000 BeyondTrust Remote Support instances were exposed online, with around 8,500 on-premises deployments.

Ryan Dewhurst, head of threat intelligence at watchTowr, now reports that attackers have begun actively exploiting the vulnerability, warning that if devices are not patched, they should be assumed to be compromised.


“Overnight we observed first in-the-wild exploitation of BeyondTrust across our global sensors,” Dewhurst posted on X.

“Attackers are abusing get_portal_info to extract the x-ns-company value before establishing a WebSocket channel.”

This exploitation comes a day after a proof-of-concept exploit was published on GitHub targeting the same /get_portal_info endpoint.

The attacks target exposed BeyondTrust portals to retrieve the ‘X-Ns-Company‘ identifier, which is then used to open a WebSocket connection to the targeted device. This allows the attackers to execute commands on vulnerable systems.


Organizations using self-hosted BeyondTrust Remote Support or Privileged Remote Access appliances should immediately apply available patches or upgrade to the latest versions.

BleepingComputer contacted BeyondTrust and Dewhurst to ask if they had any details on post-exploitation activity and will update this story if we receive a response.



These Hanging, Reusable Grocery Bags Blow My Old Floppy Totes Away


It’s rare that I spot something at the grocery store that makes my heart cry out with unbridled, capitalistic desire. Yes, both the wine and fancy cheese departments sometimes have fun finds, but otherwise, there are only so many ways to remix the foodstuff canon. 

It wasn’t something edible that recently caught my eye, though, but rather a genius bit of infrastructure. It was brightly colored like packaging, in fact, but it wasn’t in the processed-food department or the produce aisle. I spotted them in a fellow shopper’s cart: four technicolor shopping bags, one of them insulated, designed to fit inside the grocery cart, with overhanging handles that keep them open and in place while you shop.

Simple. Genius. How did I not realize that these were missing in my life?


I spotted these clever shopping bags in a fellow shopper’s cart. I knew I had to have them. 

Pamela Vachon/CNET

The rainbow colors are certainly what grabbed my attention here, but once my brain processed what I was seeing, it was my type-A heart that decided I must have them. (I enthusiastically stopped the owner to ask if I could take a picture, as though they were a quartet of puppies and not shopping bags.) 

Surely for a highly organized, competitive personality, efficiently sorting one’s grocery purchases into their shopping bags while parading the well-stocked aisles is about as much fun as one can have in the grocery store outside of contestantship on Supermarket Sweep. (The spice rack, you fools! Go to the spice rack!)


Sorting groceries into shopping bags in real time is about as much fun as one can have in the grocery store.

Pamela Vachon/CNET

Bags designed for grocery cart organization in real time

There are plenty of reusable grocery shopping bags that are sturdy enough to situate inside your cart, but to maximize space and organization, look for those called “cart bags,” “cart caddies” or “trolley bags,” which also offer the added bonus of making grocery shopping sound like a fun outing more than a weekly chore. 


There are numerous bag designs to choose from.

Pamela Vachon/CNET

There are numerous designs and layouts here to choose from: Some have clip-on cart handles that retract, some have separate, removable clips, and others are outfitted with dowels that overhang the sides of the cart, which are then stored in what looks rather like a tent roll. (Again, adventure, not tedium.) Not every set comes with an insulated bag, and some brands feature bags that are all the same color. (Presumably so you don’t attract the attention of people like me who treat the grocery store like a fact-finding mission.) 

You do you with regard to these various options, but here are several sets available on Amazon, all around the $30 to $40 range:


Bixtaneab

Handy Sandy

Tangtty

Sort as you shop


These bags create order out of chaos when grocery shopping.

Pamela Vachon/CNET

Perhaps your kitchen pantry, like mine, isn’t exactly designed with grocery aisle layouts in mind. Things that sit side by side on retail shelves often live in opposite corners in real life. “Snacks,” for example, are relegated to various shelves in my kitchen based on factors that I don’t know you well enough to divulge here. 

Perhaps you get sniffy about cleaning products sharing bag space, or even cart space, with fresh produce. Perhaps you have numerous errands to run when you grocery shop, and you’re wondering about the condition of your refrigerated or frozen items once you leave the store. These bags create order for all of this potential chaos, real or imagined.


I use the color-coded bags for dedicated categories.

Pamela Vachon/CNET

The real beauty of these bags is that you can sort your groceries in real time, according to whatever system makes sense to you. (See “snacks,” above.) This is also the argument for multi-colored bags, which let you assign groceries to their appropriate bags, saving you time at the putting-away stage of grocery acquisition. 

I’m sure I don’t need to mention that these are also environment-positive, if you’re not already in the reusable grocery bag game. A dedicated, ventilated bag for all your produce may even preclude the need to wrestle with the uncooperative produce aisle bag roll. Safe in their own color-coordinated zone, your lettuces and broccoli crowns won’t mingle with anything you don’t want them to touch. 

Use with scan-as-you-go apps for extreme efficiency


Combine these clever bags with scan-to-pay shopping for the most efficient supermarket trip ever. 

Pamela Vachon/CNET

Checking out and repacking your groceries becomes that much more sane when everything is already sorted in a like-with-like format. I realize this only amounts to mere minutes of your life, but for many of us, those minutes add up, not even over the course of a lifetime but in the course of a day, and a little bit of extra sanity can go a very long way in turbulent times.

If your grocery store has an app or device that allows you to scan as you go, now you’re really in a high-efficiency grocery zone. Like TSA Pre-check, except for the kind of elite grocery shoppers who would never double-park their cart in a high-traffic aisle. Those programs, which preclude even the need for checking out in any time-sucking sense, plus your pre-sorted groceries in these bags, amount to just about the pinnacle of what in-person grocery shopping can aspire to.


Netherlands to probe Chinese-owned chipmaker Nexperia


A Dutch appeal court also upheld an October decision to suspend the company’s Chinese CEO Zhang Xuezheng.

Nexperia’s Chinese owner Wingtech was unable to sway the Amsterdam Court of Appeal and regain control of the Dutch chipmaker that plays a vital role in the global automotive industry.

As per a translated press release published yesterday (11 February), the court’s enterprise chamber instead ordered an investigation into Nexperia, citing “well-founded reasons to doubt a proper policy and proper course of affairs” at the company.

The court also upheld an October decision to suspend the company’s Chinese CEO Zhang Xuezheng and hand control to EU-based directors. Zhang’s shares were handed over to a trust, though he retained their economic benefits.


Nexperia’s seizure began in September last year when the Dutch government invoked the rarely used Goods Availability Act, pointing to “serious governance shortcomings” at the company.

The Netherlands believed that alleged mismanagement at Nexperia posed a “threat” to Europe’s semiconductor capabilities.

Responding to the seizure, China halted Nexperia chip exports in early October, which resulted in a disruption affecting nearly three-quarters of the company’s output. On 9 November, however, the export ban was lifted.

In a statement issued that month, the Dutch government said that concerns around Nexperia stemmed from the now-suspended CEO who took part in the “improper transfer of product assets, funds, technology and knowledge to a foreign entity”.


Nexperia’s Chinese and European arms have stopped collaborating since the seizure, and despite signs of easing tensions in November, issues between the parties still persist.

The Dutch company stopped shipping silicon wafers to its Chinese subsidiary last year, claiming the local unit refused to make payments. According to the Financial Times, customers are now purchasing wafers from the European unit and sending them to the Chinese unit for assembly themselves.

Nexperia supplies chips to the likes of Volvo, JLR and Volkswagen.

In its order following the public hearing of 14 January, the Dutch court found “indications that careless action was taken with a conflicting interest” at Nexperia.


It said that Zhang changed company strategies without consulting other board members. In a hearing last month, Nexperia’s lawyers claimed that Zhang was moving equipment to China and had used company assets for Wing Systems, a different company he owned.

Responding to yesterday’s orders, Nexperia said it welcomed the ruling and is committed to fully complying with the investigation.

“Despite the challenging situation, our underlying business continues to be healthy and resilient and we remain committed to being a strong, reliable partner for all our stakeholders, including customers,” it said.

The Dutch-headquartered Nexperia – an offshoot of NXP – was acquired by China’s contract manufacturing giant Wingtech Technology in 2018.


Last year’s seizure has severely strained the relationship between parent company Wingtech and Nexperia, which have accused each other of disrupting operations and destabilising business.

In 2024, the US government added Wingtech to its Entity List – a designation given to companies that could pose a risk to the country’s national security. In 2022, the UK government ordered Wingtech-owned Nexperia to undo its acquisition of the Newport Wafer Fab, citing a national security risk.


ICE, CBP Knew Facial Recognition App Couldn’t Do What DHS Says It Could, Deployed It Anyway

Published

on

from the fuck-everyone-but-us-policy-still-in-play dept

The DHS and its components want to find non-white people to deport by any means necessary. Of course, “necessary” is something that’s on a continually sliding scale with Trump back in office, which means everything (legal or not) is “necessary” if it can help White House advisor Stephen Miller hit his self-imposed 3,000 arrests per day goal.

As was reported last week, DHS components (ICE, CBP) are using a web app that supposedly can identify people and link them with citizenship documents. As has always been the case with DHS components (dating back to the Obama era), the rule of thumb is “deploy first, compile legally-required paperwork later.” The pattern has never changed. ICE, CBP, etc. acquire new tech, hand it out to agents, and much later — if ever — the agencies compile and publish their legally-required Privacy Impact Assessments (PIAs).

PIAs are supposed to precede deployments of new tech that might have an impact on privacy rights and other civil liberties. In almost every case, the tech has been deployed far ahead of the precedential paperwork.

As one would expect, the Trump administration was never going to be the one to ensure the paperwork arrived ahead of the deployment. As we covered recently, both ICE and CBP are using tech provided by NEC called “Mobile Fortify” to identify migrants who are possibly subject to removal, even though neither agency has bothered to publish a Privacy Impact Assessment.


As Wired reported, the app is being used widely by officers working with both agencies, despite both agencies making it clear they don’t have the proper paperwork in place to justify these deployments.

While CBP says there are “sufficient monitoring protocols” in place for the app, ICE says that the development of monitoring protocols is in progress, and that it will identify potential impacts during an AI impact assessment. According to guidance from the Office of Management and Budget, which was issued before the inventory says the app was deployed for either CBP or ICE, agencies are supposed to complete an AI impact assessment before deploying any high-impact use case. Both CBP and ICE say the app is “high-impact” and “deployed.”

While this is obviously concerning, it would be far less concerning if we weren’t dealing with an administration that has told immigration officers that they don’t need warrants to enter houses or effect arrests. And it would be insanely less concerning if we weren’t dealing with an administration that has claimed that simply observing or reporting on immigration enforcement efforts is an act of terrorism.

Officers working for the combined forces of bigotry d/b/a “immigration enforcement” know they’re safe. The Supreme Court has ensured they’re safe by making it nearly impossible to sue federal officers. And the people running immigration-related agencies have made it clear they don’t even care if the ends justify the means.

These facts make what’s reported here even worse, especially when officers are using the app to “identify” pretty much anyone they can point a smartphone at.


Despite DHS repeatedly framing Mobile Fortify as a tool for identifying people through facial recognition, however, the app does not actually “verify” the identities of people stopped by federal immigration agents—a well-known limitation of the technology and a function of how Mobile Fortify is designed and used.

[…]

Records reviewed by WIRED also show that DHS’s hasty approval of Fortify last May was enabled by dismantling centralized privacy reviews and quietly removing department-wide limits on facial recognition—changes overseen by a former Heritage Foundation lawyer and Project 2025 contributor, who now serves in a senior DHS privacy role.

Even if you’re the sort of prick who thinks whatever happens to non-citizens is deserved due to their alleged violation of civil statutes, one would hope you’d actually care what happens to your fellow citizens. I mean, one would hope, but even the federal government doesn’t care what happens to US citizens if they happen to be unsupportive of Trump’s migrant-targeting crime wave.

DHS—which has declined to detail the methods and tools that agents are using, despite repeated calls from oversight officials and nonprofit privacy watchdogs—has used Mobile Fortify to scan the faces not only of “targeted individuals,” but also people later confirmed to be US citizens and others who were observing or protesting enforcement activity.

TLDR and all that: DHS knows this tool performs worst in the situations where it’s used most. DHS and its components also knew they were supposed to produce PIAs before deploying privacy-impacting tech. And DHS knows its agencies are not only misusing the tech to convert AI shrugs into probable cause, but are using it to identify people protesting or observing their efforts, which means this tech is also a potential tool of unlawful retribution.


There’s nothing left to be discussed. This tech will continue to be used because it can turn bad photos into migrant arrests. And its off-label use is just as effective: it allows ICE and CBP agents to identify protesters and observers, even as DHS officials continue to claim doxing should be a federal offense if they’re not the ones doing it. Everything about this is bullshit. But bullshit is all this administration has.

Filed Under: border patrol, cbp, dhs, facial recognition tech, ice, privacy impact assessment, surveillance, trump administration

Companies: mobile fortify, nec


Tech

Looking for a cheap but capable laptop? Our experts have rounded up the best deals from Dell’s Presidents’ Day sale

Published

on

Dell’s Presidents’ Day sale is happening this week, so I’ve asked TechRadar’s own computing experts to hand-pick their favorite deals. You can find discounts on award-winning Dell laptops, monitors, and desktops at prices comparable to those in its Black Friday sale.

Shop Dell’s full Presidents’ Day sale

You’ll find our favorite laptop deals first, including the budget Dell 15 laptop for only $329.99, the powerful XPS 13 laptop for $949.99, and the versatile Inspiron 14 2-In-1 Laptop for $499.99.

If you’re looking for a cheap monitor for your home office, Dell has this 24-inch model for only $89.99, and gamers can get this 34-inch curved Alienware monitor for $349.99.


Last but not least, Dell is also offering discounts on its desktops, and our favorite is a whopping $470 off the Dell Tower Desktop.

Dell designs some of the best laptops on the market, and today’s offers make them even more affordable. Dell’s Presidents’ Day deals are limited-time offers, and all will expire at midnight on Presidents’ Day proper (Monday, February 16).



Tech

S’pore’s richest 20% of households own more wealth than rest of the population

Published

on

MOF’s first inequality deep-dive since 2015 tracks who got richer, how wealth piled up, and why climbing the ladder is getting harder

The Ministry of Finance (MOF) released its latest Singapore Occasional Paper on income growth, inequality, and social mobility trends on Feb 9.

For the first time, the government is releasing data on wealth inequality, drawing on administrative and household survey data to derive estimates of the wealth distribution in Singapore.

This is the second Occasional Paper to be published, following the first, released in August 2015.

Here are some of the highlights Vulcan Post found worth pondering.


1. S’pore’s top 20% holds more average household wealth than the other 80% of the population combined

[Chart: Average household wealth among resident households, Singapore Occasional Paper 2026]
Image Credit: Singapore’s Department of Statistics

Based on the latest statistics from 2023, the paper reports that the top 20% holds an average household wealth of S$5,264,000, more than the combined average household wealth of the remaining 80% at S$3,541,000 (adding up the bottom four quintiles).

That means the combined average of the bottom four quintiles falls a whopping 32.7% short of the top quintile’s average household wealth.

Total wealth is calculated by taking the difference between total assets (property asset value, net CPF balances and other financial assets) and total liabilities (mortgages and other liabilities).

However, MOF notes that these numbers may be inaccurate, as “estimates may still be susceptible to under-reporting,” especially for higher net-worth individuals, who are “more likely to underestimate wealth”.

2. Singapore’s wealth inequality is ‘comparable’ to other advanced economies

[Chart: International comparison of home equity as a share of wealth, Singapore Occasional Paper 2026]
Image Credit: Singapore’s Department of Statistics

Globally, wealth inequality tends to be higher than income inequality, and Singapore is no exception: its wealth Gini coefficient stood at 0.55 in 2025, versus 0.38 for income after taxes and transfers.

The Gini coefficient is a statistical measure of economic inequality, ranging from 0 (perfect equality) to 1 (maximum inequality), used to analyse income or wealth distribution.
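As a purely illustrative aside (not taken from the MOF report), the standard Gini computation can be sketched in a few lines of Python, using the rank-weighted formula over a sorted wealth vector:

```python
def gini(values):
    """Gini coefficient: 0 = perfect equality, 1 = maximum inequality.

    Uses the rank-weighted formula G = 2*sum(i*x_i)/(n*sum(x)) - (n+1)/n,
    where x is sorted ascending and i is the 1-based rank.
    """
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if total == 0:
        return 0.0  # no wealth at all: treat as perfectly equal
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * total) - (n + 1) / n

# Perfectly equal distribution -> 0.0
print(gini([100, 100, 100, 100]))  # 0.0
# One household holds everything -> (n-1)/n = 0.75 for n = 4
print(gini([0, 0, 0, 400]))  # 0.75
```

Real estimates like MOF’s are, of course, computed over full household-level distributions rather than toy vectors, but the mechanics are the same.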


This places Singapore’s wealth inequality in a range comparable to other advanced economies like the UK, Japan, and Germany, whose wealth Gini coefficients range from 0.6 to 0.74.

MOF attributes this to HDB and CPF policies, which act as key moderators of wealth inequality by supporting households, especially the lower income, in attaining home ownership and accumulating retirement savings.

The report also revealed that most Singaporean households hold positive net wealth, unlike in countries such as the UK and Australia, where the bottom 20% have zero or negative home equity.

In Singapore, home equity constitutes over half of wealth, even for the bottom 20% of Singaporean households. 


3. Social mobility remains strong, but shows early signs of moderation

[Chart: Share of children earning more than their fathers, Singapore Occasional Paper 2026]
Image Credit: Singapore’s Department of Statistics

Most Singaporeans have experienced upward income mobility across generations, and Singapore has done relatively well in sustaining social mobility compared to other advanced economies. 

In addition, most Singaporeans earn more than their parents in real terms, consistent across birth cohorts.

Relative mobility is competitive internationally: children born to bottom-20% fathers have better odds of earning higher incomes in adulthood than in the US, UK, or Australia, with 13.8% of them becoming top-20% earners.

However, as Singapore’s economy matures, MOF said that sustaining mobility across generations will become more difficult, as social mobility has shown signs of gradual moderation.

The correlation between parent and child incomes has increased modestly over time, and the share of children from poorer families remaining in the bottom 20% has risen, early signs of slowing mobility similar to patterns in other advanced economies.


4. Singapore’s tax and transfer system is highly progressive

[Chart: Benefit-to-tax ratio per household member among citizen households, Singapore Occasional Paper 2026]
Image Credit: Singapore’s Department of Statistics

Singapore’s tax and transfer system is benefiting our lower-income families as it should. 

The Government redistributes resources to support those with greater needs, while keeping taxes low for lower- and middle-income households.

Lower-income households receive far more in benefits than they pay in taxes, whether measured by market or employment income. 

For every S$1 in taxes paid, bottom 20% households receive approximately S$7 in benefits, while the top 20% receive about S$0.20.

This benefit-to-tax ratio is more favourable to lower-income households than in Finland or the UK.


Approximately 35% of Singapore workers pay no personal income tax, while the top 10% of earners pay about 75% of all income tax.

The system keeps the overall tax burden low for the broad middle while targeting support to those who need it most, ensuring that economic benefits are shared equitably across all segments of society, said the Government.


Featured Image Credit: Andrzej Rostek via Shutterstock


Tech

TikTok has launched a US-exclusive Local Feed

Published

on

TikTok has introduced a new Local Feed for US users. It uses precise GPS data to surface nearby content, mirroring the Nearby Feed that launched in the UK and Europe last year.

The Local Feed should appear as a tab on the home screen once enabled. TikTok says the feed will highlight posts related to travel, events, restaurants, shopping, and local creators. Small businesses also gain visibility, making the feature a potential tool for local discovery.

The rollout comes shortly after TikTok’s US app faced a major outage. The company blamed a “cascading systems failure” for the disruption. Local Feeds mark the first new feature since TikTok’s ownership change last month.

Since the feature uses your location, privacy remains a key concern. TikTok has addressed this by stating that location tracking is only active while the app is in use. Ads and recommendations will not access chat history or personal details. Users under 18 cannot enable the Local Feed, and sensitive topics remain excluded.


With this in mind, control still sits with the user. The Local Feed is opt-in, off by default, and requires manual activation. TikTok says users can dismiss content, manage personalisation, and opt out at any time. This approach aims to balance relevance with transparency.

The feature could reshape how creators reach audiences. Local musicians, restaurants, and shops may find new ways to connect with nearby users. For TikTok, Local Feeds strengthen its position against rivals like Instagram, which already emphasises location‑based discovery.

Trends may also shift with the new local focus. Instead of viral content spreading globally, Local Feeds could highlight smaller, community-driven trends. That change may encourage more diverse content and give regional creators a stronger voice.


TikTok plans to expand the feature gradually, with early feedback guiding adjustments before a wider global rollout.

