
Tech

GM agrees to pay $12.75M in California driver privacy settlement

General Motors has reached a privacy-related settlement with a group of law enforcement agencies led by California Attorney General Rob Bonta.

Back in 2024, The New York Times reported that automakers including GM were sharing information about their customers’ driving behavior with insurance companies, and that some customers were concerned that their insurance rates had gone up as a result.

The settlement announcement from Bonta’s office similarly alleges that GM sold “the names, contact information, geolocation data, and driving behavior data of hundreds of thousands of Californians” to Verisk Analytics and LexisNexis Risk Solutions, which are both data brokers. Bonta’s office further alleges that this data was collected through GM’s OnStar program, and that the company made roughly $20 million from data sales.

However, Bonta’s office also said the data did not lead to increased insurance prices in California, “likely because under California’s insurance laws, insurers are prohibited from using driving data to set insurance rates.”

As part of the settlement, GM has agreed to pay $12.75 million in civil penalties and to stop selling driving data to any consumer reporting agencies for five years, Bonta’s office said. GM has also agreed to delete any driver data that it still retains within 180 days (unless it obtains consent from customers), and to request that Lexis and Verisk delete that data.

“General Motors sold the data of California drivers without their knowledge or consent and despite numerous statements reassuring drivers that it would not do so,” Bonta said in a statement, adding that the settlement “requires General Motors to abandon these illegal practices and underscores the importance of data minimization in California’s privacy law — companies can’t just hold on to data and use it later for another purpose.”

GM had previously settled with the Federal Trade Commission over its data sales, with a final order banning General Motors and OnStar from sharing certain data with consumer reporting agencies.

GM told Reuters that the settlement “addresses Smart Driver, a product we discontinued in 2024, and reinforces steps we’ve taken to strengthen our privacy practices.”

UFC 328 live stream: how to watch Chimaev vs Strickland online

UFC 328 live streams feature two titles up for grabs in the Octagon, with middleweights Khamzat Chimaev vs Sean Strickland headlining proceedings after a flyweight title tussle between Joshua Van and Tatsuro Taira in the co-main event at the Prudential Center in Newark, New Jersey on Saturday night.


Skulking in Middle-earth: Shadow of Mordor is superb, and the Nemesis System is yet to be bested by 12 years of action games

My nerd cred, painstakingly built up over the last 30 years, is about to take a big hit. Until 2021, I’d never seen The Lord of the Rings trilogy. Until very recently, in 2026, I’d never played Middle-earth: Shadow of Mordor. I’d made attempts, you understand, way back in the wilderness years of the 2010s, but it never really grabbed me. Then, just recently, I gave it another go, and this time, it really did.

From the Backlog

Every gamer has a backlog — and that’s no different for us at TechRadar Gaming. From the Backlog is a series about overdue first-plays, revisiting classics, returning to online experiences, or rediscovering and appreciating established favorites in new ways. Read the full series here.


Cisco Releases Open-Source ‘DNA Test for AI Models’

Cisco has released an open-source tool “to trace the origins of AI models,” reports SC World, “and compare model similarities for greater visibility into the AI supply chain.”


[Cisco’s Model Provenance Kit] is a Python toolkit and command-line interface (CLI) that looks at signals such as metadata and weights to create a “fingerprint” for AI models that can then be compared to other model fingerprints to determine potential shared origins. “Think of Model Provenance Kit as a DNA test for AI models,” Cisco researchers wrote. “[…] Much like a DNA test reveals biological origins, the Model Provenance Kit examines both metadata and the actual learned parameters of a model (like a unique genome that comprises a model), to assess whether models share a common origin and identify signs of modification.”

The tool aims to address gaps in visibility into the AI model supply chain. For example, many organizations utilize open-source models from repositories like HuggingFace, where models could potentially be uploaded with incomplete or deceptive documentation. The Model Provenance Kit provides a way for organizations to verify claims about a model’s origins, such as claims that a model is trained from scratch, when in reality it may be copied from another model, Cisco said. This may put organizations at risk of using models with unknown biases, vulnerabilities or manipulations and make it more difficult to resolve any incidents that arise from these risks.
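The comparison described above can be illustrated with a toy example. The sketch below is hypothetical and uses none of the Model Provenance Kit’s actual API; it only shows the general idea of reducing per-layer weight statistics and metadata to a comparable “fingerprint,” then measuring how close two models’ fingerprints are:

```python
import hashlib
import math

def layer_stats(weights):
    """Summarize one layer's weights as [mean, std, L2 norm]."""
    n = len(weights)
    mean = sum(weights) / n
    std = math.sqrt(sum((w - mean) ** 2 for w in weights) / n)
    l2 = math.sqrt(sum(w * w for w in weights))
    return [mean, std, l2]

def fingerprint(model):
    """Flatten per-layer statistics into one vector, plus a metadata digest.
    `model` maps layer names to flat lists of weights (a stand-in for real tensors)."""
    vector = []
    for name in sorted(model):
        vector.extend(layer_stats(model[name]))
    metadata_digest = hashlib.sha256(",".join(sorted(model)).encode()).hexdigest()
    return vector, metadata_digest

def similarity(fp_a, fp_b):
    """Cosine similarity of two fingerprint vectors; values near 1.0
    suggest a shared origin (e.g., one model fine-tuned from the other)."""
    dot = sum(a * b for a, b in zip(fp_a, fp_b))
    norm = math.sqrt(sum(a * a for a in fp_a)) * math.sqrt(sum(b * b for b in fp_b))
    return dot / norm

# A "base" model and a lightly fine-tuned copy share almost identical statistics.
base = {"layer1": [0.1, -0.2, 0.3], "layer2": [0.5, 0.4, -0.1]}
derived = {"layer1": [0.11, -0.19, 0.31], "layer2": [0.5, 0.41, -0.1]}
v_base, _ = fingerprint(base)
v_derived, _ = fingerprint(derived)
print(f"similarity: {similarity(v_base, v_derived):.3f}")
```

A real tool compares far richer signals (full weight tensors, architecture, tokenizer files, training metadata), but the comparison step reduces to the same question: how close are two models’ “genomes”?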

Thanks to Slashdot reader spatwei for sharing the news.


Nissan Frontier Vs. Toyota Tacoma: Which Truck Depreciates Faster?

Both the Nissan Frontier and Toyota Tacoma start around $33,000 (including destination fee) for the 2026 model year, but they won’t be worth that much for long. In fact, as soon as drivers leave the dealership, their trucks will lose some of their original value; they’ll continue to do so as the months and years tick by. Yet based on the latest data, one of these two trucks is likely to lose its value notably faster than the other.

The estimated difference in depreciation between the two trucks varies between data sources, but the overall picture remains consistent. The Tacoma is predicted to lose less of its value over a five-year period than the Frontier, although both models hold their value well compared to best-selling full-size pickups like the Ford F-150 and Chevrolet Silverado 1500.

According to the latest iSeeCars study, the Frontier will lose an average of 35.5% of its value after five years on the road, while the Tacoma will lose just 19.9% of its value across the same period. That makes the Tacoma the least-depreciating pickup truck on the market according to the study, just ahead of the larger Toyota Tundra. Meanwhile, CarEdge predicts that a new Frontier will lose 37% of its value after five years, while a new Tacoma will only drop 22% in value. KBB isn’t so optimistic about either truck’s depreciation rates, predicting that the Frontier and Tacoma will lose 52.2% and 44.3% of their value respectively over the same period.
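As a quick sanity check on those percentages, here is the arithmetic in dollar terms, assuming the roughly $33,000 starting price quoted above (a simplification; actual resale value depends on trim, mileage, and condition):

```python
MSRP = 33_000  # approximate 2026 starting price for both trucks, in dollars

# Five-year depreciation estimates cited above, per source
estimates = {
    "iSeeCars": {"Frontier": 0.355, "Tacoma": 0.199},
    "CarEdge":  {"Frontier": 0.370, "Tacoma": 0.220},
    "KBB":      {"Frontier": 0.522, "Tacoma": 0.443},
}

def residual_value(price, depreciation_rate):
    """Value remaining after depreciation: price * (1 - rate)."""
    return price * (1 - depreciation_rate)

for source, rates in estimates.items():
    frontier = residual_value(MSRP, rates["Frontier"])
    tacoma = residual_value(MSRP, rates["Tacoma"])
    print(f"{source}: Frontier ~${frontier:,.0f}, Tacoma ~${tacoma:,.0f} after five years")
```

Even under KBB’s most pessimistic figures, the gap between the two trucks works out to a few thousand dollars of retained value on a base model.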

Value retention estimates are only a rough guide, but the Tacoma remains a winner

The difference in predicted values between sources can be attributed to a variety of factors, from differences in calculation methodology to assumptions about the average new price each buyer will be paying. The latter factor is particularly important when comparing the Frontier and Tacoma, since the Tacoma has a far bigger price difference between its base and top trims.

Although both trucks start around the same MSRP for a bare-bones, base-spec model, many buyers will be looking further up the trim range to add as much extra capability and comfort as their budget allows. The costliest trim of the 2026 Frontier is the Long Bed Pro-4X, which starts from $44,115 (including a $1,745 destination fee). That price is dwarfed by the top end of the Tacoma’s trim range, where the TRD Pro starts from $66,195 (also including a $1,745 destination fee).

The currently available study data doesn’t confirm whether buyers who pick a top-spec Tacoma, which retails for roughly double the price of a base variant, can expect to hold onto as much of their original investment as those who buy a base-spec truck. Nonetheless, average value retention across the model as a whole remains very high, and given that the Tacoma was crowned the most dependable midsize truck on the market by JD Power in 2026, that class-leading value retention is unlikely to change anytime soon.


Prime Video follows Netflix and Disney by adding a TikTok-like ‘Clips’ feed in its app

Amazon is adding a short-form video feed to the Prime Video app called “Clips,” the company announced on Friday.

Rolling out first in the U.S., Clips will include… well, clips of shows on Prime Video that are designed to hook a viewer and get them to give the full show a try. From that clip, users can add a title to their watchlist, share it with a friend, or navigate to rent, buy, or access the title through their subscription.

“Clips gives customers a whole new way to browse with short, personalized snippets tailored to their interests,” said Prime Video’s director of Global Application Experiences, Brian Griffin, in a press release. “Whether they have a few minutes to scroll or are looking for something to watch when they have more time, entertainment is just a tap away.”

Amazon first tested this short-form feed during the NBA season, showing highlights that users can scroll through as though they’re watching TikToks.

It’s not a surprise to see Prime Video make this change — Netflix, Peacock, Tubi, Disney, and others have recently rolled out similar experiences, which are designed to promote discovery. Netflix’s short-form feed even shares the Clips name.

Clips is first rolling out to select U.S. customers on iOS, Android, and Fire tablets, but it will be available more broadly this summer. Users can navigate to Clips by scrolling down on the Clips carousel on the Prime Video mobile home page, which will surface a full-screen vertical feed.


Plant Seeds Do Something Incredible When the Sound of Rain Strikes

“Plant seeds can sense the vibrations generated by falling raindrops,” reports ScienceAlert, “and respond by waking from their state of dormancy to welcome the water, new research shows…. to germinate in ‘anticipation’ of the coming deluge.”
The finding, discovered by MIT mechanical engineers Nicholas Makris and Cadine Navarro, offers the first direct evidence that seeds and seedlings can sense and respond to sounds in nature… “The energy of the rain sound is enough to accelerate a seed’s growth,” [explains Makris].

Plants don’t have the same aural equipment we do to actually hear sounds, of course. But the study suggests that seeds respond to the same vibrations that can produce a sound experience in our human ears. Across a series of experiments, the researchers submerged nearly 8,000 rice seeds in shallow tubs of water, at a depth of around 3 centimeters (1 inch), and exposed some of them to falling water drops over periods of six days… A hydrophone recorded the acoustic vibrations produced by the drops, confirming that the experiment mimicked the vibrations produced by actual raindrops falling in nature — such as the driving downpours that can sometimes pelt Massachusetts’ puddles, ponds, and wetlands… In their study, the researchers observed that seeds exposed to the falling drops germinated up to around 37% faster, compared with seeds that did not receive the simulated rainstorm treatment but were housed in otherwise identical conditions.

More information in Scientific American and Scientific Reports.


AI can now decode your Facebook ads into a full personal profile, and it is faster, cheaper, and easier than anyone expected

  • Ad patterns alone reveal identity traits without accessing personal data directly
  • AI profiling from ads is faster, cheaper, and more scalable
  • Short browsing sessions provide enough data for accurate personal inference

The ads that appear on your screen are not chosen at random, and researchers have now proven AI can turn those ads into a detailed picture of your private life.

A team from UNSW Sydney and QUT examined more than 435,000 Facebook ads collected from 891 Australians through a citizen science project.


Nvidia has already committed $40B to equity AI deals this year

Nvidia continues to be a major investor in the AI ecosystem, committing more than $40 billion to equity investments in AI companies — and that’s just in these early months of 2026, according to CNBC.

Much of that total comes from a single bet, a $30 billion investment in OpenAI. But CNBC reports that the chipmaker has also announced seven multi-billion dollar investments in publicly traded companies, most recently deals to invest up to $3.2 billion in glassmaker Corning and up to $2.1 billion in data center operator IREN.

We’ve previously rounded up Nvidia’s investments in AI startups, including 67 venture deals in 2025. And according to FactSet data, it’s already participated in around two dozen investment rounds in private startups in 2026.

The fact that Nvidia has been investing in some of its own customers has led to the recurring criticism that these are circular deals moving money back-and-forth between the same companies.

Wedbush Securities analyst Matthew Bryson said Nvidia’s investments fall “squarely into the circular investment theme,” but suggested that if successful, they could help the company build a “competitive moat.”


Google tweaks Chrome AI privacy wording, insists processing stays on-device

Deletion of a longstanding privacy assurance sparks concerns

Google has changed Chrome’s disclosure language about how its on-device AI works, but that doesn’t mean the company intends to capture on-device AI interactions.

The Chrome menu modification, which isn’t universally rolled out yet even in Chrome 148, was noted this week on Reddit.

The “On-device AI” message in Chrome’s System settings previously read, “To power features like scam detection, Chrome can use AI models that run directly on your device without sending your data to Google servers. When this is off, these features might not work.”

But the message changed recently – it lost the phrase “without sending your data to Google servers.” 

That prompted privacy advocate Alexander Hanff to question whether the edit signaled an architectural change that would see local AI interactions processed by Google servers instead of remaining on-device.

“Why was the sentence ‘without sending your data to Google servers’ removed from the on-device AI description in Chrome’s Settings UI?” Hanff asked. “Was the previous text inaccurate? Has the architecture changed? Was the wording withdrawn on legal advice because Google was unwilling to defend it as a representation?”

Asked about this, a Google spokesperson said, “This doesn’t reflect a change to how we handle on-device AI for Chrome. The data that is passed to the model is processed solely on device.”

It appears this situation deserves a more genteel rendering of Hanlon’s Razor – “Never attribute to malice that which is adequately explained by stupidity.” 

In this case, it’s “Never attribute to malice that which is adequately explained by bad timing.”

Word of the menu modification surfaced as Chrome was rolling out the Prompt API, which is designed to provide web pages with a programmatic way to interact with a browser-resident AI model. The API’s arrival and public discussion of it drew attention to the fact that Chrome has been silently downloading Google’s 4GB Nano model onto users’ devices. The coincidence of these events made it seem that Google was preparing to capture on-device prompts and responses, which would be a significant privacy retreat.

In fact, Chrome has been letting Nano sleep on the couch for early adopters dating back two years when local AI was implemented in Chrome 126 as a preview program. While Google hasn’t yet made model downloading and storage opt-in, the biz did earlier this year implement a way to deactivate and remove the space-hogging model.

“We’ve offered Gemini Nano for Chrome since 2024 as a lightweight, on-device model,” a Google spokesperson explained, pointing to relevant help documentation.

“It powers important security capabilities like scam detection and developer APIs without sending your data to the cloud. While this requires some local space on the desktop to run, the model will automatically uninstall if the device is low on resources. In February, we began rolling out the ability for users to easily turn off and remove the model directly in Chrome settings. Once disabled, the model will no longer download or update.”

The edit to the “On-device AI” message occurred in early April. According to Google, Gemini Nano in Chrome processes all data on-device.

But when websites interact with Gemini Nano in Chrome – via the Prompt API, for example – they can see the inputs and outputs of the model. In such cases, the data handling would fall under the privacy policy of the website interacting with the user’s Nano instance.

Google decided to change its “On-device AI” message to avoid confusion – and perhaps to preclude legal claims alleging policy violations – when the user is interacting with a Google site that calls out to the Nano model on-device, in support of some service it provides. 

In that scenario, the Google site would have access to the prompts it sends and responses it gets from the user’s on-device model. That interaction would happen “without sending your data to Google servers,” at least in the context of a user querying a model running in Google Cloud.

But since the user’s on-device Chrome-resident Nano model would send data to the Google site in response to that site’s API calls, that data transmission might be interpreted as a violation of the local AI commitment language. Hence the edit.

Google’s decision to have Gemini Nano become a Chrome squatter is a novel way of doing things, given that co-opting people’s computing resources has largely been the province of covert crypto-mining scripts. But perhaps after years of offering Gmail and Search at no monetary cost, Google feels entitled to a few gigabytes of Chrome users’ local storage and occasional bursts of their on-device compute. ®


Akamai stock surges 27% on $1.8B Anthropic cloud deal as CDN company pivots to AI infrastructure

TL;DR

Akamai disclosed a 1.8 billion dollar, seven-year cloud deal with Anthropic, its largest contract ever. The stock rose 27 per cent in a day as the CDN company’s AI cloud pivot received its most significant validation.

 

Akamai Technologies disclosed a 1.8 billion dollar, seven-year cloud infrastructure deal with a customer it described only as “a leading frontier model provider.” Bloomberg identified the customer as Anthropic. The stock rose 27 per cent in a single day, the largest rally in the company’s 28-year history. A company that built its business delivering web pages faster than anyone else just became an AI infrastructure provider on the strength of one contract.

The deal is the centrepiece of a quarter in which Akamai’s cloud infrastructure services revenue grew 40 per cent year over year to 95 million dollars, while its legacy content delivery business declined 7 per cent. The company is being repriced by investors not for what it has been for two decades but for what one contract suggests it could become. The question is whether a single customer commitment, however large, constitutes a transformation or a concentration risk.

The deal

The 1.8 billion dollar contract is the largest in Akamai’s history. Revenue from the commitment is expected to begin in the fourth quarter of 2026, contributing approximately 20 to 25 million dollars in that period. The seven-year term provides visibility that Akamai’s legacy CDN business, which operates on shorter cycles and faces persistent price compression, has never offered.

The deal follows a 200 million dollar, four-year cloud services agreement that Akamai signed in February with another unnamed US technology company, under which the customer will use a multi-thousand NVIDIA Blackwell GPU cluster. Together, the two contracts represent two billion dollars in committed cloud revenue from customers that Akamai did not have two years ago.

Anthropic signed to take all of SpaceX’s Colossus 1 data centre capacity, adding more than 300 megawatts and over 220,000 NVIDIA GPUs to its compute footprint. The Akamai deal extends the same logic: Anthropic is buying compute capacity from every available provider as demand for Claude outpaces supply. Dario Amodei, Anthropic’s chief executive, said the company experienced “80x growth” in annualised revenue and usage in the first quarter of 2026 and is “working as quickly as possible” to secure more computing resources.

The pivot

Akamai was founded in 1998 at MIT to solve the problem of delivering web content without congestion. For two decades, it operated the world’s largest content delivery network, caching and distributing web pages, video streams, and software downloads across more than 4,000 locations in 130 countries. The CDN business made Akamai indispensable to the internet. It also became a commodity.

Under chief executive Tom Leighton, who moved from chief scientist to the top role in 2013, the company spent a decade diversifying. The first pivot was into cybersecurity, which now accounts for 55 per cent of revenue at 590 million dollars per quarter, growing 11 per cent year over year. The second pivot, into cloud computing, began with the 900 million dollar acquisition of Linode in 2022 and is now producing the growth that investors had been waiting to see.

Leighton told CNBC that the deal represents validation of the company’s “different approach” and that Akamai has “a very strong pipeline of major enterprise customers, including some that have very large cloud needs.” The company announced at NVIDIA’s GTC event in March that it would deploy thousands of NVIDIA RTX PRO 6000 GPUs and build what it described as the “industry’s first global-scale implementation of NVIDIA’s AI Grid,” pushing AI inference closer to end users to reduce latency and cost.

The customer

Anthropic’s decision to sign a 1.8 billion dollar contract with Akamai reflects the constraint that defines the current AI infrastructure market: demand for compute exceeds the capacity of any single provider. Anthropic already runs Claude across Google tensor processing units, Amazon’s custom chips, and NVIDIA hardware. It has signed with SpaceX for data centre capacity. It is exploring building its own chips.

Anthropic is exploring building its own AI chips as its run-rate revenue surpasses 30 billion dollars, but custom silicon takes years to design and validate. In the interim, Anthropic is buying capacity wherever it can find it. Akamai’s distributed network of edge locations, originally built for CDN traffic, offers something that centralised hyperscale data centres do not: the ability to run inference workloads close to end users, which reduces latency for the real-time applications that enterprises are beginning to deploy.

Nebius acquired Eigen AI for 643 million dollars to optimise inference performance, a bet that the most valuable layer in AI infrastructure is not raw compute but the efficiency with which that compute is used. Akamai’s pitch to Anthropic rests on a similar premise: that distributed inference at the edge is more efficient for certain workloads than centralised processing in a hyperscale facility.

The numbers

Akamai reported first-quarter revenue of 1.074 billion dollars, up 6 per cent year over year. Adjusted earnings per share were 1.61 dollars. Cloud infrastructure services revenue was 95 million dollars, up 40 per cent. Security revenue was 590 million dollars, up 11 per cent. Delivery and other revenue was 389 million dollars, down 7 per cent.

The cloud segment represents less than 9 per cent of total revenue. The 1.8 billion dollar deal, at approximately 257 million dollars per year, would add roughly two-thirds to the segment’s current annual run rate of about 380 million dollars. The contract transforms cloud from a promising but small division into the company’s primary growth engine, at least on a committed-revenue basis.
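The committed-revenue arithmetic is straightforward to check. The sketch below uses straight-line averaging over the contract term, a simplification, since the article notes actual recognition ramps from roughly 20 to 25 million dollars in the fourth quarter:

```python
anthropic_deal = 1.8e9   # seven-year contract value, in dollars
term_years = 7
february_deal = 200e6    # four-year deal signed in February with an unnamed customer

# Straight-line average; real revenue recognition will ramp over the term
per_year = anthropic_deal / term_years
print(f"Anthropic deal, averaged: ${per_year / 1e6:.0f}M per year")

total_committed = anthropic_deal + february_deal
print(f"Total committed cloud revenue: ${total_committed / 1e9:.1f}B")

annual_run_rate = 95e6 * 4  # cloud segment at $95M per quarter
print(f"Deal vs current cloud run rate: {per_year / annual_run_rate:.0%}")
```

The last line shows why the contract matters: the averaged annual contribution is a large fraction of everything the cloud segment currently books in a year.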

For the full year, Akamai is forecasting revenue of 4.45 to 4.55 billion dollars and adjusted earnings of 6.40 to 7.15 dollars per share. The guidance does not yet reflect the full impact of the Anthropic contract, which begins contributing in the fourth quarter. Analysts will spend the next two quarters trying to determine whether the deal is a one-off or the first in a series.

The risk

Anthropic launched a marketplace for Claude-powered enterprise software, and committed 100 million dollars to the Claude Partner Network, signalling that the company’s commercial ambitions extend well beyond model development into the infrastructure and services layers that support enterprise AI deployment. The scale of Anthropic’s expansion explains the compute hunger that produced the Akamai deal.

But a 1.8 billion dollar contract with one customer concentrates risk as much as it concentrates revenue. Anthropic’s annualised revenue has grown from approximately 900 million dollars in late 2025 to a reported 30 billion dollar run rate. Growth at that pace creates demand for infrastructure. It also creates the conditions for a correction if the demand curve flattens. Akamai’s stock gained 27 per cent on the announcement. The company’s ability to sustain that valuation depends on whether Anthropic’s growth trajectory holds for seven years.

Leighton said there is more coming. The company’s history suggests patience. Akamai survived the dot-com crash, navigated the commoditisation of its original business, and spent a decade building a cybersecurity franchise before the market rewarded it. The AI cloud deal is the latest reinvention of a company that has been reinventing itself since 1998. The difference is that this time, the reinvention depends on one customer’s continued appetite for compute, and on the assumption that the demand for AI inference at the edge will grow as fast as the demand for AI itself.



Copyright © 2025