
Tech

Apple can’t avoid $4.1 billion iCloud suit in UK

Apple could be forced to pay $95 to all iCloud users in the UK if a class-action lawsuit against the company is successful.

Apple has failed to reduce the scope of a UK class-action lawsuit, and all iCloud users in the country will be owed $95 if the company loses.

Apple has had its fair share of lawsuits in the United Kingdom, with ongoing cases regarding App Store fees, an alleged price-fixing scheme with retailers, and more.

In November 2024, consumer rights group and publication Which? also sued Apple, alleging that the company had an anti-competitive way of locking users into paying for iCloud storage.

It was argued that Apple breached UK competition law and abused its dominant market position by not letting iOS and iPadOS users choose an alternative cloud provider. Additionally, Apple was accused of charging “rip-off prices” for iCloud storage in the country.

As Apple failed to narrow down the scope of the class-action lawsuit, the case is now moving to trial.

As Which? claims in a social media post, Apple locked “millions of consumers into its iCloud service at rip-off prices.” The group says that around 40 million UK iCloud users may be eligible for compensation equating to $95, assuming its lawsuit is successful.

The lawsuit seeks damages for iPhone and iPad users who paid for iCloud storage. However, with the principle of Forgone Consumer Surplus (FCS), the suit also argues that iCloud users in the UK were priced out of an iCloud subscription, as Apple abused its market position.

Hypothetically, users who balked at Apple’s roughly $12 monthly charge for 2TB of cloud storage would have paid around $11 if that were a “fair” market price. Per the FCS legal theory, those potential customers “lost” $1 because of Apple’s uncompetitive pricing and the lack of an adequate alternative.

Consumer rights group Which? argued that Apple should pay these hypothetical buyers, even though they never really lost anything or paid for anything in the traditional sense.

It says that “around 40 million Apple customers in the UK who have used iCloud services on or after 8 November 2018” could be entitled to compensation.

Apple attempted to narrow the scope of the lawsuit so that it only included UK iCloud users who paid for a subscription. “We reject any suggestion that our iCloud practices are anticompetitive and will vigorously defend against any legal claim otherwise,” said the company in 2024.

However, its attempts were unsuccessful. The UK’s Competition Appeal Tribunal ruled in a two-to-one vote that the FCS legal theory is applicable. In essence, the class-action suit could result in compensation for both paying and non-paying UK iCloud users.

The iCloud restrictions that inspired the lawsuit

iCloud itself has been around since 2011, when it debuted with iOS 5. The service delivered system-wide integration, allowing users to sync their notes, emails, photos, files, and more via the cloud storage platform.

iCloud’s Change Storage Plan screen shows three upgrade tiers: 50GB for $0.99, 200GB for $2.99, and 2TB for $9.99 per month.

Apple has been accused of using uncompetitive pricing for its iCloud storage options in the UK.

At the time, Apple gave its users 5GB of iCloud storage for free. While that may have been generous all those years ago, Which? argued that 5GB wouldn’t meet consumer needs in 2024 and beyond.

There might be some truth to this claim, as almost two-thirds of US Apple users paid for extra iCloud storage in 2024. However, Apple dealt with a lawsuit about its 5GB free iCloud storage option in the United States, and that case was ultimately dismissed the same year.

The UK lawsuit against Apple, however, is still ongoing, and it revolves around more than just the free 5GB storage plan.

Which? also argued that Apple made iCloud the simplest cloud service to use on iOS. Alternative cloud storage options on iOS truly don’t deliver the same degree of integration.

If you wanted to store your photos and videos via Google Drive, for instance, you’d have to install the app yourself. You also wouldn’t be able to use a Google-designed device-tracking or locking tool in place of Find My on iOS, which requires the use of an iCloud account.

If the UK lawsuit against Apple succeeds, all iCloud users in the country will be automatically opted in, meaning they’ll be eligible for payment. This includes UK consumers who have used iCloud on or after November 8, 2018.

The case could also set a precedent in the country. One member of the UK’s Competition Appeal Tribunal argued that we could see a multitude of similar cases centered around hypothetical purchases.

Still, the outcome of the class-action lawsuit remains to be seen, and it could be months before a decision is made.


Report: Boston Celtics investors set to bid on Seahawks

Lumen Field in Seattle, home of the Seahawks. (GeekWire Photo / Kurt Schlosser)

Former Boston Celtics majority owner Wyc Grousbeck and Aditya Mittal, an investor in the NBA team, are preparing a bid to purchase the Seattle Seahawks, according to a report Thursday by Sportico.

The report cites multiple people familiar with the process in saying that Grousbeck and Mittal submitted a letter of interest to the banking team handling the sale process for the Paul G. Allen estate. The Seahawks, Grousbeck and Mittal declined to comment to Sportico.

Mittal is a member of one of India’s richest families and is CEO of ArcelorMittal, a Luxembourg-based steel manufacturing company. He invested $1 billion in the group that purchased the Celtics in 2025 for $6.1 billion.

Grousbeck led the ownership group that bought the Celtics in 2002 for $360 million.

At least one Seahawks fan site was optimistic about the potential bid. 12th Man Rising quoted Celtics expert Ben Handler, who called Grousbeck a popular owner who was “present but also hands off” — much like Paul Allen and, later, his sister Jody Allen.

“If the Seahawks are going to be sold, then Grousbeck and Mittal, who could invest the most amount of money, would appear to be the perfect transition from the Allen family,” the site said.

The estate of Allen, the late Microsoft co-founder, announced that the Seahawks were being put up for sale in February as part of the long process of divesting many of the assets and investments that Allen made during his lifetime. All proceeds are being directed toward philanthropy.

The team, which won its second NFL championship last season, is expected to fetch upwards of $7 billion.

A report last month named Apple CEO Tim Cook and Meta founder Mark Zuckerberg as potential Seahawks suitors, but the two denied any interest.

Anthropic Skill scanners passed every check. The malicious code rode in on a test file.

Picture this scenario: An Anthropic Skill scanner runs a full analysis of a Skill pulled from ClawHub or skills.sh. Its markdown instructions are clean, and no prompt injection is detected. No shell commands are hiding in the SKILL.md. Green across the board.

The scanner never looked at the .test.ts file sitting one directory over. It didn’t need to. Test files aren’t part of the agent execution surface, so no publicly documented scanner inspects them (as of publication of this post). The file runs anyway. Not through the agent but through the test runner, with full access to the filesystem, environment variables, and SSH keys.

Gecko Security researcher Jeevan Jutla detailed this attack flow, demonstrating that when a developer runs npx skills add, the installer copies the entire Skill directory into the repo. If a malicious Skill bundles a *.test.ts file, the Jest and Vitest testing frameworks discover it through recursive glob patterns, treat it as a first-class test, and execute it during npm test or when the IDE auto-runs tests on save. The default configuration of the open-source JavaScript test framework Mocha follows a similar recursive discovery pattern. The payload fires in beforeAll, before any assertions run. Nothing in the test output flags anything unusual. In CI, process.env holds deployment tokens, cloud credentials, and every secret the pipeline can reach.

The attack class is not new; malicious npm postinstall scripts and pytest plugins have exploited trust-on-install for years. What makes the Skill vector worse is that installed Skills land in a directory designed to be committed and shared across the team, propagate to every teammate who clones, and sit outside every scanner’s detection surface.

The agent is never invoked, and the Anthropic Skill scanner reads the right files for the wrong threat model.

Three audits, one blind spot

Gecko’s disclosure didn’t arrive in isolation. It landed on top of two large-scale security audits that had already documented the scope of the problem from the other direction, illustrating what scanners detect rather than what they miss. Both audits did exactly what they’re designed to do: They measured the threat on the execution surface scanners already inspect. Gecko measured what sits outside it.

A SkillScan academic study, published on January 15, analyzed 31,132 unique Anthropic Skills collected from two major marketplaces. Its findings: 26.1% of Skills contained at least one vulnerability spanning 14 distinct patterns across four categories. Data exfiltration showed up in 13.3% of Skills. Privilege escalation appeared in 11.8%. Skills bundling executable scripts were 2.12x more likely to contain vulnerabilities than instruction-only Skills.

Three weeks later, Snyk published ToxicSkills, the first comprehensive security audit of the ClawHub and skills.sh marketplaces. Snyk’s team scanned 3,984 Skills (as of February 5). The results: 13.4% of all Skills contained at least one critical-level security issue. Seventy-six confirmed malicious payloads were identified through a combination of automated scanning and human-in-the-loop review. Eight of those malicious Skills were still publicly available on ClawHub when the research was published.

Then Cisco shipped its AI Agent Security Scanner for IDEs on April 21, integrating its open-source Skill Scanner directly into VS Code, Cursor, and Windsurf. The scanner brings genuine capability to developers’ workflows. It does not inspect bundled test files, because the detection categories Cisco built target the agent interaction layer, not the developer toolchain layer.

The three major Anthropic Skill scanners share a structural blind spot: None inspects bundled test files as an execution surface, even though Gecko Security proved that those files execute with full local permissions through standard test runners.

Snyk Agent Scan, Cisco’s AI Agent Security Scanner, and VirusTotal Code Insight all work. They catch prompt injection, shell commands, and data exfiltration in Skill definitions and agent-referenced scripts. What they do not do is look beyond the agent execution surface to the developer execution surface sitting in the same directory.

How the attack chain works

The mechanics of the attack chain matter because the fix is precise. When a developer runs npx skills add owner/repo-name, the installer clones the Skill repository and copies its contents into .agents/skills// inside the project. Claude Code, Cursor, and other agent IDEs get symlinks into their own Skill directories. The only files excluded are .git, metadata.json, and files prefixed with _. Everything else lands on disk.
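The copy step described above can be sketched in a few lines. This is illustrative only, not the actual npx skills add implementation: the function name is hypothetical, and the exclusion list simply mirrors the behavior the article documents.

```python
import shutil

def install_skill(src: str, dest: str) -> None:
    """Copy a Skill directory into the project the way the installer is
    described to: exclude .git, metadata.json, and '_'-prefixed entries,
    and let everything else -- including *.test.ts files -- land on disk."""
    def exclude(directory: str, names: list[str]) -> list[str]:
        # Names returned here are skipped at this directory level.
        return [n for n in names
                if n in (".git", "metadata.json") or n.startswith("_")]
    shutil.copytree(src, dest, ignore=exclude, dirs_exist_ok=True)
```

Note what the filter never looks at: file contents. A tests/reviewer.test.ts passes through exactly like a README.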

Jest and Vitest both pass dot: true to their glob engines. That means they discover test files inside dot-prefixed directories like .agents/. Mocha’s behavior depends on configuration but follows similar recursive patterns by default. None of them exclude .agents/, .claude/, or .cursor/ from their default discovery paths.

An attacker publishes a Skill with a clean SKILL.md and a tests/reviewer.test.ts file containing a beforeAll block. The block reads process.env, .env files, ~/.ssh/ private keys, and ~/.aws/credentials. It posts everything to an external endpoint. The test cases look real. The exfiltration happens during setup, silently, whether the tests pass or fail.

The vector is not limited to TypeScript. Python repos face the same exposure through conftest.py, which pytest auto-executes during test collection. Excluding .agents from pytest’s collection paths, for example via settings in pyproject.toml, blocks it.
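For Python projects, one concrete way to express that exclusion is a collect_ignore_glob list in a root-level conftest.py, which is a documented pytest hook variable. The directory patterns below are the agent-IDE directories named in this article.

```python
# conftest.py at the repository root -- pytest reads these variables during
# collection and skips matching paths, so a bundled conftest.py or test file
# inside an installed Skill directory is never imported.
collect_ignore_glob = [
    ".agents/*",
    ".claude/*",
    ".cursor/*",
]
```

Because the file sits at the repository root, pytest applies it before collecting anything below it.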

The .agents/skills/ directory is designed to be committed to the repo so teammates can share Skills. GitHub’s default .gitignore templates do not include .agents/. Once the malicious test file enters the repo, every developer who clones and runs tests executes the payload. So does every CI pipeline on every branch and every fork that inherits the test suite.

Scanners are reading the wrong threat surface

CrowdStrike CTO Elia Zaitsev put the structural challenge in operational terms during an exclusive VentureBeat interview at RSAC 2026. “Observing actual kinetic actions is a structured, solvable problem,” Zaitsev said. “Intent is not.”

That distinction cuts directly at the Anthropic Skill scanner gap. No publicly documented scanner operates outside the assumption that the threat lives in the SKILL.md and in scripts the agent is instructed to run. These tools analyze intent: What does the Skill tell the agent to do? Gecko’s finding sits on the kinetic side. The test file executes through the developer’s own toolchain. No agent is involved. No prompt is interpreted. The payload is TypeScript, running with full local permissions through a legitimate test runner. The scanner was solving the wrong problem.

CrowdStrike’s Zaitsev framed the identity dimension: “AI agents and non-human identities will explode across the enterprise, expanding exponentially and dwarfing human identities,” he told VentureBeat. “Each agent will operate as a privileged super-human with OAuth tokens, API keys, and continuous access to previously siloed data sets.”

CrowdStrike’s Charlotte AI and similar enterprise agents operate with exactly these privileges. When those credentials live in environment variables accessible to any process in the repo, a test-file payload does not need agent privileges. It already has developer privileges, which in most CI configurations means deployment tokens and cloud access.

Mike Riemer, SVP of the network security group and field CISO at Ivanti, quantified the exploitation window in a VentureBeat interview. “Threat actors are reverse engineering patches within 72 hours,” Riemer said. “If a customer doesn’t patch within 72 hours of release, they’re open to exploit.”

Most enterprises take weeks. The Anthropic Skill scanner blind spot compounds that window. A developer installs a malicious Skill today. The test file executes immediately. No patch exists because no scanner flagged it.

The Anthropic Skill Audit Grid

VentureBeat has covered the Anthropic Skill supply chain since the ClawHavoc campaign hit ClawHub in January. Every conversation with security leaders lands on the same frustration. Their teams bought a scanner, it reports clean, and they have no framework for asking what it does not check.

VentureBeat has polled dev teams who install Anthropic Skills from ClawHub and skills.sh. The grid below connects the published-audit half (Snyk, SkillScan) with the scanner-bypass half (Gecko). Each row represents a detection surface a security team should verify before approving any Skill scanning tool for Q2 procurement.

Audit question: Inspect SKILL.md and agent-invoked scripts
What scanners do today: Covered by Snyk Agent Scan, Cisco AI Agent Security Scanner, and VirusTotal Code Insight.
The gap: This is the covered surface. Attackers shift payloads to files outside it.
Recommended action: Continue running current scanners. They catch real threats at the instruction layer.

Audit question: Inspect bundled test files (*.test.ts, *.spec.js, conftest.py)
What scanners do today: Not currently inspected as attack surface by any scanner.
The gap: Gecko proved test files execute via Jest/Vitest (documented) and Mocha (config-dependent) with full local permissions. No agent invoked.
Recommended action: Add .agents/ to testPathIgnorePatterns (Jest) or exclude (Vitest). One config line.

Audit question: Flag Skills that bundle test files or build configs
What scanners do today: Not flagged as higher-risk metadata by any scanner.
The gap: Trivial static check. Skills with extra executables are 2.12x more likely to be vulnerable (SkillScan).
Recommended action: Add a CI gate: find .agents/ -name "*.test.*" | grep -q . && exit 1. Block the merge on a match.

Audit question: Restrict test-runner globs to project-owned paths
What scanners do today: Rare. Most CI configs use recursive globs, and Jest/Vitest pass dot: true by default.
The gap: Default globs traverse .agents/, .claude/, and .cursor/ directories, so malicious test files are auto-discovered.
Recommended action: Scope test roots to first-party directories (src/, app/). Deny .agents/, .claude/, .cursor/.

Audit question: Distinguish script-bundling Skills vs. instruction-only
What scanners do today: Partial coverage via static and semantic analysis.
The gap: SkillScan found script-bundling Skills 2.12x more likely to contain vulnerabilities than instruction-only Skills.
Recommended action: Require a structured audit entry: Skill type, execution surfaces, scanner coverage, residual risk.

Audit question: Publish audit methodology with sample size
What scanners do today: Snyk yes (3,984 Skills). SkillScan yes (31,132 Skills).
The gap: Cisco and emerging scanners have not published equivalent ecosystem-scale audits.
Recommended action: Ask vendors for methodology, sample size, and detection rate. No published audit means no independent baseline.

Audit question: Pin Skill sources to immutable commits
What scanners do today: Not enforced by any scanner or marketplace.
The gap: Skill authors can push a clean version for review, then add a malicious test file after approval.
Recommended action: Pin to a specific commit hash and review diffs on every update. The OWASP Agentic Skills Top 10 recommends this.

Three CI hardening steps to add now

Riemer made the broader point in VentureBeat interviews that placing security controls at the perimeter invites every threat to that exact boundary. Anthropic Skill scanners placed the boundary at SKILL.md. Attackers put the payload one directory over. The three changes below move the boundary to where the code actually executes.

These changes take minutes. None requires replacing current tools or waiting for scanner vendors to close the gap.

Add .agents/ to the test runner’s ignore list. In Jest, add /\.agents/ to testPathIgnorePatterns in jest.config.js. In Vitest, add **/.agents/** to the exclude array in vitest.config.ts. One line in one config file prevents the test runner from discovering files inside installed Skill directories. Do it whether or not the team currently uses Anthropic Skills. The directory may appear in a cloned repo without anyone installing the Skill directly.

Audit every Skill install for non-instruction files before merge. Add a CI check that flags any file in .agents/skills/ matching *.test.*, *.spec.*, __tests__/, *.config.*, or conftest.py. These files have no legitimate reason to exist inside a Skill directory. The check is a short shell snippet: if [ -d .agents ] && find .agents/ \( -name "*.test.*" -o -name "*.spec.*" -o -name "conftest.py" -o -name "*.config.*" -o -type d -name "__tests__" \) | grep -q .; then exit 1; fi. If it matches, block the merge. For any test files that do land in a PR, require a reviewer to skim for shell invocations (exec, spawn, child_process), external network calls, and file operations touching secrets or SSH keys.
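That gate can also be written as a small standalone script, which is easier to extend and unit-test than a one-liner. This is a sketch; the function and pattern names are illustrative, not part of any scanner.

```python
import fnmatch
import os
import sys

# File patterns with no legitimate place inside an installed Skill directory.
SUSPECT = ["*.test.*", "*.spec.*", "conftest.py", "*.config.*"]

def suspect_files(skill_root: str = ".agents") -> list[str]:
    """Walk the installed-Skill tree and collect files a test runner or
    build tool might auto-execute. os.walk traverses dot-directories,
    so nothing hides from this check the way it hides from default globs."""
    hits = []
    for dirpath, dirnames, filenames in os.walk(skill_root):
        if "__tests__" in dirnames:
            hits.append(os.path.join(dirpath, "__tests__"))
        for name in filenames:
            if any(fnmatch.fnmatch(name, p) for p in SUSPECT):
                hits.append(os.path.join(dirpath, name))
    return hits

if __name__ == "__main__":
    found = suspect_files()
    for path in found:
        print("suspect:", path)
    sys.exit(1 if found else 0)  # a nonzero exit blocks the merge in CI
```

Exit status drives the gate: any hit fails the job, and a repo with no .agents/ directory passes cleanly.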

Pin Skill sources to specific commits, not latest. The npx skills add command copies whatever the repo contains at the moment of install. A Skill author can push a clean version for scanner review, then add a malicious test file after approval. Pinning to a specific commit hash converts a trust-on-first-use model into a verify-on-every-change model. The OWASP Agentic Skills Top 10 recommends exactly this.
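The ecosystem has no lockfile today, so the sketch below approximates commit pinning with a content digest over the installed directory: record the digest at review time, then fail CI if any file changes afterward. The lockfile format and function names are hypothetical.

```python
import hashlib
import json
import os

def skill_digest(skill_dir: str) -> str:
    """Hash every file in an installed Skill in a stable order, so any
    post-approval change -- like a quietly added test file -- changes
    the digest."""
    h = hashlib.sha256()
    for dirpath, dirnames, filenames in os.walk(skill_dir):
        dirnames.sort()  # make traversal order deterministic
        for name in sorted(filenames):
            path = os.path.join(dirpath, name)
            h.update(os.path.relpath(path, skill_dir).encode())
            with open(path, "rb") as f:
                h.update(f.read())
    return h.hexdigest()

def verify(lockfile: str, skills_root: str = ".agents/skills") -> list[str]:
    """Compare each installed Skill against the digest recorded at review
    time; return the names that no longer match."""
    with open(lockfile) as f:
        pinned = json.load(f)
    return [name for name, digest in pinned.items()
            if skill_digest(os.path.join(skills_root, name)) != digest]
```

Run verify in CI before the test step: a non-empty result means the vendored Skill no longer matches what was reviewed.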

If Skills are already in your repo: Run the find command above against your existing .agents/ directory now. If test files are present, treat them as a potential compromise: Rotate any credentials accessible to CI (deployment tokens, cloud keys, SSH keys), audit CI logs for unexpected outbound network calls during test execution, and review git history to determine when the test files entered the repo and which pipelines have executed them.

Five questions to ask your Anthropic Skill scanner vendor

Security teams are signing contracts for their first dedicated Skill scanning tools. The Gecko bypass means the questions on those sales calls need to change. Do not stop at “Do you detect prompt injection?” Ask:

  • Which files and directories do you actually analyze in a Skill repo?

  • Do you treat test files as potential execution surfaces?

  • Can you flag Skills that bundle tests, CI configs, or build scripts as higher-risk? SkillScan showed script-bundling Skills are 2.12x more likely to be vulnerable.

  • Do you provide integration or guidance for restricting test-runner globs in CI? Cisco deserves credit for open-sourcing its Skill Scanner on GitHub, which lets security teams inspect exactly which detection categories the tool implements. That transparency is the baseline every vendor should meet. If your vendor will not publish detection categories or open-source their scanning logic, you cannot verify what they check and what they skip.

  • Have you published an ecosystem-scale audit with methodology and sample size? Snyk published at 3,984 Skills. SkillScan published at 31,132. Riemer described the disclosure pattern: “They chose not to publish a CVE. They just quietly patched it and moved on with life,” he said. The Anthropic Skills ecosystem is showing early signs of the same pattern: scanners document what they detect without mapping the surfaces they do not reach. The gap between documented coverage and actual execution surface is where the test-file vector lives.

The audit grid matters because the scanner model is incomplete

The Anthropic Skills ecosystem is repeating the early npm supply chain story, except without the decade of accumulated incidents that forced package registries to build security infrastructure. SkillScan’s 31,132-Skill dataset showed a quarter of the ecosystem carrying vulnerabilities. Snyk found 76 confirmed malicious payloads in fewer than 4,000 Skills. Gecko proved the scanner model itself has a structural gap that no vendor has publicly documented closing.

Scanner evaluations consistently test the covered surface. The Anthropic Skill Audit Grid gives security teams the seven audit surfaces to verify before signing. The three CI steps are the fixes to deploy before the next Skill install. Riemer’s Ivanti team watches the patch-to-exploit cycle compress in real time across enterprise environments. The test-file vector compresses it further: No scanner flagged the threat, so no patch window exists.

The scanner is not broken. It is incomplete. The threat model stopped at the agent. The test runner did not.

Ctrl-Alt-Speech: The Human Element In The Room

from the ctrl-alt-speech dept

Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation‘s Ben Whitelaw.

Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.

In this week’s roundup of the latest news in online speech, content moderation and internet regulation, Mike is joined by First Amendment lawyer Ari Cohn. Together they discuss:

Support the podcast by joining our Patreon, with special founder membership available until May 28th.

Filed Under: age verification, ari cohn, artificial intelligence, chatbots, content moderation, free speech, trust and safety

Companies: character.ai

New TCLBanker malware self-spreads over WhatsApp and Outlook

A new trojan named TCLBanker, which targets 59 banking, fintech, and cryptocurrency platforms, uses a trojanized MSI installer for Logitech AI Prompt Builder to infect systems.

Additionally, the malware includes self-spreading worm modules for WhatsApp and Outlook that automatically infect new victims.

The new banking trojan was discovered by Elastic Security Labs, whose researchers believe it’s a major evolution of the older Maverick/Sorvepotel malware family.

While TCLBanker currently appears focused on Brazil, specifically checking timezone, keyboard layout, and locale, LATAM malware has in the past been updated to broaden its targeting scope, so the risk of the threat expanding is real.

TCLBanker capabilities

Elastic warns that TCLBanker is extremely well protected against analysis and debugging, featuring environment-dependent payload decryption routines that fail in sandboxes or analyst environments.

It also runs a persistent watchdog thread that continuously hunts for analysis tools like x64dbg, IDA, dnSpy, Frida, ProcessHacker, Ghidra, de4dot, and others.

Monitoring for targeted processes (Source: Elastic)

The malware is loaded within the context of the legitimate Logitech application via DLL side-loading, so it won’t trigger any alarms from security products protecting the infected host.

The researchers noted that, while the loader is rich in features, none go very far toward being truly advanced, and code artifacts indicate that AI may have been used in its development.

The banking module monitors the browser address bar every second using Windows UI Automation APIs, watching for when the victim opens a website of one of its 59 targeted platforms.

When that happens, it establishes a WebSocket session with the command-and-control (C2), sends victim and system information, and starts remote control operations.

The capabilities given to the operators include:

  • Live screen streaming
  • Screenshot capturing
  • Keylogging
  • Clipboard hijacking
  • Shell command execution
  • Window management
  • File system access
  • Process enumeration
  • Remote mouse/keyboard control

During active sessions, the Task Manager process is killed to prevent disruptions and hide the malicious activity from the victim.

To support data theft, TCLBanker uses a WPF-based overlay system that can push fake credential prompts, PIN keypads, phone-number collection forms, fake “bank support” waiting screens, fake Windows Update screens, and various fake progress screens to victims.

There are also “cutout” overlays that stay on top, allowing only selected portions of real applications to be shown to the victim, and masking other parts.

Generating a fake Windows update overlay (Source: Elastic)

WhatsApp and Outlook worms

An interesting aspect of TCLBanker is its ability to propagate autonomously to contacts linked to the primary victim.

The malware searches Chromium browser profiles for authenticated WhatsApp Web IndexedDB data, and launches a hidden Chromium instance that hijacks the victim’s account.

Hijacking WhatsApp accounts (Source: Elastic)

Then, it harvests contacts, filters for Brazilian numbers, and sends them spam messages from the victim’s account, leading them to TCLBanker distribution platforms.

Another worm module abuses Microsoft Outlook through COM automation, launching the app, harvesting contacts and sender addresses, and sending phishing emails through the victim’s email account.

Harvesting Outlook contacts (Source: Elastic)

Elastic concludes that TCLBanker is a characteristic example of the evolution of LATAM malware, offering lower-tier cybercriminals features that were once only available in highly sophisticated tools.



Mozilla says 271 vulnerabilities found by Mythos have “almost no false positives”

As noted earlier, Mozilla’s characterization of AI-assisted vulnerability discovery as a game changer has been met with massive, vocal skepticism in many quarters. Critics initially scoffed when Mozilla didn’t obtain CVE designations for any of the 271 vulnerabilities. Like many developers, however, Mozilla doesn’t obtain CVE listings for internally discovered security bugs. Instead, they are bundled into a single patch. Normally, Bugzilla reports detailing these “rollups” are hidden for several months after being fixed to protect those who are slow to patch. Now that Mozilla has revealed a dozen of them, the same critics will surely claim they too were cherry-picked and conceal less accurate results.

Of the 271 bugs found using Mythos, 180 were sec-high, Mozilla’s highest designation for internally reported vulnerabilities. These types of vulnerabilities can be exploited through normal user behavior, such as browsing to a web page. (The only higher rating, sec-critical, is reserved for zero-days.) Another 80 were sec-moderate, and 11 were sec-low.

The critics are right to keep pushing back. Hype is a key method for inflating the already lofty valuations of AI companies. Given the extensive praise Mozilla has given to Mythos, it’s easy for even more trusting people to wonder: What’s it getting in return? Far from settling the debate, Thursday’s elaborations are likely to only further stoke the controversy.

To hear Grinstead tell it, however, the details are clear evidence of the usefulness of AI-assisted discovery, and Mozilla’s motivation is simple.

“People are a bit burned from the last year of these slop commits so we felt it was important to show some of our work, open up some of the bugs, and talk about it in a little more detail as a way to hopefully spur some action or continue the conversation,” he said. “There’s no sort of marketing angle here. Our team has completely bought in on this approach. We are trying to get a message out about this technique in general and not any specific model provider, company, or anything like that.”

Canvas login portals hacked in mass ShinyHunters extortion campaign

The ShinyHunters extortion gang has breached education technology giant Instructure again, this time exploiting a vulnerability to deface Canvas login portals for hundreds of colleges and universities.

The defacements, which were visible for roughly 30 minutes before being taken offline, displayed a message from ShinyHunters claiming responsibility for the earlier Instructure breach and threatening to leak stolen data if a ransom is not paid.

The message warns that Instructure and schools have until May 12 to contact them to negotiate a ransom, or students’ data will be leaked.

“ShinyHunters has breached Instructure (again). Instead of contacting us to resolve it they ignored us and did some ‘security patches’,” reads the defacement.

“If any of the schools in the affected list are interested in preventing the release of their data, please consult with a cyber advisory firm and contact us privately at TOX to negotiate a settlement. You have till the end of the day by May 12 2026 before everything is leaked,” continued the message.

Defaced University of Texas San Antonio Canvas login page

BleepingComputer has learned that threat actors defaced the Canvas login portals for approximately 330 educational institutions, replacing the standard login pages with an extortion message. This defacement message also appeared in the Canvas app.

The defacement was allegedly caused by a vulnerability in Instructure’s systems that allowed the threat actor to modify the login portals. Instructure has since taken Canvas offline while it responds to the latest cyberattack.

Last week, Instructure disclosed that it was investigating a cyberattack after threat actors claimed to have stolen 280 million student and staff records tied to 8,809 schools, universities, and education platforms using its Canvas learning management system.

The ShinyHunters gang later told BleepingComputer that the stolen data included user records, private messages, enrollment data, and other information allegedly gathered through Canvas data export features and APIs.


Instructure confirmed that data was stolen during the attack and said it is continuing to investigate the incident.

BleepingComputer has repeatedly contacted Instructure with questions about the attack, including today’s defacements, and whether it plans to notify students and staff about the data breach. However, our emails have so far gone unanswered.

Canvas is one of the most widely used learning management systems in higher education and K-12 environments, helping schools manage coursework, assignments, grading, and communication between students and faculty.

Who is ShinyHunters?

The name ShinyHunters has long been associated with numerous threat actors who have conducted data breaches since 2018.


This year, threat actors using the ShinyHunters name have become among the most prolific groups conducting data theft and extortion attacks against companies worldwide.

Primarily focusing on Salesforce and other cloud SaaS environments, the threat actors are linked to a growing number of breaches involving companies such as Google, Cisco, PornHub, and online dating giant Match Group.

The extortion gang commonly breaches third-party integration companies and uses stolen authentication tokens to access connected SaaS environments and steal customer data.

The threat actors are also known for conducting voice phishing (vishing) attacks targeting Okta, Microsoft, and Google single sign-on (SSO) accounts, impersonating IT support staff to trick employees into entering credentials and multi-factor authentication (MFA) codes on phishing sites.


As BleepingComputer first reported, the ShinyHunters group has also recently adopted device code vishing attacks to obtain Microsoft Entra authentication tokens.

After stealing credentials and authentication codes, the threat actors hijack SSO accounts to breach connected enterprise services such as Salesforce, Microsoft 365, Google Workspace, SAP, Slack, Adobe, Atlassian, Zendesk, and Dropbox.

While members of the ShinyHunters gang are responsible for numerous attacks, they are also known to operate as an extortion-as-a-service group, conducting extortion on behalf of other threat actors in exchange for a share of ransom payments.

There have been numerous arrests linked to the ShinyHunters name, including suspects connected to the Snowflake data-theft attacks, breaches at PowerSchool, and the operation of the Breached v2 hacking forum.


Yet despite these arrests, companies continue to receive extortion emails signed with the message, “We are ShinyHunters.”




Bezos family office representative leaves Slate Auto board months before $1.4B EV startup begins production in Indiana


TL;DR

Jeff Bezos’s family office representative Melinda Lewison has left Slate Auto’s board months before the 1.4 billion dollar EV startup is scheduled to begin production of its affordable electric truck in Warsaw, Indiana. The departure follows a CEO change in March and raises questions about Bezos’s continued involvement in a company that has used his name as its most valuable fundraising asset.

 


The person who connected Jeff Bezos to one of the most ambitious electric vehicle startups in America has left its board. Melinda Lewison, who manages the Bezos family office and was listed as a director on Slate Auto’s corporate filings, has departed the company’s board months before its first truck is scheduled to roll off the production line in Warsaw, Indiana.

The departure follows a pattern of leadership changes at the startup that has raised 1.4 billion dollars on the strength of an idea, a factory, and a name. That name, more than any specification sheet or reservation count, has been the organising principle of Slate Auto’s public identity since TechCrunch revealed Bezos’s involvement in April 2025.

The backing


Slate Auto was incubated inside Re:Build Manufacturing, the industrial conglomerate founded by Jeff Wilke, who served as chief executive of Amazon’s worldwide consumer division before retiring in 2021. Wilke and Miles Arnone, Re:Build’s chief executive, created the company under the working name Re:Car before spinning it out as an independent entity in 2023.

Bezos’s connection to Slate was indirect but unmistakable. Lewison, the head of his family office, appeared on corporate filings as a director. The arrangement gave Slate the most valuable asset a pre-revenue startup can possess: the implicit endorsement of the world’s second-richest person, without requiring Bezos himself to make public statements, attend events, or stake his reputation on production timelines.

Bezos has separately committed to a 10 billion dollar physical AI laboratory called Project Prometheus, and his family office has backed ventures across space, media, agriculture, and nuclear energy. The pattern is consistent: large bets on capital-intensive physical infrastructure, managed at arm’s length through intermediaries. Lewison’s board seat was the mechanism through which that pattern extended to Slate. Her departure removes it.

The changes

The board departure is the second significant leadership change at Slate in three months. In March, the company replaced chief executive Chris Barman with Peter Faricy, a former Amazon Marketplace vice president who had been advising Slate alongside work with McKinsey and Bessemer Venture Partners. Barman moved to the role of president of vehicles.


The timing of both changes is notable. Slate opened preorders in June 2025 and crossed 100,000 refundable reservations within two weeks. The reservation count has since grown to more than 160,000. The company closed a 650 million dollar Series C in April 2026, led by TWG Global, the investment firm run by Los Angeles Dodgers owner Mark Walter and Thomas Tull. Total funding reached 1.4 billion dollars.

A startup that changes its chief executive and loses a high-profile board member in the months before first production is not necessarily in trouble. Leadership transitions at this stage can reflect the shift from fundraising mode to operational execution, and Faricy’s Amazon logistics background is arguably better suited to manufacturing scale-up than Barman’s earlier role. But the optics matter for a company whose brand has been built on the Bezos connection.

Amazon-backed ventures have reached significant milestones recently, including nuclear startup X-Energy’s 1.02 billion dollar IPO in April. But X-Energy is a company where the Amazon relationship deepened over time, culminating in a 500 million dollar investment and a five-gigawatt power purchase commitment. At Slate, the Bezos connection appears to be narrowing rather than expanding.

The truck

The vehicle at the centre of this is deliberately unglamorous. Slate’s electric truck is priced in the mid-20,000 dollar range before federal incentives, which could push the effective cost below 20,000 dollars. It offers a 52.7 kilowatt-hour battery with 150 miles of range in the standard configuration, or an 84.3 kilowatt-hour battery with 240 miles in the extended version. Payload capacity is 1,400 pounds. The design is boxy, utilitarian, and deliberately analog, with physical controls and minimal software.
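As a rough sanity check on those figures, both configurations imply nearly identical efficiency. This is a back-of-envelope calculation from the numbers quoted above, not a published specification:

```python
# Quoted Slate specs: (battery kWh, rated range in miles) per configuration.
standard = (52.7, 150)
extended = (84.3, 240)

# Both packs work out to roughly the same miles-per-kWh figure,
# which suggests the two range ratings are internally consistent.
for kwh, miles in (standard, extended):
    print(f"{kwh} kWh -> {miles} mi = {miles / kwh:.2f} mi/kWh")
```

Both configurations land at about 2.85 mi/kWh, in line with other boxy electric trucks.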


The positioning is anti-Tesla in every dimension. Where Tesla’s Cybertruck is an 80,000 dollar stainless steel statement piece, Slate is pitching a work truck for tradespeople, small business owners, and first-time EV buyers who want something that functions like the cheap trucks Detroit stopped making a decade ago. The company offers more than 100 accessories and a do-it-yourself SUV conversion kit.

The Warsaw, Indiana factory, a former R.R. Donnelley printing facility, has received approximately 400 million dollars in investment and is projected to create more than 2,000 jobs in Kosciusko County. Production is scheduled to begin in late 2026, with preorders having opened in June 2025 alongside official pricing.

The market

A dozen electric vehicle models have been discontinued in the United States as tariffs, tax credit changes, and import costs reshape the market. The result is a landscape that structurally favours domestically manufactured vehicles, particularly those priced below the 55,000 dollar threshold for the federal EV tax credit. Slate’s price point and Indiana factory position it squarely within these incentive boundaries.

The affordable EV truck segment is no longer uncontested. Kia has confirmed an electric pickup and plans to deploy Atlas robots in its Georgia factories, targeting the same domestic-manufacturing advantage that Slate is pursuing. Hyundai, Scout Motors, and several Chinese manufacturers exploring US assembly are all eyeing the segment below 40,000 dollars.


Volkswagen has overtaken Amazon as Rivian’s largest shareholder after a one billion dollar software milestone payment, illustrating how quickly investor relationships shift in the EV startup landscape. Rivian, which went public at a 153 billion dollar valuation in 2021 and saw its market capitalisation collapse by more than 90 per cent, remains the most prominent cautionary tale for EV startups that raise billions before achieving sustainable production economics.

Slate’s 160,000 reservations, collected at 50 dollars each on a fully refundable basis, represent intent rather than commitment. The conversion rate from reservation to binding order will determine whether the Warsaw factory’s capacity is a strength or an albatross.
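A quick sketch of what those figures imply, using the article’s numbers, an assumed mid-20,000 dollar sticker price, and purely hypothetical conversion rates:

```python
# Figures from the article; the price and conversion rates are assumptions.
reservations = 160_000
deposit = 50          # dollars, fully refundable
price = 25_000        # assumed mid-20,000s sticker price

# Deposits on the books today.
print(f"Deposits held: ${reservations * deposit:,}")

# Revenue at stake under hypothetical reservation-to-order conversion rates.
for rate in (0.1, 0.3, 0.5):
    print(f"{rate:.0%} conversion -> ${reservations * rate * price:,.0f} in sales")
```

Even a 10 per cent conversion rate would represent hundreds of millions in orders, which is why the conversion figure matters far more than the headline reservation count.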

The question

Every electric vehicle startup that has reached the production stage has experienced some version of the transition Slate is undergoing. The founders who attract early capital and generate excitement are not always the operators who can run a factory, manage a supply chain, and deliver vehicles on time. Faricy’s appointment suggests Slate’s investors understand this. Lewison’s departure suggests the Bezos orbit has decided its involvement has reached a natural conclusion, or that the risk profile of a pre-production automaker no longer fits the family office’s portfolio strategy.

What Slate has that most failed EV startups did not is a realistic product for a market that exists. The truck is not a hypercar, a flying taxi, or an autonomous robotaxi. It is a cheap, simple vehicle for people who need to move things, built in a state that wants the jobs, priced for a tax credit that currently exists, and manufactured domestically in a trade environment that punishes imports.


The question is whether the company can execute without the halo. Bezos’s name opened doors, attracted co-investors, and generated media coverage that a startup building affordable trucks in Indiana would not otherwise have received. The 1.4 billion dollars is in the bank. The factory is under construction. The reservations are on the books. And the person who represented the most famous investor in the building has walked out the door, six months before the first truck is supposed to drive through it.


Utah Wants Websites To See Through VPNs. That’s Not How VPNs Work.


from the security-theater dept

Utah has a long track record of short-sighted internet policymaking, but the latest example really does take things to a new level of stupid. As of yesterday, Utah’s “Online Age Verification Amendments” bill, Senate Bill 73, has taken effect. It is a piece of legislation that effectively tries to ban VPNs as a desperate attempt to stop people from bypassing the state’s already problematic (and likely unconstitutional) age verification requirements.

Signed by Governor Spencer Cox on March 19, the controversial law establishes that a user is considered to be accessing a website from Utah if they are physically located there, regardless of whether they use a VPN or proxy to mask their IP address. It also prohibits covered websites from sharing instructions on how to use a VPN to bypass age checks.

We’ve been highlighting the various attempts to ban VPNs as short-sighted legislators fail to grasp how necessary they are for basic security. But, now, Utah has touched the stove and is going to find out what it feels like.

While an earlier version of the law would have simply held a provider liable for not doing age verification, the amended version says service providers have to determine whether the person is physically located in Utah — even if they’re using a VPN to appear to be from somewhere else:

An individual is considered to be accessing the website from this state if the individual is actually located in the state, regardless of whether the individual is using a virtual private network, proxy server, or other means to disguise or misrepresent the individual’s geographic location to make it appear that the individual is accessing a website from a location outside this state.

In short, the genius legislators in Utah have decided that websites should do the impossible: either block all access from VPNs or somehow magically “know” that users whose digital footprints suggest they’re connecting from outside Utah are actually lying about their location. That is, in any understanding of the law, an effective ban on VPNs, because the only way to deal with that would be to block off huge segments of IP addresses associated with known VPN servers.
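To see why that amounts to a VPN ban, consider what a blocklist check actually looks like. This sketch uses Python’s standard `ipaddress` module, with RFC 5737 documentation ranges standing in for a hypothetical VPN blocklist; any exit node the list has not caught up with sails straight through:

```python
import ipaddress

# Hypothetical blocklist of "known VPN" ranges. Real lists are always
# incomplete: providers add new exit IPs faster than lists can track them.
known_vpn_ranges = [
    ipaddress.ip_network(cidr)
    for cidr in ("198.51.100.0/24", "203.0.113.0/24")
]

def looks_like_vpn(ip: str) -> bool:
    """Return True only if the address falls in an already-known range."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in known_vpn_ranges)

print(looks_like_vpn("198.51.100.7"))  # True: inside a listed range
print(looks_like_vpn("192.0.2.9"))     # False: an exit the list hasn't caught
```

A site can only block ranges it already knows about, which is precisely the whack-a-mole problem critics describe.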


Even worse, the law says it’s a violation to tell people how to protect themselves with a VPN, which seems like a First Amendment violation on its own (you can’t ban a service from telling users how to use another service):

A commercial entity that operates a website that contains a substantial portion of material harmful to minors may not facilitate or encourage the use of a virtual private network, proxy server, or other means to circumvent age verification requirements, including by providing:

(a) instructions on how to use a virtual private network or proxy server to access the website; or

(b) means for individuals in this state to circumvent geofencing or blocking.

Lia Holland at Fight for the Future pointed out the absurdity of this in a statement, noting that the logic of the bill doesn’t even survive a basic reality check:

This is the sort of slop that if you asked the chatbot whether or not its previous statement was accurate, it would apologize profusely. Why? Because you cannot require a website doing age verification to determine where someone using a reputable VPN is browsing from—this feat is literally impossible by design for even the best hacker.

Such language and lack of logic begs the question—do Utah lawmakers actually understand what a VPN is? Let’s set the record straight: VPNs are an essential tool for online privacy, security, and liberty that everyone from abuse survivors to small businesses use to keep themselves safe. VPNs do this by totally hiding where a person is browsing the Internet from. Thus, when a person is using a VPN, the website they are browsing definitionally can’t tell whether or not they are in Utah.


It’s fairly astounding the level of technological ignorance legislators will openly admit in their efforts to demand technology do the impossible. Insisting that VPNs need to be banned should be a disqualifier from holding public office.

EFF’s Rindala Alajaji notes that what Utah is demanding here is technologically incomprehensible:

Blocking all known VPN and proxy IP addresses is a technical whack-a-mole that likely no company can win. Providers add new IP addresses constantly, and no comprehensive blocklist exists. Complying with Utah’s requirements would require impossible technical feats.

The internet is built to, and will always, route around censorship. If Utah successfully hampers commercial VPN providers, motivated users will transition to non-commercial proxies, private tunnels through cloud services like AWS, or residential proxies that are virtually indistinguishable from standard home traffic. These workarounds will emerge within hours of the law taking effect. Meanwhile, the collateral damage will fall on businesses, journalists, and survivors of abuse who rely on commercial VPNs for essential data security.

Again, Fight for the Future explains the real impact of such a law:


Websites are left with three choices: either try to block everyone around the globe who’s using a VPN (which they can’t actually do), or require age verification for everybody in the world no matter if they’re in Utah, or censor all content that meets Utah’s nebulous “harmful to minors” standard for age verification.

Oh wait, there’s a fourth option: sue Utah.

Ignoring the law or suing the state appear to be the only rational responses.

Age verification already has a long list of well-known problems, many of which put users at risk. An effective ban on VPNs just makes it that much more dangerous for anyone in that state to use the internet. The fact that they’re doing all of this under the pretense of “protecting” children, when the actual impact will put everyone at greater risk, is just the icing on the cake — performative headline-chasing dressed up as policy.

Filed Under: age verification, location, sb 73, security, utah, vpns


How Sakana trained a 7B model to orchestrate GPT-5, Claude Sonnet 4 and Gemini 2.5 Pro


Every LangChain pipeline your team hardcodes starts breaking the moment the query distribution shifts — and it always shifts. That bottleneck is what Sakana AI set out to eliminate.

Researchers at Sakana AI have introduced the “RL Conductor,” a small language model trained via reinforcement learning to automatically orchestrate a diverse pool of worker LLMs. Conductor dynamically analyzes inputs, distributes labor among workers, and coordinates among agents.

This automated coordination achieves state-of-the-art results on difficult reasoning and coding benchmarks, outperforming individual frontier models like GPT-5 and Claude Sonnet 4 as well as expensive human-designed multi-agent pipelines. It achieves this performance at a fraction of the cost and with fewer API calls than competitors. RL Conductor is the backbone of Fugu, Sakana AI’s commercial multi-agent orchestration service.

The limitations of manual agentic frameworks

Large language models have strong latent capabilities. But tapping these capabilities to their fullest is a great challenge. Extracting this level of performance relies heavily on manually designed agentic workflows, which serve as critical components in commercial AI products. 


However, these frameworks fall short because they are inherently rigid and constrained. In comments to VentureBeat, Yujin Tang, co-author of the paper, explained the exact breaking point of current systems: “While using frameworks with hard-coded pipelines like LangChain and Mixture-of-Agents can work well for specific use cases … In production, an inherent bottleneck arises when targeting domains with large user bases with very heterogeneous demands.” 

Tang noted that achieving “real-world generalization in such heterogeneous applications inherently necessitates going beyond human-hardcoded designs.”

Another bottleneck for building robust agentic systems is that no single model is optimal for all tasks. Different models are fine-tuned to specialize in distinct domains. One model might excel at scientific reasoning, while another is superior at code generation, mathematical logic, or high-level planning. 

Because models have these varying characteristics and complementary skills, manually predicting and hard-coding the ideal combination of models for every query is practically impossible. An optimal agentic framework should be able to analyze a problem and delegate subtasks to the most suitable expert in the pool.


Conducting an orchestra of agents

The RL Conductor is designed to overcome the limitations of rigid, human-designed frameworks. As the name implies, it conducts an orchestra of agents by dividing challenging problems, delegating targeted subtasks, and designing communication topologies for a set of worker LLMs. 

Instead of relying on fixed code or static routing, the Conductor orchestrates these models by generating a customized workflow. For each step in the workflow, the model generates a natural language instruction for a specific aspect of the task, assigns an agent to carry it out, and defines an “access list” that dictates which past subtasks and responses from other agents are included in that agent’s context.
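The workflow structure described above can be sketched roughly as follows. The field names and model identifiers are illustrative assumptions, not Sakana’s actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    """One step the Conductor emits: an instruction, an assigned agent,
    and an access list of earlier steps visible to that agent."""
    instruction: str
    agent: str
    access_list: list = field(default_factory=list)

# A hypothetical three-step plan/implement/verify workflow.
workflow = [
    Step("Outline a solution plan.", "gemini-2.5-pro"),
    Step("Implement the plan as code.", "gpt-5", access_list=[0]),
    Step("Verify the code against the plan.", "claude-sonnet-4", access_list=[0, 1]),
]

def context_for(i: int) -> list:
    # An agent sees only the prior steps its access list grants.
    return [workflow[j].instruction for j in workflow[i].access_list]

print(context_for(2))  # the verifier sees both earlier instructions
```

Because every field is natural language, the same structure can encode sequential chains, parallel trees, or recursive loops simply by varying the access lists.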

By defining everything in natural language, the Conductor builds flexible workflows tailored to each input. It can construct simple sequential chains, parallel tree structures, or even recursive loops depending on the problem’s demands. 

RL Conductor (source: Sakana AI)


Importantly, the model learns these strategies not by human design but through reinforcement learning (RL) and reward maximization. During training, the model is given a task, a pool of workers, and a reward signal based on whether its answer and output format are correct.
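A minimal sketch of that reward signal: correctness plus format compliance. The 0.1 format bonus is an assumed value for illustration; the paper’s exact reward shaping is not specified here:

```python
def reward(answer: str, target: str, well_formatted: bool) -> float:
    """Toy reward: 1.0 for a correct answer, plus an assumed 0.1
    bonus when the output follows the required format."""
    score = 1.0 if answer.strip() == target.strip() else 0.0
    if well_formatted:
        score += 0.1
    return score

print(reward("42", "42", True))    # correct and well formatted
print(reward("41", "42", False))   # neither: zero reward
```

Maximizing even a signal this sparse is enough for trial-and-error RL to favor workflows that reliably produce correct, well-formed answers.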

Through a simple trial-and-error RL algorithm, the model organically discovers which combinations of instructions and communication structures yield the highest reward. As a result, it automatically adopts advanced orchestration strategies such as targeted prompt engineering, iterative refinement, and meta-prompt optimization. 

The model learns to dynamically adjust its strategies and leverage the distinct strengths of its worker agents without any human developer having to hard-code the process.

Conductor in action

To test RL Conductor in action, the researchers fine-tuned the 7-billion parameter Qwen2.5-7B using the framework. During training, the Conductor was tasked with designing agentic workflows of up to five steps. It was given access to a worker pool containing seven different models: three closed-source giants (Gemini 2.5 Pro, Claude-Sonnet-4, and GPT-5) and four open-source models (including DeepSeek-R1-Distill-Qwen-32B, Gemma3-27B, and Qwen3-32B).


The team evaluated the Conductor across a variety of highly challenging benchmarks, comparing it against individual frontier models acting alone, self-reflection agents prompted iteratively to improve their own answers, and state-of-the-art multi-agent routing frameworks like MASRouter, Mixture-of-Agents (MoA), RouterDC, and Smoothie. The small 7B Conductor set new benchmarks across the board. It achieved an average score of 77.27% across all tasks, hitting 93.3% on the AIME25 math benchmark, 87.5% on GPQA-Diamond, and 83.93% on LiveCodeBench, according to the researchers.

Remarkably, it achieved these marks while remaining highly efficient. While baseline models like MoA burned through 11,203 tokens per question, the Conductor used an average of just 1,820 tokens, taking an average of only three steps per workflow.
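For scale, the quoted figures work out to roughly a sixfold token saving per question:

```python
# Tokens per question as reported: Mixture-of-Agents vs. the Conductor.
moa_tokens, conductor_tokens = 11_203, 1_820
print(f"MoA uses {moa_tokens / conductor_tokens:.1f}x more tokens per question")
```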

RL Conductor outperforms other baselines on key industry benchmarks (source: arXiv)

A closer look at the experimental details shows exactly why the framework is so effective. The Conductor automatically learned to measure task difficulty. For simple factual recall questions, it often solved the problem in a single step or used a basic two-agent setup. However, for complex coding problems, it built extensive workflows involving up to four agents with dedicated planning, implementation, and verification phases.


The Conductor also learned that frontier models have different strengths. To achieve record scores on coding benchmarks, the Conductor frequently assigned Gemini 2.5 Pro and Claude Sonnet 4 to act as high-level planners, and only brought in GPT-5 at the very end to write the final optimized code. In a particularly clever display of adaptability, the Conductor would sometimes completely abdicate its own role, handing the entire planning process over to Gemini 2.5 Pro and allowing it to dictate the subtasks for the rest of the pool.

Beyond math and coding benchmarks, Sakana AI is already putting the underlying architecture to work in front-office utility. “We have been using our Fugu models based on the Conductor technology internally for various practical enterprise applications: software development, deep research, strategy development, and even visual tasks like slide generation,” Tang said.

Bringing orchestration to the enterprise: Sakana Fugu

While the 7B model described in the research paper was an exploratory blueprint and is not publicly available, Sakana AI has productized the Conductor framework into its flagship commercial AI product, Sakana Fugu. Now in its beta phase, Fugu serves as a multi-agent orchestration system accessible through a standard OpenAI-compatible API.

Tang noted Fugu targets “the large market of industries where AI adoption has yet to bring large productivity gains due to the generalization limitations of current hard-coded pipelines, such as finance and defense.”


For enterprise developers, this allows seamless integration into existing applications without the headache of managing multiple API keys or manually routing tasks across different vendors. Behind the API interface, Fugu automates complex collaboration topologies and role assignments across a pool of models. To support varying business needs, Sakana released two variants: Fugu Mini, built for low-latency operations, and Fugu Ultra, designed for maximum performance on demanding workloads.
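A minimal sketch of what such a request could look like. Because the API is OpenAI-compatible, the body follows the standard chat-completions schema; the model name here is a placeholder inferred from the article, not a documented Sakana identifier:

```python
import json

# Hypothetical OpenAI-compatible chat request to a Fugu-style endpoint.
# "fugu-mini" is a placeholder model name inferred from the article.
request = {
    "model": "fugu-mini",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Plan and implement a CSV de-duplication script."},
    ],
}

# Any OpenAI-compatible client would POST this JSON to the provider's
# chat-completions route; swapping vendors only changes the base URL.
print(request["model"])
print(json.dumps(request["messages"][0]))
```

The orchestration (planning, delegation, verification across worker models) happens entirely behind the endpoint, which is what makes the integration a drop-in replacement.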

Addressing governance concerns around autonomous agents spinning up invisible workflows, Tang pointed out that the interpretability risks are functionally similar to the hidden reasoning traces of current top-tier closed APIs, and the system is managed with established guardrails to minimize hallucinations. 

For enterprise architects weighing when to deploy RL-orchestration versus traditional routing, the decision often comes down to engineering resources. “We believe the absolute sweet spot comes whenever users and their teams feel they are spending a disproportionate amount of time guiding their underlying agents,” Tang said. However, he cautioned that the framework isn’t necessary for everything, noting that “it’s hard to beat the economic proposition of a local model running directly on the user’s machine for simple queries.”

As the diversity of specialized open- and closed-source AI models continues to grow, static hardcoded pipelines will inevitably become obsolete. Looking ahead, this dynamic orchestration will likely extend beyond text and code environments. “There is indeed a large potential to fill this gap with cross-modal Conductor frameworks becoming the foundation for more autonomous, self-coordinating physical AI systems,” Tang said.


Screen Time Concerns Lead to Backlash Against Edtech Vetting Process


Amid increasing concern about screen time in school comes a new culprit: the vetting process for school software.

A growing group of parents and teachers has spent the last few years fighting against cellphones in the classroom, with some extending that to all digital devices. But the school-issued laptops, and the software accompanying them, have been left largely unscathed.

“A lot of the issues with personal devices can move to the district-issued devices,” said Kim Whitman, co-lead for Smartphone Free Childhood US, in a previous interview with EdSurge. Whitman explained that when students do not have cellphones, they can still message with friends on their Chromebooks, or through tools like Google Docs. “There are definitely issues with school-issued devices as well.”

Proposals in three states – Rhode Island, Utah and Vermont – are now tackling these concerns.


Better Vetting Processes

At the start of this year’s legislative session, all three states concurrently proposed reviewing the vetting process of education software.

In most districts, school boards, IT personnel and administrators choose vendors, often relying on the vendors’ own data to prove the products’ safety and efficacy.

“There is nobody right now that is confirming these products are safe, effective and legal,” Whitman said in a previous interview. “It should not fall on the district’s IT director; it would be impossible for them to do it. And the companies should not be tasked with doing it — that would be like nicotine companies vetting their own cigarettes.”

The proposed legislation is looking to change that.


Vermont

Bill: An act relating to educational technology products

Status: Passed by the House March 27; currently before the Senate Committee on Education

This bill proposes to require that providers of educational technology products register annually with the state. It also requires the secretary of state to create a certification standard and review process for these products before schools can use them.

Any provider of an educational technology product — specifically student-facing tools that are used for teaching and learning in schools — must register with the secretary of state, pay a registration fee of $100 and provide its most up-to-date terms and conditions and privacy policy.


The secretary of state would work with the Vermont Agency of Education to review registrations.

Criteria for certification include:

  • The product’s compliance with state curriculum standards
  • Advantages of using it versus non-digital methods
  • Whether it was explicitly designed for educational purposes
  • Design features, including artificial intelligence, geotracking and targeted advertising

The initial bill proposed that any edtech provider operating without state certification could face fines of $50 a day, up to $10,000, but that language was struck from the final bill the House passed.

If passed by the Senate, the bill would go into effect July 1, 2026. By November 2027, the Agency of Education would submit a written report on which state entities should be involved in the edtech certification and any other recommendations for certification.

Utah

Bill: Software in Education

Status: Signed into law on March 18

The bill requires the Utah Board of Education to study the use of software and digital practices in public schools, review best practices and provide guidance for responsible use.

The state also passed a Classroom Technology Amendments bill tackling screen time at every grade level, banning it entirely from kindergarten through third grade except for computer science and assessments. Middle school students' parents must opt in before devices can be taken home, while high school students may bring devices home unless their parents opt out.

“We’re not anti-technology,” Rep. Ariel Defay (R-UT), a sponsor of the Classroom Technology Amendments bill, said in a statement. “We just want to ensure that education technology is used intentionally and actually helps students to learn.”

Rhode Island

Bill: The Safe School Technology Act of 2026

Status: Passed by the House April 14; currently in the Senate Education Committee

This bill, proposed by three Rhode Island representatives who are also mothers, is part of a six-bill package focused on protecting children from social media, artificial intelligence and digital platforms.

If approved, the Safe School Technology Act would take effect this August, banning software providers from activating or accessing any audio or video functions on a device outside of school-related activities. It also bans the use of location data.

The initial bill lists a litany of concerns attributed to the “lack of regulation,” including increased screen time, “marketing commercial products as educational with no accountability,” children being given devices without proof of developmental appropriateness, and parents being excluded from decisions about their child’s digital exposure.

But the main concern, argued by state Representative June Speakman (D-RI), who sponsored the bill, is that a majority of school districts’ technology policies place no limits on tracking student devices. She added that roughly two-thirds of districts also do not limit school-issued devices’ ability to activate audio and video.

“Passing this bill will provide clear, consistent protection across all schools in the state that assures students and their families that their devices cannot be used to invade their privacy or track their activities,” Speakman said in a statement.

“They deserve to feel confident that their privacy is protected when they use technology that is required for school,” she added.

Tech Pushback

Several technology proponents have pushed back.

The Software and Information Industry Association spoke out against the Rhode Island bill in March, saying that if passed, it would make the state one of the most restrictive in the nation.

In an open letter to Joseph McNamara, chair of the Rhode Island House Education Committee, Abigail Wilson, director of state policy at the Software and Information Industry Association, said the bill “proposes an overly restrictive regulatory framework that will severely disrupt classroom instruction, impose massive unfunded administrative burdens on local schools, and deprive Rhode Island students of critical, evidence-based learning tools.”

Keith Krueger, CEO of the nonprofit Consortium for School Networking, told NBC News that the proposed legislation “does keep me up at night.”

“I think some well-intentioned policymakers … are rushing so quickly that they haven’t thought through the implications,” he said.
