
YouTube TV vs. Fubo vs. Hulu Live vs. Sling and More: 100 Top Live TV Streaming Channels Compared

Before you’re tempted to sign up for a cable subscription, it may be time to consider a live TV streaming service and let the cord go. The number of packages available today — for every kind of budget — is on the rise, and live TV streaming services let you avoid those annoying contracts. They also offer a variety of channels, DVR and the ability to stream sports and other content. Plus, most services let you watch on your laptop or phone.

Monthly pricing and regional sports networks can make choosing a live TV streamer a challenge, but the six main services to consider (we’re not including smaller ones) are Fubo, Philo, Sling TV, DirecTV, YouTube TV and Hulu Plus Live TV.

It really boils down to the channels, right? We’ve examined which platforms feature the most top 100 channels in their main lineups to help you determine which one is best for your household.

The Big Chart: Top 100 channels compared (updated Feb. 2026)

The main difference between the services lies in their channel selection. All of them offer different lineups of channels for various prices. 

Below, you’ll find a chart that shows the top 100 channels across all six services. Note that no service carries all 100. There are actually seven columns because Sling TV has two “base” tiers, Orange and Blue. And if you’re wondering, I chose which “top” channels made the cut. Sorry, AXS TV, Discovery Life, GSN and Universal HD.

Fubo and NBCUniversal still have not resolved their carriage dispute, resulting in a gap in Fubo’s channel lineup but also a drop in monthly subscription prices. DirecTV offers signature streaming packages, and its basic plan starts at $90 per month, plus fees (excluding promotional rates). With channel losses and price hikes, some of these services may seem less appealing.

Sling TV has made some changes to its Blue package in 2026. The price is $46 a month if you don’t have any local stations, but it has increased by $4 for those who do. If you have one or two local networks, such as NBC or Fox, the monthly rate is $50. Customers with three or more local stations in their Sling Blue package now pay $55 per month. 

Philo offers a small roster but packages HBO Max, Discovery Plus and AMC Plus access with it at no extra charge. But costs continue to go up and those changes are reflected in the chart below where applicable. 

Some more stuff to know about the chart: 

  • Yes = The channel is available on the cheapest pricing tier. That price is listed next to the service’s name.
  • No = The channel isn’t available at all on that service. 
  • $ = The channel is available for an extra fee, either a la carte or as part of a more expensive package or add-on.
  • Regional sports networks — local channels devoted to showing regular-season games of particular pro baseball, basketball and hockey teams — are not listed. DirecTV’s $130 tier has the most RSNs by far, but a few are available on other services. You can also check out its MySports package for $70 and Xfinity’s sports and news offering.
  • Local ABC, CBS, Fox, NBC, MyNetworkTV and The CW networks are not available in every city. Because the availability of these channels varies, you’ll want to check the service’s website to verify that it carries your local network.
  • Local PBS stations are only currently available on DirecTV, Hulu Live and YouTube TV. Again, you’ll want to check local availability.
  • Sling Blue subscribers in cities like Philadelphia, Chicago, Los Angeles and New York City pay extra for access to channels like NBC and ABC. Check Sling’s site to see which local channels are available in your area.
  • Fubo subscribers get an $11 price decrease on the Pro and Elite plans amid the NBCU carriage dispute, and you may find that the ACC Network and SEC Network are included with the TV package at no extra cost. Check availability for your state.
  • The chart columns are arranged in order of price, so if you can’t see everything you want, try scrolling right.
  • Overwhelmed? An easier-to-understand Google Spreadsheet is here.

Philo vs. Sling TV vs. Fubo vs. YouTube TV vs. DirecTV vs. Hulu: Top 100 channels compared

Channel Philo ($33) Sling Orange ($46) Sling Blue ($46) Fubo ($74) YouTube TV ($83) DirecTV ($90) Hulu with Live TV ($90)
Total channels: 43 24 34 39 78 56 75
ABC No No No Yes Yes Yes Yes
CBS No No No Yes Yes Yes Yes
Fox No No Yes (some markets) Yes Yes Yes Yes
NBC No No Yes (some markets) No (due to carriage dispute) Yes Yes Yes
PBS No No No No Yes Yes Yes
CW No No No Yes Yes Yes (limited) Yes
MyNetworkTV No No No No Yes Yes Yes
A&E Yes Yes Yes No No $ Yes
ACC Network No $ No Yes Yes $ Yes
Accuweather Yes No No Yes No Yes No
AMC Yes Yes Yes No Yes Yes No
Animal Planet Yes No No No Yes Yes Yes
BBC America Yes Yes Yes No Yes Yes No
BBC World News Yes $ $ No Yes $ No
BET Yes Yes Yes Yes Yes Yes Yes
Big Ten Network No No $ Yes Yes $ Yes
Bloomberg TV No Yes Yes Yes No Yes Yes
Boomerang No $ $ No No Yes $
Bravo No No Yes No (due to carriage dispute) Yes Yes Yes
Cartoon Network No No Yes No Yes Yes Yes
CBS Sports Network No No No Yes Yes $ Yes
Cheddar Yes No No Yes Yes Yes Yes
Cinemax No No No No $ $ $
CMT Yes $ $ Yes Yes Yes Yes
CNBC No No $ No (due to carriage dispute) Yes Yes Yes
CNN No Yes Yes No Yes Yes Yes
Comedy Central Yes Yes Yes Yes Yes Yes Yes
Cooking Channel Yes $ $ $ No $ $
Destination America Yes $ $ $ No $ $
Discovery Channel Yes No Yes No Yes Yes Yes
Disney Channel No Yes No Yes Yes Yes Yes
Disney Junior No $ No Yes Yes Yes Yes
Disney XD No $ No Yes Yes Yes Yes
E! No No Yes No (due to carriage dispute) Yes Yes Yes
ESPN No Yes No Yes Yes Yes Yes
ESPN 2 No Yes No Yes Yes Yes Yes
ESPNEWS No $ No $ Yes $ Yes
ESPNU No $ No $ Yes $ Yes
Food Network Yes Yes Yes No Yes Yes Yes
Fox Business No No $ Yes Yes Yes Yes
Fox News No No Yes Yes Yes Yes Yes
FS1 No No Yes Yes Yes Yes Yes
FS2 No No $ Yes Yes $ Yes
Freeform No Yes No Yes Yes Yes Yes
FX No No Yes Yes Yes Yes Yes
FX Movies No No $ $ Yes $ Yes
FXX No No $ Yes Yes Yes Yes
FYI Yes $ $ No No $ Yes
Golf Channel No No $ No (due to carriage dispute) Yes $ Yes
Hallmark Yes $ $ Yes Yes Yes Yes
HBO/Max No No No No $ $ $
HGTV Yes Yes Yes No Yes Yes Yes
History Yes Yes Yes No No $ Yes
HLN No $ Yes No Yes Yes Yes
IFC Yes Yes Yes No Yes Yes No
Investigation Discovery Yes Yes Yes No Yes Yes Yes
Lifetime Yes Yes Yes No No $ Yes
Lifetime Movie Network Yes $ $ No No $ Yes
Magnolia Network Yes $ $ No Yes $ Yes
MeTV Yes Yes Yes Yes No Yes No
MGM+ $ $ $ No $ $ No
MLB Network No $ $ $ No $ Yes
Motor Trend Yes Yes No No Yes Yes Yes
MSNBC No No Yes No (due to carriage dispute) Yes Yes Yes
MTV Yes $ $ Yes Yes Yes Yes
MTV2 Yes $ $ $ Yes Yes $
National Geographic No No Yes Yes Yes No Yes
Nat Geo Wild No No $ $ Yes $ Yes
NBA TV No $ $ $ Yes $ No
NFL Network No No Yes Yes Yes $ Yes
NFL Red Zone No No $ $ $ No $
NHL Network No $ $ $ No $ No
Nickelodeon Yes No No Yes Yes Yes Yes
Nick Jr. Yes Yes Yes Yes Yes $ Yes
Nicktoons Yes $ $ $ Yes $ $
OWN Yes No No No Yes $ Yes
Oxygen No No $ Yes Yes $ Yes
Paramount Network Yes $ $ Yes Yes Yes Yes
Science Yes $ $ $ No $ $
SEC Network No $ No $ Yes $ Yes
Showtime No $ $ $ $ $ $
Smithsonian Yes No No Yes Yes $ Yes
Starz $ $ $ $ $ $ $
Sundance TV Yes $ $ No Yes Yes No
Syfy No No Yes No (due to carriage dispute) Yes Yes Yes
Tastemade Yes $ $ Yes Yes $ No
TBS No Yes Yes No Yes Yes Yes
TCM No $ $ No Yes $ Yes
TeenNick Yes $ $ $ Yes Yes $
Telemundo No No No Yes Yes $ Yes
Tennis Channel No $ $ $ No $ No
TLC Yes No Yes No Yes Yes Yes
TNT No Yes Yes No Yes Yes Yes
Travel Channel Yes Yes Yes No Yes $ Yes
TruTV No $ Yes No Yes $ Yes
TV Land Yes $ $ Yes Yes Yes Yes
USA Network No No Yes No (due to carriage dispute) Yes Yes Yes
VH1 Yes $ $ Yes Yes Yes Yes
Vice Yes Yes Yes No No $ Yes
WE tv Yes $ $ No Yes Yes No

Hulu Plus Live TV, which includes access to Disney Plus, Hulu on-demand and ESPN Plus, is one of the most expensive platforms, now at $90 a month for its base package. Its channel selection isn’t as robust as YouTube TV’s, but Hulu’s significant catalog of on-demand content sets it apart. ABC shows like High Potential and exclusive titles such as Shōgun, The Bear and Only Murders in the Building give it a content advantage.

Live TV subscribers also receive unlimited DVR that includes fast-forwarding and on-demand playback — at no additional cost. It’s a move that has aligned Hulu with its competitors in terms of features but the channel lineup may still be a deciding factor. It’s pricier than YouTube TV, which has more channels, but the access to Disney Plus and ESPN may make it a more appealing choice for you. Read our Hulu Plus Live TV review.

Apart from its current carriage dispute with Disney, YouTube TV has an excellent channel selection, an easy-to-use interface and best-in-class cloud DVR. Typically, the $83-per-month service is one of the best cable TV replacements. It offers a 4K upgrade add-on for an additional price, but the downside is that there isn’t much 4K content to watch at present outside of select channels. If you don’t mind paying a bit more than the Sling TVs of the world, or want to watch live NBA games, YouTube TV offers a high standard of live TV streaming. Read our YouTube TV review.

If you want to save a little money and don’t mind missing out on local channels, Sling TV is the best of the budget services. Its Orange and Blue packages start at $46 per month, and you can combine them for a monthly rate of $61 (more in some regions). The Orange option nets you one stream, while Blue gives you three. It’s not as comprehensive or as easy to navigate as YouTube TV, but with a bit of work, including adding an antenna or an AirTV 2 DVR, it’s an unbeatable value. We’ll also add that the service offers local channels such as ABC and CBS in some regions, where the monthly rate is $50 or $55. Read our Sling TV review.


DirecTV’s base signature streaming package costs more than all the other platforms on this list except Hulu Plus Live TV, and its stiffest competition is still Hulu and YouTube TV. With its channel selection, it’s ideal for sports fans who want to watch local or national games. 

The service does have its benefits, though — for example, it includes the flipper-friendly ability to swipe left and right to change channels. Additionally, it carries some channels that other services don’t, including nearly 250 PBS stations nationwide. The $90 Entertainment package may suit your needs with its 90-plus channels and the inclusion of ESPN Unlimited. But for cord-cutters who want to follow their local NBA or MLB team, DirecTV’s pricier Choice package is a more robust live TV streaming pick because it has access to more regional sports networks than the competition. Nonetheless, you’ll want to make sure your channel is included here and not available on one of our preferred picks before you pony up. Read our DirecTV streaming service review.


There’s a lot to like about Fubo — it offers a wide selection of channels and its sports focus makes it especially attractive to soccer fans or NBA, NHL and MLB fans who live in an area served by one of Fubo’s RSNs. It’s also a great choice for NFL fans because it’s one of three services, alongside YouTube TV and Hulu, that offer NFL Network and optional RedZone. The biggest hole in Fubo’s lineup is the lack of Warner Bros. Discovery networks, including Cartoon Network, CNN, Food Network, HGTV, TBS and TNT — especially as the latter two carry a lot of sports content, in particular MLB, NBA and NHL. Its current dispute with NBCU is causing more channel losses (no ABC, Bravo, etc.). Those missing channels, and the $74 price tag for the base plan, make it less attractive than YouTube TV for most viewers. Read our Fubo review.

Philo’s Core plan is now $33 and includes the AMC Plus bundle and HBO Max at no extra cost, and it’s still a cheap live TV streaming service with a variety of channels. But it lacks sports channels, local stations and big-name news networks — although BBC News and Cheddar are available. Philo offers bread-and-butter cable staples like Comedy Central, Hallmark Channel and Nickelodeon, and specializes in lifestyle and reality programming. It’s also one of the most affordable live services that streams Paramount, home of Yellowstone, and includes a cloud DVR, as well as optional add-ons from Hallmark Plus and Starz. We think most people are better off paying a few bucks more for Sling TV’s superior service, but if Philo has every channel you want, it’s a decent deal. Read our Philo review.


Copyright Industry Continues Its Efforts To Ban VPNs


from the the-internet’s-infrastructure-is-under-attack dept

Last month Walled Culture wrote about an important case at the Court of Justice of the European Union (CJEU), the EU’s top court, that could determine how VPNs can be used in that region. Clarification in this area is particularly important because VPNs are currently under attack in various ways. For example, last year, the Danish government published draft legislation that many believed would make it illegal to use a VPN to access geoblocked streaming content or bypass blocks on illegal websites. In the wake of a firestorm of criticism, Denmark’s Minister of Culture assured people that VPNs would not be banned. However, even though references to VPNs were removed from the text, the provisions are so broadly drafted that VPNs may well be affected anyway. Companies too are taking aim at VPNs. Leading the charge are those in France, which have been targeting VPN providers for over a year now. As TorrentFreak reported last February:

Canal+ and the football league LFP have requested court orders to compel NordVPN, ExpressVPN, ProtonVPN, and others to block access to pirate sites and services. The move follows similar orders obtained last year against DNS resolvers.

The VPN Trust Initiative (VTI) responded with a press release opposing what it called a “Misguided Legal Effort to Extend Website Blocking to VPNs”. It warned:

Such blocking can have sweeping consequences that might put the security and privacy of French citizens at risk.

Targeting VPNs opens the door to a dangerous censorship precedent, risking overreach into broader areas of content.

Indeed: if VPN blocks become an option, there will inevitably be more calls to use them for a wider range of material. The VTI also noted that some of its members are considering whether to abandon the French market completely. That could mean people start using less reliable VPN providers, some of which have dubious records when it comes to security and privacy. The incentive for VPNs to pull out of France is increasing. In August last year the Paris Judicial Court ordered top VPN service providers to block more sports streaming domains, and at the beginning of this year, yet more blocking orders were issued to VPNs operating in France. To its credit, one of the VPN providers affected, ProtonVPN, fought back. As reported here by TorrentFreak, the company tried multiple angles:

The VPN provider raised jurisdictional questions and also requested to see evidence that Canal+ owned all the rights at play. However, these concerns didn’t convince the court.

The same applies to Proton’s net neutrality defense, which argued that Article 333-10 of the French sports code, which is at the basis of all blocking orders, violates EU Open Internet Regulation. This defense was too vague, the court concluded, noting that Proton cited the regulation without specifying which provisions were actually breached.

ProtonVPN also argued that forcing a Swiss company to block sites for the French market is a restriction of cross-border trade in services, and that in any case, the blocking measures were “technically unrealizable, costly, and unnecessarily complex.” Despite this valiant defense, the court was unimpressed. At least ProtonVPN was allowed to contest the French court’s ruling. In a similar case in Spain, no such option was given. According to TorrentFreak:

The court orders were issued inaudita parte, which is Latin for “without hearing the other side.” Citing urgency, the Córdoba court did not give NordVPN and ProtonVPN the opportunity to contest the measures before they were granted.

Without a defense, the court reportedly concluded that both NordVPN and ProtonVPN actively advertise their ability to bypass geo-restrictions, citing match schedules in their marketing materials. The VPNs are therefore seen as active participants in the piracy chain rather than passive conduits, according to local media reports.

That’s pretty shocking, and shows once more how biased in favor of the copyright industry the law has become in some jurisdictions: other parties aren’t even allowed to present a defense. It’s a further reason why a definitive ruling from the CJEU on the right of people to use VPNs how they wish is so important.

Alongside these recent court cases, there is also another imminent attack on the use of VPNs, albeit in a slightly different way. The UK government has announced wide-ranging plans that aim to “keep children safe online”. One of the ideas the government is proposing is “to age restrict or limit children’s VPN use where it undermines safety protections and changing the age of digital consent.” Although this is presented as a child protection measure, the effects will be much wider. The only way to bring in age restrictions for children is if all adult users of VPNs verify their own age. This inevitably leads to the creation of huge new online databases of personal information that are vulnerable to attack. As a side effect, the UK government’s misguided plans will also bolster the growing attempts by the copyright industry to demonize VPNs – a core element of the Internet’s plumbing – as unnecessary tools that are only used to break the law.

Follow me @glynmoody on Mastodon and on Bluesky. Originally published on WalledCulture.

Filed Under: cjeu, copyright, encryption, privacy, security, vpns

Companies: canal plus, nordvpn, proton


Google offers researchers early access to Willow quantum processor



The Early Access Program invites researchers to design and propose quantum experiments that push the boundaries of what current hardware can achieve. It is a selective program – the processor will not be publicly available – and Google is setting firm deadlines for participation. Research teams have until May 15,…

Artemis II Mission Launches Successfully


At 6:36 pm Cape Canaveral time, NASA’s SLS rocket lifted off without incident with the four members of the Artemis II spacecraft aboard. During the first few hours, Orion will complete its journey into Earth orbit and, throughout the first day, will conduct critical navigation and systems tests. Around the third or fourth day, the spacecraft will begin its trajectory toward the moon and cross its gravitational sphere of influence. In total, the mission will last approximately 10 days.

The mission includes the first woman and the first Black person on a crewed mission to lunar orbit. The launch comes 53 years after Apollo 17, the last crewed mission to the Moon.

The Artemis II crew will not land on the moon (that will happen on Artemis IV). Instead, their capsule will fly at altitudes between 6,000 and 9,000 kilometers above the surface of the far side of the moon, circle it, and begin the return journey to Earth. The mission’s main objective is to demonstrate that the space agency has the technological capability to send people to the moon safely and without incident.

Once they achieve this, NASA will begin preparations for new moon landings in the following years, which will aim to establish the first lunar bases in history and, with them, a sustained and sustainable human presence on the moon.

The launch was successful and occurred on schedule. The launch window opened on Wednesday, April 1, at 6:24 pm Eastern Time (EDT) and could have been extended for two hours, if necessary. NASA would have had five more days to attempt another launch.

Mission Details

The astronauts took off on a NASA SLS rocket and are traveling inside the Orion capsule, described as a spacecraft about the size of a large van. They will orbit Earth for at least two days to test the onboard instruments. Then they will align the spacecraft to begin its journey to the moon. By the fifth or sixth day of flight, the capsule is expected to enter the moon’s sphere of influence, where the satellite’s gravity is stronger than Earth’s, and dock with its orbit.

When the spacecraft passes “behind” the moon, the most dangerous phase will begin. The crew will be out of contact with Earth for about 50 minutes because the moon itself blocks the signal. During this crucial moment, the crew must capture images and data from the moon, taking advantage of technology far more advanced than anything available during the Apollo era.

After completing the lunar flyby, the capsule will head home, taking advantage of the Earth-moon gravity field to save fuel. According to NASA estimates, by the 10th day of flight the crew will be close to reaching the planet.


In the wake of Claude Code’s source code leak, 5 actions enterprise security leaders should take now


Every enterprise running AI coding agents has just lost a layer of defense. On March 31, Anthropic accidentally shipped a 59.8 MB source map file inside version 2.1.88 of its @anthropic-ai/claude-code npm package, exposing 512,000 lines of unobfuscated TypeScript across 1,906 files.

The readable source includes the complete permission model, every bash security validator, 44 unreleased feature flags, and references to upcoming models Anthropic has not announced. Security researcher Chaofan Shou broadcast the discovery on X at approximately 4:23 UTC. Within hours, mirror repositories had spread across GitHub.

Anthropic confirmed the exposure was a packaging error caused by human error. No customer data or model weights were involved. But containment has already failed. The Wall Street Journal reported Wednesday morning that Anthropic had filed copyright takedown requests that briefly resulted in the removal of more than 8,000 copies and adaptations from GitHub.

However, an Anthropic spokesperson told VentureBeat that the takedown was intended to be more limited: “We issued a DMCA takedown against one repository hosting leaked Claude Code source code and its forks. The repo named in the notice was part of a fork network connected to our own public Claude Code repo, so the takedown reached more repositories than intended. We retracted the notice for everything except the one repo we named, and GitHub has restored access to the affected forks.”

Programmers have already used other AI tools to rewrite Claude Code’s functionality in other programming languages. Those rewrites are themselves going viral. The timing was worse than the leak alone. Hours before the source map shipped, malicious versions of the axios npm package containing a remote access trojan went live on the same registry. Any team that installed or updated Claude Code via npm between 00:21 and 03:29 UTC on March 31 may have pulled both the exposed source and the unrelated axios malware in the same install window.

A same-day Gartner First Take (subscription required) said the gap between Anthropic’s product capability and operational discipline should force leaders to rethink how they evaluate AI development tool vendors. Claude Code is the most discussed AI coding agent among Gartner’s software engineering clients. This was the second leak in five days. A separate CMS misconfiguration had already exposed nearly 3,000 unpublished internal assets, including draft announcements for an unreleased model called Claude Mythos. Gartner called the cluster of March incidents a systemic signal.

What 512,000 lines reveal about production AI agent architecture

The leaked codebase is not a chat wrapper. It is the agentic harness that wraps Claude’s language model and gives it the ability to use tools, manage files, execute bash commands, and orchestrate multi-agent workflows. The WSJ described the harness as what allows users to control and direct AI models, much like a harness allows a rider to guide a horse. Fortune reported that competitors and legions of startups now have a detailed road map to clone Claude Code’s features without reverse engineering them.

The components break down fast. A 46,000-line query engine handles context management through three-layer compression and orchestrates 40-plus tools, each with self-contained schemas and per-tool granular permission checks. And 2,500 lines of bash security validation run 23 sequential checks on every shell command, covering blocked Zsh builtins, Unicode zero-width space injection, IFS null-byte injection, and a malformed token bypass discovered during a HackerOne review.

Gartner caught a detail most coverage missed. Claude Code is 90% AI-generated, per Anthropic’s own public disclosures. Under current U.S. copyright law, which requires human authorship, the leaked code carries diminished intellectual property protection. The Supreme Court declined to revisit the human authorship standard in March 2026. Every organization shipping AI-generated production code faces this same unresolved IP exposure.

Three attack paths the readable source makes cheaper to exploit

The minified bundle already shipped with every string literal extractable. What the readable source eliminates is the research cost. A technical analysis from Jun Zhou of Straiker, an agentic AI security company, mapped three compositions that are now practical, not theoretical, because the implementation is legible.

Context poisoning via the compaction pipeline. Claude Code manages context pressure through a four-stage cascade. MCP tool results are never microcompacted. Read tool results skip budgeting entirely. The autocompact prompt instructs the model to preserve all user messages that are not tool results. A poisoned instruction in a cloned repository’s CLAUDE.md file can survive compaction, get laundered through summarization, and emerge as what the model treats as a genuine user directive. The model is not jailbroken. It is cooperative and follows what it believes are legitimate instructions.
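Defensively, one practical response to this attack path is to scan cloned repositories for agent-instruction files before an agent ever reads them. The sketch below is a minimal audit under stated assumptions: the file names (CLAUDE.md, .claude/config.json) come from the article, while the suspicious-pattern heuristics are purely illustrative, not a vetted ruleset.

```python
import re
from pathlib import Path

# Illustrative red flags only; a real ruleset would be far more complete.
SUSPECT_PATTERNS = [
    re.compile(r"(?i)ignore (all|previous) instructions"),
    re.compile(r"(?i)curl\s+https?://"),            # fetches remote code
    re.compile(r"(?i)do not (mention|tell|log)"),   # stealth directives
]

def is_instruction_file(path: Path) -> bool:
    """CLAUDE.md anywhere, or config.json inside a .claude directory."""
    return path.name == "CLAUDE.md" or (
        path.name == "config.json" and path.parent.name == ".claude"
    )

def audit_repo(root: str) -> list[tuple[str, int, str]]:
    """Return (path, line_no, line) for every line matching a red flag."""
    findings = []
    for path in Path(root).rglob("*"):
        if not (path.is_file() and is_instruction_file(path)):
            continue
        for no, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if any(p.search(line) for p in SUSPECT_PATTERNS):
                findings.append((str(path), no, line.strip()))
    return findings
```

Flagged lines still need human review; the point is to treat these files as inputs an agent will act on, not inert documentation.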

Sandbox bypass through shell parsing differentials. Three separate parsers handle bash commands, each with different edge-case behavior. The source documents a known gap where one parser treats carriage returns as word separators, while bash does not. Alex Kim’s review found that certain validators return early-allow decisions that short-circuit all subsequent checks. The source contains explicit warnings about the past exploitability of this pattern.
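The parser-differential bug class is easy to reproduce in miniature. The toy below is not Claude Code’s actual validator; it simply shows two reasonable tokenizers disagreeing about a carriage return, which is the exact shape of the gap the source documents.

```python
import shlex

def validator_tokens(cmd: str) -> list[str]:
    # shlex treats \r as whitespace, so a carriage return splits one
    # word into two, and each half can look individually harmless.
    return shlex.split(cmd)

def bash_like_tokens(cmd: str) -> list[str]:
    # Toy stand-in for a shell that splits only on space and tab,
    # leaving the \r embedded inside a single token.
    return [t for t in cmd.replace("\t", " ").split(" ") if t]

cmd = "git\rstatus"
assert validator_tokens(cmd) == ["git", "status"]  # checker sees two words
assert bash_like_tokens(cmd) == ["git\rstatus"]    # shell sees one word
```

Whenever the checker and the executor disagree on token boundaries, an allowlist decision made on one view can be wrong for the other; the durable fix is to validate commands with the same parser that will execute them.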

The composition. Context poisoning instructs a cooperative model to construct bash commands sitting in the gaps of the security validators. The defender’s mental model assumes an adversarial model and a cooperative user. This attack inverts both. The model is cooperative. The context is weaponized. The outputs look like commands a reasonable developer would approve.

Elia Zaitsev, CrowdStrike’s CTO, told VentureBeat in an exclusive interview at RSAC 2026 that the permission problem exposed in the leak reflects a pattern he sees across every enterprise deploying agents. “Don’t give an agent access to everything just because you’re lazy,” Zaitsev said. “Give it access to only what it needs to get the job done.” He warned that open-ended coding agents are particularly dangerous because their power comes from broad access. “People want to give them access to everything. If you’re building an agentic application in an enterprise, you don’t want to do that. You want a very narrow scope.”

Zaitsev framed the core risk in terms that the leaked source validates. “You may trick an agent into doing something bad, but nothing bad has happened until the agent acts on that,” he said. That is precisely what the Straiker analysis describes: context poisoning turns the agent cooperative, and the damage happens when it executes bash commands through the gaps in the validator chain.
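Zaitsev’s narrow-scope advice can be sketched as a deny-by-default gate in front of an agent’s shell tool. The allowlist and operator checks below are assumptions made for illustration; they are not Claude Code’s real permission model.

```python
import shlex

# Hypothetical policy: only these commands (and subcommands) are permitted.
ALLOWED = {"git": {"status", "diff", "log"}, "ls": None}  # None = any args
# Refuse chaining, substitution and redirection outright rather than
# trying to reason about what they compose into.
SHELL_OPERATORS = (";", "&&", "||", "|", ">", ">>", "<", "$(", "`")

def permit(command: str) -> bool:
    if any(op in command for op in SHELL_OPERATORS):
        return False
    try:
        tokens = shlex.split(command)
    except ValueError:
        return False  # unparseable input is denied, never guessed at
    if not tokens or tokens[0] not in ALLOWED:
        return False
    subcommands = ALLOWED[tokens[0]]
    return subcommands is None or (len(tokens) > 1 and tokens[1] in subcommands)

assert permit("git status")
assert not permit("git push origin main")
assert not permit("git status > ~/.bashrc")  # redirect chained with an allowed command
```

The substring check on operators is deliberately crude and over-blocks (a quoted “>” is also denied); for a gate like this, false refusals are the safe failure mode.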

What the leak exposed and what to audit

The table below maps each exposed layer to the attack path it enables and the audit action it requires. Print it. Take it to Monday’s meeting.

Advertisement

Exposed Layer

What the Leak Revealed

Attack Path Enabled

Defender Audit Action

Advertisement

4-stage compaction pipeline

Exact criteria for what survives each stage. MCP tool results are never microcompacted. Read results, skip budgeting.

Context poisoning: malicious instructions in CLAUDE.md survive compaction and get laundered into ‘user directives’.

Audit every CLAUDE.md and .claude/config.json in cloned repos. Treat as executable, not metadata.

Advertisement

Bash security validators (2,500 lines, 23 checks)

Full validator chain, early-allow short circuits, three-parser differentials, blocked pattern lists

Sandbox bypass: CR-as-separator gap between parsers. Early-allow in git validators bypasses all downstream checks.

Restrict broad permission rules (Bash(git:*), Bash(echo:*)). Redirect operators chain with allowed commands to overwrite files.

Advertisement

MCP server interface contract

Exact tool schemas, permission checks, and integration patterns for all 40+ built-in tools

Malicious MCP servers that match the exact interface. Supply chain attacks are indistinguishable from legitimate servers.

Treat MCP servers as untrusted dependencies. Pin versions. Monitor for changes. Vet before enabling.

Advertisement

44 feature flags (KAIROS, ULTRAPLAN, coordinator mode)

Unreleased autonomous agent mode, 30-min remote planning, multi-agent orchestration, background memory consolidation

Competitors accelerate the development of comparable features. Future attack surface previewed before defenses ship.

Monitor for feature flag activation in production. Inventory where agent permissions expand with each release.

Advertisement

Anti-distillation and client attestation

Fake tool injection logic, Zig-level hash attestation (cch=00000), GrowthBook feature flag gating

Workarounds documented. MITM proxy strips anti-distillation fields. Env var disables experimental betas.

Do not rely on vendor DRM for API security. Implement your own API key rotation and usage monitoring.

Advertisement

Undercover mode (undercover.ts)

What leaked: a 90-line module that strips AI attribution from commits. Forcing it on is possible, forcing it off is not, and the module is dead-code-eliminated in external builds.

Risk: AI-authored code enters repositories with no attribution, leaving provenance and audit-trail gaps for regulated industries.

Mitigation: implement commit provenance verification, and require AI disclosure policies for development teams using any coding agent.

AI-assisted code is already leaking secrets at double the rate

GitGuardian’s State of Secrets Sprawl 2026 report, published March 17, found that Claude Code-assisted commits leaked secrets at a 3.2% rate versus the 1.5% baseline across all public GitHub commits. AI service credential leaks surged 81% year-over-year to 1,275,105 detected exposures. And 24,008 unique secrets were found in MCP configuration files on public GitHub, with 2,117 confirmed as live, valid credentials. GitGuardian noted the elevated rate reflects human workflow failures amplified by AI speed, not a simple tool defect.

The operational pattern Gartner is tracking

Feature velocity compounded the exposure. Anthropic shipped over a dozen Claude Code releases in March, introducing autonomous permission delegation, remote code execution from mobile devices, and AI-scheduled background tasks. Each capability widened the operational surface. The same month that introduced them produced the leak that exposed their implementation.

Gartner’s recommendation was specific. Require AI coding agent vendors to demonstrate the same operational maturity expected of other critical development infrastructure: published SLAs, public uptime history, and documented incident response policies. Architect provider-independent integration boundaries that would let you change vendors within 30 days. Anthropic has published one postmortem across more than a dozen March incidents. Third-party monitors detected outages 15 to 30 minutes before Anthropic’s own status page acknowledged them.

The company riding this product to a $380 billion valuation and a possible public offering this year, as the WSJ reported, now faces a containment battle that 8,000 DMCA takedowns have not won.

Merritt Baer, Chief Security Officer at Enkrypt AI, an enterprise AI guardrails company, and a former AWS security leader, told VentureBeat that the IP exposure Gartner flagged extends into territory most teams have not mapped. “The questions many teams aren’t asking yet are about derived IP,” Baer said. “Can model providers retain embeddings or reasoning traces, and are those artifacts considered your intellectual property?” With 90% of Claude Code’s source AI-generated and now public, that question is no longer theoretical for any enterprise shipping AI-written production code.

Zaitsev argued that the identity model itself needs rethinking. “It doesn’t make sense that an agent acting on your behalf would have more privileges than you do,” he told VentureBeat. “You may have 20 agents working on your behalf, but they’re all tied to your privileges and capabilities. We’re not creating 20 new accounts and 20 new services that we need to keep track of.” The leaked source shows Claude Code’s permission system is per-tool and granular. The question is whether enterprises are enforcing the same discipline on their side.

Five actions for security leaders this week

1. Audit CLAUDE.md and .claude/config.json in every cloned repository. Context poisoning through these files is a documented attack path with a readable implementation guide. Check Point Research found that developers inherently trust project configuration files and rarely apply the same scrutiny as application code during reviews.
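The audit in point 1 is easy to automate across a fleet of clones. Below is a minimal sketch in Python; CLAUDE.md and .claude/config.json are the filenames named in the research, and anything beyond them is for you to add for your own agents.

```python
from pathlib import Path

def find_context_files(repo_root):
    """List agent context/config files in a cloned repo for manual review.

    CLAUDE.md and .claude/config.json are the paths named in the leak
    research; extend the checks for whatever context files your agents load.
    """
    hits = []
    for path in Path(repo_root).rglob("*"):
        if path.name == "CLAUDE.md":
            hits.append(path)
        elif path.name == "config.json" and ".claude" in path.parts:
            hits.append(path)
    return sorted(hits)
```

Run it over every freshly cloned repository and review each hit the way you would review a pull request, since these files are effectively executable instructions for the agent.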

2. Treat MCP servers as untrusted dependencies. Pin versions, vet before enabling, monitor for changes. The leaked source reveals the exact interface contract.
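One lightweight way to implement the "pin and monitor" half of point 2 is to fingerprint each server's configuration at vetting time and fail closed on any drift. This is a generic sketch, not an MCP API; the config paths and pin store are hypothetical.

```python
import hashlib

def config_fingerprint(path):
    """SHA-256 of an MCP server config file, used as a change detector."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def drifted_configs(pins):
    """Return config paths whose fingerprint no longer matches the pin.

    `pins` maps a config path to the hex digest recorded when the server
    was vetted; any mismatch means the server changed and needs re-review.
    """
    return [p for p, expected in pins.items() if config_fingerprint(p) != expected]
```

A CI job that runs the drift check on every build gives you the "monitor for changes" piece without trusting the server itself.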

3. Restrict broad bash permission rules and deploy pre-commit secret scanning. A team generating 100 commits per week at the 3.2% leak rate is statistically exposing three credentials. MCP configuration files are the newest surface that most teams are not scanning.
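Both halves of point 3, the exposure arithmetic and a pre-commit scan, fit in a few lines. The regex patterns below are illustrative stand-ins; a real deployment should use a purpose-built secret scanner with a full pattern library.

```python
import re

# Illustrative credential shapes only; real scanners ship hundreds of these.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key ID shape
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][^'\"]{16,}"),  # generic API key
]

def leaky_lines(staged_diff):
    """Return added diff lines that look like credentials (block the commit)."""
    return [
        line for line in staged_diff.splitlines()
        if line.startswith("+") and any(p.search(line) for p in SECRET_PATTERNS)
    ]

def expected_weekly_leaks(commits_per_week, leak_rate=0.032):
    """At the reported 3.2% rate, 100 commits a week exposes about 3 secrets."""
    return commits_per_week * leak_rate
```

Wire `leaky_lines` into a pre-commit hook over `git diff --cached`, and point it at MCP configuration files as well as source, since those are the surface most teams are not scanning.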

4. Require SLAs, uptime history, and incident response documentation from your AI coding agent vendor. Architect provider-independent integration boundaries. Gartner’s guidance: 30-day vendor switch capability.

5. Implement commit provenance verification for AI-assisted code. The leaked Undercover Mode module strips AI attribution from commits with no force-off option. Regulated industries need disclosure policies that account for this.
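Point 5 can start as a simple trailer check over recent history, fed with the output of `git log --pretty=format:%B`. The trailer strings below are assumptions based on common AI-attribution conventions, and, as the article notes, a stripped trailer leaves no trace, so absence is a signal for review under your disclosure policy, not proof a human wrote the change.

```python
# Illustrative trailer conventions some AI tools append to commit messages.
# Undercover Mode strips them, so a missing trailer is a review signal,
# not evidence of human authorship.
AI_TRAILERS = ("Co-Authored-By: Claude", "Generated-by:", "AI-Assisted:")

def split_by_attribution(commit_messages):
    """Partition commit messages into (attributed, unattributed) lists."""
    attributed, unattributed = [], []
    for msg in commit_messages:
        bucket = attributed if any(t in msg for t in AI_TRAILERS) else unattributed
        bucket.append(msg)
    return attributed, unattributed
```

Pair the check with a policy requiring developers to declare AI assistance, so the unattributed bucket is something you can act on rather than just measure.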

Source map exposure is a well-documented failure class caught by standard commercial security tooling, Gartner noted. Apple and identity verification provider Persona suffered the same failure in the past year. The mechanism was not novel. The target was. Claude Code alone generates an estimated $2.5 billion in annualized revenue for a company now valued at $380 billion. Its full architectural blueprint is circulating on mirrors that have promised never to come down.


Samsung may raise its priciest phone prices in South Korea

Samsung could be about to make its most expensive phones even pricier, at least in its home market.

A new report suggests the company is planning price increases for select high-end Galaxy models in South Korea. The changes could kick in as early as today, April 1.

The devices in question include the Samsung Galaxy Z Fold 7, Samsung Galaxy Z Flip 7, and Samsung Galaxy S25 Edge — all firmly at the top end of Samsung’s lineup. But the increases won’t hit every version. Instead, Samsung appears to be targeting only higher storage tiers. The base 256GB models will remain unchanged.

According to the report, 512GB variants could rise by around 100,000 won (roughly $65), while the 1TB version of the Fold 7 may jump by nearly 200,000 won (~$130). It’s not a dramatic spike on paper, but it’s still a noticeable bump for devices that are already pushing premium price territory.


Keeping entry-level models at the same price feels deliberate. On one hand, it softens the blow for buyers who just want the basics. On the other, it conveniently preserves those eye-catching “starting from” prices, even if most upgrades now cost more.

The bigger question is whether this stays local. For now, the changes are expected to apply only in South Korea. However, there’s a growing pattern here. Samsung has already adjusted pricing on some mid-range devices recently, and with ongoing component pressures, particularly around AI-driven memory and storage demand, wider increases wouldn’t be a huge surprise.

If the hikes do expand globally, pricing likely won’t translate directly. Currency differences and regional strategies usually mean adjustments vary market to market, but the direction of travel is pretty clear.

For now, nothing is official, but if you’ve been eyeing Samsung’s top-tier phones, it might be worth keeping an eye on prices. They don’t look like they’re heading down anytime soon.

4 Cool Bluetooth Gadgets You Can Connect To Your Echo Dot

We may receive a commission on purchases made from links.

Smart screens and speakers have found a permanent place in many of our households, since they help with playing music, controlling smart plugs, setting reminders, and much more. The use cases are plenty, especially when paired with other smart home gadgets that solve everyday problems. Speaking of pairing your smart speaker with external devices, the Amazon Echo Dot — one of Amazon’s most affordable and popular smart speakers — supports Bluetooth connections, which means it can be paired with some cool Bluetooth gadgets for added functionality. You can, for example, pair multiple Echo speakers for a stereo setup, or connect external speakers for better sound output during a party. Beyond audio, though, there are several other ways to take advantage of the Echo Dot’s Bluetooth module.

A few smart home gadgets, like smart light bulbs, often need a hub to function. However, if the bulb has Bluetooth support, it can be connected to and controlled by an Echo Dot without an external hub, which makes it a handy option. Similarly, there are other such gadgets that can take advantage of the Bluetooth Low Energy (BLE) protocol of the Echo Dot to establish a connection. Here are some of the best and most useful gadgets that we’ve found that can enhance your life and home. All you have to do is put your Echo Dot in pairing mode and connect the required device with the help of the Alexa app on your smartphone.

Bluetooth speakers

While there are several handy uses for an Amazon Echo Dot speaker, arguably the most popular one is playing music. This is primarily because it’s so quick and simple to ask Alexa to play your favorite album or track without having to manually look for it on your phone. Convenience aside, though, Echo devices are capable speakers in their own right, which means the sound output is loud and clear. However, the small form factor means the bass can be lacking, and the sound may not fill a large room. If you’re having a party with your friends, you might miss out on that extra oomph. This is where the Echo Dot’s ability to connect to an external speaker comes into play.

If you have a Bluetooth speaker lying around at home, all you have to do is put it in pairing mode, head to the Alexa app, and connect the speaker to your Echo Dot. This works with pretty much any Bluetooth speaker, from budget options to large home theater setups. As long as the speaker is connected to the Echo Dot, all of its responses — not just the songs — will play through the speaker itself. That said, the Echo device will still use its onboard microphones to detect and register your voice queries. This is one of the simplest yet most popular uses, and one we’re sure a lot of you will appreciate. In case you don’t already have a speaker, the Anker Soundcore 2, which retails for around $30, is a user favorite with a 4.5-star rating from close to 150,000 reviews.

Smart bulbs

The issue with a lot of good smart lighting solutions is that the installation process can be a headache — especially if they need a hub. Bluetooth smart bulbs are an easy fix, offering a plug-and-play solution. Modern Bluetooth bulbs from brands like Philips Hue or GE connect directly to your Echo Dot right out of the box instead of requiring a central hub. This makes them an easy entry point into smart home automation. The biggest advantage of a system like this is that you can use bulbs and other smart home gadgets from multiple brands without worrying about compatibility.

Having a brand-agnostic solution helps avoid multiple issues. Once you invest in a Philips hub, for example, you may not be able to use bulbs from other brands with that hub. This means you’re locked into the Philips ecosystem unless you splurge on another hub from a different brand. Wi-Fi bulbs can already tackle this problem, but they can sometimes bog down your home network. Bluetooth bulbs, on the other hand, communicate locally with your Echo Dot. The feature set remains the same: you can set up daily routines so your lights slowly turn warmer in the evening, or shut down the entire house with a single phrase as you walk out the door. Additionally, you can connect as many bulbs via Bluetooth as you like and operate them all individually. The Philips Hue 60W smart LED bulb, with its 4.7-star rating across more than 16,000 reviews, is a good starting point for under $50.

Smart switches

If you’re looking for creative use cases for your old Amazon Echo, smart switches are a good investment. The Switchbot smart switch button is an excellent retrofit for old appliances and gadgets that lack internet connectivity; stick it beneath a manual switch and suddenly you can control that device with your smartphone or Amazon Alexa device. Many devices and appliances launched in recent years have built-in smart functionality for turning them on and off remotely. However, an old coffee maker or air purifier may not, and that’s exactly where a device like the Switchbot smart switch comes in handy. Once you connect it via Bluetooth to your Echo Dot, you can turn an appliance on or off with just your voice.

This works well with push-button switches, but you can’t use a single Switchbot to flip a larger, more traditional switch — the kind that controls the lights in your house — both on and off. If you want both functions, you’ll have to purchase two Switchbots and install them on either side of the switch. While the product description mentions that you need a hub to use the device with Alexa, that only applies to older Echo devices that can’t act as a Bluetooth hub. With over 28,000 reviews and a 4.1-star rating, users clearly love the Switchbot smart button for making older gadgets easier to use. There’s something to be said for having a fresh cup of coffee waiting for you right after you step out of the shower in the morning, isn’t there?

Bluetooth turntables

For those who have a large collection of vinyl records from back in the day, a Bluetooth turntable is pretty much a must-have. If you have one lying around, you would be glad to know that you can easily connect it to your Echo Dot. Since a good number of Bluetooth turntables have built-in wireless transmitters, you can wirelessly use your Echo Dot as a speaker instead of relying on your turntable’s internal one. Thanks to this setup, you can place your turntable at a distance from the Echo Dot without running audio wires all through the room.

This is a pretty neat trick; while the Echo Dot is usually the brain sending audio out to other speakers, in this scenario, it acts as the wireless receiver instead. The Audio-Technica wireless turntable is an excellent option in case you don’t have one already and are looking to buy one. It is pricey at around $230, but it’s got a solid 4.6-star rating across more than 8,700 reviews. Apart from a turntable, pretty much any other audio device that has a built-in Bluetooth transmitter can be used with an Echo Dot as well, so don’t feel like you’re limited to just spinning records remotely.

How we picked these gadgets

The primary criterion for a gadget to make this list is that it connects to an Echo Dot speaker purely via Bluetooth, not Wi-Fi. It’s worth noting that not every gadget of a given type works over Bluetooth; not all smart bulbs, for instance, support Bluetooth Low Energy connectivity. That’s why we’ve suggested specific products that support the technology at play here; the ones we recommend all have a rating of at least 4.1 stars across thousands of reviews. Additionally, all Echo devices — including the Echo Dot — need to be connected to a Wi-Fi network for their initial setup before they can be paired with Bluetooth devices. Therefore, all the gadgets here are recommended with the assumption that you have access to a Wi-Fi network and that your Echo device is set up.


The EU Killed Voluntary CSAM Scanning. West Virginia Is Trying To Compel It. Both Cause Problems.

from the tricky-problems dept

Last week, the European Parliament voted to let a temporary exemption lapse that had allowed tech companies to scan their services for child sexual abuse material (CSAM) without running afoul of strict EU privacy regulations. Meanwhile, here in the US, West Virginia’s Attorney General continues to press forward with a lawsuit designed to force Apple to scan iCloud for CSAM, apparently oblivious to the fact that succeeding would hand defense attorneys the best gift they’ve ever received.

Two different jurisdictions. Two diametrically opposed approaches, both claiming to protect children, and both making it harder to actually do so.

I’ll be generous and assume people pushing both of these views genuinely think they’re doing what’s best for children. This is a genuinely complex topic with real, painful tradeoffs, and reasonable people can weigh them differently. What’s frustrating is watching policymakers on both sides of the Atlantic charge forward with approaches that seem driven more by vibes than by any serious engagement with how the current system actually works — or why it was built the way it was.

The European Parliament just voted against extending a temporary regulation that had exempted tech platforms from GDPR-style privacy rules when they voluntarily scanned for CSAM. This exemption had been in place (and repeatedly extended) for years while Parliament tried to negotiate a permanent framework. Those negotiations have been going on since November 2023 without resolution, and on Thursday MEPs decided they were done extending the stopgap.

To be clear, Parliament didn’t pass a law banning CSAM scanning. Companies can still technically scan if they want to. But without the exemption, they’re now exposed to massive privacy liability under EU law for doing so. Scanning private messages and stored content to look for CSAM is, after all, mass surveillance — and European privacy law treats mass surveillance seriously (which, in most cases, it should!). So the practical effect is a chilling one: companies that were voluntarily scanning now face significant legal risk if they continue.

The digital rights organization EDRi framed the issue in stark terms:

“This is actually just enabling big tech companies to scan all of our private messages, our most intimate details, all our private chats so it constitutes a really, really serious interference with our right to privacy. It’s not targeted against people that are suspected of child abuse — It’s just targeting everyone, potentially all of the time.”

And that argument is compelling. Hash-matching systems that compare uploaded images against databases of known CSAM are more targeted than, say, keyword scanning of every message, but they still fundamentally involve examining every unencrypted piece of content that passes through the system. When EDRi says it targets “everyone, potentially all of the time,” that’s an accurate description of how the technology works.

But… the technology also works to find and catch CSAM. Europol’s executive director, Catherine De Bolle, pointed to concrete numbers:

Last year alone, Europol processed around 1.1 million of so-called CyberTips, originating from the National Center for Missing & Exploited Children (NCMEC), of relevance to 24 European countries. CyberTips contain multiple entities (files, videos, photos etc.) supporting criminal investigation efforts into child sexual abuse online.

If the current legal basis for voluntary detection by online platforms were to be removed, this is expected to result in a serious reduction of CyberTip referrals. This would undermine the capability to detect relevant investigative leads on CSAM, which in turn will severely impair the EU’s security interests of identifying victims and safeguarding children.

The companies that have been doing this scanning — Google, Microsoft, Meta, Snapchat, TikTok — released a joint statement saying they are “deeply concerned” and warning that the lapse will leave “children across Europe and around the world with fewer protections than they had before.”

So the EU’s privacy advocates aren’t wrong about the surveillance problem. Europol isn’t wrong about the child safety consequences. Both things are true — which is what makes this genuinely tricky rather than a case of one side being obviously right.

Now flip to the United States, where the problem is precisely inverted.

In the US, the existing system has been carefully constructed around a single, critical principle: companies voluntarily choose to scan for CSAM, and when they find it, they’re legally required to report it to NCMEC. The word “voluntarily” is doing enormous load-bearing work in that sentence — and most of the people currently shouting about CSAM don’t seem to know it. As Stanford’s Riana Pfefferkorn explained in detail on Techdirt when a private class action lawsuit against Apple tried to compel CSAM scanning:

While the Fourth Amendment applies only to the government and not to private actors, the government can’t use a private actor to carry out a search it couldn’t constitutionally do itself. If the government compels or pressures a private actor to search, or the private actor searches primarily to serve the government’s interests rather than its own, then the private actor counts as a government agent for purposes of the search, which must then abide by the Fourth Amendment, otherwise the remedy is exclusion.

If the government – legislative, executive, or judiciary – forces a cloud storage provider to scan users’ files for CSAM, that makes the provider a government agent, meaning the scans require a warrant, which a cloud services company has no power to get, making those scans unconstitutional searches. Any CSAM they find (plus any other downstream evidence stemming from the initial unlawful scan) will probably get excluded, but it’s hard to convict people for CSAM without using the CSAM as evidence, making acquittals likelier. Which defeats the purpose of compelling the scans in the first place.

In the US, if the government forces Apple to scan, that makes Apple a government agent. Government agents need warrants. Apple can’t get warrants. So the scans are unconstitutional. So the evidence gets thrown out. So the predators walk free. All because someone thought “just make them scan!” was a simple solution to a complex problem.

Congress apparently understood this when it wrote the federal reporting statute — that’s why the law explicitly disclaims any requirement that providers proactively search for CSAM. The voluntariness of the scanning is what preserves its legal viability. Everyone involved in the actual work of combating CSAM — prosecutors, investigators, NCMEC, trust and safety teams — understands this and takes great care to preserve it.

Everyone, apparently, except the Attorney General of West Virginia. As we discussed recently, West Virginia just filed a lawsuit demanding that a court order Apple to “implement effective CSAM detection measures” on iCloud. The remedy West Virginia seeks — a court order compelling scanning — would spring the constitutional trap that everyone who actually works on this issue has been carefully avoiding for years.

As Pfefferkorn put it:

Any competent plaintiff’s counsel should have figured this out before filing a lawsuit asking a federal court to make Apple start scanning iCloud for CSAM, thereby making Apple a government agent, thereby turning the compelled iCloud scans into unconstitutional searches, thereby making it likelier for any iCloud user who gets caught to walk free, thereby shooting themselves in the foot, doing a disservice to their client, making the situation worse than the status quo, and causing a major setback in the fight for child safety online.

The reason nobody’s filed a lawsuit like this against Apple to date, despite years of complaints from left, right, and center about Apple’s ostensibly lackadaisical approach to CSAM detection in iCloud, isn’t because nobody’s thought of it before. It’s because they thought of it and they did their fucking legal research first. And then they backed away slowly from the computer, grateful to have narrowly avoided turning themselves into useful idiots for pedophiles.

The West Virginia complaint also treats Apple’s abandoned NeuralHash client-side scanning project as evidence that Apple could scan but simply chose not to. What it skips over is why the security community reacted so strongly to NeuralHash in the first place. Apple’s own director of user privacy and child safety laid out the problem:

Scanning every user’s privately stored iCloud content would in our estimation pose serious unintended consequences for our users… Scanning for one type of content, for instance, opens the door for bulk surveillance and could create a desire to search other encrypted messaging systems across content types (such as images, videos, text, or audio) and content categories. How can users be assured that a tool for one type of surveillance has not been reconfigured to surveil for other content such as political activity or religious persecution? Tools of mass surveillance have widespread negative implications for freedom of speech and, by extension, democracy as a whole.

Once you create infrastructure capable of scanning every user’s private content for one category of material, you’ve created infrastructure capable of scanning for anything. The pipe doesn’t care what flows through it. Governments around the world — some of them not exactly champions of human rights — have a well-documented habit of demanding expanded use of existing surveillance capabilities. This connects directly to the perennial fights over end-to-end encryption backdoors, where the same argument applies: you cannot build a door that only the good guys can walk through.

And then there’s the scale problem. Even the best hash-matching systems can produce false positives, and at the scale of major platforms, even tiny error rates translate into enormous numbers of wrongly flagged users.
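The scale problem is plain arithmetic. The figures below are hypothetical, chosen only to illustrate the shape of the issue, not taken from any platform's published numbers.

```python
# Hypothetical illustration: even a matcher that errs once per million
# images, applied to a billion uploads a day, wrongly flags a thousand
# innocent items daily.
def expected_false_positives(items_scanned, false_positive_rate):
    return items_scanned * false_positive_rate

daily_wrong_flags = expected_false_positives(1_000_000_000, 1e-6)
```

The absolute number of wrongly flagged users grows linearly with volume, so accuracy improvements that look impressive as percentages can still leave thousands of people facing severe consequences.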

This is one of those frustrating stories where you can… kinda see all sides, and there’s no easy or obvious answer:

Scanning works, at least somewhat. 1.1 million CyberTips from Europol in a single year. Some number of children identified and rescued because platforms voluntarily detected CSAM and reported it. The system produces real results.

Scanning is mass surveillance. Every image, every message gets examined (algorithmically), not just those belonging to suspected offenders. The privacy intrusion is real, not hypothetical, and it falls on everyone.

Compelled scanning breaks prosecutions. In the US, the Fourth Amendment means that government-ordered scanning creates a get-out-of-jail card for the very predators everyone claims to be targeting. The voluntariness of the system is what makes it legally functional.

Scanning infrastructure is repurposable. A system built to detect CSAM can be retooled to detect political speech, religious content, or anything else. This concern is not paranoid; it’s an engineering reality.

False positives at scale are inevitable. Even highly accurate systems will flag innocent content when processing billions of items, and the consequences for wrongly accused individuals are severe.

People can and will weigh these tradeoffs differently, and that’s legitimate. The tension described in all this is real and doesn’t resolve neatly.

But what both the EU Parliament’s vote and West Virginia’s lawsuit share is an unwillingness to sit with that tension. The EU stripped legal cover from the voluntary system that was actually producing results, without having a workable replacement ready. West Virginia is trying to compel what must remain voluntary, apparently without bothering to read the constitutional case law that makes compelled scanning self-defeating. From opposite directions, both approaches attack the same fragile voluntary architecture that currently threads the needle between these competing interests.

The status quo in the United States — voluntary scanning, mandatory reporting, no government compulsion to search — is far from perfect. But the system functions: it produces leads, preserves prosecutorial viability, and does so precisely because it was designed by people who understood the tradeoffs and built accordingly.

It would be nice if more policymakers engaged with why the system works the way it does before trying to blow it up from either direction. In tech policy, the loudest voices in the room are rarely the ones who’ve done the reading.

Filed Under: 4th amendment, csam, csam scanning, eu, privacy, scanning, surveillance

Swiss finance minister files criminal charges over Grok-generated abuse on X

Karin Keller-Sutter, Switzerland’s finance minister and the country’s former president, has filed criminal charges for defamation and insult after Elon Musk’s AI chatbot Grok was prompted by an anonymous user to generate a torrent of sexist and vulgar remarks about her on X. The complaint, filed on 20 March with the Bern public prosecutor’s office, is directed against “persons unknown” because the X user who prompted Grok could not be identified beyond a screen name. It is, by all available evidence, the first time a serving head of a national finance ministry has pursued criminal action against an AI-generated statement.

The incident occurred on 10 March, when a user on X instructed Grok to “roast” a figure they described as “Federal Councillor KKS, my favourite chick,” urging the chatbot to attack her in crude street language. Grok complied. The resulting post, a barrage of misogynistic abuse attributed to the chatbot, was published on Keller-Sutter’s feed. A spokesperson for the minister told Politico that the post was not “a contribution protected by freedom of expression or part of the political debate, but rather a pure denigration of a woman.” The spokesperson added: “One must fundamentally defend oneself against such misogynistic statements.”

Keller-Sutter is no minor political figure. She heads the Federal Finance Department and is one of seven members of the Swiss Federal Council, the country’s highest executive authority. In 2025, she served as president of the Swiss Confederation, a role that rotates annually among the council members. Before entering federal politics, she studied political science in London and Montreal, served as a cantonal justice minister, and presided over the Council of States. Her decision to file criminal charges rather than simply delete the post signals an intent to test whether Swiss defamation law, which criminalises both defamation under Article 173 and slander under Article 174 of the penal code, can reach the operators of AI systems and the platforms that host them. The legal question at the heart of the complaint is whether social media companies and their operators, in addition to individual users, can be held criminally liable for content generated by their own AI tools.

That question has not been answered anywhere in the world, but courts are beginning to confront it. In the United States, conservative activist Robby Starbuck sued Meta in 2025 after its AI falsely linked him to the January 6 Capitol riot; Meta settled rather than litigate. A Georgia court dismissed a separate defamation case against OpenAI after ChatGPT fabricated claims about a radio host, ruling that the legal threshold for fault had not been met. No AI defamation case has reached a final judgment in any jurisdiction. Keller-Sutter’s complaint, filed under a criminal rather than civil framework and in a country whose defamation statute carries prison sentences of up to three years for deliberate slander, could establish the first binding precedent on AI platform liability for generated speech.

The filing arrives against the backdrop of what has become the most sustained regulatory crisis in Grok’s brief existence. Between 29 December 2025 and 8 January 2026, Grok’s image-generation tools created more than three million sexualised images, approximately 23,000 of which depicted minors, according to the Centre for Countering Digital Hate. The discovery triggered a cascade of legal and regulatory actions that has not stopped. On 2 January, French ministers reported the content to prosecutors, calling it “manifestly illegal.” On 12 January, the United Kingdom’s Ofcom opened a formal investigation into whether X had complied with the Online Safety Act, with potential penalties of up to £18 million or 10 per cent of global revenue. On 14 January, California’s attorney general announced a state investigation into whether xAI had violated California law. On 26 January, the European Commission opened a probe under the Digital Services Act into whether Grok’s deployment met the platform’s legal obligations regarding illegal content and harm to minors.

The enforcement actions escalated sharply in February. On 3 February, French prosecutors, accompanied by a cybercrime unit and Europol officers, raided X’s Paris offices. The investigation, originally opened over complaints about platform operation and data extraction, had widened to include charges of complicity in distributing child sexual abuse material, creating sexually explicit deepfakes, and Holocaust denial. Prosecutors have since summoned Musk and X’s former chief executive Linda Yaccarino for voluntary interviews on 20 April. A Dutch court separately ordered Grok banned from generating non-consensual intimate images. The EU had already fined X €120 million in December 2025 for violating the DSA’s transparency requirements, a penalty X is now challenging in what has become the first court test of the bloc’s landmark digital regulation.

In the United States, three Tennessee teenagers filed a class-action lawsuit against xAI on 16 March, alleging that Grok had been used to create sexualised images of them without their knowledge or consent. The images were reportedly shared on Discord and other platforms. On 25 March, Baltimore became the first American city to sue xAI over Grok-generated deepfake pornography, alleging violations of consumer protection law. A separate class action, filed by Lieff Cabraser Heimann & Bernstein, alleges that xAI knowingly designed and profited from an image generator used to produce and distribute child sexual abuse material while refusing to implement the content-safety measures adopted by every other major AI company.

The governance vacuum at xAI compounds the legal exposure. All 11 of xAI’s original co-founders have now departed the company, including researchers recruited from Google DeepMind, Google Brain, and Microsoft Research. Musk said in March that xAI was “not built right the first time around” and needed to be rebuilt from its foundations. The company was absorbed into SpaceX in February through an all-stock merger that raised immediate governance questions, creating a combined entity valued at $1.25 trillion that is now preparing for what would be the largest initial public offering in history. The regulatory and litigation risks surrounding Grok are, in effect, now embedded in the prospectus of a company seeking a $1.75 trillion public valuation.

What makes Keller-Sutter’s complaint distinct from the deepfake and CSAM cases is its simplicity. It does not involve image generation, undressing algorithms, or child exploitation. It involves a chatbot that was asked to insult a named public official and did so in language that, under Swiss law, constitutes a criminal offence. The factual question is narrow: who is responsible when an AI system, operating on a commercial platform, generates defamatory speech at a user’s request? If the user cannot be identified, does liability pass to the platform operator, to the AI developer, or to no one at all?

The answer to that question will shape the trajectory of AI governance far beyond Switzerland. Every major AI company operates chatbots capable of producing defamatory, abusive, or factually false statements about real people. Most have implemented guardrails designed to refuse such requests. Grok, by deliberate design, has operated with fewer restrictions than its competitors, a positioning Musk has marketed as a commitment to free expression. The Keller-Sutter case tests whether that positioning can survive contact with criminal law.

Switzerland is not the European Union and is not bound by the DSA. But Swiss defamation law is among the most stringent in Europe, and a criminal finding against an AI platform operator would reverberate through every jurisdiction currently weighing similar questions. The case is small in scope, involving a single post on a single platform about a single official. But the principle it seeks to establish, that the companies building these systems bear the kind of legal responsibility that the age of AI governance demands, is anything but small. If Grok can be prompted to defame a former president with impunity, the question is not what it says about the technology. It is what it says about the law.

Tech

Mercury Audio Cables, So Nobody Else Has To Do It

We’ve seen our fair share of audiophile tomfoolery here at Hackaday, and we’ve even poked fun at some of it over the years. Perhaps one of the most outrageously over-the-top examples we’ve seen so far comes from [Pierogi Engineering], who, admittedly not in a spirit of audiophile expectation, has made a set of speaker interconnects using liquid mercury.

In terms of construction, they’re transparent tubes filled with mercury and capped off with 4 mm plugs, as you might expect. We hear them compared with copper cables, and from where we’re sitting we can’t tell any difference, but as we’ve said in the past, the only metrics that matter in this field come from an audio analyzer.

But that’s not what we take away from the video below the break. Being honest for a minute, there was a discussion among Hackaday editors as to whether or not we should feature this story. He’s handling significant quantities of mercury, and it’s probably not overreacting to express concerns about his procedures. We wouldn’t handle mercury like that, and we’d suggest that unless you want to turn your home into a Superfund site, you shouldn’t either. But now someone has, so at least there’s no need for anyone else to answer the question of whether mercury makes a good interconnect.

Tech

This Prototype Engine Is Designed To Power The Next Generation Of US Air Force Drones

Drone technology has changed the face of combat, especially for missions that require both precision and stealth. In fact, one cutting-edge drone can shoot down an enemy jet without ever seeing it. Drone engine technology may be changing as well, thanks to Honeywell Aerospace. The company won a contract from the U.S. Air Force to build a new propulsion system, which is expected to be more advanced than anything currently in use.

The new engine will take cues from Honeywell’s small-thrust-class SkyShot 1600 engine. The SkyShot is a compact and flexible engine built for unmanned military aircraft. It’s a versatile system, capable of working as either a turbojet or turbofan, while also delivering thrust between 800 and 2,800 pounds. The design can be modified to allow for even higher output if needed. The engine is built to handle high G-forces, giving Air Force drones the ability to track and catch fast-moving targets.

Honeywell plans to use digital modeling to speed up both design and performance evaluation, which should shorten development and manufacturing timelines and let the company deliver the new propulsion system sooner. The approach also allows for smoother integration with other aircraft systems, improves manufacturing efficiency, and strengthens the supply chain.

How Honeywell technology supports unmanned aircraft

Honeywell Aerospace is an established player in the world of military drone technology, and its systems are used in a number of unmanned aircraft. That includes the fast and expensive MQ-9 Reaper, a commonly used combat drone. These systems include avionics and other tech that support flight operations and aircraft capability. The engine Honeywell built for the Reaper is the TPE-331, a turboprop that was initially designed in 1959.

Honeywell also designed and produced onboard systems for the Boeing MQ-25 Stingray, an unmanned aircraft used by U.S. Navy carriers to refuel planes while in flight. The Stingray’s introduction is just one of the big changes to hit the U.S. military’s fleet in 2025. In addition to designing crucial systems, Honeywell specializes in a variety of drone components, from flight controls to mission computers, radar, and more.

Thanks to an agreement with the U.S. government, Honeywell will begin increasing production of military components and related defense systems. The announcement was made in March of 2026, and though drones weren’t specifically mentioned, the technologies referenced are regularly used in modern unmanned aircraft. Those technologies include actuators, navigation systems, and combat-ready electronic devices.
