Tech
I Tell My Students Writing Is Hard. I Still Ask Them to Do It Anyway.
My life has changed so much since my time as a Voices of Change fellow during the 2023 school year. As I wrote in my final essay of the fellowship, the beautiful, imperfect school I loved and helped build had closed. With the support of my fellowship editor, Cobretti Williams, I applied and was admitted to the Creative Writing Workshop at the University of New Orleans, where I am taking graduate classes and teaching a freshman English composition course.
In deciding what to write as a reflection on my time since the fellowship, I started three different essays and hated all of them. I did a lot of cursing, went on a couple of brooding walks and wondered why I agreed to write this in the first place. During the similarly maddening process of designing the syllabus for the first college course I taught, I took a break to write my students a letter. Here is an excerpt:
Before we start this course together, it’s important for me to name something foundational to how I approach teaching it: Writing is hard for everyone. I love writing and I believe that, if I keep practicing, I can become great at it… and I still hate doing it a lot of the time. This is why writing is so important. Almost everything we want is on the other side of making ourselves do things we don’t want to do. When we sit down to write, whether we want to or not, and we keep writing when we hit that initial point where we want to stop, and continue when those moments arise again and again like waves, we are getting vital practice. This skill, ignoring the complacent you, the you that would rather do the thing tomorrow, or tomorrow’s tomorrow, and doing the thing now instead is an act of becoming the you that has the things you want. Like anything else, this becomes easier the more you do it.
This excerpt reminds me that writing is much more difficult than most of the things we do in a world that commodifies ease and comfort, upholds them as desirable and makes us feel we are entitled to them while simultaneously less and less able to tolerate their lack.
There is a common misconception that my students come to me with that manifests most often in the statement “I don’t know what to write.” They think this means they are not ready to begin, because they believe that writing is putting what you already know onto paper. I understand why this misconception exists. So often in life, we only see finished products. The published novel, the final cut, the social media post depicting the outcome and not the process and the struggle. It’s easy to think that everyone else has things figured out, that what you see is how something was from the beginning. This can trick us into believing that if something isn’t good right away, we should abandon it. Drafting insists that we try before we feel sure, finish something even if it is not yet “good.” Revision insists that what we have can be something different, something better, and teaches us to hold multiple things in our heads at the same time. Throughout this process, we gain clarity.
Each time we give or receive feedback and assess whether it moves us closer to or further from our vision, we get better at articulating what we want and closer to achieving it. When teachers and students do this work together and commit to improvement, even when we both have moments of uncertainty about what to do next, we are practicing true collaboration. We both grow. What a way to become more skillful at building the world we want.
It is a strange time to be devoting so much of my life to writing, to be telling students that they should care about writing too. Just this week, an article came out detailing pervasive, undisclosed AI use to grade and give feedback to student writing in some New Orleans schools. A study conducted in May of 2025 showed that 84 percent of high school students used generative AI to complete their school work. I understand intimately the overwhelm of educators and students, and the temporary relief that cognitive offloading with AI can provide.
However, what we lose in the long term by not engaging deeply in the writing process, the practice of giving and receiving feedback, of watching revision unfold, is so much greater than the gains we feel in accepting AI’s “help” in our moments of overwhelm. What world are we building when we delegate the human work of communication through writing to machines? We would do better to engage in a process of re-evaluating our priorities, taking on fewer assignments for longer and working collaboratively as educators and administrators to redesign curricula and systems so that teachers have the capacity to get to know their students through repeated contact with their written work.
Sometimes, it feels like we are already living in a completely different world from the one in which I grew up and was educated. Luckily, these times, despite how often folks like to say they are not, are precedented. In these times, I have been turning to Black women writers like Toni Morrison, Toni Cade Bambara, Audre Lorde and June Jordan for guidance, and they all insist writing only becomes more urgent the more dire the times. In facing what Toni Morrison described in 2004 as “a burgeoning ménage a trois of political interests, corporate interests and military interests” working to “literally annihilate an inhabitable, humane future,” I have been especially steeled by Audre Lorde’s words, “In this way alone we can survive, by taking part in a process of life that is creative and continuing, that is growth.”
In the face of a world that would automate us right out of existence, I intend for us to survive, and so I insist we write.
Tech
These Cheap Iranian Drones Keep Getting Shot Down, And That’s The Whole Point
The current war between Iran, the United States, Israel, and other Gulf countries has seen a huge spike in drone warfare, particularly from Iran. Iran’s use of drones in warfare is quite different from what Western countries do. The United States might use big surveillance drones like the RQ-4 Global Hawk or attack drones like the MQ-9 Reaper. Such drones are expensive and meant to come back to base after the mission is done.
A lot of Iranian drones, on the other hand, take a different approach. The Shahed-136 is a kamikaze drone that’s supposed to expend its payload by running into a target. As opposed to a Reaper drone, where the system to control it and the aircraft itself costs over $56 million, a Shahed-136 can cost anywhere between $20,000 and $50,000.
A Shahed, as reported by the US Army, has a wingspan of 8.2 feet and carries an 88-pound warhead. It’s powered by a small aircraft engine mounted in the “tail.” It’s also described as a “loitering” munition meaning that it can stay in the air and hunt for targets. It has a range of a little over 1,200 miles (or 2,000 kilometers).
Drones are cheap, interceptors are expensive
While an individual Shahed-136 is certainly effective, it can be intercepted easily. As such, it’s mostly used in a swarm configuration. A swarm of Shaheds can saturate air defense systems, forcing Western forces to “waste” interceptor missiles on targets that cost a fraction as much. The Terminal High Altitude Area Defense system, also called THAAD uses a network of radar installations and sensors to intercept airborne threats with missiles. Each interceptor missile costs approximately $12.7 million, according to U.S. Congress reports.
The THAAD has a reported successful intercept rate of 90%. That’s good for forces and civilians on the ground, but the cost is skyrocketing and the amount of missiles in stock is dwindling. Congress reports: “Another reported concern is that the usage rate of THAAD interceptors during Operation Fury has further depleted limited interceptor stocks.”
Each THAAD battery consists of six launcher trucks, each supplied with 48 missiles. Those trucks and missiles are guided by a TPY-2 radar station and a communications station. It requires 90 soldiers to run and a single battery costs $2.73 billion. Lockheed Martin, the developer of the THAAD, says that between the United States, United Arab Emirates, and Saudi Arabia, there are 10 active batteries.
Tech
SpaceX files for record $75 billion IPO as conflicts of interest mount
SpaceX has confidentially filed paperwork with the Securities and Exchange Commission to sell shares to the public, according to multiple sources familiar with the registration, setting the stage for what would be the largest initial public offering in history and almost certainly making Elon Musk the world’s first trillionaire. The offering, internally code-named Project Apex, could come as early as June and reportedly aims to raise as much as $75 billion at a valuation of up to $1.75 trillion. That would more than double Saudi Aramco’s $29 billion listing in 2019, the current record holder, and would value SpaceX at roughly 94 times its 2025 revenue.
Twenty-one banks have lined up to manage the deal, with Goldman Sachs, JPMorgan Chase, Morgan Stanley, Bank of America, and Citigroup in senior roles, according to CNBC. Musk, who owns approximately 42 per cent of SpaceX according to PitchBook, has a current net worth estimated by Forbes at $823 billion. At a $1.75 trillion valuation, his stake alone would be worth more than $730 billion, pushing his total wealth past the trillion-dollar mark and placing him further ahead of every other person alive than any individual in modern economic history.
The company filing for this listing, however, is no longer just a rocket business. In February, SpaceX absorbed Musk’s artificial intelligence company xAI in an all-stock transaction that valued the combined entity at $1.25 trillion. That deal, a merger that raised immediate questions about optics, governance, and valuation, folded a company reportedly burning roughly $1 billion a month into one generating substantial cash flow. SpaceX also brought Musk’s social media platform X, formerly Twitter, under the same corporate roof. The result is a conglomerate spanning orbital launches, satellite internet, defence contracts, artificial intelligence, and social media, all controlled by a single individual who is simultaneously the largest financial backer of the sitting president of the United States.
The financial engine behind the valuation is Starlink, the satellite internet service that has become the most commercially successful space venture in history. In 2025, Starlink generated $10.6 billion in revenue on 54 per cent EBITDA margins, accounting for roughly two-thirds of SpaceX’s total revenue of $16 billion. The subscriber base has grown from 10,000 beta users in 2021 to more than 10 million paying customers across 150 countries as of February 2026. The Federal Aviation Administration’s January 2026 approval for up to 44 annual Starship launches has provided the operational headroom investors needed to underwrite a public valuation at this scale.
The xAI component of the entity going public is, by contrast, a work in progress. Musk himself said in March that xAI was “not built right the first time around” and needed to be rebuilt from its foundations. Since the merger, all 11 of xAI’s original co-founders have departed the company, including researchers who had previously worked at Google DeepMind, Google Brain, and Microsoft Research. Jimmy Ba, who co-authored the Adam optimisation paper, one of the most cited in all of artificial intelligence, left in February. Critics have characterised the merger as a financial bailout that allows xAI’s mounting losses to be absorbed by Starlink’s cash flow ahead of the IPO, a framing Musk has rejected.
The conflicts of interest embedded in this offering are without precedent in American capital markets. In the past five years alone, SpaceX has won $6 billion in contracts from NASA, the Department of Defense, and other federal agencies, according to USAspending.gov. The company is NASA’s primary launch provider for crewed missions to the International Space Station and holds more than $4 billion in contracts for the Artemis lunar-landing programme. The Pentagon is reportedly preparing to award SpaceX a $2 billion contract to build a 600-satellite constellation for missile tracking as part of the Golden Dome missile-defence initiative, a programme Trump announced would cost $175 billion and begin initial operations within three years.
Musk was the largest individual donor to Trump’s 2024 presidential campaign and led the Department of Government Efficiency, or DOGE, a temporary body that unilaterally cancelled more than 10,000 federal contracts it deemed wasteful. Ethics observers noted that none of the cancellations affected Musk’s own companies. Among SpaceX’s current investors is Donald Trump Jr, the president’s eldest son, who holds shares through 1789 Capital, a venture firm that made him a partner shortly after his father won the presidency for a second time. That fund, which has crossed $1 billion in assets, has invested approximately $50 million in SpaceX and xAI and has backed at least four companies that subsequently received government contracts during the current administration. The White House has repeatedly denied any conflicts of interest between the presidency and the Trump family’s business activities.
The governance risks do not end at the political boundary. SpaceX under Musk has operated as a private company with minimal public disclosure for more than two decades. Going public will force it to file quarterly earnings, disclose executive compensation, open its books to auditors, and face shareholder lawsuits of the kind Tesla already contends with regularly. Tesla shareholders are currently suing Musk over the company’s $2 billion investment in xAI, arguing he directed shareholder capital into his own private venture. The SpaceX-xAI merger, in which both the buyer and seller were controlled by Musk, presents a similar structure of self-dealing that public-market investors and regulators already struggling with the pace of AI-era consolidation will scrutinise closely.
One unusual feature of the planned offering is the reported intention to allocate up to 30 per cent of shares to retail investors, roughly triple the typical 5 to 10 per cent. The move echoes Google’s unconventional 2004 IPO, which used a Dutch auction to broaden access, and appears designed to build a base of loyal individual shareholders who may be less inclined to challenge management. For a company whose founder has cultivated a large and vocal online following, the retail allocation could serve as both a democratisation of access and a governance insulation mechanism.
SpaceX’s listing would be the first of what could be a trio of mega-IPOs from the companies that defined the current era of AI and deep tech. OpenAI and Anthropic are both reportedly considering public offerings, though neither has filed. Together, the three listings would represent a concentration of market value in a handful of companies whose products, from orbital internet to frontier AI models, now intersect with national security, global communications, and the basic infrastructure of economic life.
The scale of what SpaceX is attempting is difficult to overstate. A $75 billion raise would exceed the gross domestic product of more than half the world’s countries. A $1.75 trillion valuation would make SpaceX more valuable at listing than every company in the S&P 500 except Apple, Microsoft, Nvidia, Amazon, and Alphabet. And at the centre of it all is a single individual who builds the rockets that carry American astronauts, runs the satellites that provide internet to war zones, leads an AI company he admits needs rebuilding, owns a social media platform that shapes political discourse, and has the mobile-phone number of the president.
Whether that concentration of power, capital, and government dependency can survive the scrutiny of public markets is the question Project Apex will ultimately answer. The defence-tech sector is already drawing record investment on the thesis that the next generation of military capability will be built by private companies rather than government labs. SpaceX is the largest and most consequential test of that thesis. If the IPO succeeds on the terms being discussed, it will not merely be the biggest stock offering in history. It will be a statement about the degree to which twenty-first-century governments have outsourced their most critical capabilities to the private sector, and about the price of getting them back.
Tech
Copyright Industry Continues Its Efforts To Ban VPNs
from the the-internet’s-infrastructure-is-under-attack dept
Last month Walled Culture wrote about an important case at the Court of Justice of the European Union, (CJEU), the EU’s top court, that could determine how VPNs can be used in that region. Clarification in this area is particularly important because VPNs are currently under attack in various ways. For example, last year, the Danish government published draft legislation that many believed would make it illegal to use a VPN to access geoblocked streaming content or bypass restrictions on illegal websites. In the wake of a firestorm of criticism, Denmark’s Minister of Culture assured people that VPNs would not be banned. However, even though references to VPNs were removed from the text, the provisions are so broadly drafted that VPNs may well be affected anyway. Companies too are taking aim at VPNs. Leading the charge are those in France, which have been targeting VPN providers for over a year now. As TorrentFreak reported last February:
Canal+ and the football league LFP have requested court orders to compel NordVPN, ExpressVPN, ProtonVPN, and others to block access to pirate sites and services. The move follows similar orders obtained last year against DNS resolvers.
The VPN Trust Initiative (VTI) responded with a press release opposing what it called a “Misguided Legal Effort to Extend Website Blocking to VPNs”. It warned:
Such blocking can have sweeping consequences that might put the security and privacy of French citizens at risk.
Targeting VPNs opens the door to a dangerous censorship precedent, risking overreach into broader areas of content.
Indeed: if VPN blocks become an option, there will inevitably be more calls to use them for a wider range of material. The VTI also noted that some of its members are considering whether to abandon the French market completely. That could mean people start using less reliable VPN providers, some of which have dubious records when it comes to security and privacy. The incentive for VPNs to pull out of France is increasing. In August last year the Paris Judicial Court ordered top VPN service providers to block more sports streaming domains, and at the beginning of this year, yet more blocking orders were issued to VPNs operating in France. To its credit, one of the VPN providers affected, ProtonVPN, fought back. As reported here by TorrentFreak, the company tried multiple angles:
The VPN provider raised jurisdictional questions and also requested to see evidence that Canal+ owned all the rights at play. However, these concerns didn’t convince the court.
The same applies to Proton’s net neutrality defense, which argued that Article 333-10 of the French sports code, which is at the basis of all blocking orders, violates EU Open Internet Regulation. This defense was too vague, the court concluded, noting that Proton cited the regulation without specifying which provisions were actually breached.
ProtonVPN also argued that forcing a Swiss company to block sites for the French market is a restriction of cross-border trade in services, and that in any case, the blocking measures were “technically unrealizable, costly, and unnecessarily complex.” Despite this valiant defense, the court was unimpressed. At least ProtonVPN was allowed to contest the French court’s ruling. In a similar case in Spain, no such option was given. According to TorrentFreak:
The court orders were issued inaudita parte, which is Latin for “without hearing the other side.” Citing urgency, the Córdoba court did not give NordVPN and ProtonVPN the opportunity to contest the measures before they were granted.
Without a defense, the court reportedly concluded that both NordVPN and ProtonVPN actively advertise their ability to bypass geo-restrictions, citing match schedules in their marketing materials. The VPNs are therefore seen as active participants in the piracy chain rather than passive conduits, according to local media reports.
That’s pretty shocking, and shows once more how biased in favor of the copyright industry the law has become in some jurisdictions: other parties aren’t even allowed to present a defense. It’s a further reason why a definitive ruling from the CJEU on the right of people to use VPNs how they wish is so important.
Alongside these recent court cases, there is also another imminent attack on the use of VPNs, albeit in a slight different way. The UK government has announced wide-ranging plans that aim to “keep children safe online”. One of the ideas the government is proposing is “to age restrict or limit children’s VPN use where it undermines safety protections and changing the age of digital consent.” Although this is presented as a child protection measure, the effects will be much wider. The only way to bring in age restrictions for children is if all adult users of VPNs verify their own age. This inevitably leads to the creation of huge new online databases of personal information that are vulnerable to attack. As a side effect, the UK government’s misguided plans will also bolster the growing attempts by the copyright industry to demonize VPNs – a core element of the Internet’s plumbing – as unnecessary tools that are only used to break the law.
Follow me @glynmoody on Mastodon and on Bluesky. Originally published on WalledCulture.
Filed Under: cjeu, copyright, encryption, privacy, security, vpns
Companies: canal plus, nordvpn, proton
Tech
Google offers researchers early access to Willow quantum processor
![]()
The Early Access Program invites researchers to design and propose quantum experiments that push the boundaries of what current hardware can achieve. It is a selective program – the processor will not be publicly available – and Google is setting firm deadlines for participation. Research teams have until May 15,…
Read Entire Article
Source link
Tech
Artemis II Mission Launches Successfully
At 6:36 pm Cape Canaveral time, NASA’s SLS rocket lifted off without incident with the four members of the Artemis II spacecraft aboard. During the first few hours, Orion will complete its journey into Earth orbit and, throughout the first day, will conduct critical navigation and systems tests. Around the third or fourth day, the spacecraft will begin its trajectory toward the moon and cross its gravitational sphere of influence. In total, the mission will last approximately 10 days.
The mission includes the first woman and the first Black person on a crewed mission to lunar orbit. The launch comes 53 years after Apollo 17, the last crewed mission to the Moon.
The Artemis II crew will not land on the moon (that will happen on Artemis IV ). Instead, their capsule will fly at altitudes between 6,000 and 9,000 kilometers above the surface of the far side of the moon, circle it, and begin the return journey to Earth. The mission’s main objective is to demonstrate that the space agency has the technological capability to send people to the Moon safely and without incident.
Once they achieve this, NASA will begin preparations for new moon landings in the following years, which will aim to establish the first lunar bases in history and, with them, the sustained and sustainable presence of humans on the satellite.
The launch was successful and occurred on schedule. The launch window opened on Wednesday, April 1, at 6:24 pm Eastern Time (EDT) and could have been extended for two hours, if necessary. NASA would have had five more days to attempt another launch.
Mission Details
The astronauts took off on a NASA SLS rocket and are traveling inside the Orion capsule, described as a spacecraft about the size of a large van. They will orbit Earth for at least two days to test the onboard instruments. Then they will align the spacecraft to begin its journey to the moon. By the fifth or sixth day of flight, the capsule is expected to enter the moon’s sphere of influence, where the satellite’s gravity is stronger than Earth’s, and dock with its orbit.
When the spacecraft passes “behind” the moon, the most dangerous phase will begin. The crew will be out of contact with Earth for about 50 minutes due to interference from the moon itself. During this crucial moment, the crew must capture images and data from the moon, taking advantage of the far-more-advanced technology they carry than was available during the Apollo era.
After completing the return, the capsule will head home, taking advantage of the Earth-moon gravity field to save fuel. According to NASA estimates, by the 10th day of flight the crew will be close to reaching the planet.
Tech
In the wake of Claude Code’s source code leak, 5 actions enterprise security leaders should take now
Every enterprise running AI coding agents has just lost a layer of defense. On March 31, Anthropic accidentally shipped a 59.8 MB source map file inside version 2.1.88 of its @anthropic-ai/claude-code npm package, exposing 512,000 lines of unobfuscated TypeScript across 1,906 files.
The readable source includes the complete permission model, every bash security validator, 44 unreleased feature flags, and references to upcoming models Anthropic has not announced. Security researcher Chaofan Shou broadcast the discovery on X by approximately 4:23 UTC. Within hours, mirror repositories had spread across GitHub.
Anthropic confirmed the exposure was a packaging error caused by human error. No customer data or model weights were involved. But containment has already failed. The Wall Street Journal reported Wednesday morning that Anthropic had filed copyright takedown requests that briefly resulted in the removal of more than 8,000 copies and adaptations from GitHub.
However, an Anthropic spokesperson told VentureBeat that the takedown was intended to be more limited: “We issued a DMCA takedown against one repository hosting leaked Claude Code source code and its forks. The repo named in the notice was part of a fork network connected to our own public Claude Code repo, so the takedown reached more repositories than intended. We retracted the notice for everything except the one repo we named, and GitHub has restored access to the affected forks.”
Programmers have already used other AI tools to rewrite Claude Code’s functionality in other programming languages. Those rewrites are themselves going viral. The timing was worse than the leak alone. Hours before the source map shipped, malicious versions of the axios npm package containing a remote access trojan went live on the same registry. Any team that installed or updated Claude Code via npm between 00:21 and 03:29 UTC on March 31 may have pulled both the exposed source and the unrelated axios malware in the same install window.
A same-day Gartner First Take (subscription required) said the gap between Anthropic’s product capability and operational discipline should force leaders to rethink how they evaluate AI development tool vendors. Claude Code is the most discussed AI coding agent among Gartner’s software engineering clients. This was the second leak in five days. A separate CMS misconfiguration had already exposed nearly 3,000 unpublished internal assets, including draft announcements for an unreleased model called Claude Mythos. Gartner called the cluster of March incidents a systemic signal.
What 512,000 lines reveal about production AI agent architecture
The leaked codebase is not a chat wrapper. It is the agentic harness that wraps Claude’s language model and gives it the ability to use tools, manage files, execute bash commands, and orchestrate multi-agent workflows. The WSJ described the harness as what allows users to control and direct AI models, much like a harness allows a rider to guide a horse. Fortune reported that competitors and legions of startups now have a detailed road map to clone Claude Code’s features without reverse engineering them.
The components break down fast. A 46,000-line query engine handles context management through three-layer compression and orchestrates 40-plus tools, each with self-contained schemas and per-tool granular permission checks. And 2,500 lines of bash security validation run 23 sequential checks on every shell command, covering blocked Zsh builtins, Unicode zero-width space injection, IFS null-byte injection, and a malformed token bypass discovered during a HackerOne review.
Gartner caught a detail most coverage missed. Claude Code is 90% AI-generated, per Anthropic’s own public disclosures. Under the current U.S. copyright law requiring human authorship, the leaked code carries diminished intellectual property protection. The Supreme Court declined to revisit the human authorship standard in March 2026. Every organization shipping AI-generated production code faces this same unresolved IP exposure.
Three attack paths, the readable source makes it cheaper to exploit
The minified bundle already shipped with every string literal extractable. What the readable source eliminates is the research cost. A technical analysis from Straiker’s Jun Zhou, an agentic AI security company, mapped three compositions that are now practical, not theoretical, because the implementation is legible.
Context poisoning via the compaction pipeline. Claude Code manages context pressure through a four-stage cascade. MCP tool results are never microcompacted. Read tool results skip budgeting entirely. The autocompact prompt instructs the model to preserve all user messages that are not tool results. A poisoned instruction in a cloned repository’s CLAUDE.md file can survive compaction, get laundered through summarization, and emerge as what the model treats as a genuine user directive. The model is not jailbroken. It is cooperative and follows what it believes are legitimate instructions.
Sandbox bypass through shell parsing differentials. Three separate parsers handle bash commands, each with different edge-case behavior. The source documents a known gap where one parser treats carriage returns as word separators, while bash does not. Alex Kim’s review found that certain validators return early-allow decisions that short-circuit all subsequent checks. The source contains explicit warnings about the past exploitability of this pattern.
The composition. Context poisoning instructs a cooperative model to construct bash commands sitting in the gaps of the security validators. The defender’s mental model assumes an adversarial model and a cooperative user. This attack inverts both. The model is cooperative. The context is weaponized. The outputs look like commands a reasonable developer would approve.
Elia Zaitsev, CrowdStrike’s CTO, told VentureBeat in an exclusive interview at RSAC 2026 that the permission problem exposed in the leak reflects a pattern he sees across every enterprise deploying agents. “Don’t give an agent access to everything just because you’re lazy,” Zaitsev said. “Give it access to only what it needs to get the job done.” He warned that open-ended coding agents are particularly dangerous because their power comes from broad access. “People want to give them access to everything. If you’re building an agentic application in an enterprise, you don’t want to do that. You want a very narrow scope.”
Zaitsev framed the core risk in terms that the leaked source validates. “You may trick an agent into doing something bad, but nothing bad has happened until the agent acts on that,” he said. That is precisely what the Straiker analysis describes: context poisoning turns the agent cooperative, and the damage happens when it executes bash commands through the gaps in the validator chain.
What the leak exposed and what to audit
The table below maps each exposed layer to the attack path it enables and the audit action it requires. Print it. Take it to Monday’s meeting.
|
Exposed Layer |
What the Leak Revealed |
Attack Path Enabled |
Defender Audit Action |
|
4-stage compaction pipeline |
Exact criteria for what survives each stage. MCP tool results are never microcompacted. Read results, skip budgeting. |
Context poisoning: malicious instructions in CLAUDE.md survive compaction and get laundered into ‘user directives’. |
Audit every CLAUDE.md and .claude/config.json in cloned repos. Treat as executable, not metadata. |
|
Bash security validators (2,500 lines, 23 checks) |
Full validator chain, early-allow short circuits, three-parser differentials, blocked pattern lists |
Sandbox bypass: CR-as-separator gap between parsers. Early-allow in git validators bypasses all downstream checks. |
Restrict broad permission rules (Bash(git:*), Bash(echo:*)). Redirect operators chain with allowed commands to overwrite files. |
|
MCP server interface contract |
Exact tool schemas, permission checks, and integration patterns for all 40+ built-in tools |
Malicious MCP servers that match the exact interface. Supply chain attacks are indistinguishable from legitimate servers. |
Treat MCP servers as untrusted dependencies. Pin versions. Monitor for changes. Vet before enabling. |
|
44 feature flags (KAIROS, ULTRAPLAN, coordinator mode) |
Unreleased autonomous agent mode, 30-min remote planning, multi-agent orchestration, background memory consolidation |
Competitors accelerate the development of comparable features. Future attack surface previewed before defenses ship. |
Monitor for feature flag activation in production. Inventory where agent permissions expand with each release. |
|
Anti-distillation and client attestation |
Fake tool injection logic, Zig-level hash attestation (cch=00000), GrowthBook feature flag gating |
Workarounds documented. MITM proxy strips anti-distillation fields. Env var disables experimental betas. |
Do not rely on vendor DRM for API security. Implement your own API key rotation and usage monitoring. |
|
Undercover mode (undercover.ts) |
90-line module strips AI attribution from commits. Force ON possible, force OFF impossible. Dead-code-eliminated in external builds. |
AI-authored code enters repos with no attribution. Provenance and audit trail gaps for regulated industries. |
Implement commit provenance verification. Require AI disclosure policies for development teams using any coding agent. |
AI-assisted code is already leaking secrets at double the rate
GitGuardian’s State of Secrets Sprawl 2026 report, published March 17, found that Claude Code-assisted commits leaked secrets at a 3.2% rate versus the 1.5% baseline across all public GitHub commits. AI service credential leaks surged 81% year-over-year to 1,275,105 detected exposures. And 24,008 unique secrets were found in MCP configuration files on public GitHub, with 2,117 confirmed as live, valid credentials. GitGuardian noted the elevated rate reflects human workflow failures amplified by AI speed, not a simple tool defect.
The operational pattern Gartner is tracking
Feature velocity compounded the exposure. Anthropic shipped over a dozen Claude Code releases in March, introducing autonomous permission delegation, remote code execution from mobile devices, and AI-scheduled background tasks. Each capability widened the operational surface. The same month that introduced them produced the leak that exposed their implementation.
Gartner’s recommendation was specific. Require AI coding agent vendors to demonstrate the same operational maturity expected of other critical development infrastructure: published SLAs, public uptime history, and documented incident response policies. Architect provider-independent integration boundaries that would let you change vendors within 30 days. Anthropic has published one postmortem across more than a dozen March incidents. Third-party monitors detected outages 15 to 30 minutes before Anthropic’s own status page acknowledged them.
The company riding this product to a $380 billion valuation and a possible public offering this year, as the WSJ reported, now faces a containment battle that 8,000 DMCA takedowns have not won.
Merritt Baer, Chief Security Officer at Enkrypt AI, an enterprise AI guardrails company, and a former AWS security leader, told VentureBeat that the IP exposure Gartner flagged extends into territory most teams have not mapped. “The questions many teams aren’t asking yet are about derived IP,” Baer said. “Can model providers retain embeddings or reasoning traces, and are those artifacts considered your intellectual property?” With 90% of Claude Code’s source AI-generated and now public, that question is no longer theoretical for any enterprise shipping AI-written production code.
Zaitsev argued that the identity model itself needs rethinking. “It doesn’t make sense that an agent acting on your behalf would have more privileges than you do,” he told VentureBeat. “You may have 20 agents working on your behalf, but they’re all tied to your privileges and capabilities. We’re not creating 20 new accounts and 20 new services that we need to keep track of.” The leaked source shows Claude Code’s permission system is per-tool and granular. The question is whether enterprises are enforcing the same discipline on their side.
Five actions for security leaders this week
1. Audit CLAUDE.md and .claude/config.json in every cloned repository. Context poisoning through these files is a documented attack path with a readable implementation guide. Check Point Research found that developers inherently trust project configuration files and rarely apply the same scrutiny as application code during reviews.
2. Treat MCP servers as untrusted dependencies. Pin versions, vet before enabling, monitor for changes. The leaked source reveals the exact interface contract.
3. Restrict broad bash permission rules and deploy pre-commit secret scanning. A team generating 100 commits per week at the 3.2% leak rate is statistically exposing three credentials. MCP configuration files are the newest surface that most teams are not scanning.
4. Require SLAs, uptime history, and incident response documentation from your AI coding agent vendor. Architect provider-independent integration boundaries. Gartner’s guidance: 30-day vendor switch capability.
5. Implement commit provenance verification for AI-assisted code. The leaked Undercover Mode module strips AI attribution from commits with no force-off option. Regulated industries need disclosure policies that account for this.
Source map exposure is a well-documented failure class caught by standard commercial security tooling, Gartner noted. Apple and identity verification provider Persona suffered the same failure in the past year. The mechanism was not novel. The target was. Claude Code alone generates an estimated $2.5 billion in annualized revenue for a company now valued at $380 billion. Its full architectural blueprint is circulating on mirrors that have promised never to come down.
Tech
Samsung may raise its priciest phone prices in South Korea
Samsung could be about to make its most expensive phones even pricier, at least in its home market.
A new report suggests the company is planning price increases for select high-end Galaxy models in South Korea. Changes could potentially kick in as early as today, April 1.
The devices in question include the Samsung Galaxy Z Fold 7, Samsung Galaxy Z Flip 7, and Samsung Galaxy S25 Edge — all firmly at the top end of Samsung’s lineup. But the increases won’t hit every version. Instead, Samsung appears to be targeting only higher storage tiers. The base 256GB models will remain unchanged.
According to the report, 512GB variants could rise by around 100,000 won (roughly $65), while the 1TB version of the Fold 7 may jump by nearly 200,000 won (~$130). It’s not a dramatic spike on paper, but it’s still a noticeable bump for devices that are already pushing premium price territory.
Keeping entry-level models at the same price feels deliberate. On one hand, it softens the blow for buyers who just want the basics. On the other, it conveniently preserves those eye-catching “starting from” prices, even if most upgrades now cost more.
The bigger question is whether this stays local. For now, the changes are expected to apply only in South Korea. However, there’s a growing pattern here. Samsung has already adjusted pricing on some mid-range devices recently, and with ongoing component pressures, particularly around AI-driven memory and storage demand, wider increases wouldn’t be a huge surprise.
If the hikes do expand globally, pricing likely won’t translate directly. Currency differences and regional strategies usually mean adjustments vary market to market, but the direction of travel is pretty clear.
For now, nothing is official, but if you’ve been eyeing Samsung’s top-tier phones, it might be worth keeping an eye on prices. They don’t look like they’re heading down anytime soon.
Tech
4 Cool Bluetooth Gadgets You Can Connect To Your Echo Dot
We may receive a commission on purchases made from links.
Smart screens and speakers have found a permanent place in many of our households, since they help with playing music, controlling smart plugs, setting reminders, and much more. The use cases are plenty, especially when paired with other smart home gadgets that solve everyday problems. Speaking of pairing your smart speaker with external devices, the Amazon Echo Dot — one of Amazon’s most affordable and popular smart speakers — sports Bluetooth connections, which means it can be paired with some cool Bluetooth gadgets for added functionality. You can, for example, can pair multiple Echo speakers for a stereo setup or even connect external speakers with a better sound output during a party. Apart from audio, though, there are several other ways that you can take advantage of the Echo Dot’s Bluetooth module.
A few smart home gadgets, like smart light bulbs, often need a hub to function. However, if the bulb has Bluetooth support, it can be connected to and controlled by an Echo Dot without an external hub, which makes it a handy option. Similarly, there are other such gadgets that can take advantage of the Bluetooth Low Energy (BLE) protocol of the Echo Dot to establish a connection. Here are some of the best and most useful gadgets that we’ve found that can enhance your life and home. All you have to do is put your Echo Dot in pairing mode and connect the required device with the help of the Alexa app on your smartphone.
Bluetooth speakers
While there are several handy uses for an Amazon Echo Dot speaker, arguably the most popular one is playing music. This is primarily because it’s so quick and simple to ask Alexa to play your favorite album or track without having to manually look for it on your phone. Convenience aside though, Echo devices are capable speakers by themselves, which means the sound output is loud and clear. However, the small form factor means that the bass can be lacking, and the sound may not be able to fill a large room. If you’re having a party with your friends, you might miss out on that extra oomph. This is where the Echo Dot’s ability to connect to an external speaker comes into play.
If you have a Bluetooth speaker lying around at home, all you have to do is put it in pairing mode, head to the Alexa app, and connect the speaker to your Echo Dot. This works with pretty much any Bluetooth speaker, right from budget options to large home theatre setups. As long as the speaker is connected to the Echo Dot, all its responses — not just the songs — will play via the speaker itself. That said, the Echo device will still use its onboard microphones to detect and register your voice queries. This is one of the simplest yet the most popular uses that we’re sure a lot of you will appreciate. In case you don’t already have a speaker, the Anker Soundcore 2, which retails for around $30, is a user-favorite with a rating of 4.5 from close to 150K reviews.
Smart bulbs
The issue with a lot of good smart lighting solutions is that the installation process can be a headache — especially if they need a hub. Bluetooth smart bulbs are an easy fix, offering a plug-and-play solution. Modern Bluetooth bulbs from brands like Philips Hue or GE connect directly to your Echo Dot right out of the box, instead of requiring a central hub. This integration capability makes it an easy entry point into smart home automation. The biggest advantage of a system like this is that you can use bulbs and other smart home gadgets from multiple brands without worrying about compatibility.
Having a brand-agnostic solution helps avoid multiple issues. Once you invest in a Philips hub, for example, you may not be able to use bulbs from other brands with the same hub. This means you’re locked into the Philips ecosystem, unless you splurge on another hub from a different brand. Wi-Fi bulbs can already tackle this problem, but they can sometimes bog down your home network. Bluetooth bulbs, on the other hand, communicate locally with your Echo Dot. The feature set remains the same; you can set up daily routines so your lights slowly turn warmer in the evening, or shut down the entire house with a single phrase as you walk out the door. Additionally, you can connect as many bulbs via Bluetooth and operate the all individually. The Philips Hue 60W smart LED bulb, with its 4.7-star rating across more than 16,000 reviews, is a good starting point for under $50.
Smart switches
If you’re looking for creative use cases for your old Amazon Echo, smart switches are a good investment. The Switchbot smart switch button is an excellent replacement for old appliances and gadgets that lack internet connectivity; stick it beneath a manual switch and suddenly you can control it with your smartphone or Amazon Alexa device. Lots of devices and appliances launched in recent years may have built-in smart functionality to turn them on and off remotely. However, an old coffee maker or air purifier may not have the feature, and that’s exactly where a device like the Switchbot smart switch comes in handy. Once you connect it via Bluetooth to your Echo Dot, you can turn an appliance on or off with just your voice.
This works well with push-button switches, but you can’t use a single Switchbot to operate a larger, more traditional switch like the kind that controls the lights in your house both on and off. If you want both functionalities, you will have to purchase two Switchbots and install them on either side of the switch. While the product description mentions that you need a hub to use the device with Alexa, it’s only applicable to older Echo devices that cannot behave like a Bluetooth hub. With over 28,000 reviews and a rating of 4.1 stars, users definitely seem to love the Switchbot smart button thanks to its ability to use older gadgets easier. There’s something to be said about having a fresh cup of coffee waiting for you right after stepping out of the shower in the morning, isn’t there?
Bluetooth turntables
For those who have a large collection of vinyl records from back in the day, a Bluetooth turntable is pretty much a must-have. If you have one lying around, you would be glad to know that you can easily connect it to your Echo Dot. Since a good number of Bluetooth turntables have built-in wireless transmitters, you can wirelessly use your Echo Dot as a speaker instead of relying on your turntable’s internal one. Thanks to this setup, you can place your turntable at a distance from the Echo Dot without running audio wires all through the room.
This is a pretty neat trick; while the Echo Dot is usually the brain sending audio out to other speakers, in this scenario, it acts as the wireless receiver instead. The Audio-Technica wireless turntable is an excellent option in case you don’t have one already and are looking to buy one. It is pricey at around $230, but it’s got a solid 4.6-star rating across more than 8,700 reviews. Apart from a turntable, pretty much any other audio device that has a built-in Bluetooth transmitter can be used with an Echo Dot as well, so don’t feel like you’re limited to just spinning records remotely.
How we picked these gadgets
The primary criteria for a gadget to make it to this list is the fact that it connects to an Echo Dot speaker purely via Bluetooth and not Wi-Fi. Hence, it’s vital to note that not all types of gadgets of a particular kind may work via Bluetooth. An example of this is that not all smart bulbs support Bluetooth Low Energy connectivity. That’s why we’ve included suggested products that support the technology at play here; the ones we do recommend all have a rating of at least 4.1 stars across thousands of reviews. Additionally, all Echo devices — including the Echo Dot — need to be first connected to a Wi-Fi network for their initial setup before they can be used to connect to Bluetooth devices. Therefore, all the gadgets have been recommended with the assumption that you have access to a Wi-Fi network and that your Echo device is set up.
Tech
The EU Killed Voluntary CSAM Scanning. West Virginia Is Trying To Compel It. Both Cause Problems.
from the tricky-problems dept
Last week, the European Parliament voted to let a temporary exemption lapse that had allowed tech companies to scan their services for child sexual abuse material (CSAM) without running afoul of strict EU privacy regulations. Meanwhile, here in the US, West Virginia’s Attorney General continues to press forward with a lawsuit designed to force Apple to scan iCloud for CSAM, apparently oblivious to the fact that succeeding would hand defense attorneys the best gift they’ve ever received.
Two different jurisdictions. Two diametrically opposed approaches, both claiming to protect children, and both making it harder to actually do so.
I’ll be generous and assume people pushing both of these views genuinely think they’re doing what’s best for children. This is a genuinely complex topic with real, painful tradeoffs, and reasonable people can weigh them differently. What’s frustrating is watching policymakers on both sides of the Atlantic charge forward with approaches that seem driven more by vibes than by any serious engagement with how the current system actually works — or why it was built the way it was.
The European Parliament just voted against extending a temporary regulation that had exempted tech platforms from GDPR-style privacy rules when they voluntarily scanned for CSAM. This exemption had been in place (and repeatedly extended) for years while Parliament tried to negotiate a permanent framework. Those negotiations have been going on since November 2023 without resolution, and on Thursday MEPs decided they were done extending the stopgap.
To be clear, Parliament didn’t pass a law banning CSAM scanning. Companies can still technically scan if they want to. But without the exemption, they’re now exposed to massive privacy liability under EU law for doing so. Scanning private messages and stored content to look for CSAM is, after all, mass surveillance — and European privacy law treats mass surveillance seriously (which, in most cases, it should!). So the practical effect is a chilling one: companies that were voluntarily scanning now face significant legal risk if they continue.
The digital rights organization eDRI framed the issue in stark terms:
“This is actually just enabling big tech companies to scan all of our private messages, our most intimate details, all our private chats so it constitutes a really, really serious interference with our right to privacy. It’s not targeted against people that are suspected of child abuse — It’s just targeting everyone, potentially all of the time.”
And that argument is compelling. Hash-matching systems that compare uploaded images against databases of known CSAM are more targeted than, say, keyword scanning of every message, but they still fundamentally involve examining every unencrypted piece of content that passes through the system. When eDRI says it targets “everyone, potentially all of the time,” that’s an accurate description of how the technology works.
But… the technology also works to find and catch CSAM. Europol’s executive director, Catherine De Bolle, pointed to concrete numbers:
Last year alone, Europol processed around 1.1 million of so-called CyberTips, originating from the National Center for Missing & Exploited Children (NCMEC), of relevance to 24 European countries. CyberTips contain multiple entities (files, videos, photos etc.) supporting criminal investigation efforts into child sexual abuse online.
If the current legal basis for voluntary detection by online platforms were to be removed, this is expected to result in a serious reduction of CyberTip referrals. This would undermine the capability to detect relevant investigative leads on CSAM, which in turn will severely impair the EU’s security interests of identifying victims and safeguarding children.
The companies that have been doing this scanning — Google, Microsoft, Meta, Snapchat, TikTok — released a joint statement saying they are “deeply concerned” and warning that the lapse will leave “children across Europe and around the world with fewer protections than they had before.”
So the EU’s privacy advocates aren’t wrong about the surveillance problem. Europol isn’t wrong about the child safety consequences. Both things are true — which is what makes this genuinely tricky rather than a case of one side being obviously right.
Now flip to the United States, where the problem is precisely inverted.
In the US, the existing system has been carefully constructed around a single, critical principle: companies voluntarily choose to scan for CSAM, and when they find it, they’re legally required to report it to NCMEC. The word “voluntarily” is doing enormous load-bearing work in that sentence — and most of the people currently shouting about CSAM don’t seem to know it. As Stanford’s Riana Pfefferkorn explained in detail on Techdirt when a private class action lawsuit against Apple tried to compel CSAM scanning:
While the Fourth Amendment applies only to the government and not to private actors, the government can’t use a private actor to carry out a search it couldn’t constitutionally do itself. If the government compels or pressures a private actor to search, or the private actor searches primarily to serve the government’s interests rather than its own, then the private actor counts as a government agent for purposes of the search, which must then abide by the Fourth Amendment, otherwise the remedy is exclusion.
If the government – legislative, executive, or judiciary – forces a cloud storage provider to scan users’ files for CSAM, that makes the provider a government agent, meaning the scans require a warrant, which a cloud services company has no power to get, making those scans unconstitutional searches. Any CSAM they find (plus any other downstream evidence stemming from the initial unlawful scan) will probably get excluded, but it’s hard to convict people for CSAM without using the CSAM as evidence, making acquittals likelier. Which defeats the purpose of compelling the scans in the first place.
In the US, if the government forces Apple to scan, that makes Apple a government agent. Government agents need warrants. Apple can’t get warrants. So the scans are unconstitutional. So the evidence gets thrown out. So the predators walk free. All because someone thought “just make them scan!” was a simple solution to a complex problem.
Congress apparently understood this when it wrote the federal reporting statute — that’s why the law explicitly disclaims any requirement that providers proactively search for CSAM. The voluntariness of the scanning is what preserves its legal viability. Everyone involved in the actual work of combating CSAM — prosecutors, investigators, NCMEC, trust and safety teams — understands this and takes great care to preserve it.
Everyone, apparently, except the Attorney General of West Virginia. As we discussed recently, West Virginia just filed a lawsuit demanding that a court order Apple to “implement effective CSAM detection measures” on iCloud. The remedy West Virginia seeks — a court order compelling scanning — would spring the constitutional trap that everyone who actually works on this issue has been carefully avoiding for years.
As Pfefferkorn put it:
Any competent plaintiff’s counsel should have figured this out before filing a lawsuit asking a federal court to make Apple start scanning iCloud for CSAM, thereby making Apple a government agent, thereby turning the compelled iCloud scans into unconstitutional searches, thereby making it likelier for any iCloud user who gets caught to walk free, thereby shooting themselves in the foot, doing a disservice to their client, making the situation worse than the status quo, and causing a major setback in the fight for child safety online.
The reason nobody’s filed a lawsuit like this against Apple to date, despite years of complaints from left, right, and center about Apple’s ostensibly lackadaisical approach to CSAM detection in iCloud, isn’t because nobody’s thought of it before. It’s because they thought of it and they did their fucking legal research first. And then they backed away slowly from the computer, grateful to have narrowly avoided turning themselves into useful idiots for pedophiles.
The West Virginia complaint also treats Apple’s abandoned NeuralHash client-side scanning project as evidence that Apple could scan but simply chose not to. What it skips over is why the security community reacted so strongly to NeuralHash in the first place. Apple’s own director of user privacy and child safety laid out the problem:
Scanning every user’s privately stored iCloud content would in our estimation pose serious unintended consequences for our users… Scanning for one type of content, for instance, opens the door for bulk surveillance and could create a desire to search other encrypted messaging systems across content types (such as images, videos, text, or audio) and content categories. How can users be assured that a tool for one type of surveillance has not been reconfigured to surveil for other content such as political activity or religious persecution? Tools of mass surveillance have widespread negative implications for freedom of speech and, by extension, democracy as a whole.
Once you create infrastructure capable of scanning every user’s private content for one category of material, you’ve created infrastructure capable of scanning for anything. The pipe doesn’t care what flows through it. Governments around the world — some of them not exactly champions of human rights — have a well-documented habit of demanding expanded use of existing surveillance capabilities. This connects directly to the perennial fights over end-to-end encryption backdoors, where the same argument applies: you cannot build a door that only the good guys can walk through.
And then there’s the scale problem. Even the best hash-matching systems can produce false positives, and at the scale of major platforms, even tiny error rates translate into enormous numbers of wrongly flagged users.
This is one of those frustrating stories where you can… kinda see all sides, and there’s no easy or obvious answer:
Scanning works, at least somewhat. 1.1 million CyberTips from Europol in a single year. Some number of children identified and rescued because platforms voluntarily detected CSAM and reported it. The system produces real results.
Scanning is mass surveillance. Every image, every message gets examined (algorithmically), not just those belonging to suspected offenders. The privacy intrusion is real, not hypothetical, and it falls on everyone.
Compelled scanning breaks prosecutions. In the US, the Fourth Amendment means that government-ordered scanning creates a get-out-of-jail card for the very predators everyone claims to be targeting. The voluntariness of the system is what makes it legally functional.
Scanning infrastructure is repurposable. A system built to detect CSAM can be retooled to detect political speech, religious content, or anything else. This concern is not paranoid; it’s an engineering reality.
False positives at scale are inevitable. Even highly accurate systems will flag innocent content when processing billions of items, and the consequences for wrongly accused individuals are severe.
People can and will weigh these tradeoffs differently, and that’s legitimate. The tension described in all this is real and doesn’t resolve neatly.
But what both the EU Parliament’s vote and West Virginia’s lawsuit share is an unwillingness to sit with that tension. The EU stripped legal cover from the voluntary system that was actually producing results, without having a workable replacement ready. West Virginia is trying to compel what must remain voluntary, apparently without bothering to read the constitutional case law that makes compelled scanning self-defeating. From opposite directions, both approaches attack the same fragile voluntary architecture that currently threads the needle between these competing interests.
The status quo in the United States — voluntary scanning, mandatory reporting, no government compulsion to search — is far from perfect. But the system functions: it produces leads, preserves prosecutorial viability, and does so precisely because it was designed by people who understood the tradeoffs and built accordingly.
It would be nice if more policymakers engaged with why the system works the way it does before trying to blow it up from either direction. In tech policy, the loudest voices in the room are rarely the ones who’ve done the reading.
Filed Under: 4th amendment, csam, csam scanning, eu, privacy, scanning, surveillance
Tech
Swiss finance minister files criminal charges over Grok-generated abuse on X
Karin Keller-Sutter, Switzerland’s finance minister and the country’s former president, has filed criminal charges for defamation and insult after Elon Musk’s AI chatbot Grok was prompted by an anonymous user to generate a torrent of sexist and vulgar remarks about her on X. The complaint, filed on 20 March with the Bern public prosecutor’s office, is directed against “persons unknown” because the X user who prompted Grok could not be identified beyond a screen name. It is, by all available evidence, the first time a serving head of a national finance ministry has pursued criminal action against an AI-generated statement.
The incident occurred on 10 March, when a user on X instructed Grok to “roast” a figure they described as “Federal Councillor KKS, my favourite chick,” urging the chatbot to attack her in crude street language. Grok complied. The resulting post, a barrage of misogynistic abuse attributed to the chatbot, was published on Keller-Sutter’s feed. A spokesperson for the minister told Politico that the post was not “a contribution protected by freedom of expression or part of the political debate, but rather a pure denigration of a woman.” The spokesperson added: “One must fundamentally defend oneself against such misogynistic statements.”
Keller-Sutter is no minor political figure. She heads the Federal Finance Department and is one of seven members of the Swiss Federal Council, the country’s highest executive authority. In 2025, she served as president of the Swiss Confederation, a role that rotates annually among the council members. Before entering federal politics, she studied political science in London and Montreal, served as a cantonal justice minister, and presided over the Council of States. Her decision to file criminal charges rather than simply delete the post signals an intent to test whether Swiss defamation law, which criminalises both defamation under Article 173 and slander under Article 174 of the penal code, can reach the operators of AI systems and the platforms that host them. The legal question at the heart of the complaint is whether social media companies and their operators, in addition to individual users, can be held criminally liable for content generated by their own AI tools.
That question has not been answered anywhere in the world, but courts are beginning to confront it. In the United States, conservative activist Robby Starbuck sued Meta in 2025 after its AI falsely linked him to the January 6 Capitol riot; Meta settled rather than litigate. A Georgia court dismissed a separate defamation case against OpenAI after ChatGPT fabricated claims about a radio host, ruling that the legal threshold for fault had not been met. No AI defamation case has reached a final judgment in any jurisdiction. Keller-Sutter’s complaint, filed under a criminal rather than civil framework and in a country whose defamation statute carries prison sentences of up to three years for deliberate slander, could establish the first binding precedent on AI platform liability for generated speech.
The filing arrives against the backdrop of what has become the most sustained regulatory crisis in Grok’s brief existence. Between 29 December 2025 and 8 January 2026, Grok’s image-generation tools created more than three million sexualised images, approximately 23,000 of which depicted minors, according to the Centre for Countering Digital Hate. The discovery triggered a cascade of legal and regulatory actions that has not stopped. On 2 January, French ministers reported the content to prosecutors, calling it “manifestly illegal.” On 12 January, the United Kingdom’s Ofcom opened a formal investigation into whether X had complied with the Online Safety Act, with potential penalties of up to £18 million or 10 per cent of global revenue. On 14 January, California’s attorney general announced a state investigation into whether xAI had violated California law. On 26 January, the European Commission opened a probe under the Digital Services Act into whether Grok’s deployment met the platform’s legal obligations regarding illegal content and harm to minors.
The enforcement actions escalated sharply in February. On 3 February, French prosecutors, accompanied by a cybercrime unit and Europol officers, raided X’s Paris offices. The investigation, originally opened over complaints about platform operation and data extraction, had widened to include charges of complicity in distributing child sexual abuse material, creating sexually explicit deepfakes, and Holocaust denial. Prosecutors have since summoned Musk and X’s former chief executive Linda Yaccarino for voluntary interviews on 20 April. A Dutch court separately ordered Grok banned from generating non-consensual intimate images. The EU had already fined X €120 million in December 2025 for violating the DSA’s transparency requirements, a penalty X is now challenging in what has become the first court test of the bloc’s landmark digital regulation.
In the United States, three Tennessee teenagers filed a class-action lawsuit against xAI on 16 March, alleging that Grok had been used to create sexualised images of them without their knowledge or consent. The images were reportedly shared on Discord and other platforms. On 25 March, Baltimore became the first American city to sue xAI over Grok-generated deepfake pornography, alleging violations of consumer protection law. A separate class action, filed by Lieff Cabraser Heimann & Bernstein, alleges that xAI knowingly designed and profited from an image generator used to produce and distribute child sexual abuse material while refusing to implement the content-safety measures adopted by every other major AI company.
The governance vacuum at xAI compounds the legal exposure. All 11 of xAI’s original co-founders have now departed the company, including researchers recruited from Google DeepMind, Google Brain, and Microsoft Research. Musk said in March that xAI was “not built right the first time around” and needed to be rebuilt from its foundations. The company was absorbed into SpaceX in February through an all-stock merger that raised immediate governance questions, creating a combined entity valued at $1.25 trillion that is now preparing for what would be the largest initial public offering in history. The regulatory and litigation risks surrounding Grok are, in effect, now embedded in the prospectus of a company seeking a $1.75 trillion public valuation.
What makes Keller-Sutter’s complaint distinct from the deepfake and CSAM cases is its simplicity. It does not involve image generation, undressing algorithms, or child exploitation. It involves a chatbot that was asked to insult a named public official and did so in language that, under Swiss law, constitutes a criminal offence. The factual question is narrow: who is responsible when an AI system, operating on a commercial platform, generates defamatory speech at a user’s request? If the user cannot be identified, does liability pass to the platform operator, to the AI developer, or to no one at all?
The answer to that question will shape the trajectory of AI governance far beyond Switzerland. Every major AI company operates chatbots capable of producing defamatory, abusive, or factually false statements about real people. Most have implemented guardrails designed to refuse such requests. Grok, by deliberate design, has operated with fewer restrictions than its competitors, a positioning Musk has marketed as a commitment to free expression. The Keller-Sutter case tests whether that positioning can survive contact with criminal law.
Switzerland is not the European Union and is not bound by the DSA. But Swiss defamation law is among the most stringent in Europe, and a criminal finding against an AI platform operator would reverberate through every jurisdiction currently weighing similar questions. The case is small in scope, involving a single post on a single platform about a single official. But the principle it seeks to establish, that the companies building these systems bear the kind of legal responsibility that the age of AI governance demands, is anything but small. If Grok can be prompted to defame a former president with impunity, the question is not what it says about the technology. It is what it says about the law.
-
Business6 days agoInstagram, YouTube Found Responsible for Teen’s Mental Health Struggle in Historic Ruling
-
Tech7 days agoIntercom’s new post-trained Fin Apex 1.0 beats GPT-5.4 and Claude Sonnet 4.6 at customer service resolutions
-
NewsBeat5 days agoThe Story hosts event on Durham’s historic registers
-
Sports5 days agoSweet Sixteen Game Thread: Tide vs Michigan
-
Entertainment3 days ago
Fans slam 'heartbreaking' Barbie Dream Fest convention debacle with 'cardboard cutout' experience
-
Entertainment4 days agoLana Del Rey Celebrates Her Husband’s 51st Birthday In New Post
-
Crypto World2 days ago
Dems press CFTC, ethics board on prediction-market insider trades
-
Tech3 days agoThe Pixel 10a doesn’t have a camera bump, and it’s great
-
Crypto World8 hours agoGold Price Prediction: Worst Month in 17 Years fo Save Haven Rock
-
Sports1 day agoTallest college basketball player ever, standing at 7-foot-9, entering transfer portal
-
Tech2 days agoEE TV is using AI to help you find something to watch
-
Tech3 days agoApple will hide your email address from apps and websites, but not cops
-
Tech2 days agoFlipsnack and the shift toward motion-first business content with living visuals
-
Tech2 days agoHow to back up your iPhone & iPad to your Mac before something goes wrong
-
Fashion7 days agoEn Vogue in Brown Leather and Tailored Neutrals by Atelier Savoir, Styled by J Bolin
-
Politics2 days agoShould Trump Be Scared Strait?
-
Crypto World2 days agoU.S. rule change may open trillions in 401(k) funds to crypto
-
Fashion7 days agoWhat Are Your Favorite T-Shirts for the Weekend?
-
Fashion5 days agoWeekly News Update, 3.27.26 – Corporette.com
-
Crypto World1 day agoBitcoin enters the public bond market as Moody’s gives a first-of-its-kind crypto deal a rating


You must be logged in to post a comment Login