Four separate RSAC 2026 keynotes arrived at the same conclusion without coordinating. Microsoft’s Vasu Jakkal told attendees that zero trust must extend to AI. Cisco’s Jeetu Patel called for a shift from access control to action control, saying in an exclusive interview with VentureBeat that agents behave “more like teenagers, supremely intelligent, but with no fear of consequence.” CrowdStrike’s George Kurtz identified AI governance as the biggest gap in enterprise technology. Splunk’s John Morgan called for an agentic trust and governance model. Four companies. Four stages. One problem.
Matt Caulfield, VP of Product for Identity and Duo at Cisco, put it bluntly in an exclusive VentureBeat interview at RSAC. “While the concept of zero trust is good, we need to take it a step further,” Caulfield said. “It’s not just about authenticating once and then letting the agent run wild. It’s about continuously verifying and scrutinizing every single action the agent’s trying to take, because at any moment, that agent can go rogue.”
Seventy-nine percent of organizations already use AI agents, according to PwC’s 2025 AI Agent Survey. Only 14.4% reported full security approval for their entire agent fleet, per the Gravitee State of AI Agent Security 2026 report, a February 2026 survey of 919 organizations. A CSA survey presented at RSAC found that only 26% have AI governance policies. CSA’s Agentic Trust Framework describes the resulting gap between deployment velocity and security readiness as a governance emergency.
Cybersecurity leaders and industry executives at RSAC agreed on the problem. Then two companies shipped architectures that answer the question differently. The gap between their designs reveals where the real risk sits.
The monolithic agent problem that security teams are inheriting
The default enterprise agent pattern is a monolithic container. The model reasons, calls tools, executes generated code, and holds credentials in one process. Every component trusts every other component. OAuth tokens, API keys, and git credentials sit in the same environment where the agent runs code it wrote seconds ago.
A prompt injection gives the attacker everything. Tokens are exfiltrable. Sessions are spawnable. The blast radius is not the agent. It is the entire container and every connected service.
The CSA and Aembit survey of 228 IT and security professionals quantifies how common this remains: 43% use shared service accounts for agents, 52% rely on workload identities rather than agent-specific credentials, and 68% cannot distinguish agent activity from human activity in their logs. No single function claimed ownership of AI agent access. Security said it was a developer’s responsibility. Developers said it was a security responsibility. Nobody owned it.
CrowdStrike CTO Elia Zaitsev, in an exclusive VentureBeat interview, said the pattern should look familiar. “A lot of what securing agents look like would be very similar to what it looks like to secure highly privileged users. They have identities, they have access to underlying systems, they reason, they take action,” Zaitsev said. “There’s rarely going to be one single solution that is the silver bullet. It’s a defense in depth strategy.”
CrowdStrike CEO George Kurtz highlighted ClawHavoc, a supply chain campaign targeting the OpenClaw agentic framework, during his RSAC keynote. Koi Security named the campaign on February 1, 2026. Antiy CERT confirmed 1,184 malicious skills tied to 12 publisher accounts, according to multiple independent analyses of the campaign. Snyk’s ToxicSkills research found that 36.8% of the 3,984 ClawHub skills scanned contain security flaws of some severity, with 13.4% rated critical. Average breakout time has dropped to 29 minutes; the fastest observed was 27 seconds, per CrowdStrike’s 2026 Global Threat Report.
Anthropic separates the brain from the hands
Anthropic’s Managed Agents, launched April 8 in public beta, split every agent into three components that do not trust each other: a brain (Claude and the harness routing its decisions), hands (disposable Linux containers where code executes), and a session (an append-only event log outside both).
Separating instructions from execution is one of the oldest patterns in software; microservices, serverless functions, and message queues all depend on it.
Credentials never enter the sandbox. Anthropic stores OAuth tokens in an external vault. When the agent needs to call an MCP tool, it sends a session-bound token to a dedicated proxy. The proxy fetches real credentials from the vault, makes the external call, and returns the result. The agent never sees the actual token. Git tokens get wired into the local remote at sandbox initialization. Push and pull work without the agent touching the credential. For security directors, this means a compromised sandbox yields nothing an attacker can reuse.
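To make the flow concrete, here is a minimal sketch of the vault-and-proxy pattern described above, assuming a simple HTTP tool endpoint. The class and method names (CredentialVault, ToolProxy, call_tool) are hypothetical illustrations of the pattern, not Anthropic’s actual API.

```python
# Minimal sketch of the vault-and-proxy credential pattern described above.
# All names here are hypothetical; they illustrate the flow, not Anthropic's API.
import secrets
import requests


class CredentialVault:
    """Holds real OAuth tokens outside the sandbox, keyed by session."""

    def __init__(self):
        self._tokens = {}        # session_id -> real OAuth token
        self._session_keys = {}  # session_id -> session-bound token

    def register_session(self, session_id: str, oauth_token: str) -> str:
        """Store the real token and hand back a session-bound token for the agent."""
        session_key = secrets.token_urlsafe(32)
        self._tokens[session_id] = oauth_token
        self._session_keys[session_id] = session_key
        return session_key

    def resolve(self, session_id: str, session_key: str) -> str:
        """Exchange a valid session-bound token for the real credential."""
        if self._session_keys.get(session_id) != session_key:
            raise PermissionError("invalid session-bound token")
        return self._tokens[session_id]


class ToolProxy:
    """Makes external calls on the agent's behalf; the agent never sees the token."""

    def __init__(self, vault: CredentialVault):
        self.vault = vault

    def call_tool(self, session_id: str, session_key: str, url: str, payload: dict) -> dict:
        real_token = self.vault.resolve(session_id, session_key)
        resp = requests.post(
            url,
            json=payload,
            headers={"Authorization": f"Bearer {real_token}"},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()  # only the result crosses back into the sandbox
```

The design goal is visible in the last line: only the tool’s result re-enters the sandbox, so a compromised container holds nothing but a session-bound token that is useless without the proxy and the vault.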
The security gain arrived as a side effect of a performance fix. Anthropic decoupled the brain from the hands so inference could start before the container booted. Median time to first token dropped roughly 60%. The zero-trust design is also the fastest design. That kills the enterprise objection that security adds latency.
Session durability is the third structural gain. A container crash in the monolithic pattern means total state loss. In Managed Agents, the session log persists outside both brain and hands. If the harness crashes, a new one boots, reads the event log, and resumes. What starts as crash resilience compounds into a productivity gain over time. Managed Agents include built-in session tracing through the Claude Console.
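A rough sketch of what an append-only session log with replay-based resume can look like follows; the event names and state fields are assumptions made for illustration, not Anthropic’s schema.

```python
# Illustrative sketch of session durability via an append-only event log.
# The log lives outside both the brain and the sandbox, so a fresh harness
# can rebuild state after a crash. Event names and schema are hypothetical.
import json
from pathlib import Path


class SessionLog:
    def __init__(self, path: Path):
        self.path = path
        self.path.touch(exist_ok=True)

    def append(self, event: dict) -> None:
        """Append-only write: one JSON event per line, flushed immediately."""
        with self.path.open("a") as f:
            f.write(json.dumps(event) + "\n")
            f.flush()

    def replay(self) -> list[dict]:
        """Read every event back in order so a new harness can resume."""
        with self.path.open() as f:
            return [json.loads(line) for line in f if line.strip()]


def resume_session(log: SessionLog) -> dict:
    """Rebuild minimal harness state (last step, pending tool calls) from the log."""
    state = {"step": 0, "pending": []}
    for event in log.replay():
        if event["type"] == "step_completed":
            state["step"] = event["step"]
        elif event["type"] == "tool_call_started":
            state["pending"].append(event["tool"])
        elif event["type"] == "tool_call_finished" and event["tool"] in state["pending"]:
            state["pending"].remove(event["tool"])
    return state
```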
Pricing: $0.08 per session-hour of active runtime, idle time excluded, plus standard API token costs. Security directors can now model agent compromise cost per session-hour against the cost of the architectural controls.
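As a back-of-envelope illustration of that modeling exercise (fleet size, active hours, and per-agent token spend below are assumptions, not vendor figures):

```python
# Rough cost model for the pricing above. All inputs except the published
# session-hour rate are illustrative assumptions.
SESSION_HOUR_RATE = 0.08       # USD per active session-hour, idle excluded

agents = 50                    # assumed fleet size
active_hours_per_day = 6       # assumed active runtime per agent per day
token_cost_per_day = 4.00      # assumed API token spend per agent per day

runtime_cost = agents * active_hours_per_day * SESSION_HOUR_RATE  # $24.00/day
total_daily = runtime_cost + agents * token_cost_per_day          # $224.00/day
print(f"runtime: ${runtime_cost:.2f}/day, total: ${total_daily:.2f}/day")
```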
Nvidia locks the sandbox down and monitors everything inside it
Nvidia’s NemoClaw, released March 16 in early preview, takes the opposite approach. It does not separate the agent from its execution environment. It wraps the entire agent inside four stacked security layers and watches every move. Anthropic and Nvidia are the only two vendors to have shipped zero-trust agent architectures publicly as of this writing; others are in development.
NemoClaw stacks four enforcement layers between the agent and the host. Sandboxed execution uses Landlock, seccomp, and network namespace isolation at the kernel level. Default-deny outbound networking forces every external connection through explicit operator approval via YAML-based policy. Access runs with minimal privileges. The layer that matters most to security teams is intent verification: OpenShell’s policy engine intercepts every agent action before it touches the host. Alongside those layers, a privacy router directs sensitive queries to locally running Nemotron models, cutting token cost and data leakage to zero. The trade-off for organizations evaluating NemoClaw is straightforward: stronger runtime visibility costs more operator staffing.
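A minimal sketch of what a default-deny policy plus an action-interception check might look like follows; the YAML schema and function names are assumptions made for illustration, not NemoClaw’s or OpenShell’s actual policy format.

```python
# Illustrative default-deny policy check in the spirit of the layers above.
# The YAML schema and functions are assumptions, not NemoClaw's real format.
import fnmatch
import yaml  # pip install pyyaml

POLICY_YAML = """
network:
  default: deny
  allow:
    - host: api.github.com
      ports: [443]
actions:
  default: deny
  allow:
    - tool: read_file
      path_glob: "/workspace/**"
    - tool: run_tests
"""

policy = yaml.safe_load(POLICY_YAML)


def outbound_allowed(host: str, port: int) -> bool:
    """Default-deny networking: only explicitly listed host/port pairs pass."""
    for rule in policy["network"]["allow"]:
        if rule["host"] == host and port in rule["ports"]:
            return True
    return False


def action_allowed(tool: str, path: str = "") -> bool:
    """Intent verification: intercept each proposed action before it executes."""
    for rule in policy["actions"]["allow"]:
        if rule["tool"] != tool:
            continue
        if "path_glob" in rule:
            return fnmatch.fnmatch(path, rule["path_glob"])
        return True
    return False


assert outbound_allowed("api.github.com", 443)
assert not outbound_allowed("attacker.example", 443)  # denied by default
assert action_allowed("read_file", "/workspace/src/main.py")
assert not action_allowed("read_file", "/etc/shadow")
```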
The agent does not know it is inside NemoClaw. In-policy actions return normally. Out-of-policy actions get a configurable denial.
Observability is the strongest layer. A real-time Terminal User Interface logs every action, every network request, every blocked connection. The audit trail is complete. The problem is cost: operator load scales linearly with agent activity. Every new endpoint requires manual approval. Observation quality is high. Autonomy is low. That ratio gets expensive fast in production environments running dozens of agents.
Durability is the gap nobody’s talking about. Agent state persists as files inside the sandbox. If the sandbox fails, the state goes with it. No external session recovery mechanism exists. Long-running agent tasks carry a durability risk that security teams need to price into deployment planning before they hit production.
The credential proximity gap
Both architectures are a real step up from the monolithic default. Where they diverge is the question that matters most to security teams: how close do credentials sit to the execution environment?
Anthropic removes credentials from the blast radius entirely. If an attacker compromises the sandbox through prompt injection, they get a disposable container with no tokens and no persistent state. Exfiltrating credentials requires a two-hop attack: influence the brain’s reasoning, then convince it to act through a container that holds nothing worth stealing. Single-hop exfiltration is structurally eliminated.
NemoClaw constrains the blast radius and monitors every action inside it. Four security layers limit lateral movement. Default-deny networking blocks unauthorized connections. But the agent and generated code share the same sandbox. Nvidia’s privacy router keeps inference credentials on the host, outside the sandbox. But messaging and integration tokens (Telegram, Slack, Discord) are injected into the sandbox as runtime environment variables. Inference API keys are proxied through the privacy router and not passed into the sandbox directly. The exposure varies by credential type. Credentials are policy-gated, not structurally removed.
That distinction matters most for indirect prompt injection, where an adversary embeds instructions in content the agent queries as part of legitimate work: a poisoned web page, a manipulated API response. The intent verification layer evaluates what the agent proposes to do, not the content of data returned by external tools. Injected instructions enter the reasoning chain as trusted context, in close proximity to execution.
In the Anthropic architecture, indirect injection can influence reasoning but cannot reach the credential vault. In the NemoClaw architecture, injected context sits next to both reasoning and execution inside the shared sandbox. That is the widest gap between the two designs.
NCC Group’s David Brauchler, Technical Director and Head of AI/ML Security, advocates for gated agent architectures built on trust segmentation principles where AI systems inherit the trust level of the data they process. Untrusted input, restricted capabilities. Both Anthropic and Nvidia move in this direction. Neither fully arrives.
The zero-trust architecture audit for AI agents
The audit grid covers three vendor patterns across six security dimensions, five actions per row. It distills to five priorities:
Audit every deployed agent for the monolithic pattern. Flag any agent holding OAuth tokens in its execution environment. The CSA data shows 43% use shared service accounts. Those are the first targets.
Require credential isolation in agent deployment RFPs. Specify whether the vendor removes credentials structurally or gates them through policy. Both reduce risk. They reduce it by different amounts with different failure modes.
Test session recovery before production. Kill a sandbox mid-task and verify state survives (a test sketch follows this list). If it does not, long-horizon work carries a data-loss risk that compounds with task duration.
Staff for the observability model. Anthropic’s console tracing integrates with existing observability workflows. NemoClaw’s TUI requires an operator-in-the-loop. The staffing math is different.
Track indirect prompt injection roadmaps. Neither architecture fully resolves this vector. Anthropic limits the blast radius of a successful injection. NemoClaw catches malicious proposed actions but not malicious returned data. Require vendor roadmap commitments on this specific gap.
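A minimal version of the session-recovery test from the third priority might look like the following; the agent_client object and its methods are hypothetical stand-ins for whatever SDK an agent platform exposes.

```python
# Hypothetical recovery test: kill the sandbox mid-task, then verify the
# resumed session kept its state. Client and method names are stand-ins.
import time


def test_session_survives_sandbox_kill(agent_client):
    session = agent_client.start_session(task="summarize the Q3 incident reports")

    # Let the agent make some progress, then simulate a container crash.
    time.sleep(30)
    checkpoint = agent_client.get_session_state(session.id)
    agent_client.kill_sandbox(session.id)

    # A durable architecture should resume from the external session log.
    resumed = agent_client.resume_session(session.id)
    assert resumed.state["step"] >= checkpoint["step"], (
        "state was lost when the sandbox died; long-horizon tasks are at risk"
    )
```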
Zero trust for AI agents stopped being a research topic the moment two architectures shipped. The monolithic default is a liability. The 65-point gap between deployment velocity and security approval is where the next class of breaches will start.
XChat is now on the App Store, where its listing says that it’s expected to be available for download on April 17. This isn’t the same IRC app from the early aughts, which you may remember if you’re of a certain age. This is a messaging app specifically for X users. X chief Elon Musk first talked about rolling out a new version of his social network’s direct messaging feature in mid-2025. In a series of posts back then, he said the new version would be encrypted and would feature a “whole new architecture.” He also said all X users were getting XChat in June last year, but Musk is pretty infamous for being overly optimistic about timelines.
Now, instead of an upgraded DM feature on X, users are getting a standalone app. It allows them to chat with anybody on X and call each other across devices. The app is end-to-end encrypted and will let users edit and delete their messages for all participants in the conversation. It will also allow users to block screenshots and enable disappearing messages if they want the sensitive details they send in-chat to vanish within five minutes. The app allows users to create massive group chats with up to 481 members, as well. X promises in the App Store listing that XChat will not have ads and will not be tracking users.
Users can now pre-order XChat for iPhones and iPads so that it automatically downloads on their device when it comes out.
The partnership is aiming to deliver a quantum computer that can calibrate and run itself without the need for manual oversight.
Dublin-based quantum computing start-up Equal1 is to partner with Californian quantum infrastructure software maker Q-Ctrl for the deployment of rack-mounted quantum computers in enterprise data centres.
The companies said that together, their technologies will deliver “truly autonomous operation” for “peak performance without manual oversight” to address evolving challenges around performance and maintenance of enterprise quantum computing systems.
By combining Q-Ctrl’s infrastructure software, ‘Boulder Opal Scale Up’, with Equal1’s scalable hardware, a quantum computer will be able to calibrate and run itself without the need for manual oversight and implementation by expert teams, the companies said.
“Equal1 has already proven that quantum hardware can be compact, rack-mounted and data centre-ready,” said Jason Lynch, CEO of Equal1.
“Our partnership with Q-Ctrl further accelerates our mission by providing a fully autonomous software stack. With Boulder Opal Scale Up integrated into our Bell-series systems, our customers gain a self-optimising quantum accelerator that fits seamlessly into existing IT infrastructure.”
Claimed features offered to prospective data centre customers by the strategic partnership include autonomous operation and calibration of hardware; real-time system monitoring and maintenance for performance; secure local deployments for operation while disconnected from the internet; and “algorithmic enhancement” through compatibility with other Q-Ctrl software.
“To scale quantum computing, we must transition from manual hardware operation by expert teams of PhDs to autonomous functionality when fully deployed in data centres and HPC [high-performance computing] facilities,” said Aravind Ratnam, chief strategy officer at Q-Ctrl.
“Our partnership with Equal1 achieves this by integrating Q-Ctrl’s AI-driven autonomous calibration directly into their silicon spin qubit quantum systems. Together, these technologies provide HPC users with a seamless experience, enabling quantum processors to operate on equal footing with GPUs and CPUs.”
Equal1, which was founded in 2017 at University College Dublin, says quantum computing built on standard silicon is the way to overcome the challenges AI poses to the power and cost thresholds of traditional computers.
In January, it raised $60m through a funding round led by Ireland Strategic Investment Fund, with participation from Atlantic Bridge, the European Innovation Council Fund, Matterwave Ventures, Enterprise Ireland, Elkstone and TNO Ventures.
Its flagship ‘Bell-1’ device was launched in March 2025 and was described as the “first-ever” Irish-made quantum computer as well as the world’s first silicon-based quantum server designed for data centres and high-performance computing.
Q-Ctrl, founded in 2017, operates partnerships with companies such as IBM, Nvidia and AWS with the goal of making machines “thousands of times more powerful” using “AI-driven control solutions” for the enhancement of quantum computer performance.
A financially motivated threat actor tracked as Storm-2755 is stealing Canadian employees’ salary payments after hijacking their accounts in payroll redirection (also known as payroll pirate) attacks.
The attackers stole victims’ authentication tokens and session cookies using malicious Microsoft 365 sign-in pages. Victims were redirected to domains (e.g., bluegraintours[.]com) hosting web pages that masqueraded as Microsoft 365 sign-in forms and were pushed to the top of search engine results through malvertising or SEO poisoning.
This allowed Storm-2755 to bypass multifactor authentication (MFA) in adversary‑in‑the‑middle (AiTM) attacks by replaying stolen session tokens rather than re-authenticating.
“Rather than harvesting only usernames and passwords, AiTM frameworks proxy the entire authentication flow in real time, enabling the capture of session cookies and OAuth access tokens issued upon successful authentication,” Microsoft explained.
“Due to these tokens representing a fully authenticated session, threat actors can reuse them to gain access to Microsoft services without being prompted for credentials or MFA, effectively bypassing legacy MFA protections not designed to be phishing-resistant.”
Storm-2755 attack flow (Microsoft)
After gaining access to an employee’s account, the attacker created inbox rules that automatically moved messages from human resources staff containing the words “direct deposit” or “bank” to hidden folders, preventing the victim from seeing the correspondence.
In the next stage, they searched for “payroll,” “HR,” “direct deposit,” and “finance,” then sent emails to human resources staff with the subject line “Question about direct deposit” to trick staff into updating banking information.
Where social engineering failed, the attacker logged directly into HR software platforms such as Workday, using the stolen session to manually update direct deposit details.
Storm-2755 emailing HR staff (Microsoft)
To harden defenses against AiTM and payroll pirate attacks, Microsoft advises defenders to block legacy authentication protocols and implement phishing-resistant MFA.
If any signs of compromise are detected, they should also revoke compromised tokens and sessions immediately, remove malicious inbox rules, and reset MFA methods and credentials for all affected accounts.
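To find those malicious inbox rules before removing them, defenders can enumerate each mailbox’s rules and flag ones that both match payroll-related keywords and silently move mail. The sketch below uses the Microsoft Graph messageRules endpoint; it assumes an app-only access token with mailbox read permission is already in hand, and the keyword list simply mirrors the terms described in the attack above.

```python
# Hedged sketch: flag inbox rules that hide HR/payroll mail, as described above.
# Assumes an app-only Microsoft Graph token with mailbox read permission.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
SUSPICIOUS_TERMS = {"direct deposit", "bank", "payroll", "hr"}


def suspicious_inbox_rules(user_id: str, access_token: str) -> list[dict]:
    """Return inbox rules that match payroll-related keywords and move mail away."""
    resp = requests.get(
        f"{GRAPH}/users/{user_id}/mailFolders/inbox/messageRules",
        headers={"Authorization": f"Bearer {access_token}"},
        timeout=30,
    )
    resp.raise_for_status()

    flagged = []
    for rule in resp.json().get("value", []):
        conditions = rule.get("conditions") or {}
        keywords = [
            kw.lower()
            for field in ("subjectContains", "bodyContains", "bodyOrSubjectContains")
            for kw in (conditions.get(field) or [])
        ]
        hides_mail = bool((rule.get("actions") or {}).get("moveToFolder"))
        if hides_mail and any(term in kw for kw in keywords for term in SUSPICIOUS_TERMS):
            flagged.append({"rule": rule.get("displayName"), "keywords": keywords})
    return flagged
```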
In October, Microsoft disrupted another payroll pirate campaign, which had been targeting Workday accounts since March 2025, in which a cybercrime gang tracked as Storm-2657 targeted university employees across the United States to hijack their salary payments.
In these attacks, Storm-2657 breached the targets’ accounts via phishing emails and stole MFA codes using AiTM tactics, which allowed the threat actors to compromise the victims’ Exchange Online accounts.
Payroll pirate attacks are a variant of business email compromise (BEC) scams that target businesses and individuals who regularly make wire transfers. Last year, the FBI’s Internet Crime Complaint Center (IC3) recorded over 24,000 BEC fraud complaints, resulting in losses exceeding $3 billion, making it the second most lucrative crime type behind investment scams.
Francois Ajenstat, Golden Analytics founder and CEO.
Francois Ajenstat has been in business intelligence long enough to see two generational shifts, from the early days at Cognos to the self-service revolution at Tableau, ultimately serving as chief product officer at the Seattle-based data visualization company.
Now he’s launching Golden Analytics, a Seattle-based startup built on the premise that a third shift is underway, and the incumbents aren’t in a position to keep up.
The company emerged from stealth Tuesday with $7 million in seed funding co-led by NEA and Madrona, with participation from Breakers. The company is building a web-based business intelligence and data visualization platform that Ajenstat says combines the analytical depth of Tableau, the design sensibility of Canva, and the AI-powered workflow of Cursor.
“The current leaders in the space are Tableau, Power BI, Looker, and they’re doing a great job. They’re fantastic products, but they’re bolting on AI as opposed to building with AI at the core,” said Ajenstat, the company’s founder and CEO, in an interview. “And it just doesn’t feel right.”
Core features: In a demo of Golden Analytics on our call, Ajenstat uploaded a raw e-commerce dataset and had a finished dashboard in two clicks. The platform automatically interpreted the data, surfaced insights, suggested questions, and generated visualizations.
When he wanted to go deeper, he asked the AI to add a region field to a sales chart, and it complied instantly. He also showed a storytelling agent that generates written narrative analyses, identifying patterns like regional profitability gaps.
A dashboard generated by Golden Analytics from a raw e-commerce dataset. (Golden Analytics Image)
Central to the platform is what Ajenstat calls the “slider of autonomy” — users can let the AI do everything, do it all themselves, or land somewhere in between.
It’s a deliberate contrast to the chatbot-style AI analytics tools that have emerged since ChatGPT, which tend to position themselves as replacements for human analysts. Ajenstat isn’t buying that framing. “It’s about empowering people — whatever they want to do,” he said.
Technical details: The system runs about 120 different large-language model (LLM) calls through an orchestration layer that routes tasks to whichever model fits best, such as Gemini for visual design, Anthropic’s Claude for data analysis, and others.
Ajenstat calls it a platform of “AI specialists” rather than a single agent.
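The routing idea is straightforward to sketch. The snippet below shows a minimal task-to-model dispatcher in the spirit Ajenstat describes; the routing table, model names, and client interface are illustrative assumptions, not Golden Analytics’ implementation.

```python
# Illustrative task-to-model routing layer. The routing table and client
# interface are assumptions for the sketch, not Golden Analytics' code.
from dataclasses import dataclass
from typing import Callable


@dataclass
class ModelClient:
    name: str
    call: Callable[[str], str]  # prompt -> completion


class Orchestrator:
    """Routes each task type to whichever model is configured as the best fit."""

    def __init__(self, routes: dict[str, ModelClient], default: ModelClient):
        self.routes = routes
        self.default = default

    def run(self, task_type: str, prompt: str) -> str:
        model = self.routes.get(task_type, self.default)
        return model.call(prompt)


# Example wiring mirroring the article's examples: visual design to one model,
# data analysis to another. The lambdas stand in for real API clients.
gemini = ModelClient("gemini", lambda p: f"[gemini] {p}")
claude = ModelClient("claude", lambda p: f"[claude] {p}")

orchestrator = Orchestrator(
    routes={"visual_design": gemini, "data_analysis": claude},
    default=claude,
)
print(orchestrator.run("data_analysis", "Find regional profitability gaps"))
```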
Golden itself was built entirely using AI coding tools, with a small team of engineers. Ajenstat, a product leader by background, said he is also contributing to the development using Cursor and Claude Code. “It’s empowering to see how quickly you can build in this era.”
Availability: The company plans to follow a product-led growth model, letting individual users adopt the platform before it spreads within their organizations — similar to the path taken by tools like Cursor and Slack. About a dozen early users are already giving feedback, and Ajenstat said general availability is weeks away. Pricing has not been disclosed.
Background: Ajenstat, the startup’s sole founder, is a first-time CEO after a three-decade career in data analytics. He started out at Cognos, one of the original business intelligence companies, then spent a decade at Microsoft in product roles across SQL Server and Office.
He spent 13 years at Tableau, rising from senior director of product management to chief product officer, a role he held for more than seven years, helping guide the company through its IPO and its $15.7 billion acquisition by Salesforce in 2019.
Golden has five full-time employees and a fractional chief technology officer. The team includes engineers from Tableau, Snowflake, Apple, Microsoft, and other companies, drawn by the chance to rethink a category they felt had stalled, Ajenstat explained.
Investors: NEA’s role in the funding is notable. The firm was an early backer of Tableau. After Ajenstat left the company, he spent nearly two years as an NEA venture advisor while also serving as CPO at Amplitude, the San Francisco-based product analytics company.
From left, Madrona’s Tim Porter, Golden Analytics founder Francois Ajenstat, and Madrona’s Mark Nelson, flashing the “data rockstar” hand gesture that is a signature of Ajenstat’s keynotes. (Madrona Photo)
The Madrona side of the investment also has deep Tableau ties. Madrona venture partner Mark Nelson served as Tableau’s president and CEO from 2021 to 2022, and managing director Tim Porter has known Ajenstat since his Microsoft SQL Server days.
In a post on LinkedIn, Porter noted that big BI platforms are now controlled by Salesforce, Microsoft, and Google, and have evolved toward the priorities of their parent companies.
“That’s how large organizations work,” Porter wrote. “But it means the analyst — the person actually doing the work — has been an afterthought for a while now.”
It’s the second analytics startup with deep Tableau roots to emerge this week.
On Monday, we reported on Ridge AI, led by former Tableau product leader Ellie Fields and UW professor Jeffrey Heer, also backed by Madrona’s Porter and Nelson. Ridge is focused on embedded analytics for SaaS companies, a different market than Golden’s broader BI play.
An anonymous reader quotes a report from Ars Technica: The Trump administration has stepped up an effort to unmask a Reddit user who criticized Immigration and Customs Enforcement (ICE). After failing to obtain information through a summons issued (PDF) to Reddit, the government reportedly issued a subpoena demanding that Reddit provide the information and appear before a grand jury in Washington, DC. The Intercept described the subpoena today. “According to a subpoena obtained by The Intercept, Reddit has until April 14 to provide a wide range of personal data on one of its users, whom US Immigration and Customs Enforcement agents have been trying unsuccessfully to identify for more than a month,” the article said.
The legal saga began in US District Court for the Northern District of California. On March 12, the anonymous Reddit user whose information is being sought filed a motion (PDF) to quash a summons seeking a host of information from Reddit. The summons was issued by the Department of Homeland Security and directed Reddit to turn information over to an ICE senior special agent. The summons cited authority under 19 U.S. Code 1509, which is part of the Smoot-Hawley Tariff Act of 1930. The motion to quash said the summons is not authorized by the law, which deals with imports of boats, alcoholic drinks, and animals, among other things.
“J. Doe is a US citizen who has not traveled out of the country, is not engaged in any international commerce, has no business concerns outside the United States, and primarily uses their Reddit account to engage in political speech relevant to their local community,” said the filing by the Civil Liberties Defense Center (CLDC), which represents the Reddit user. “Yet the government claims the right to obtain Doe’s name, telephone number, home address, banking and credit card information, IP addresses, telephone model number(s), and the names of any other accounts associated with their Reddit account. The information sought by the government in no way pertains to customs or importing or exporting merchandise, and is clearly intended to chill free speech.” “We should be very, very, very concerned that they’ve now taken one of these to a grand jury,” said David Greene, senior counsel for the Electronic Frontier Foundation. “It’s something to be taken very seriously.”
A Reddit spokesperson told Ars today that “we seek to inform users of any legal process compelling disclosure of their data, as we did in this case, because users should have the agency to protect their own information and are often better positioned to challenge requests that impact them.”
“We do not voluntarily share information with any government, especially not on users exercising their rights to criticize the government or plan a protest. We review every inquiry for legal sufficiency and routinely object to requests that are overbroad or threaten civil rights. When legally compelled to disclose data, we provide only the minimum required and notify the user whenever possible so they can defend their interests.”
NASA’s Artemis II crew of four astronauts from the United States and Canada is set to return to Earth on Friday after a historic trip to the far side of the moon.
Reid Wiseman, Victor Glover, Christina Koch, and Jeremy Hansen have spent 10 days aboard the Orion spacecraft. They are expected to begin re-entry at 7:33 p.m. ET, with splashdown at 8:07 p.m.
NASA has a live feed for when the crew lands in the Pacific Ocean later today. The Orion spacecraft is expected to splash down off the coast of San Diego, California.
The Artemis II mission marks the first time humans have ventured to the moon’s orbit in more than 50 years. The crew traveled farther from Earth than any humans have before, reaching an estimated 252,760 miles from our planet. That’s the same distance as traveling between New York City and Los Angeles around 100 times, only the astronauts are inside a capsule with 330 cubic feet of habitable space, which is about the size of two minivans.
The objective of the Artemis II mission is to collect data and insights that will help NASA prepare for future lunar missions and landings — the astronauts put the Orion spacecraft through planned tests to evaluate how it performs with a crew in deep space. This involves testing communication systems with colleagues on Earth, making trajectory adjustments, and making a safe re-entry and splashdown.
The splashdown could be one of the most dangerous moments of the whole mission. On the Artemis I mission in 2022, which did not have a crew, Orion’s protective heat shield was unexpectedly damaged upon its return to Earth. The heat shield is made of AVCOAT, a material designed to slowly burn away and protect the crew from temperatures approaching 5,000 degrees Fahrenheit as the capsule re-enters Earth’s atmosphere. But the shield was charred and cracking in places, which was not supposed to happen.
If humans had been aboard Artemis I, they would’ve still returned safely, NASA said. The agency has also conducted extensive research on how the heat shield was damaged in the first place. Still, the heat shield remains top of mind as people around the world hope to see these four astronauts return safely.
The crew left Earth on April 1, and the astronauts quickly encountered some mundane mishaps, including issues with Microsoft Office and their toilet. But these early moments were easily overshadowed by the wonder of the images and information that the crew sent back from the moon. You can already see new photos from the lunar flyby on the dark side of the moon.
The astronauts also named new craters, including one that was named after mission commander Wiseman’s late wife Carroll, who died of cancer in 2020 at age 46.
The crew was also able to witness a total solar eclipse from just a few thousand miles away from the moon, a unique vantage point that no astronaut had experienced before.
“It wasn’t just an eclipse with the Sun hidden behind the Moon,” Koch, the crew’s mission specialist, explained. “We could also see earthshine, the Sun’s light reflecting off Earth, wrapping the Moon in a soft, borrowed glow.”
An anonymous reader quotes a report from the New York Times: As the Trump administration seeks to fill a national shortage of air traffic controllers, officials are targeting a new talent pool: gamers. The Federal Aviation Administration on Friday is making a recruiting push aimed at avid players of video games, as the agency strives to fill thousands of vacancies that lawmakers have said leave the traveling public less safe. In a new YouTube ad, the agency is using flashy graphics and the promise of six-figure salaries to convince video game enthusiasts to apply their trigger fingers in service of air safety.
In recent years, video gamers have emerged as a target demographic for recruiters at a number of federal agencies, including the military and the Department of Homeland Security. They are welcomed for their hand-eye coordination, quick decision-making in complex environments and ability to remain focused on screens for hours on end. “To reach the next generation of air traffic controllers, we need to adapt,” Transportation Secretary Sean Duffy said in a statement. Focusing recruiting efforts on gamers, he added, “taps into a growing demographic of young adults who have many of the hard skills it takes to be a successful controller.”
[…] The F.A.A. plans to begin prioritizing recruiting gamers over more traditional avenues like college fairs, officials said, pointing out that only 25 percent of controllers have a traditional college degree, while the vast majority appear to have logged hours gaming. During the presidential transition in 2024, incoming Trump administration officials polled about 250 new air traffic academy graduates over six weeks. Only two of those interviewed were not gamers, according to F.A.A. officials […]. Students who failed out of the training academy were not similarly queried, officials said, though they have plans to conduct more comprehensive exit interviews in the future. Still, the overwhelming presence of gaming habits among graduates tracked with what they were hearing anecdotally from controllers already certified to work in towers and other air traffic facilities, the officials said, many of whom liked to play video games during breaks in their shifts.
Amazon is ending support for third-party integrations on its Luna cloud gaming service. The most immediate changes mean that it’s no longer possible to buy Ubisoft+ and Jackbox Games subscriptions or standalone games through Luna.
Amazon will automatically cancel any active subscriptions bought through Luna at the end of customers’ next billing cycle. If you have a Ubisoft+ subscription that you bought directly from Ubisoft instead, you’ll still be able to access games on that service through Luna until June 10.
The Bring Your Own Library option, which allows users to play games they own on the likes of EA, GOG and Ubisoft on Luna, is going away too. You won’t be able to access games from those storefronts via Amazon’s streaming service after June 3.
If you bought any games outright on Luna, you’ll still be able to play them there until June 10. Unlike Google when it shut down Stadia, Amazon isn’t offering refunds for those purchases. However, you’ll still have access to them through the respective third-party platform that’s linked to your account, be it the EA App, GOG Galaxy or Ubisoft Connect.
That doesn’t exactly help folks who don’t have powerful-enough systems to play more demanding games and were relying on Luna. As such, some people might need to turn to the likes of GeForce Now in order to keep playing games they bought through Luna (and they’ll need to hope GFN actually supports their specific games).
Amazon has been reshaping Luna over the last several months. It rolled out a revamped version of the service back in October, with more of a focus on GameNight party games that you can play with a smartphone.
Prime subscribers will still be able to claim PC games and stream games on the Luna Standard tier at no extra cost. The Luna Premium subscription, which includes a wider range of third-party games, is still available too.
“We’re doubling down on a broad range of gaming experiences, including strong third-party titles, delivered in ways that make great games more accessible, as well as new and unique gaming experiences like GameNight,” Amazon wrote in an email to Luna users. The company also said it will offer some folks a free Luna Premium subscription.
Claude creator Anthropic is considering designing its own chips as demand from advanced AI systems drives a chip shortage, sources told Reuters.
Anthropic continues to grab the headlines this week, as it fights the US administration in the courts and the power of its unreleased Claude Mythos model strikes fear into the hearts of much of the industry, given its ability to exploit security vulnerabilities.
Now Reuters is citing sources that say Anthropic is looking closely at the possibility of building its own chips, amid industry concerns that the supply of sophisticated chips required for new AI systems from itself and its competitors may not keep pace. Rivals Meta and OpenAI already have such projects underway.
Earlier this week, Anthropic announced a new expanded agreement that will allow it to tap 3.5GW of Google’s tensor processing unit (TPU) capacity from Broadcom.
In a regulatory filing on 6 April, Broadcom said that Anthropic’s consumption of TPU capacity is dependent on its continued commercial success. The multi-gigawatt capacity is expected to come online in 2027.
Last October, Anthropic and Google announced a deal worth “tens of billions of dollars” for 1m of Google’s TPUs. The deal is expected to bring more than 1GW of AI compute capacity online for Anthropic this year. The new agreement deepens that relationship, Anthropic said. Broadcom said that it is in a long-term agreement with Google to develop and supply custom TPUs.
Anthropic already has multibillion-dollar deals for compute capacity with companies such as Nvidia and Microsoft. It runs Claude on a range of AI hardware, including Amazon Web Services’ Trainium, Google TPUs and Nvidia GPUs. Amazon is Anthropic’s primary cloud provider and training partner.
Anthropic said that a vast majority of the new compute will be situated in the US, expanding on its $50bn commitment to strengthening the country’s computing infrastructure.
Demand for Anthropic’s AI tools has accelerated in 2026. Recent data shows that Anthropic is now capturing more than 73pc of all spending among companies buying AI tools for the first time, while its rival OpenAI is down to around 27pc.
According to the company, revenue run rate has already surpassed $30bn, up from around $9bn at the end of 2025. More than 1,000 of Anthropic’s business customers spend more than $1m on an annualised basis, doubling in less than two months, it added.
Given the growing fight for compute power and the well-reported chip shortage, it would not be a surprise for Anthropic to look into the admittedly costly business of designing its own chips. However, the sources said that no project team has yet been set up and no firm plans are in place.
In short: Estonia and Belgium are the only two EU member states to have declined the Jutland Declaration, an October 2025 pan-European commitment to restrict children’s access to social media. Estonia’s ministers argue that age-based bans are unenforceable, that children will find ways around them, and that the correct approach is to enforce the GDPR against the platforms themselves and invest in digital literacy rather than restricting young people’s participation in the information society.
The declaration most EU countries signed
On 10 October 2025, digital ministers from 25 of the European Union’s 27 member states signed the Jutland Declaration at an informal gathering in Horsens, Denmark. Norway and Iceland also signed. The declaration is a non-binding political commitment to introduce privacy-preserving age verification on social media platforms, protect minors from addictive design features and dark patterns, and work toward what the document describes as a “digital legal age” for access to online services. Estonia and Belgium were the two EU members that declined. Belgium’s refusal came from a veto by Flemish Media Minister Cieltje Van Achter, who described the declaration’s age verification requirements as disproportionate and objected to requiring children to use national identity systems such as Itsme to access services like YouTube or Instagram. Estonia’s refusal was substantively different: principled rather than procedural, and rooted in a broader argument about where Europe’s regulatory effort should be directed.

The political momentum the declaration reflects is considerable. Europe’s social media age shift accelerated through 2025 and into 2026, with Australia implementing the world’s first ban on under-16s from December 2025, France passing legislation in January 2026 to prohibit under-15s, Spain enacting restrictions for under-16s in February 2026, and Austria moving to restrict children under 14. Greece announced it would ban under-15s from social media from 2027, part of a six-country EU grouping that also includes Denmark, France, Austria, Portugal, and Spain. On 20 November 2025, the European Parliament backed a non-binding resolution calling for an EU-wide digital minimum age of 16 by 483 votes to 92, with 86 abstentions, and called on the European Commission to incorporate the measure into the forthcoming Digital Fairness Act.
Why Estonia said no
Estonia’s dissent is articulated by two ministers who have approached the question from different but complementary angles. Kristina Kallas, Minister of Education and Research, has been the more outspoken critic of the ban consensus. At a Politico forum in Barcelona, Kallas argued that age restrictions place responsibility on the wrong party. “The way to approach this, to me, is not to make kids responsible for that harm and start self-regulating,” she said. Her corresponding argument is that the responsibility should fall on the platforms. “Europe pretends to be weak when it comes to big American and international corporations,” she told the forum, challenging the EU to “actually take this power and start regulating the big American corporations.” She was also direct about the practical limits of ban-based approaches: “kids will find very quickly the ways to go around and to still use social media.” That argument connects to Europe’s broader effort to assert its regulatory power over American technology companies, a project that has gathered considerable momentum since 2025 but has not yet been applied with comparable force to social media content governance.

Liisa-Ly Pakosta, Minister of Justice and Digital Affairs, has framed the positive case for Estonia’s preferred approach. “Estonia believes in an information society and including young people in the information society,” she has said, emphasising digital participation rather than exclusion. Pakosta has pointed to the General Data Protection Regulation as the enforcement mechanism already available: the GDPR prohibits platforms from processing children’s personal data without appropriate consent and carries fines of up to 4% of global annual turnover for violations. Estonia’s argument, in essence, is that Europe has not exhausted its existing tools before reaching for a new and unproven one.
The enforcement problem Estonia is pointing to
Estonia’s critique of the ban model has a concrete reference point. Australia became the first country in the world to enforce a social media ban for minors on 10 December 2025, prohibiting anyone under 16 from holding accounts on platforms including Instagram, TikTok, YouTube, Snapchat, X, and Facebook. Platforms face fines of up to approximately A$50 million for failing to take reasonable steps to prevent underage access. In the months after the ban came into force, the eSafety Commissioner found Meta, TikTok, and YouTube were not complying with the ban, with the regulator proceeding to court action against the platforms. The compliance picture was bleak: seven in ten children who had held social media accounts before the ban still had active accounts after it took effect. Workarounds including VPNs, false birth dates, and the transfer of accounts to adult relatives proved straightforward and were widely adopted.

Whether the Australian experience represents the definitive verdict on the ban model, or merely an early implementation struggle that stricter enforcement will eventually resolve, remains contested. What is not contested is that the world’s first and most closely watched age ban produced a high rate of non-compliance within months of introduction, and that this outcome was predicted in advance by critics who argued the compliance burden would be met by creative circumvention rather than by genuine restriction.
What comes next in Brussels
The practical arena for the contest between Estonia’s platform-enforcement approach and the ban-majority’s position is the Digital Fairness Act, the European Commission’s forthcoming legislation targeting addictive design, dark patterns, and manipulative commercial practices in digital services. The European Parliament’s November 2025 vote made explicit that it wants a 16-plus digital minimum age incorporated into the DFA text, along with bans on engagement-based recommender algorithms for users who are minors, restrictions on loot boxes, and a default-off requirement for infinite scroll, autoplay, and pull-to-refresh mechanisms on services used by young people. The Commission is expected to table the DFA proposal in the fourth quarter of 2026. That timeline gives Estonia a legislative window in which to argue for a platform-accountability framework to sit alongside, or in place of, an age-based access restriction.

The two approaches are not necessarily mutually exclusive, but they reflect genuinely different theories of where regulatory leverage is most effectively applied: against the commercial platforms that build and profit from the systems in question, or against the young people who have grown up treating social media as ordinary infrastructure. 2025 established AI as the defining technology of the decade, and as AI-powered recommendation systems become the primary mechanism by which young people encounter content online, the question of who bears legal and regulatory responsibility for what those systems serve to a 14-year-old is one that Europe will have to answer in law, not just in declarations.