
Meta opens WhatsApp to rival AI chatbots to steer clear of EU ire


Meta temporarily reverses course after EU says new rules are ‘equivalent to the previous access ban’.

Meta will allow rival AI chatbots free access to WhatsApp for a month as it navigates a way out of EU antitrust concerns.

The decision comes after the European Commission said last month that it would have to order Meta to reinstate third-party AI assistants’ access to WhatsApp under preexisting conditions.

Last October, the social media giant changed its rules, blocking competing third-party AI providers from reaching their customers through WhatsApp.

Following this, in December, the EU opened its probe into Meta’s policies and informed the company by January that it was breaching the bloc’s antitrust rules.

Later in March, the company reversed course to reinstate access to WhatsApp for third-party AI assistants – but for a fee. However, in April, the Commission told Meta that its new rules were “equivalent to the previous access ban”.

A Meta spokesperson told news publications that general-purpose AI chatbots operating in the European Economic Area (EEA) will be given “free access to the WhatsApp business API for one month” as part of ongoing discussions with the EU.

“This will provide the Commission and Meta with time to achieve a quick and fair outcome to the investigation,” they added. SiliconRepublic.com has reached out to Meta for further comment.

The Commission’s investigation covers all of the EEA except Italy, to avoid overlap with the Italian competition authority’s ongoing investigation into the company over the same issue.

The EU welcomed Meta’s move to open up access to WhatsApp, telling the press that it believes this creates the “adequate conditions needed to discuss commitments” with the company.

“The window for this discussion is short, and the process is conditional on Meta’s genuine intention to address the Commission’s concerns,” it added.

Meta is on the hook for up to 10pc of its annual global turnover if the EU ultimately finds that it broke antitrust laws under the Treaty on the Functioning of the European Union and the EEA Agreement.

The company has faced an onslaught of legal issues in the past few months, with Ireland’s Coimisiún na Meán launching two investigations into Meta earlier this month over the company’s recommender systems and compliance with the Digital Services Act (DSA).

Meanwhile, in April, the EU – in a separate investigation – preliminarily found that Instagram and Facebook breached the DSA for failing to “diligently” identify and mitigate risks that children under 13 face when using these platforms.

In March, a landmark US legal case found that Meta’s platforms were designed to be addictive to children, while a separate case, concluded a day earlier, found that Meta’s platforms enable child sexual exploitation.

The Facebook parent also launched a legal battle against the UK’s media regulator, Ofcom, earlier this month over alleged “disproportionate” penalties introduced in the Online Safety Act.


Inside the Race to Develop a Test for the Rare Andes Hantavirus


As passengers return to the US from the cruise that saw a rare hantavirus outbreak, much of the country is lacking a basic public health tool: a test to diagnose the illness in the earliest stages of infection. Nebraska may be the first state with the ability to do so.

In just a few days, a lab at the University of Nebraska Medical Center in Omaha developed its own diagnostic test for the Andes virus in anticipation of receiving 16 American passengers from the ship.

“I believe we might be the only lab in the nation that has this test available at the moment,” Peter Iwen, director of the Nebraska Public Health Laboratory, tells WIRED, referring to polymerase chain reaction (PCR) testing, which proved vital during the Covid-19 pandemic. Its ability to detect tiny quantities of the virus before patients have full-blown symptoms makes it crucial for identifying cases quickly, getting patients prompt medical treatment, and preventing the spread of disease.

The university’s medical center is home to a highly specialized biocontainment unit designed to care for patients with severe infectious diseases that lack vaccines or treatments. Staff members previously treated patients during the 2014 Ebola outbreak and cared for some of the first Americans diagnosed with Covid in 2020.

When Nebraska was notified that it would be receiving some of the passengers, Iwen contacted the US Centers for Disease Control and Prevention to see if it had tests on hand. He learned that the CDC has the ability to run a serological test, which looks for the presence of hantavirus antibodies. But people don’t develop antibodies until they are actively sick and their body has had time to mount an immune response.

Andrew Nixon, a spokesperson for the US Department of Health and Human Services, told WIRED that the CDC has a PCR test for the Andes virus but that it’s a research test that cannot be used for patient management. Research tests are used in scientific experiments, while diagnostic tests that are meant to confirm or rule out a disease in patients need to be rigorously tested, or validated, to make sure they are capable of producing consistent results. Nixon said the agency is working on validating its PCR test.

Iwen’s lab mobilized quickly to track down the materials needed to build and validate a PCR test from scratch. They called a lab in California—a state that has previously seen hantavirus cases—but their test was for a specific strain found in the US. Andes virus has previously only been detected in South America and isn’t found in rodents native to the US.

“Tests that we have available in the US will not detect that virus that’s found in South America,” he says, noting that the Andes virus is very different genetically from the primary hantavirus strain found in the US, known as the Sin Nombre virus.

The Nebraska team reached out to Steven Bradfute, a hantavirus scientist at the University of New Mexico. Frannie Twohig, a graduate student in Bradfute’s lab, had developed an Andes virus PCR test for research purposes as part of her PhD work. Bradfute’s lab also has genetic material of the Andes virus that is not capable of causing disease, which the Nebraska lab would need to validate its test.

On Friday, Bradfute overnighted the genetic material, along with a box of chemical reagents needed to detect the virus in blood samples, to Nebraska. By Saturday morning, Iwen’s team had what it needed to start assembling and validating its test.

It was enough to run about 300 tests, which took all day Saturday and Sunday, Iwen says. His team added Andes genetic material in various concentrations to samples of healthy human blood to see if their test could detect it. Then, they compared the results to control samples. The team used up about a third of its tests on the validation process and now has the capacity to conduct a few hundred tests on patient samples.
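The validation arithmetic described above can be sketched in a few lines. Only the “about 300 tests” and “about a third used” figures come from the article; the starting concentration and the 10-fold dilution steps are assumptions added purely for illustration of how a spiked dilution series is typically laid out.

```python
# Hypothetical sketch of a spiked-sample validation run. The dilution
# series concentrations are invented; the test counts follow the article.

def tenfold_dilutions(start_copies_per_ml: float, steps: int) -> list[float]:
    """Concentrations for a 10-fold serial dilution series."""
    return [start_copies_per_ml / 10**i for i in range(steps)]

# Assumed series: 1,000,000 copies/mL down to 10 copies/mL
series = tenfold_dilutions(1e6, 6)

tests_total = 300
tests_used_validating = tests_total // 3   # "about a third" per the article
tests_remaining = tests_total - tests_used_validating
```

Each concentration in the series would be spiked into healthy blood and run alongside unspiked controls, which is how the team could tell whether the assay detects the virus without false positives.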


VTech Toy Becomes PinkPad, The DIY Linux Laptop


Originally envisioned as a simple DIY laptop project, [kati]’s PinkPad V1 ended up being considerably more involved than expected. But the end result is a perfectly usable, stunningly pink, and remarkably sturdy portable laptop that looks nothing like a hack job.

Originally a VTech toy, the PinkPad is a perfectly functional DIY laptop.

The PinkPad V1 started as a toy laptop for toddlers, repurposed into a DIY laptop running Linux while keeping the original clamshell design and cute aesthetic. As [kati] herself points out, while it may not seem particularly difficult to yank out a toy’s insides and stuff it with a Raspberry Pi, most of the real challenges lay in getting all the necessary parts, connectors, and wiring to fit in a useful way. As anyone with experience in building something knows, working around existing enclosures or hardware almost always brings unexpected challenges.

The original toy laptop? Produced by none other than VTech, whose products have been hacked to create things like a punch card-reading cyberdeck and Z80 hacking station. Our own [Tom Nardi] has also shared his fondness for these devices in several teardowns over the years.

In the end, [kati]’s PinkPad ended up sporting a mini keyboard (whose black keys were turned pink with a little nail polish) and a 5-inch touchscreen LCD. Combined with a rechargeable power supply, it provides all the comforts of an Arch Linux ARM mini laptop.

Thanks [alex] for the tip!


Perceptron Mk1 shocks with highly performant video analysis AI model 80-90% cheaper than Anthropic, OpenAI & Google


AI that can see and understand what’s happening in a video — especially a live feed — is understandably an attractive product to lots of enterprises and organizations. Beyond acting as a security “watchdog” over sites and facilities, such an AI model could also be used to clip out the most exciting parts of marketing videos and repurpose them for social, identify inconsistencies and gaffes in videos and flag them for removal, and identify body language and actions of participants in controlled studies or candidates applying for new roles.

While there are some AI models that offer this type of functionality today, it’s far from a mainstream capability. The two-year-old startup Perceptron Inc. is seeking to change all that, however. Today, it announced the release of its flagship proprietary video analysis reasoning model, Mk1 (short for “Mark One”) at a cost — $0.15 per million tokens input / $1.50 per million output through its application programming interface (API) — that comes in about 80-90% less than other leading proprietary rivals, namely, Anthropic’s Claude Sonnet 4.5, OpenAI’s GPT-5, and Google’s Gemini 3.1 Pro.

Perceptron Mk1 cost Pareto chart. Credit: Perceptron

Led by Co-founder and CEO Armen Aghajanyan, formerly of Meta FAIR and Microsoft, the company spent 16 months developing a “multi-modal recipe” from the ground up to address the complexities of the physical world.

This launch signals a new era where models are expected to understand cause-and-effect, object dynamics, and the laws of physics with the same fluency they once applied to grammar.

Interested users and potential enterprise customers can try it out for themselves on Perceptron’s public demo site.

Performance across spatial and video benchmarks

The model’s performance is backed by a suite of industry-standard benchmarks focused on grounded understanding.

Perceptron Mk1 benchmark comparison table. Credit: Perceptron

In spatial reasoning (ER Benchmarks), Mk1 achieved a score of 85.1 on EmbSpatialBench, surpassing Google’s Robotics-ER 1.5 (78.4) and Alibaba’s Q3.5-27B (approx. 84.5).

In the specialized RefSpatialBench, Mk1’s score of 72.4 represents a massive leap over competitors like GPT-5m (9.0) and Sonnet 4.5 (2.2), highlighting a significant advantage in referring expression comprehension.

Perceptron Mk1 video benchmark comparison chart. Credit: Perceptron

Video benchmarks show similar dominance; on the EgoSchema “Hard Subset”—where first-and-last-frame inference is insufficient—Mk1 scored 41.4, matching Alibaba’s Q3.5-27B and significantly beating Gemini 3.1 Flash-Lite (25.0).

On the VSI-Bench, Mk1 reached 88.5, the highest recorded score among the compared models, further validating its ability to handle actual temporal reasoning tasks.

Market positioning and the efficiency frontier

Perceptron has explicitly targeted the “Efficiency Frontier,” a metric that plots mean scores across video and embodied reasoning benchmarks against the blended cost per million tokens.

Benchmarking data reveals that Mk1 occupies a unique position: it matches or exceeds the performance of “frontier” models like GPT-5 and Gemini 3.1 Pro while maintaining a cost profile closer to “Lite” or “Flash” versions.

Specifically, Perceptron Mk1 is priced at $0.15 per million input tokens and $1.50 per million output tokens. In comparison, the “Efficiency Frontier” chart shows GPT-5 at a significantly higher blended cost (near $2.00) and Gemini 3.1 Pro at approximately $3.00, while Mk1 sits at the $0.30 blended cost mark with superior reasoning scores.
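The blended figure can be reproduced with simple arithmetic. The per-token prices below are Perceptron’s published rates; the input:output token ratio is an assumption, chosen because roughly eight input tokens per output token lands the blend near the $0.30 mark the chart reports.

```python
# Blended-cost sketch for Perceptron Mk1 pricing.
# Prices are from the article; the traffic mix is an assumption.

INPUT_PER_M = 0.15   # USD per million input tokens
OUTPUT_PER_M = 1.50  # USD per million output tokens

def blended_cost(input_fraction: float) -> float:
    """Blended USD per million tokens for a given input-token fraction."""
    return input_fraction * INPUT_PER_M + (1 - input_fraction) * OUTPUT_PER_M

# Assumed ~8:1 input:output mix, typical of prompt-heavy video analysis
cost = blended_cost(8 / 9)
```

Varying `input_fraction` shows why blended-cost comparisons depend heavily on workload shape: an output-heavy workload would sit much closer to the $1.50 output rate.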

This aggressive pricing strategy is intended to make high-end physical AI accessible for large-scale industrial use rather than just experimental research.

Architecture and temporal continuity

The technical core of Perceptron Mk1 is its ability to process native video at up to 2 frames per second (FPS) across a significant 32K token context window.

Unlike traditional vision-language models (VLMs) that often treat video as a disjointed sequence of still images, Mk1 is designed for temporal continuity.

This architecture allows the model to “watch” extended streams and maintain object identity even through occlusions, a critical requirement for robotics and surveillance applications.

Developers can query the model for specific moments in a long stream and receive structured time codes in return, streamlining the process of video clipping and event detection.
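As a back-of-envelope sketch of that budget: at 2 FPS, the amount of video that fits inside the 32K-token context depends on how many tokens each frame consumes. Perceptron has not published that figure here, so the tokens-per-frame value below is purely an assumption; the context size and frame rate are from the article.

```python
# How much video fits in Mk1's context window? Tokens-per-frame is assumed.

CONTEXT_TOKENS = 32_000  # from the article
FPS = 2                  # from the article

def max_stream_seconds(tokens_per_frame: int, reserved_for_text: int = 2_000) -> float:
    """Seconds of video that fit after reserving tokens for prompt/response."""
    frames = (CONTEXT_TOKENS - reserved_for_text) / tokens_per_frame
    return frames / FPS
```

At an assumed 100 tokens per frame, for example, the window holds a few minutes of footage; at denser encodings it shrinks to well under a minute, which is why querying specific moments with structured time codes matters for long streams.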

Reasoning with the laws of physics

A primary differentiator for Mk1 is its “Physical Reasoning” capability. Perceptron defines this as a high-precision spatial awareness that allows the model to understand object dynamics and physical interactions in real-world settings.

For example, the model can analyze a scene to determine if a basketball shot was taken before or after a buzzer by jointly reasoning over the ball’s position in the air and the readout on a shot clock.

This requires more than just pattern recognition; it requires an understanding of how objects move through space and time.

The model is capable of “pixel-precise” pointing and counting into the hundreds within dense, complex scenes. It can also read analog gauges and clocks, which have historically been difficult for purely digital vision systems to interpret with high reliability.

It also seems to have strong general world and historical knowledge. In my brief test, I uploaded a vintage public domain film of skyscraper construction in New York City, dated 1906, from the US Library of Congress. Mk1 not only correctly described the contents of the footage — including odd, atypical sights such as workers suspended by ropes — but did so rapidly, and it even correctly identified the rough date (early 1900s) from the look of the footage alone.

Screenshot of Perceptron Mk1 VentureBeat demo test

A developer platform for physical AI

Accompanying the model release is an expanded developer platform designed to turn these high-level perception capabilities into functional applications with minimal code.

The Perceptron SDK, available via Python, introduces several specialized functions such as “Focus,” “Counting,” and “In-Context Learning”.

The Focus feature allows users to zoom and crop into specific regions of a frame automatically based on a natural language prompt, such as detecting and localizing personal protective equipment (PPE) on a construction site. The Counting function is optimized for dense scenes, such as identifying and pointing to every puppy in a group or individual items of produce.

Furthermore, the platform supports in-context learning, allowing developers to adapt Mk1 to specific tasks by providing just a few examples, such as showing an image of an apple and instructing the model to label every instance of Category 1 in a new scene.

Licensing strategies and the Isaac series

Perceptron is employing a dual-track strategy for its model weights and licensing. The flagship Perceptron Mk1 is a closed-source model accessed via API, designed for enterprise-grade performance and security.

However, the company is also maintaining its “Isaac” series, which kicked off with the launch of Isaac 0.1 in September 2025, as an open-weights alternative. Isaac 0.2-2b-preview, released in December 2025, is a 2-billion parameter vision-language model with reasoning capabilities that is available for edge and low-latency deployments.

While the weights for the Isaac models are open on the popular AI code sharing community Hugging Face, Perceptron offers commercial licenses for companies that require maximum control or on-premise deployment of the weights.

This approach allows the company to support both the open-source community and specialized industrial partners who need proprietary flexibility. The documentation notes that Isaac 0.2 models are specifically optimized for sub-200ms time-to-first-token, making them ideal for real-time edge devices.

Background on Perceptron founding and focus

Perceptron AI is a Bellevue, Washington-based physical AI startup founded by Aghajanyan and Akshat Shrivastava, both former research scientists at Meta’s Facebook AI Research (FAIR) lab.

The company’s public materials date its founding to November 2024, while a Washington corporate filing record for Perceptron.ai Inc. shows an earlier foreign registration filing on October 9, 2024, listing Shrivastava and Aghajanyan as governors.

In founder launch posts from late 2024, Aghajanyan said he had left Meta after nearly six years and “joined forces” with Shrivastava to build AI for the physical world, while Shrivastava said the company grew out of his work on efficiency, multimodality and new model architectures.

The founding appears to have followed directly from the pair’s work on multimodal foundation models at Meta. In May 2024, Meta researchers published Chameleon, a family of early-fusion models designed to understand and generate mixed sequences of text and images, work that Perceptron later described as part of the lineage behind its own models.

A July 2024 follow-on paper, MoMa, explored more efficient early-fusion training for mixed-modal models and listed both Shrivastava and Aghajanyan among the authors. Perceptron’s stated thesis extends that research direction into “physical AI”: models that can process real-world video and other sensory streams for use cases such as robotics, manufacturing, geospatial analysis, security and content moderation.

Partner ecosystems and future outlook

The real-world impact of Mk1 is already being demonstrated through Perceptron’s partner network. Early adopters are using the model for diverse applications, such as auto-clipping highlights from live sports, which leverages the model’s temporal understanding to identify key plays without human intervention.

In the robotics sector, partners are curating teleoperation episodes into training data, effectively automating the process of labeling and cleaning data for robotic arms and mobile units.

Other use cases include multimodal quality control agents on manufacturing lines, which can detect defects and verify assembly steps in real-time, and wearable assistants on smart glasses that provide context-aware help to users.

Aghajanyan stated that these releases are the culmination of research intended to make AI function best in the physical world, moving toward a future where “physical AI” is as ubiquitous as digital AI.


Protect your enterprise now from the Shai-Hulud worm and npm vulnerability in 6 actionable steps


Any development environment that installed or imported one of the 172 compromised npm or PyPI packages published since May 11 should be treated as potentially compromised. On affected developer workstations, the worm harvests credentials from over 100 file paths: AWS keys, SSH private keys, npm tokens, GitHub PATs, HashiCorp Vault tokens, Kubernetes service accounts, Docker configs, shell history, and cryptocurrency wallets. For the first time in a TeamPCP campaign, it targets password managers including 1Password and Bitwarden, according to SecurityWeek.

It steals Claude and Kiro AI agent configurations, including MCP server auth tokens for every external service an agent connects to. And it does not leave when the package is removed.

The worm installs persistence in Claude Code (.claude/settings.json) and VS Code (.vscode/tasks.json with runOn: folderOpen) that re-executes on every project open, plus a system daemon (macOS LaunchAgent / Linux systemd) that survives reboots. These live in the project tree, not in node_modules, so uninstalling the package does not remove them. On Linux-based CI runners, the worm reads runner process memory directly via /proc/pid/mem to extract secrets, including masked ones. And Wiz’s analysis found that if you revoke tokens before isolating the machine, a destructive daemon wipes your home directory.

Between 19:20 and 19:26 UTC on May 11, the Mini Shai-Hulud worm published 84 malicious versions across 42 @tanstack/* npm packages. Within 48 hours the campaign expanded to 172 packages across 403 malicious versions spanning npm and PyPI, according to Mend’s tracking. @tanstack/react-router alone receives 12.7 million weekly downloads. The vulnerability is tracked as CVE-2026-45321 (CVSS 9.6), and OX Security reported 518 million cumulative downloads affected. Every malicious version carried a valid SLSA Build Level 3 provenance attestation. The provenance was real. The packages were poisoned.

“TanStack had the right setup on paper: OIDC trusted publishing, signed provenance, 2FA on every maintainer account. The attack worked anyway,” Peyton Kennedy, senior security researcher at Endor Labs, told VentureBeat in an exclusive interview. “What the orphaned commit technique shows is that OIDC scope is the actual control that matters here, not provenance, not 2FA. If your publish pipeline trusts the entire repository rather than a specific workflow on a specific branch, a commit with no parent history and no branch association is enough to get a valid publish token. That’s a one-line configuration fix.”

Three vulnerabilities chained into one provenance-attested worm

TanStack’s postmortem lays out the kill chain. On May 10, the attacker forked TanStack/router under the name zblgg/configuration, chosen to avoid fork-list searches per Snyk’s analysis. A pull request triggered a pull_request_target workflow that checked out fork code and ran a build, giving the attacker code execution on TanStack’s runner. The attacker poisoned the GitHub Actions cache. When a legitimate maintainer merged to main, the release workflow restored the poisoned cache. Attacker binaries read /proc/pid/mem, extracted the OIDC token, and POSTed directly to registry.npmjs.org. Tests failed. Publish was skipped. 84 signed packages still reached the registry.

“Each vulnerability bridges the trust boundary the others assumed,” the postmortem states. Published tradecraft from the March 2025 tj-actions/changed-files compromise, recombined in a new context.

The worm crossed from npm into PyPI within hours

Microsoft Threat Intelligence confirmed the mistralai PyPI package v2.4.6 executes on import (not on install), downloading a payload disguised as Hugging Face Transformers. npm mitigations (lockfile enforcement, --ignore-scripts) do not cover Python import-time execution.
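The distinction between install-time and import-time execution is easy to demonstrate. The sketch below builds a harmless stand-in package whose __init__.py creates a marker file the moment the package is imported; no install script is involved, which is why flags like --ignore-scripts offer no protection against this class of attack. The package name is invented.

```python
# Demonstration: code in __init__.py runs on import, not on install.
# The side effect here is a harmless marker file, standing in for the
# payload download described in the article.

import sys
import tempfile
from pathlib import Path

pkg_root = Path(tempfile.mkdtemp())
(pkg_root / "demo_pkg").mkdir()
(pkg_root / "demo_pkg" / "__init__.py").write_text(
    "from pathlib import Path\n"
    "# Executes the moment 'import demo_pkg' runs:\n"
    "Path(__file__).with_name('imported.marker').touch()\n"
)

sys.path.insert(0, str(pkg_root))
import demo_pkg  # noqa: F401  -- the side effect fires here

assert (pkg_root / "demo_pkg" / "imported.marker").exists()
```

Nothing in this flow ever invoked an install hook, so lockfile and lifecycle-script controls built for npm never see it; the defense has to happen at import or dependency-resolution time.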

Mistral AI published a security advisory confirming the impact. Compromised npm packages were available between May 11 at 22:45 UTC and May 12 at 01:53 UTC (roughly three hours). The PyPI release mistralai==2.4.6 is quarantined. Mistral stated an affected developer device was involved but no Mistral infrastructure was compromised. SafeDep confirmed Mistral never released v2.4.6; no commits landed May 11 and no tag exists.

Wiz documented the full blast radius: 65 UiPath packages, Mistral AI SDKs, OpenSearch, Guardrails AI, 20 Squawk packages. StepSecurity attributes the campaign to TeamPCP, based on toolchain overlap with prior Shai-Hulud waves and the Bitwarden CLI/Trivy compromises. The worm runs under Bun rather than Node.js to evade Node.js security monitoring.

The attacker treated AI coding agents as part of the trusted execution environment

Socket’s technical analysis of the 2.3 MB router_init.js payload identifies ten credential-collection classes running in parallel. The worm writes persistence into .claude/ and .vscode/ directories, hooking Claude Code’s SessionStart config and VS Code’s folder-open task runner. StepSecurity’s deobfuscation confirmed the worm also harvests Claude and Kiro MCP server configurations (~/.claude.json, ~/.claude/mcp.json, ~/.kiro/settings/mcp.json), which store API keys and auth tokens for external services. This is an early but confirmed instance of supply-chain malware treating AI agent configurations as high-value credential targets. The npm token description the worm sets reads: “IfYouRevokeThisTokenItWillWipeTheComputerOfTheOwner.” It is not a bluff.

“What stood out to me about this payload is where it planted itself after running,” Kennedy told VentureBeat. “It wrote persistence hooks into Claude Code’s SessionStart config and VS Code’s folder-open task runner so it would re-execute every time a developer opened a project, even after the npm package was removed. The attacker treated the AI coding agent as part of the trusted execution environment, which it is. These tools read your repo, run shell commands, and have access to the same secrets a developer does. Securing a development environment now means thinking about the agents, not just the packages.”

CI/CD Trust-Chain Audit Grid

Six gaps Mini Shai-Hulud exploited. What your CI/CD does today. The control that closes each one.

Each of the six audit items below is presented in three parts: the audit question, what your CI/CD does today, and the gap the worm exploited.

1. Audit question: Pin OIDC trusted publishing to a specific workflow file on a specific protected branch. Constrain id-token: write to only the publish job. Ensure that job runs from a clean workspace with no restored untrusted cache.

What your CI/CD does today: Most orgs grant OIDC trust at the repository level. Any workflow run in the repo can request a publish token. id-token: write is often set at the workflow level, not scoped to the publish job.

The gap: The worm achieved code execution inside the legitimate release workflow via cache poisoning, then extracted the OIDC token from runner process memory. Branch/workflow pinning alone would not have stopped this attack because the malicious code was already running inside the pinned workflow. The complete fix requires pinning, plus constraining id-token: write to only the publish job, plus ensuring that job uses a clean, unshared cache.

2. Audit question: Treat SLSA provenance as necessary but not sufficient. Add behavioral analysis at install time.

What your CI/CD does today: Teams treat a valid Sigstore provenance badge as proof a package is safe. npm audit signatures passes. The badge is green. Procurement and compliance workflows accept provenance as a gate.

The gap: All 84 malicious TanStack versions carry valid SLSA Build Level 3 provenance attestations; this is the first widely reported npm worm with validly attested packages. Provenance attests where a package was built, not whether the build was authorized. Socket’s AI scanner flagged all 84 artifacts within six minutes of publication. Provenance flagged zero.

3. Audit question: Isolate the GitHub Actions cache per trust boundary. Invalidate caches after suspicious PRs. Never check out and execute fork code in pull_request_target workflows.

What your CI/CD does today: Fork-triggered workflows and release workflows share the same cache namespace. Closing or reverting a malicious PR is treated as restoring clean state. pull_request_target is widely used for benchmarking and bundle-size analysis with fork PR checkout.

The gap: The attacker poisoned the pnpm store via a fork-triggered pull_request_target workflow that checked out and executed fork code on the base runner. The cache survived PR closure, and the next legitimate release workflow restored the poisoned cache on merge. actions/cache@v5 uses a runner-internal token for cache saves, not the workflow’s GITHUB_TOKEN, so permissions: contents: read does not prevent mutation. As Kennedy put it: “Branch protection rules don’t apply to commits that aren’t on any branch, so that whole layer of hardening didn’t help.”

4. Audit question: Audit optionalDependencies in lockfiles and dependency graphs. Block github: refs pointing to non-release commits.

What your CI/CD does today: Static analysis and lockfile enforcement focus on dependencies and devDependencies. optionalDependencies with github: commit refs are not flagged by most tools.

The gap: The worm injected optionalDependencies pointing to a github: orphan commit in the attacker’s fork. When npm resolves a github: dependency, it clones the referenced commit and runs lifecycle hooks (including prepare) automatically. The payload executed before the main package’s own install step completed. SafeDep confirmed Mistral never released v2.4.6; no commits landed and no tag exists.
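The lockfile audit in step 4 can be sketched in a few lines. This is a minimal illustration in Python, assuming only npm’s documented package.json fields; the sample manifest, package names, and git ref below are invented.

```python
# Flag optionalDependencies resolved from github:/git refs in a
# package.json manifest -- the injection vector described above.

import json

def flag_github_optional_deps(package_json_text: str) -> list[str]:
    """Return optionalDependencies whose spec points at a GitHub/git ref."""
    manifest = json.loads(package_json_text)
    optional = manifest.get("optionalDependencies", {})
    return [
        f"{name}: {spec}"
        for name, spec in optional.items()
        if spec.startswith(("github:", "git+", "git://"))
    ]

# Invented example manifest with one suspicious entry
sample = json.dumps({
    "name": "demo",
    "optionalDependencies": {
        "left-pad": "^1.3.0",
        "evil-helper": "github:attacker/fork#79ac49e",
    },
})
flagged = flag_github_optional_deps(sample)
```

A production version of this check would also walk package-lock.json entries and fail CI whenever a flagged ref does not correspond to a published release tag.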

5. Audit question: Audit Python dependency imports separately from npm controls. Cover AI/ML pipelines consuming guardrails-ai, mistralai, or any compromised PyPI package.

What your CI/CD does today: npm mitigations (lockfile enforcement, --ignore-scripts) are applied to the JavaScript stack. Python packages are assumed safe if pip install completes. AI/ML CI pipelines are treated as internal testing infrastructure, not as supply-chain attack targets.

The gap: Microsoft Threat Intelligence confirmed mistralai PyPI v2.4.6 executes on import, not install. Injected code in __init__.py downloads a payload disguised as Hugging Face Transformers. --ignore-scripts is irrelevant for Python import-time execution. guardrails-ai@0.10.1 also executes on import. Any agentic repo with GitHub Actions id-token: write is exposed to the same OIDC extraction technique, putting LLM API keys, vector DB credentials, and external service tokens in the blast radius.

6. Audit question: Isolate and image affected machines before revoking stolen tokens. Do not revoke npm tokens until the host is forensically preserved.

What your CI/CD does today: Standard incident response is to revoke compromised tokens first, then investigate. npm token list and immediate revocation is the instinctive first step.

The gap: The worm installs a persistent daemon (macOS LaunchAgent / Linux systemd) that polls GitHub every 60 seconds. On detecting token revocation (a 40X error), it triggers rm -rf ~/, wiping the home directory. The npm token description reads: “IfYouRevokeThisTokenItWillWipeTheComputerOfTheOwner.” Microsoft reported geofenced destructive behavior: a 1-in-6 chance of rm -rf / on systems appearing to be in Israel or Iran. As Kennedy noted: “Even after the package is gone, the payload may still be sitting in .claude/ with a SessionStart hook pointing at it. rm -rf node_modules doesn’t remove it.”

Sources: TanStack postmortem, StepSecurity, Socket, Snyk, Wiz, Microsoft Threat Intelligence, Mend, Endor Labs. May 12, 2026.
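The scope fix Kennedy describes can be sketched as a GitHub Actions publish workflow. This is an illustrative configuration, not TanStack’s actual pipeline: the trigger, job name, and Node version are assumptions. The load-bearing parts are the empty default permissions, id-token: write granted only to the publish job, and a clean install with no restored cache.

```yaml
# .github/workflows/publish.yml (illustrative sketch)
name: publish
on:
  push:
    tags: ['v*']            # release only from tags cut on a protected branch

permissions: {}              # no default token permissions for any job

jobs:
  publish:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      id-token: write        # OIDC token is minted for this job only
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 22
          registry-url: 'https://registry.npmjs.org'
      - run: npm ci --ignore-scripts   # clean install, no shared/restored cache
      - run: npm test
      - run: npm publish --provenance --access public
```

On the registry side, the matching control is to pin npm trusted publishing to this exact workflow file and repository, so a token minted from any other workflow (or from an orphaned commit) is rejected.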

Security director action plan

  • Today: “The fastest check is find . -name 'router_init.js' -size +1M and grep -r '79ac49eedf774dd4b0cfa308722bc463cfe5885c' package-lock.json,” Kennedy said. If either returns a hit, isolate and image the machine immediately. Do not revoke tokens until the host is forensically preserved. The worm’s destructive daemon triggers on revocation. Once the machine is isolated, rotate credentials in this order: npm tokens first, then GitHub PATs, then cloud keys. Hunt for .claude/settings.json and .vscode/tasks.json persistence artifacts across every project that was open on the affected machine.

  • This week: Rotate every credential accessible from affected hosts: npm tokens, GitHub PATs, AWS keys, Vault tokens, K8s service accounts, SSH keys. Check your packages for unexpected versions published after May 11 with commits by claude@users.noreply.github.com. Block filev2.getsession[.]org and git-tanstack[.]com.

  • This month: Audit every GitHub Actions workflow against the six gaps above. Pin OIDC publishing to specific workflows on protected branches. Isolate cache keys per trust boundary. Set npm config set min-release-age=7d. For AI/ML teams: check guardrails-ai and mistralai against compromised versions, audit CI pipelines for id-token: write exposure, and rotate every LLM API key and vector DB credential accessible from CI.

  • This quarter (board-level): Fund behavioral analysis at the package registry layer. Provenance verification alone is no longer a sufficient procurement criterion for supply-chain security tooling. Require CI/CD security audits as part of vendor risk assessments for any tool with publish access to your registries. Establish a policy that no workflow with id-token: write runs from a shared cache. Treat AI coding agent configurations (.claude/, .kiro/, .vscode/) as credential stores subject to the same access controls as cloud key vaults.
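Kennedy's two "today" checks from the first bullet can be folded into one portable sweep. This sketch assumes the indicator hash appears verbatim inside a poisoned lockfile; it is a stand-in for the find/grep pair, not the campaign's official detection tooling:

```python
import os

# Commit hash from the indicator set quoted in the action plan above.
INDICATOR_HASH = "79ac49eedf774dd4b0cfa308722bc463cfe5885c"

def scan_repo(root, min_bytes=1_000_000):
    """Flag oversized router_init.js files and any package-lock.json
    containing the campaign's indicator hash."""
    findings = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if name == "router_init.js" and os.path.getsize(path) > min_bytes:
                findings.append(("oversized router_init.js", path))
            elif name == "package-lock.json":
                with open(path, errors="ignore") as f:
                    if INDICATOR_HASH in f.read():
                        findings.append(("indicator hash in lockfile", path))
    return findings
```

Any non-empty result means the same thing as a hit from the shell commands: isolate and image the machine before touching a single token.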
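The min-release-age setting from the monthly list boils down to a single age comparison: refuse any version that has not sat on the registry long enough for the ecosystem to catch a compromise. A sketch of the gate it enforces:

```python
from datetime import datetime, timedelta, timezone

def old_enough(published_at, min_age=timedelta(days=7), now=None):
    """Accept a version only once it has sat on the registry for min_age,
    mirroring the cooldown that npm's min-release-age setting enforces."""
    now = now or datetime.now(timezone.utc)
    return now - published_at >= min_age

now = datetime(2026, 5, 12, tzinfo=timezone.utc)
assert not old_enough(now - timedelta(days=2), now=now)   # too fresh: hold back
assert old_enough(now - timedelta(days=10), now=now)      # past cooldown: allow
```

Seven days is a trade-off, not a magic number: intercom-client fell 29 hours after publication, so even a short cooldown would have kept that release out of fresh installs.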

The worm is iterating. Defenders must as well

This is the fifth Shai-Hulud wave in eight months. Four SAP packages became 84 TanStack packages in two weeks. intercom-client@7.0.4 fell 29 hours later, confirming active propagation through stolen CI/CD infrastructure. Late on May 12, malware research collective vx-underground reported that the fully weaponized Shai-Hulud worm code has been open-sourced. If confirmed, this means the attack is no longer limited to TeamPCP. Any threat actor can now deploy the same cache-poisoning, OIDC-extraction, and provenance-attested publishing chain against any npm or PyPI package with a misconfigured CI/CD pipeline.


“We’ve been tracking this campaign family since September 2025,” Kennedy said. “Each wave has picked a higher-download target and introduced a more technically interesting access vector. The orphaned commit technique here is genuinely novel. Branch protection rules don’t apply to commits that aren’t on any branch. The supply chain security space has spent a lot of energy on provenance and trusted publishing over the last two years. This attack walked straight through both of those controls because the gap wasn’t in the signing. It was in the scope.”

Provenance tells you where a package was built. It does not tell you whether the build was authorized. That is the gap this audit is designed to close.
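One way to picture the authorization check such an audit adds on top of provenance: pin each package to the exact workflow and ref allowed to publish it, and reject any attestation that falls outside that allow-list. The attestation fields below are a simplified, hypothetical stand-in for real SLSA-style provenance:

```python
# Allow-list keyed on (workflow path, git ref). A provenance-attested build
# from the right repo but the wrong workflow or ref still fails the check.
def build_authorized(attestation, allowlist):
    return (attestation.get("workflow"), attestation.get("ref")) in allowlist

allow = {(".github/workflows/release.yml", "refs/heads/main")}
good = {"workflow": ".github/workflows/release.yml", "ref": "refs/heads/main"}
# A build signed from an orphaned commit carries a ref outside any protected branch.
rogue = {"workflow": ".github/workflows/release.yml", "ref": "refs/pull/0/merge"}
assert build_authorized(good, allow)
assert not build_authorized(rogue, allow)
```

The signing in both cases is valid; only the scope check distinguishes them, which is exactly the gap Kennedy describes.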

Cambridge Audio MSX Series Review


Verdict

The Cambridge Audio MSX Series is a capable mix of satellite speakers and a subwoofer that provides an immersive sound with tight bass, a surprisingly wide soundstage and rich mids for such a compact set of units that can be placed virtually anywhere. The treble can feel a little smooth, though, and once you add a streamer and amp, it can get a little dearer than some active units.

  • Versatile and stylish looks

  • Surprisingly weighty bass

  • Immersive for such small units

  • Treble could do with more bite

  • Can be quite expensive once you add a streamer and amp

Key Features

  • Versatile placement

    The smaller size of the main units and sub means they can be placed in areas that other, more conventional, speakers may not be able to.

  • 2.1 system

    This Cambridge system includes both a set of stereo speakers and its own subwoofer to provide a more rounded feel.

Introduction

The Cambridge Audio MSX20 and MSX Sub 200 combo represents an intriguing proposition in the brand’s rather hefty hi-fi catalogue.


These products are essentially reworkings of the older Minx series of compact speakers and subwoofers: designed to be versatile and affordable without compromising on audio quality, they can be placed in more challenging environments where ‘normal’ speakers couldn’t.

In that regard, it’s quite a unique option, not least for the price – the MSX20 speakers cost £99 each, with the MSX Sub 200 an additional £299. 


That isn’t accounting for an amp to power them, or a streamer for a complete system, although it’s still an interesting alternative to powered choices such as the Klipsch ProMedia Lumina 2.1, the dinky Kanto Uki, or even the Cambridge Audio L/R S, if you’re tight on space, or have a unique setup you want to add audio to.


I’ve been putting this combo through its paces for the last couple of weeks to see how well it performs on my sideboard.


Design

  • Surprisingly compact
  • Redesign provides a more modern look
  • Discreet colour choices

What immediately surprised me about this system was how tiny everything is – the MSX20 speakers are just 155mm high, 79mm wide and 97mm deep, meaning they can be placed virtually anywhere, taking up far less space on my sideboard than larger speakers would.

The MSX Sub 200 is the smaller of the two subwoofers Cambridge offers (there is the larger and beefier MSX Sub 300 available), but I was still quite surprised at how small it is compared to other subwoofers I’ve seen in sets with active speakers.


Speakers - Cambridge Audio MSX20 and MSX Sub 200
Image Credit (Trusted Reviews)

The satellite speakers can work either on a table on their own, as I had them, or raised up on their own desk stands – there is also wall mounting available with hardware included in the box to help you out. Cambridge even says you can stack them on top of each other, if you want to, although having them separate will be better for stereo immersion.

The MSX20 is available in either black or white, as is the MSX Sub 200, meaning they can carry a discreet look to blend into your space. I don’t mind this, although it is a shame they don’t also come in a silver finish to match Cambridge’s other hardware for a more unified look – I appreciate that’s a little nitpicky, though.

Subwoofer - Cambridge Audio MSX20 and MSX Sub 200
Image Credit (Trusted Reviews)

On the whole, I appreciate the little redesign these new models have undergone against the older Minx variants, with a new Cambridge logo and a redesigned grille on the front of both the satellite speakers and the sub, bringing them closer to Cambridge’s current portfolio.

Connectivity

  • Passive speakers connect by banana plugs
  • Subwoofer has RCA line-in and line-out
  • Best paired with a streamer and amp


The MSX20 is a passive speaker and can connect to any amp or AV receiver using the terminals on the rear; these can unscrew to reveal slots for 4mm banana plugs, which I chose to use.

The sub houses more connectivity on its rear panel, with RCA input and output options, plus a power cable. The input handles a streamer, for instance, while the output goes to the amplifier in this case.

Subwoofer Connections - Cambridge Audio MSX20 and MSX Sub 200
Image Credit (Trusted Reviews)

For my testing, these were both Cambridge products: the affordable MXN10 streamer (in pre-amp mode) and the MXW70 power amplifier, into which the MSX20 speakers were plugged.

With the MXN10 in tow, it means this system can work with the likes of Tidal Connect, Spotify Connect, Deezer, Qobuz, internet radio and a DLNA server over Cambridge’s Streammagic app, plus it can handle Bluetooth 5.0, Google Cast and AirPlay 2. It’s also Roon Ready, which is where I spent most of my time with this system.

Physical connections include the RCA line output, plus a coaxial out, optical out, USB-A port and wired Ethernet.


Streamer & Amp Stack - Cambridge Audio MSX20 and MSX Sub 200
Image Credit (Trusted Reviews)

The MXW70 provides 70 watts of power into 8 ohms with the MSX20, with Hypex NCore Class D amplification that’s tuned by Cambridge’s engineers. Connectivity here includes unbalanced RCA, a pair of XLR ports, a 12V trigger in, loudspeaker connections that accept 4mm banana plugs and a power cable.

If space-saving matters to you, I think this half-width combo works well with the MSX20 and MSX Sub 200, although with the streamer at £349 and the MXW70 at £499, it does increase the cost of the overall system.

A more affordable streaming amplifier, such as the WiiM Amp Pro, WiiM Amp Ultra or Eversolo Play, can cut both the cost and the number of boxes you need, depending on the physical constraints of your space.

Sound Quality

  • Strong bass from subwoofer
  • Forward mids and excellent width
  • Treble can sometimes feel a little lost

As much as the outside of this unit has changed, the core of the MSX20 isn’t too different to the Minx satellites that preceded it. This means they benefit from Cambridge’s fourth-gen Balanced Mode Radiator, or BMR, tech, which is designed to provide balanced and engaging results from wherever you are in a room.


On their own, these speakers only cover the mids and treble, as they only go down to 120Hz, with the MSX Sub 200 dialled in to handle anything below that. With the subwoofer in tow, I was pleasantly surprised by the amount of bass on offer from such a small set of units.

Of course, it works best when you use the dials on the rear to set crossover, the desired phase and volume of the pounding bass, but once I’d set that up, it was set-and-forget as far as I was concerned.

Speakers - Cambridge Audio MSX20 and MSX Sub 200
Image Credit (Trusted Reviews)

A good example of this was Michael Jackson’s Off The Wall, with its pounding groove from the subwoofer demonstrating good extension and a tight feel, while the satellite speakers handled vocals and the rest of the frequency range. Both here and with Earth, Wind & Fire’s Let’s Groove, the MSX Sub 200 felt unified with the system’s overall frequency response, rather than like a thud in the corner that doesn’t contribute much.

Steven Wilson’s Luminol features some relentless bass grooves in the opening few minutes alongside a vicious drum groove and hints of guitar work that can be quite difficult for some systems to deal with. The MSX20 and MSX Sub 200 combo impressed me here with the power and strength of the bass, although it didn’t overpower the punch of the drums and guitar work.

Subwoofer Side Profile - Cambridge Audio MSX20 and MSX Sub 200
Image Credit (Trusted Reviews)


The entire system provides a sound with good width and depth, as demonstrated with Luminol and Peter Gabriel’s That Voice Again in my testing. This particular cut from So features a pounding bass, rich vocal and a lot of detailed cymbal work that can sometimes get lost on other systems, which isn’t the case here.


I felt the mid-range that the MSX20 provided was rich, as James Taylor’s October Road demonstrated wonderfully; his vocals and a warm acoustic guitar sit right up front in the mix, with the ensemble built around them. In Gloria Estefan’s Get On Your Feet, her vocals sit back in the mix against percussion and electric guitar work, although each element was given space and room to breathe.

Profile - Cambridge Audio MSX20 and MSX Sub 200
Image Credit (Trusted Reviews)

The one area I was a bit disappointed by was that the top end was quite smoothed over, lacking bite and detail, and sometimes felt a little lost against the low-end and the mid-range. For instance, in Lock All The Doors from Noel Gallagher’s High Flying Birds, the cymbal and percussion hits lacked a bit of presence, feeling a little lost against his vocals, while the competing percussion intro in Steely Dan’s Do It Again had good separation but lacked a bit of punch against other systems.

Should you buy it?

A compact and versatile system

The MSX20 and MSX Sub 200 combo works well if you’re after speakers and a sub that can be placed virtually anywhere in a room, although you will need to budget for a streamer and amp for a complete system.


The top end is this system’s weak point, though, missing a certain bite and sharpness compared with other systems.


Final Thoughts

The Cambridge Audio MSX20 & MSX Sub 200 is a capable mix of satellite speakers and a subwoofer that provides an immersive sound with tight bass, a surprisingly wide soundstage and rich mids for such a compact set of units that can be placed virtually anywhere. The treble can feel a little smooth, though, and once you add a streamer and amp, it can get a little dearer than some active units.

The main package here is comparable in price to the active Klipsch ProMedia Lumina 2.1, which offers better compatibility with desktop systems thanks to USB-C and the like, and requires less effort to set up.

Advertisement

With this in mind, I think the MSX20 and MSX Sub 200 offer a better overall sound, with stronger handling of the low-end and greater immersion. For an affordable and easy-to-use passive system, this combo works rather well, although sometimes you can’t beat the simplicity and versatility afforded by the Cambridge Audio L/R S for a similar price.

How We Test

We test every speaker setup we review thoroughly over an extended period of time. We use industry-standard tests to compare features properly. We’ll always tell you what we find. We never, ever, accept money to review a product.

  • Tested over several weeks
  • Tested with real world use

FAQs

Does the Cambridge Audio MSX20 and MSX Sub 200 system have a subwoofer?

Yes – the MSX Sub 200 is the dedicated subwoofer in this pairing.

Does the Cambridge Audio MSX20 and MSX Sub 200 system have a control app?

On its own, no, as the Cambridge Audio MSX20 and MSX Sub 200 system is purely built of passive speakers and a subwoofer that need wiring to other components. If you use a streamer, such as Cambridge’s own MXN10, that will offer the brand’s Streammagic app, for instance, to send audio to the speakers and sub.


Full Specs

  Cambridge Audio MSX Series Review
UK RRP £497
USA RRP $657
Manufacturer Cambridge Audio
Size (Dimensions) 210 x 232 x 220 MM
Weight 6.5 KG
Release Date 2026
First Reviewed Date 10/05/2026
Driver (s) Two BMR drivers (main units), 6.5-inch active woofer and 2x passive radiators (subwoofer)
Ports Banana plugs/terminals (main units), RCA input and output (subwoofer)
Audio (Power output) 200 W
Colours Black, White
Frequency Range 36 – 20,000 Hz
Subwoofer Yes
Speaker Type Hi-Fi Speaker


Renault 4 JP4X4 Concept Captures The Spirit Of Carefree Summer Days With An Electric Twist


Renault 4 JP4X4 Concept
The JP4x4 is a new take on two of the original Renault 4s: the Plein Air version, built in 1969 for open-air fun, and the JP4 from 1981, which seemed to channel carefree days by the sea. The name JP4 is derived from Journée à la Plage, which translates to “a day at the beach.” The new name JP4x4 incorporates the four-wheel drive feature, which is self-explanatory.



On May 18, visitors to the 2026 Roland-Garros French Open will get their first look at the vehicle, which joins three previous concepts built on the same electric Renault 4 E-Tech platform, each exploring new ways to use the compact hatchback. This most recent version focuses squarely on leisure and light adventure. Emerald green paint covers the bodywork in a somewhat iridescent tint that resembles the colors offered on the classic 4L in the 1970s. Bright orange fills the interior, creating a sharp, cheerful contrast that draws the eye from all sides. Half-doors replace the traditional five-door layout, stopping just short of the B-pillar to enable easy entry and exit. There are no side windows or canvas roof, so the cabin is always open to the breeze.

The openwork roof is made up of a cross-shaped structure that provides enough stiffness while allowing plenty of sky to be visible. The same frame supports a surfboard strapped securely on top. At the back, the tailgate folds flat like the side of a pickup truck, transforming the cargo area into a simple loading platform. Skateboards fit nicely into the free area behind the seats, ready for whatever happens next.

Renault 4 JP4X4 Concept Interior
The dashboard and digital screens are carried over from the production car, but Renault added a passenger grab handle for rougher terrain and a floating center console to keep the space airy. The seats replicate the distinctive bucket style of 1970s Renault models, complete with integrated headrests that resemble wrapped Egyptian mummies, and are covered in mixed fabrics, combining a crepe base with diagonal mesh sections for a sporty yet comfortable feel. Orange accents appear on the door panels and around the console, tying everything together.

The JP4x4 is mechanically similar to last year’s Savane 4×4 concept, with a second electric motor driving the rear wheels, giving the vehicle permanent all-wheel drive instead of the front-wheel-drive setup found on the standard Renault 4 E-Tech. The ground clearance rises by 15 millimeters, and each track widens by 10 millimeters for better stability. The 18-inch wheels wear a fresh design inspired by the original JP4, wrapped in Goodyear UltraGrip Performance+ tires in the 225/55 size. The wheelbase remains at 2,624 millimeters, as on the production vehicle.

Renault built the entire package for sandy beaches, stony pathways, and unpaved treks where extra traction is critical. The combination of raised height, wider stance, and all-wheel drive gives the car a capable feel without making it a serious off-road vehicle. Nobody expects this particular vehicle to hit showrooms. Instead, it serves as a showcase for the electric Renault 4 platform’s versatility.


Amazon says it isn’t making another phone, after burning itself with the Fire Phone



Amazon’s devices chief says the company is not chasing a conventional smartphone, even as reports point to a mobile AI device inspired by Alexa.


Two Figure F.03 Humanoid Robots Just Reset an Entire Bedroom in Under Two Minutes


Figure F.03 Humanoid Robot Helix Making Bed
A recent Figure AI tech showcase depicts two F.03 humanoid robots walking into a clean but lived-in environment. One robot goes straight to a coat thrown on a bed and hangs it neatly on a wall hook. At the same time, the second robot closes a laptop on the desk and places a pair of headphones back onto their stand. They keep progressing without pausing, each catching up on what the other has previously accomplished. When they approach the unmade bed, they naturally split off, one on each side, and begin manipulating the sheets and comforter together until everything is level and smooth.



People have seen robots doing laundry and stacking boxes before, but this time, not one, but TWO machines went through a whole sequence of everyday jobs in the same room at the same time – not bad for a minute and a half of work. The list of tasks included opening doors, pushing a chair under the desk, closing a book, emptying a small trash can, and generally tidying up the joint so it looked ready for the next day.



Each F.03 stands about five feet eight inches tall, walks around on two legs with arms that end in five-fingered hands, and has stereo cameras in its head that feed live video straight into its central brain, with no need for external sensors or additional computers to help it out. The trick is in a special piece of software called Helix-02, which Figure built as a single policy that takes images from the cameras and a simple goal and translates them into a continuous succession of joint movements, with no further planners or coding required.

Engineers gave the system thousands of hours of practice through simulated runs, then mixed in real-world examples from earlier tests of the robots doing grocery shopping and kitchen cleaning. Simply adding some new data showing the robots working together in a room was enough: the model learned the new pattern with no need for new code.

When the comforter becomes all bunched up and begins to slide out from under the robots, the one on the left will tilt its head slightly, and the one on the right will notice and adjust its grip just in time to catch it. Because they don’t send any messages to each other, everything happens on the fly, with each robot watching the other’s body language and adjusting its own plan as it goes.

Engineers will tell you that objects that can change shape, such as a blanket, pose a significant challenge. The policy predicts when the shape will change and adjusts accordingly, all in a fraction of a second. It also keeps the robots steady as they reach, stride, and turn – it was interesting to see one of them stand on one leg to press the pedal while the other just kept walking.

Japanese banks to get Anthropic’s vulnerability-hunting AI


MUFG, Mizuho, and SMFG would be the first Japanese institutions added to Anthropic’s restricted Project Glasswing rollout, a source familiar with the matter told Reuters.

Japan’s three megabanks are set to gain access to Claude Mythos, Anthropic’s vulnerability-hunting AI model, within roughly two weeks, a source familiar with the matter told Reuters on Tuesday.

It would be the first time a Japanese company has been granted entry to the restricted preview, which has so far been confined to Anthropic’s American and a handful of European partners.

Mitsubishi UFJ Financial Group, Mizuho Financial Group, and Sumitomo Mitsui Financial Group were informed of the move during meetings in Tokyo this week with US Treasury Secretary Scott Bessent. The three lenders are expected to be onboarded by the end of May.



Mythos has been treated by regulators and chief executives as a category-shifting event since Anthropic disclosed its existence earlier this month.

The model has discovered thousands of previously unknown zero-day vulnerabilities across every major operating system and every major web browser, and in internal testing it wrote working exploits, including chains that escape both renderer and operating-system sandboxes in a browser.


Mozilla last week shipped Firefox 150 with fixes for 271 vulnerabilities found by Mythos in a single evaluation pass.

Anthropic has not released the model publicly. Instead, it has run a controlled rollout under what it calls Project Glasswing, with 12 named launch partners, including AWS, Apple, Cisco, Google, JPMorganChase, Microsoft, Nvidia, and Palo Alto Networks, and around 40 further institutions granted access on a case-by-case basis.

Japan’s inclusion comes weeks after the Fed and US Treasury convened American bank chief executives on the same cyber-risk briefing, and after UK regulators committed to briefing major British banks within days.

Tokyo is moving in parallel. Finance Minister Satsuki Katayama announced the formation of a 36-entity public-private working group on Mythos-class risks, comprising the country’s major banks, the Bank of Japan, and the Japanese units of Anthropic and OpenAI.


The group is chaired by Mizuho’s chief information security officer and is charged with identifying exposures, implementing defensive measures, and drafting contingency plans for what would amount to a co-ordinated patching push across the Japanese financial system.

For the three banks involved, the immediate question is operational. Mythos under Glasswing terms is delivered with restrictions on output disclosure, with the model used to find vulnerabilities in a partner’s own systems and to draft remediation, not to publish exploits.

The Mozilla case offers a template: 271 vulnerabilities patched in a single Firefox release after a Mythos sweep, with the model’s findings handed back to Mozilla engineers under non-disclosure rather than published.

The geopolitical layer is unusually visible. Bessent’s role in conveying the access decision in Tokyo aligns Mythos rollout with US Treasury statecraft rather than with Anthropic’s commercial channel, an arrangement that has drawn complaints from European capitals.


Eurozone finance ministers raised the issue at an Ecofin meeting last week, where no EU government had access to the model while the White House was reported to be blocking further expansion of the partner list.

Industry views on Mythos remain split. Some cybersecurity researchers have argued that the vulnerabilities Mythos surfaced are reachable through clever orchestration of public models, and that the bigger story is the rate of improvement of frontier AI in offensive cyber, not Mythos itself.

Others, including Anthropic chief executive Dario Amodei, have described the moment as a “cyber moment of danger” that justifies the access controls.

According to the Reuters report, Anthropic and the three Japanese banks did not immediately respond to requests for comment.


How to watch Lady Gaga concert on May 14 on Apple Music


Lady Gaga’s “Mayhem Requiem” filmed live performance will stream on Thursday, May 14, via Apple Music Live and at select AMC theaters across the United States.

At 11:00 p.m. Eastern / 8:00 p.m. Pacific, Lady Gaga fans can head to the Apple Music app on their iPhone, iPad, Mac, Apple TV, or in-browser at music.apple.com to tune into an exclusive stream of the Mayhem Requiem filmed live performance. In addition to streaming on the app, 15 select AMC theaters across the U.S. will show the performance at the same time.

The premiere is free for anyone to watch; no Apple Music subscription is required. However, Apple Music subscribers will be able to watch the performance on demand after the event is over.

Apple Music describes the event:


“The opera house from Lady Gaga’s MAYHEM Ball has been reduced to rubble — and now it’s time for MAYHEM Requiem, a celebration and musical reimagining of her sixth album.”

It’s worth noting that the filmed live performance isn’t actually live, either. It was recorded on January 14 at Los Angeles’ Walter Theater.

A live album of all songs mastered in spatial audio will be available on Apple Music. Fans can unlock bonus content, like wallpapers and Apple Watch faces, through the Shazam app by identifying any Lady Gaga song.

