Tech

Should my enterprise AI agent do that? NanoClaw and Vercel launch streamlined agent policy controls and approval dialogs across 15 messaging apps

For the past year, early adopters of autonomous AI agents have been forced to play a murky game of chance: keep the agent in a useless sandbox or give it the keys to the kingdom and hope it doesn’t hallucinate a catastrophic “delete all” command.

To unlock the true utility of an agent—scheduling meetings, triaging emails, or managing cloud infrastructure—users have had to grant these models raw API keys and broad permissions, raising the risk that a single agent mistake disrupts their systems.

That tradeoff ends today. The creators of the open-source, sandboxed NanoClaw agent framework — now operating as the private startup NanoCo — have announced a landmark partnership with Vercel and OneCLI to introduce a standardized, infrastructure-level approval system.

By integrating Vercel’s Chat SDK and OneCLI’s open source credentials vault, NanoClaw 2.0 ensures that no sensitive action occurs without explicit human consent, delivered natively through the messaging apps where users already live.

The use cases that stand to benefit most involve high-consequence “write” actions. In DevOps, for example, an agent could propose a cloud infrastructure change that only goes live once a senior engineer taps “Approve” in Slack.

For finance teams, an agent could prepare batch payments or invoice triaging, with the final disbursement requiring a human signature via a WhatsApp card.

Technology: security by isolation

The fundamental shift in NanoClaw 2.0 is the move away from “application-level” security to “infrastructure-level” enforcement. In traditional agent frameworks, the model itself is often responsible for asking for permission—a flow that Gavriel Cohen, co-founder of NanoCo, describes as inherently flawed.

“The agent could potentially be malicious or compromised,” Cohen noted in a recent interview. “If the agent is generating the UI for the approval request, it could trick you by swapping the ‘Accept’ and ‘Reject’ buttons.”

NanoClaw solves this by running agents in strictly isolated Docker or Apple Containers. The agent never sees a real API key; instead, it uses “placeholder” keys. When the agent attempts an outbound request, the request is intercepted by the OneCLI Rust Gateway. The gateway checks a set of user-defined policies (e.g., “Read-only access is okay, but sending an email requires approval”).

If the action is sensitive, the gateway pauses the request and triggers a notification to the user. Only after the user approves does the gateway inject the real, encrypted credential and allow the request to reach the service.
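The intercept-and-inject loop described above can be condensed into a short sketch. The following TypeScript is illustrative only — the types, the `policy` function, and the header injection are assumptions made for this example, not actual NanoClaw or OneCLI APIs:

```typescript
// Illustrative sketch of a policy gateway: the agent's request arrives with a
// placeholder credential, is checked against user-defined policy, and only
// receives the real key once a human approves. All names are hypothetical.

type Action = { method: "GET" | "POST" | "DELETE"; target: string };
type Decision = "allow" | "require_approval" | "deny";

// Example policy: read-only access is fine, anything else needs approval.
function policy(action: Action): Decision {
  return action.method === "GET" ? "allow" : "require_approval";
}

async function gateway(
  action: Action,
  askHuman: (a: Action) => Promise<boolean>, // e.g. an approval card in Slack
  realKey: string,                           // the agent never sees this value
): Promise<string | null> {
  const decision = policy(action);
  if (decision === "deny") return null;
  if (decision === "require_approval" && !(await askHuman(action))) return null;
  // Only after approval is the real credential injected into the request.
  return `Authorization: Bearer ${realKey}`;
}
```

In this sketch, a denied or rejected request simply never receives a usable credential — which is what makes the enforcement infrastructure-level rather than a matter of the agent's good behavior.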

Product: bringing the ‘human’ into the loop

While security is the engine, Vercel’s Chat SDK is the dashboard. Integrating with different messaging platforms is notoriously difficult because every app—Slack, Teams, WhatsApp, Telegram—uses different APIs for interactive elements like buttons and cards.

By leveraging Vercel’s unified SDK, NanoClaw can now deploy to 15 different channels from a single TypeScript codebase. When an agent wants to perform a protected action, the user receives a rich interactive card on their phone. “The approval shows up as a rich, native card right inside Slack or WhatsApp or Teams, and the user taps once to approve or deny,” said Cohen. This “seamless UX” is what makes human-in-the-loop oversight practical rather than a productivity bottleneck.
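The value of a unified SDK is that one abstract card definition can be translated into every platform's native interactive format. The snippet below is a hypothetical illustration of that idea — `ApprovalCard`, `Channel`, and `renderCard` are invented for this sketch and are not the real Vercel Chat SDK surface:

```typescript
// One channel-agnostic approval card; a real SDK would translate it into
// Slack Block Kit, Teams Adaptive Cards, WhatsApp interactive messages, etc.
type Channel = "slack" | "whatsapp" | "teams" | "telegram";

interface ApprovalCard {
  title: string;
  body: string;
  actions: { id: "approve" | "deny"; label: string }[];
}

function renderCard(card: ApprovalCard, channel: Channel): string {
  // Stand-in for per-platform rendering: here we just produce plain text.
  const buttons = card.actions.map((a) => `[${a.label}]`).join(" ");
  return `(${channel}) ${card.title}\n${card.body}\n${buttons}`;
}
```

The point of the abstraction is that the agent-side code defines the card once, and the per-channel translation is the SDK's problem.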

The full list of 15 supported messaging apps/channels contains many favored by enterprise knowledge workers, including:

  • Slack

  • WhatsApp

  • Telegram

  • Microsoft Teams

  • Discord

  • Google Chat

  • iMessage

  • Facebook Messenger

  • Instagram

  • X (Twitter)

  • GitHub

  • Linear

  • Matrix

  • Email

  • Webex

Background on NanoClaw

NanoClaw launched on January 31, 2026, as a minimalist and security-focused response to the “security nightmare” inherent in complex, non-sandboxed agent frameworks.

Created by Cohen, a former Wix.com engineer, and marketed by his brother Lazer, CEO of B2B tech public relations firm Concrete Media, the project was designed to solve the auditability crisis found in competing platforms like OpenClaw, which had grown to nearly 400,000 lines of code.

By contrast, NanoClaw condensed its core logic into roughly 500 lines of TypeScript—a size that, according to VentureBeat, allows the entire system to be audited by a human or a secondary AI in approximately eight minutes.

The platform’s primary technical defense is its use of operating system-level isolation. Every agent is placed inside an isolated Linux container—utilizing Apple Containers for high performance on macOS or Docker for Linux—to ensure that the AI only interacts with directories explicitly mounted by the user.

As detailed in VentureBeat’s reporting on the project’s infrastructure, this approach confines the “blast radius” of potential prompt injections strictly to the container and its specific communication channel.

In March 2026, NanoClaw further matured this security posture through an official partnership with the software container firm Docker to run agents inside “Docker Sandboxes”.

This integration utilizes MicroVM-based isolation to provide an enterprise-ready environment for agents that, by their nature, must mutate their environments by installing packages, modifying files, and launching processes—actions that typically break traditional container immutability assumptions.

Operationally, NanoClaw rejects the traditional “feature-rich” software model in favor of a “Skills over Features” philosophy. Instead of maintaining a bloated main branch with dozens of unused modules, the project encourages users to contribute “Skills”—modular instructions that teach a local AI assistant how to transform and customize the codebase for specific needs, such as adding Telegram or Gmail support.

This methodology, as described on NanoClaw’s website and in VentureBeat interviews, ensures that users only maintain the exact code required for their specific implementation.

Furthermore, the framework natively supports “Agent Swarms” via the Anthropic Agent SDK, allowing specialized agents to collaborate in parallel while maintaining isolated memory contexts for different business functions.

Licensing and open source strategy

NanoClaw remains firmly committed to the open source MIT License, encouraging users to fork the project and customize it for their own needs. This stands in stark contrast to “monolithic” frameworks.

NanoClaw’s codebase is remarkably lean, consisting of only 15 source files and roughly 3,900 lines of code, compared to the hundreds of thousands of lines found in competitors like OpenClaw.

The partnership also highlights the strength of the “Open Source Avengers” coalition.

By combining NanoClaw (agent orchestration), Vercel Chat SDK (UI/UX), and OneCLI (security/secrets), the project demonstrates that modular, open-source tools can outpace proprietary labs in building the application layer for AI.

Community reactions

As shown on the NanoClaw website, the project has amassed more than 27,400 stars on GitHub and maintains an active Discord community.

A core claim on the NanoClaw site is that the codebase is small enough to understand in “8 minutes,” a feature targeted at security-conscious users who want to audit their assistant.

In an interview, Cohen noted that iMessage support via Vercel’s Photon project addresses a common community hurdle: previously, users often had to maintain a separate Mac Mini to connect agents to an iMessage account.

The enterprise perspective: should you adopt?

For enterprises, NanoClaw 2.0 represents a shift from speculative experimentation to safe operationalization.

Historically, IT departments have blocked agent usage due to the “all-or-nothing” nature of credential access. By decoupling the agent from the secret, NanoClaw provides a middle ground that mirrors existing corporate security protocols—specifically the principle of least privilege.

Enterprises should consider this framework if they require high auditability and have strict compliance requirements around data exfiltration. According to Cohen, many businesses have held off on granting agents access to calendars or emails because of security concerns. This framework addresses that by ensuring the agent structurally cannot act without permission.

Enterprises stand to benefit specifically in use cases involving “high-stakes” actions. As illustrated in the OneCLI dashboard, a user can set a policy where an agent can read emails freely but must trigger a manual approval dialog to “delete” or “send” one.
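Such a rule set can be pictured as a small, declarative policy table. The shape below is hypothetical — it is not OneCLI's actual configuration format — but it captures the read-freely, approve-to-write pattern described above:

```typescript
// Hypothetical policy table: reads are silent, mutations pause for a human,
// and anything unlisted is denied by default.
type Decision = "allow" | "require_approval" | "deny";

const emailPolicy = {
  service: "email",
  rules: [
    { action: "read", decision: "allow" },
    { action: "send", decision: "require_approval" },
    { action: "delete", decision: "require_approval" },
  ],
} as const;

function decide(action: string): Decision {
  const rule = emailPolicy.rules.find((r) => r.action === action);
  return rule ? rule.decision : "deny"; // default-deny keeps the blast radius small
}
```

The default-deny fallback mirrors the principle of least privilege: an action the user never thought about is treated as forbidden, not permitted.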

Because NanoClaw runs as a single Node.js process with isolated containers, enterprise security teams can verify that the gateway is the only path for outbound traffic. This architecture turns the AI from an unmonitored operator into a supervised junior staffer, delivering the productivity of autonomous agents without forgoing executive control.

Ultimately, NanoClaw is recommended for organizations that want the productivity of autonomous agents without the “black box” risk of traditional LLM wrappers — an assistant that always asks for permission before hitting the “send” or “buy” button.

As AI-native setups become the standard, this partnership establishes the blueprint for how trust will be managed in the age of the autonomous workforce.

Tech

OpenAI debuts GPT-Rosalind, a new limited access model for life sciences, and broader Codex plugin on Github

The journey from a laboratory hypothesis to a pharmacy shelf is one of the most grueling marathons in modern industry, typically spanning 10 to 15 years and billions of dollars in investment.

Progress is often stymied not just by the inherent mysteries of biology, but by the “fragmented and difficult to scale” workflows that force researchers to pivot manually between experimental equipment, software, and databases.

But OpenAI is releasing a new specialized model, GPT-Rosalind, specifically to speed up this process and make it more efficient, easier, and, ideally, more productive. The model is named after the pioneering chemist Rosalind Franklin, whose work was vital to the discovery of DNA’s structure and whose contribution was long overshadowed by that of her male colleagues James Watson and Francis Crick. This new frontier reasoning model is purpose-built to act as a specialized intelligence layer for life sciences research.

By shifting AI’s role from a general-purpose assistant to a domain-specific “reasoning” partner, OpenAI is signaling a long-term commitment to biological and chemical discovery.

What GPT-Rosalind offers

GPT-Rosalind isn’t just about faster text generation; it is designed to synthesize evidence, generate biological hypotheses, and plan experiments—tasks that have traditionally required years of expert human synthesis.

At its core, GPT-Rosalind is the first in a new series of models optimized for scientific workflows. While previous iterations of GPT excelled at general language tasks, this model is fine-tuned for deeper understanding across genomics, protein engineering, and chemistry.

To validate its capabilities, OpenAI tested the model against several industry benchmarks. On BixBench, a metric for real-world bioinformatics and data analysis, GPT-Rosalind achieved leading performance among models with published scores.

In more granular testing via LABBench2, the model outperformed GPT-5.4 on six out of eleven tasks, with the most significant gains appearing in CloningQA—a task requiring the end-to-end design of reagents for molecular cloning protocols.

The model’s most striking performance signal came from a partnership with Dyno Therapeutics. In an evaluation using unpublished, “uncontaminated” RNA sequences, GPT-Rosalind was tasked with sequence-to-function prediction and generation.

When evaluated directly in the Codex environment, the model’s submissions ranked above the 95th percentile of human experts on prediction tasks and reached the 84th percentile for sequence generation.

This level of expertise suggests the model can serve as a high-level collaborator capable of identifying “expert-relevant patterns” that generalist models often overlook.

The new lab workflow

OpenAI is not just releasing a model; it is launching an ecosystem designed to integrate with the tools scientists already use. Central to this is a new Life Sciences research plugin for Codex, available on GitHub.

Scientific research is famously siloed. A single project might require a researcher to consult a protein structure database, search through 20 years of clinical literature, and then use a separate tool for sequence manipulation. The new plugin acts as an “orchestration layer,” providing a unified starting point for these multi-step questions.

  • Skill Set: The package includes modular skills for biochemistry, human genetics, functional genomics, and clinical evidence.

  • Connectivity: It connects models to over 50 public multi-omics databases and literature sources.

  • Efficiency: This approach targets “long-horizon, tool-heavy scientific workflows,” allowing researchers to automate repeatable tasks like protein structure lookups and sequence searches.
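As a mental model, the “orchestration layer” is a single entry point that fans one question out to whichever tools claim it. The sketch below is purely illustrative — the tool names, matching logic, and return values are invented for this example and are not the plugin's actual API:

```typescript
// Toy orchestration layer: route one multi-step question to matching tools
// instead of forcing the researcher to visit each tool separately.
type Tool = {
  name: string;
  handles: (q: string) => boolean;
  run: (q: string) => string;
};

const tools: Tool[] = [
  { name: "structure-db", handles: (q) => q.includes("structure"), run: (q) => `PDB lookup for: ${q}` },
  { name: "literature", handles: (q) => q.includes("papers"), run: (q) => `literature search: ${q}` },
];

// One unified starting point for a multi-step question.
function orchestrate(question: string): string[] {
  return tools.filter((t) => t.handles(question)).map((t) => t.run(question));
}
```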

Limited and gated access

Given the potential power of a model capable of redesigning biological structures, OpenAI is eschewing a broad “open-source” or general public release in favor of a Trusted Access program.

The model is launching as a research preview specifically for qualified Enterprise customers in the United States. This restricted deployment is built on three core principles: beneficial use, strong governance, and controlled access.

Organizations requesting access must undergo a qualification and safety review to ensure they are conducting legitimate research with a clear public benefit.

Unlike general-use models, GPT-Rosalind was developed with heightened enterprise-grade security controls. For the end-user, this means:

  • Restricted Access: Usage is limited to approved users within secure, well-managed environments.

  • Governance: Participating organizations must maintain strict misuse-prevention controls and agree to specific life sciences research preview terms.

  • Cost: During the preview phase, the model will not consume existing credits or tokens, allowing researchers to experiment without immediate budgetary constraints (subject to abuse guardrails).

Warm reception from initial industry partners

The announcement garnered significant buy-in from OpenAI partners across the pharmaceutical and technology sectors.

Sean Bruich, SVP of AI and Data at Amgen, noted that the collaboration allows the company to apply advanced tools in ways that could “accelerate how we deliver medicines to patients.” The impact is also being felt in the specialized tech infrastructure that supports labs:

  • NVIDIA: Kimberly Powell, VP of Healthcare and Life Sciences, described the convergence of domain reasoning and accelerated computing as a way to “compress years of traditional R&D into immediate, actionable scientific insights”.

  • Moderna: CEO Stéphane Bancel highlighted the model’s ability to “reason across complex biological evidence” to help teams translate insights into experimental workflows.

  • The Allen Institute: CTO Andy Hickl emphasized that GPT-Rosalind stands out for making manual steps—like finding and aligning data—more “consistent and repeatable in an agentic workflow”.

This builds on tangible results OpenAI has already seen in the field, such as its collaboration with Ginkgo Bioworks, where AI models helped achieve a 40% reduction in protein production costs.

What’s next for Rosalind and OpenAI in life sciences?

OpenAI’s mission with GPT-Rosalind is to narrow the gap between a “promising scientific idea” and the actual “evidence, experiments, and decisions” required for medical progress.

By partnering with institutions like Los Alamos National Laboratory to explore AI-guided catalyst design and biological structure modification, the company is positioning GPT-Rosalind as more than a tool—it is meant to be a “capable partner in discovery”.

As the life sciences field becomes increasingly data-dense, the move toward specialized “reasoning” models like Rosalind may become the standard for navigating the “vast search spaces” of biology and chemistry.

Tech

Physicists share the glory and the wealth after winning $3M for exploring muon mysteries

University of Washington physicist David Hertzog checks out the 50-foot-wide superconducting magnetic ring for the Muon g-2 experiment at the time of its startup at Fermilab in 2018. (Photo Courtesy of David Hertzog)

University of Washington physicist David Hertzog can’t wait to find out how hundreds of researchers who worked on a geeky project known as the Muon g-2 Collaboration will react when they hear they’ve each won thousands of dollars for that work.

The money is coming from this year’s $3 million Breakthrough Prize for fundamental physics, which was awarded tonight during a gala ceremony in Los Angeles. Hertzog and his colleagues decided that the prize should be divided equally among everyone who was an author on research papers relating to the decades-long series of muon experiments.

“There are students who were in and out of this thing — two years or less,” Hertzog said. “They’re going to be shocked out of their lives about something they did a long time ago that they don’t remember doing. They’re going to get a phone call or email from the Breakthrough people, and they’re going to go, ‘What!?’ That’s kind of fun.”

Hertzog said the money will be shared by about 400 researchers who were involved in the Muon g-2 experiments at Fermilab in Illinois and at the Brookhaven National Laboratory in New York. The prize also honors the role played by Europe’s CERN research center, going as far back as 1959. “There was one very, very old man who was still alive from the 1970s experiment, but I think he has died,” Hertzog said.

Although the precise math hasn’t yet been worked out, dividing $3 million among 400 people would give each recipient $7,500. “That’s nothing to throw around if you’re a student or a young postdoc,” Hertzog said.

A big moment for the muon

Russian-born tech investor Yuri Milner and his wife, Julia Milner, established the Breakthrough Prize in 2012 to recognize achievements in fundamental physics, mathematics and the life sciences. They also wanted to add some Hollywood-style pizazz to the public perception of scientists, going so far as to spread out a red carpet for celebrities at the “Oscars of Science.” The host for this year’s ceremony was James Corden, and the guest list included Robert Downey Jr., Eileen Gu, Anne Hathaway, Paris Hilton, Salma Hayek Pinault and Michelle Yeoh.

The $3 million Breakthrough Prize is the world’s richest scientific award, outdoing the roughly $1.2 million prize given to Nobel laureates. More than $344 million has been handed out since the creation of the prize program. Past winners from the University of Washington include physicists Eric Adelberger, Lukasz Fidkowski, Jens Gundlach and Blayne Heckel, plus biochemist David Baker.

This year’s prize in fundamental physics touches on a long-running effort to reconcile experimental findings with one of history’s most successful scientific theories: the Standard Model of particle physics. The theory lays out a framework for classifying and understanding a menagerie of subatomic particles — including the muon, which is similar to the electron but 207 times heavier.

The Standard Model predicts the various properties of the muon. One such property is the strength and orientation of the muon’s magnetic field, known as its magnetic moment. In the theory’s simplest formulation, the muon’s g-factor — the dimensionless number, written g, that relates its magnetic moment to its spin — is exactly 2.

Few things in particle physics are that simple, however. Experimental tests measured the g-factor to be slightly more than 2, and that discrepancy became the focus of the Muon g-2 (pronounced “mew-on gee-minus-two”) experiments.
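What the experiments actually chase is that small excess itself, conventionally packaged as the anomalous magnetic moment:

```latex
a_\mu = \frac{g - 2}{2}
```

Quantum electrodynamics accounts for most of this excess — Schwinger’s famous leading-order correction gives $a_\mu \approx \alpha/2\pi \approx 0.00116$ — so the hunt for new physics lives in the digits beyond that, which is why the measurements are quoted in parts per billion.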

If there was a confirmed mismatch between the Standard Model and experimental results, that could open the door to new physics. For example, perhaps whole new sets of subatomic particles not predicted by theory had somehow eluded direct observation. So, physicists across the globe marshaled their forces to determine the value of g, either to fill in the gap between experiment and theory or to zero in on a new frontier in physics.

Over the years, physicists have been conducting increasingly fine-tuned experimental runs using powerful magnets at CERN, Brookhaven and Fermilab. Hertzog has been in on the quest since Brookhaven joined in, about 30 years ago, and he was part of the team in 2013 when the experiment’s massive main magnet was moved from Brookhaven to Fermilab.

Each run narrowed the uncertainty surrounding the precise value of g. The crowning achievement came from Fermilab’s version of the experiment in 2025.

“We set the goal at 140 parts per billion, and we got 127 parts per billion,” Hertzog said. “When we wrote the proposal, we were ambitious as we could get in our minds, because we wanted to get people to take us on. Then we just blew away all the systematic errors, better than we expected. And then new ones came along, which caused us to have a little bit of a struggle.”

Researchers install the storage ring and magnets for the first Muon g-2 experiment at CERN in 1960. (CERN PhotoLab)

At the same time, other physicists were wrestling with theoretical models. They factored in the ever-so-subtle effects of particles popping in and out of the quantum foam that’s thought to make up the fabric of spacetime at its smallest scale. Last year, one of the models came up with a range of theoretical values for g that overlapped with the Muon g-2 Collaboration’s range of experimental values.

That led some to claim that there was no discrepancy after all. “A famous particle physics experiment has ended not with a bang, but a whimper,” Science magazine reported. But once again, few things in particle physics are that simple. Hertzog insisted that reports of the muon mystery’s death have been greatly exaggerated.

“I just throw up my hands, because after 30-some years of working on this, it’s a little disappointing that it’s not clear,” he said. “Not only has the number that they recommended shifted, but the certainty of their number got way wider. The uncertainty on the theory recommendation is actually pretty big. It’s shifted, but it’s also pretty large.”

Hertzog said the Breakthrough Prize recognizes a scientific quest that’s still in progress. “This story is not finished,” he said. “The story is really about the extraordinary achievement of the precision of this delicate measurement which probes nature to such a deep, deep level.”

Will there ever be a definitive answer to the muon mystery?

“We don’t know it yet, but it’s knowable, as opposed to walking out into a vast cloud of ambiguity,” Hertzog said. “So, I think we will find out in a couple of years where that finally lands. … Who knows whether that’ll lead us to another chapter in this business. But I’m confident that we’ll know it.”

A big night for breakthroughs

The Muon g-2 Collaboration’s Breakthrough Prize was awarded to hundreds of researchers from 31 institutions in seven countries, but just four team members were selected to take the stage for tonight’s award ceremony. Hertzog was joined by Chris Polly from Fermilab, William Morse from Brookhaven, and Lee Roberts from Brookhaven and Boston University.

A special lifetime prize for fundamental physics went to David Gross, a theorist at the Kavli Institute of Theoretical Physics at the University of California at Santa Barbara. Gross won a share of the 2004 Nobel Prize in Physics for filling gaps in the Standard Model relating to the strong nuclear force. More recently, he helped write a landmark 40-year national plan for particle physics.

Three prizes were also awarded in the life sciences.

Frank Merle of the Institut des Hautes Études Scientifiques in Paris was awarded this year’s prize in mathematics for achieving breakthroughs in nonlinear evolution equations. His work could have implications from aeronautical engineering and safety to astrophysics.

For his part, Hertzog doesn’t intend to rest on his laurels. Even as the Muon g-2 Collaboration is winding down, he has joined the team for another particle physics experiment called PIONEER. That experiment will probe inconsistencies between the Standard Model and observations of pion decay. As was the case with the Muon g-2 experiments, there’s a chance that PIONEER could point the way to physics beyond the Standard Model.

“This is a stock market golden opportunity,” Hertzog said. “That’s how I look at it.”

The Breakthrough Prize website has the full list of this year’s honorees, including the winners of New Horizons Prizes for early-career physicists and mathematicians, Maryam Mirzakhani New Frontiers Prizes for women mathematicians and the inaugural Vera Rubin New Frontiers Prize for women physicists. The recorded awards show is due to air at noon PT on April 26 via YouTube.

Hertzog and University of Bern physicist Martin Hoferichter review the results of the Muon g-2 experiments in a preprint paper titled “The Anomalous Magnetic Moment of the Muon: Status and Perspectives.”

Tech

Danish finance AI start-up Spektr raises $20m

Danish financial compliance AI start-up Spektr has raised $20m in a Series A funding round led by New Enterprise Associates (NEA) with participation from existing investors including Northzone, Seedcamp and PSV Tech.

The new funding will be used to expand the Copenhagen-based company’s AI platform for banks and fintech companies, and accelerate adoption across financial institutions globally, according to the company.

According to Spektr, its specialised AI agents are designed to perform the work financial analysts typically do during compliance reviews – researching companies, interpreting information, verifying business activity and generating structured risk assessments. Instead of analysts spending hours gathering and interpreting data, the agents complete the work in minutes, so compliance teams can simply review and approve the results.

“Compliance technology has mostly focused on workflow and data collection,” said Mikkel Skarnager, CEO and co-founder of Spektr.

“But the real bottleneck has always been the work itself – analysts researching companies, interpreting information and documenting decisions. Spektr automates those tasks with AI agents designed specifically for KYC and KYB compliance.”

Spektr was co-founded by Skarnager, CTO Ciprian Florescu, CRO Jan-Erik Wagner and CPO Jeremy Joly. Its live customers include Santander Leasing, Pleo, Mercuryo, Monta and Phantom.

“Financial institutions are under constant pressure to do more compliance work with fewer resources,” said Luke Pappas, partner at NEA.

“Spektr is tackling the most manual part of compliance operations in financial services. Their approach has the potential to redefine how compliance operations are run.”

Tech

Microsoft Teams right-click paste broken by Edge update bug

Microsoft is warning that a recent Microsoft Edge browser update introduced a bug that breaks right-click paste in chats in the Microsoft Teams desktop client.

In an advisory published on April 14, Microsoft says users are reporting that they are unable to paste URLs, text, or images into Teams chats when using right-click context menus, with the “Paste” option greyed out.

To work around the bug, Microsoft says users can still copy and paste content using keyboard shortcuts: Ctrl + C and Ctrl + V on Windows, or Cmd + C and Cmd + V on macOS.

“Impacted users report that they are unable to copy and paste URLs, text, and images in Microsoft Teams desktop client chats, as the paste option appears greyed out when using the right-click dropdown menu method,” explains Microsoft.

“To bypass impact, we recommended that users attempt to copy the intended URLs, text, and images using Ctrl + C and paste using Ctrl + V for Windows, and corresponding Cmd + C and Cmd + V for Mac.”

Microsoft says the bug is caused by a recent browser update that introduced a code regression in Microsoft Edge, which Microsoft Teams uses for certain functionality.

Admins on Reddit and the Microsoft forums report that the problem is affecting users in corporate environments as well as individual users.

“I have multiple users on version 26072.519.4556.7438 experiencing this issue, including myself. Cannot right-click Paste, but CTRL+V and paste as text are allowed,” an admin posted to the Microsoft Forums.

Paste option in Microsoft Teams is greyed out

Other users said that reinstalling Teams or clearing the cache did not fix the problem.

Microsoft says it identified the cause and is rolling out a fix in stages while monitoring telemetry to confirm that systems are recovering.

As of the latest update on April 16, Microsoft has not provided an exact timeline for when the fix will be fully rolled out.

Tech

Seattle mayor floats moratorium on new data centers in city limits

Seattle Mayor Katie Wilson. (Campaign Photo)

Seattle Mayor Katie Wilson addressed concerns about a potential wave of new data centers in the city and raised the possibility of a moratorium, citing economic and environmental issues.

Wilson’s public statement Saturday followed a Seattle Times report April 10 that four companies have approached Seattle City Light about building five large-scale data centers with a combined peak demand of 369 megawatts, equal to roughly a third of Seattle’s average daily power consumption. 

“I share community concerns about environmental justice, economic resilience, and impacts of increased costs for Seattle rate payers,” Wilson wrote on Facebook. “That’s why my team is working closely with Seattle City Light, City Council and stakeholders to identify a range of long-term policy approaches, including exploring a moratorium on siting new centers.”

Seattle already has about 30 data centers, but they’re relatively small. The proposed facilities would be the first at this scale in the city and could consume nearly 10 times more power than the existing ones at full capacity, according to the Seattle Times report.

The world’s biggest tech companies, including hometown tech giants Microsoft and Amazon, have been spending hundreds of billions of dollars building data centers to scale up artificial intelligence.


Those facilities have historically gone up in rural areas, but power availability has grown scarce in many markets, driving developers to look at cities with their own utility resources. 

It’s not clear who the proposed data centers would be built for. Seattle City Light hasn’t disclosed the companies involved or proposed locations due to nondisclosure agreements.

Seattle City Light is rewriting its contract terms for large-load customers and plans to require data center operators to secure their own power generation and pay for infrastructure upgrades rather than passing costs to ratepayers. The companies are expected to decide in the next two to three months whether to formally apply for service.


Tech

Chesky Audio LC2 Is a $1,995 Speaker Built for Your Den and Desktop Use: AXPONA 2026


Sometimes the best finds at AXPONA 2026 aren’t planned. I walked into Chesky Audio’s room chasing Schiit Audio gear in Room 709; there was plenty of it, including the Yggdrasil Singular DAC, Loki Max, Kara, and a pair of Tyr monoblocks driving the new Chesky LC2 loudspeakers, but no one from Schiit to talk shop. So I stayed put, listened, and let the room tell its own story.

That story changed fast when the pricing banner came into focus: $1,995. Not each. Per pair. In a show full of six-figure loudspeakers, the Chesky LC2 doesn’t just feel affordable; it feels like a direct challenge to how high-end audio defines itself.

And that’s where this gets more interesting. If high-end audio wants a future, it needs more designers like Lucca Chesky. He comes from a family name that carries real weight in the music world, but he’s not coasting on it. He’s studying engineering at Carnegie Mellon University, and it shows in how he approaches both design and people.

There’s no gatekeeping here, no “you don’t belong in this room” energy. The LC1 and now the LC2 are priced where actual listeners can engage, and he speaks about them in a way that makes you feel like you’re part of the conversation and not being lectured from behind a stack of gear you can’t afford.


The kid gets it. And judging by what I heard in that room, he’s not just talking a good game.

Admittedly, a $1,995 price tag only matters if the speakers can actually deliver. The original Chesky LC1 set a high bar, earning multiple “Best of Show” nods from the eCoustics team at previous events, something Chesky made no effort to hide, with the awards laid out on the table. So yes, I was a bit late to the party.

Better late than never.

I stayed for several tracks to get a clearer sense of what the team had already heard in the Chesky LC1, and what that might mean for the new Chesky LC2. It didn’t take long to recognize a familiar foundation, but with more scale and a bit more weight behind it, suggesting this isn’t a departure so much as a more developed version of the same idea.

Chesky Audio LC2 Stand-mount Speakers with Schiit electronics at AXPONA 2026

An Affordable Speaker With Real Ambition

Much like the original Chesky LC1, the Chesky LC2 sticks to a compact two-way monitor format. It pairs a dual-chamber aperiodic 1-inch tweeter with a roughly 6.5-inch mid-bass driver, both modified in-house rather than pulled off a shelf. The familiar passive-radiator approach is still here as well, now using larger 8-inch radiators on either side to extend low-frequency output without relying on a traditional port.

Where things diverge, and where Chesky is clearly doing its own thing, is the cabinet. The front baffle is a 5/8-inch-thick slab of machined aluminum, and the rest of the enclosure is 3D printed around that structure. It is an unusual approach, but the result is a cabinet that feels both rigid and relatively lightweight for its size. Each speaker measures roughly 13 x 9 x 13 inches and weighs just under 30 pounds.

It is also worth noting that these are not outsourced, mass-produced boxes. Chesky Audio assembles, finishes, and tests the speakers in New Jersey before they ship. In a category where “designed here, built somewhere else” is the norm, these are actually made in the United States, and that still matters.


Lucca Chesky is also quick to point out that the drivers are not an afterthought. The mid-bass unit uses a cast-basket high-definition design more commonly found in higher-priced speakers, and the tweeter follows that same philosophy. He stops short of naming suppliers, but the implication is clear: this is not generic OEM hardware.

Schiit Audio stack powered the Chesky Audio LC2 Speakers at AXPONA 2026.

The crossover is designed in-house, although Chesky remains somewhat tight-lipped on specifics. Instead of locking into a fixed number, the crossover point is described as falling somewhere in the 3 to 5 kHz range. On paper, the speaker is rated at 86 dB sensitivity with a 4-ohm impedance that does not dip below 3.1 ohms across a stated 40 Hz to 20 kHz frequency range.

That combination suggests an easy enough load for most modern amplifiers, whether it is a vintage Kenwood receiver, a newer NAD integrated, or even a well-sorted ST-70 style tube amp build. But if our experience with the Chesky LC1 taught us anything, it is that specs do not tell the whole story. The LC1 benefited from more power than you might expect, and giving it better amplification paid off.

Until we get the Chesky LC2 in for a full review, it is too early to say how closely it follows that pattern.

Chesky LC2 in a Real Room at AXPONA 2026

Sound-wise, the Chesky LC2 delivers clean mid-bass with solid detail and impact for a speaker of this size, but sub-bass is limited. That is not a surprise given the form factor. In a nearfield setup such as a desktop or small studio, there is enough low end to get by without a subwoofer, but in a larger room, adding one would make sense.


The midrange is where things come into better focus. There is a clear emphasis on clarity and balance, which aligns with what you would expect from anything carrying the Chesky name. Vocals come through naturally without sounding nasal or forced, and strings have enough presence to avoid sounding thin. That is not always a given with compact speakers, where cabinet limitations can work against natural timbre. The construction here likely plays a role, but that is something that needs more controlled listening to fully evaluate.

The top end had good energy and dynamic presence, but this is where the limitations of the show environment start to creep in. Between room noise and less than ideal setup conditions, it would be premature to draw firm conclusions without spending more time with the speakers in a more controlled space.

The Bottom Line

I can see several use cases for the Chesky LC2. Those looking for unpowered monitors for nearfield use will find them easy to live with as a standalone pair, and they also make sense in smaller rooms where space is limited. For larger spaces or mixed-use systems that pull double duty for music and home theater, Chesky offers two-, three-, and five-speaker packages that can be built out as needed.

Adding a subwoofer would round things out in those scenarios. Models like the REL Tzero or SVS 3000 Micro R|Evolution come to mind as good matches, offering tight, controlled low end without taking over the room or the budget.


With that kind of setup, the LC2 starts to make a lot of sense for multi-purpose spaces where flexibility matters just as much as performance.

Where to buy: $1,995/pair at Chesky Audio


Tech

Critical flaw in Protobuf library enables JavaScript code execution



Proof-of-concept exploit code has been published for a critical remote code execution flaw in protobuf.js, a widely used JavaScript implementation of Google’s Protocol Buffers.

The library is highly popular on the npm (Node Package Manager) registry, averaging nearly 50 million weekly downloads. It is used for inter-service communication, in real-time applications, and for efficient storage of structured data in databases and cloud environments.

In a report on Friday, application security company Endor Labs says that the remote code execution (RCE) vulnerability in protobuf.js is caused by unsafe dynamic code generation.


The security issue has not received an official CVE number and is currently being tracked as GHSA-xq3m-2v4x-88gg, the identifier assigned by GitHub.

Endor Labs explains that the library builds JavaScript functions from protobuf schemas by concatenating strings and executing them via the Function() constructor, but it fails to validate schema-derived identifiers, such as message names.


This lets an attacker supply a malicious schema that injects arbitrary code into the generated function, which is then executed when the application processes a message using that schema.
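The mechanism can be illustrated with a deliberately simplified sketch. This is hypothetical code written for this article, not the actual protobuf.js generator; the `buildDecoder` helper and its string template are invented, but the failure mode (an unvalidated, schema-derived name concatenated into code compiled by the Function() constructor) is the one described above:

```javascript
// Hypothetical sketch of unsafe dynamic code generation; NOT the real
// protobuf.js source. A schema-derived message name is concatenated
// straight into a string of code and compiled with Function().
function buildDecoder(messageName) {
  // No validation: whatever is in messageName becomes part of the code.
  const body =
    "return function decode_" + messageName + "() { return 'decoded'; };";
  return new Function(body)();
}

// A benign schema name produces the intended decoder.
const ok = buildDecoder("User");

// A hostile "name" closes the function header early and comments out the
// rest, so the attacker fully controls the generated decoder's body.
globalThis.pwned = false;
const evil = buildDecoder("X(){ globalThis.pwned = true; } //");
evil(); // executes the attacker's statement instead of the intended body
```

Calling the benign decoder returns the expected value, while calling the “decoder” built from the hostile name runs the injected statement; in the real library, the equivalent code runs when an application loads an attacker-influenced schema and processes a message with it.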

This opens the path to RCE on servers or applications that load attacker-influenced schemas, granting access to environment variables, credentials, databases, and internal systems, and even allowing lateral movement within the infrastructure.

The attack could also affect developer machines if those load and decode untrusted schemas locally.

The flaw impacts protobuf.js versions 8.0.0 and 7.5.4 and lower on their respective branches. Endor Labs recommends upgrading to 8.0.1 or 7.5.5, which address the issue.


The patch sanitizes type names by stripping non-alphanumeric characters, preventing the attacker from closing the synthetic function. However, Endor comments that a longer-term fix would be to stop round-tripping attacker-reachable identifiers through Function at all.
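A rough sketch of that mitigation, assuming a simple strip-unsafe-characters regular expression (the exact character set used in the real patch may differ), shows why a hostile, code-bearing name loses its power once reduced to alphanumerics:

```javascript
// Hedged sketch of the sanitization described above; the real patch's
// exact character set may differ. Stripping everything that is not
// alphanumeric leaves a bare identifier fragment that cannot close
// strings or braces, or start a comment, inside generated code.
function sanitizeTypeName(name) {
  return name.replace(/[^A-Za-z0-9]/g, "");
}

sanitizeTypeName("User");              // unchanged: "User"
sanitizeTypeName("X(){ evil(); } //"); // collapses to the harmless "Xevil"
```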

Endor Labs is warning that “exploitation is straightforward,” and that the minimal proof-of-concept (PoC) included in the security advisory reflects this. However, no active exploitation in the wild has been observed to date.

The vulnerability was reported by Endor Labs researcher and security bug bounty hunter Cristian Staicu on March 2, and the protobuf.js maintainers released a patch on GitHub on March 11. Fixes to the npm packages were made available on April 4 for the 8.x branch and on April 15 for the 7.x branch.

Apart from upgrading to patched versions, Endor Labs also recommends that system administrators audit transitive dependencies, treat schema-loading as untrusted input, and prefer precompiled/static schemas in production.



Tech

Three-Monitor Ridge Racer Machine Emulated In MAME


When Ridge Racer hit the arcades in the early 1990s, it came in a few different versions. The last variant used three large CRTs to create a wraparound display for the player. The machine is incredibly rare; it’s believed that only a single-digit number remain in existence. [beaumotplage] has secured a remaining example and has been working to preserve this historical artifact.

The first task with this machine was to dump the ROMs, which had thus far not been preserved in any major archive. With that done, [beaumotplage] worked to hack a version of MAME that could emulate the Three Monitor Version’s unique mode of operation. As it turns out, each screen is driven by its own arcade board, with the three boards linked via C139 serial links. To emulate this, the trick was simply to write some C139 linkup code and run three instances of MAME at once, letting them communicate with each other as the original boards would have. It’s a little janky in operation right now, but it does work!

You can download the hacked version of MAME for three-monitor operation here, though note that this does not include the ROM dumps from the machine itself. We look forward to seeing if the hardware ends up getting a full restoration back to operational standard, too.


Overall, this work goes to show that arcade preservation and archival work sometimes requires getting deep into the nitty-gritty technical stuff.


Tech

IPv6 traffic reaches parity with IPv4 for the first time, Google data shows



Statistics from Google show a steady rise in global IPv6 usage, climbing from near zero in early 2012 to 50.1% on March 28, briefly surpassing IPv4. Although the milestone did not hold, usage now hovers between 45% and 50%.

Tech

Apple avoids a second import ban for its redesigned smartwatches in latest court ruling


Apple has secured a major victory for its redesigned smartwatches in the latest decision from the US International Trade Commission. The federal agency ruled against reinstating an import ban on Apple Watches, allowing the tech giant to continue selling its devices with a reworked blood-oxygen monitoring technology.

The ITC decided to terminate the case, deferring to a preliminary ruling from one of its judges in March which found that Apple’s redesigned smartwatches don’t infringe on patents held by Masimo, the medical tech company that has long been embroiled in lawsuits surrounding the Apple Watch. Apple thanked the ITC in a statement, adding that “Masimo has waged a relentless legal campaign against Apple and nearly all of its claims have been rejected.” We reached out to Masimo for comment and will update the story when we hear back.

The latest decision could offer some closure to the longstanding legal feud between Masimo and Apple. The patent battle dates back to 2021, when Masimo first filed against Apple and requested an import ban on Apple Watches. The ITC ended up ruling that Apple violated Masimo’s patents, resulting in the previous import ban and the Apple Watch maker redesigning the blood-oxygen reading feature in certain models. However, Masimo wasn’t satisfied with this conclusion and sought another import ban on the updated Apple Watch models. Now that the ITC has ruled against that, Masimo is left with the option to appeal the decision to the US Court of Appeals for the Federal Circuit.

While Masimo may currently be on the losing side of this legal battle, it’s confronting Apple on multiple fronts. In November, a federal jury sided with Masimo and ruled that Apple has to pay $634 million in a separate patent infringement case.

