Connect with us
DAPA Banner

Tech

Valve's Proton 11 beta boosts Linux gaming with better performance and classic game support

Published

on


Valve has released a new beta version of Proton, the company’s official compatibility layer for improving Linux gaming. Proton 11.0-beta1 is a notable update for several reasons, including improved support for running classic games from the 90s. The release also lays the groundwork for further improvements expected in the near…
Read Entire Article
Source link

Continue Reading
Click to comment

You must be logged in to post a comment Login

Leave a Reply

Tech

Should my enterprise AI agent do that? NanoClaw and Vercel launch easier agentic policy setting and approval dialogs across 15 messaging apps

Published

on

For the past year, early adopters of autonomous AI agents have been forced to play a murky game of chance: keep the agent in a useless sandbox or give it the keys to the kingdom and hope it doesn’t hallucinate a catastrophic “delete all” command.

To unlock the true utility of an agent—scheduling meetings, triaging emails, or managing cloud infrastructure—users have had to grant these models raw API keys and broad permissions, raising the risk of their systems being disrupted by an accidental agent mistake.

That tradeoff ends today. The creators of the open source sandboxed NanoClaw agent framework — now known under their new private startup named NanoCo — have announced a landmark partnership with Vercel and OneCLI to introduce a standardized, infrastructure-level approval system.

By integrating Vercel’s Chat SDK and OneCLI’s open source credentials vault, NanoClaw 2.0 ensures that no sensitive action occurs without explicit human consent, delivered natively through the messaging apps where users already live.

Advertisement

The specific use cases that stand to benefit most are those involving high-consequence “write” actions. That is, in DevOps, an agent could propose a cloud infrastructure change that only goes live once a senior engineer taps “Approve” in Slack.

For finance teams, an agent could prepare batch payments or invoice triaging, with the final disbursement requiring a human signature via a WhatsApp card.

Technology: security by isolation

The fundamental shift in NanoClaw 2.0 is the move away from “application-level” security to “infrastructure-level” enforcement. In traditional agent frameworks, the model itself is often responsible for asking for permission—a flow that Gavriel Cohen, co-founder of NanoCo, describes as inherently flawed.

“The agent could potentially be malicious or compromised,” Cohen noted in a recent interview. “If the agent is generating the UI for the approval request, it could trick you by swapping the ‘Accept’ and ‘Reject’ buttons.”

Advertisement

NanoClaw solves this by running agents in strictly isolated Docker or Apple Containers. The agent never sees a real API key; instead, it uses “placeholder” keys. When the agent attempts an outbound request, the request is intercepted by the OneCLI Rust Gateway. The gateway checks a set of user-defined policies (e.g., “Read-only access is okay, but sending an email requires approval”).

If the action is sensitive, the gateway pauses the request and triggers a notification to the user. Only after the user approves does the gateway inject the real, encrypted credential and allow the request to reach the service.

Product: bringing the ‘human’ into the loop

While security is the engine, Vercel’s Chat SDK is the dashboard. Integrating with different messaging platforms is notoriously difficult because every app—Slack, Teams, WhatsApp, Telegram—uses different APIs for interactive elements like buttons and cards.

By leveraging Vercel’s unified SDK, NanoClaw can now deploy to 15 different channels from a single TypeScript codebase. When an agent wants to perform a protected action, the user receives a rich interactive card on their phone. “The approval shows up as a rich, native card right inside Slack or WhatsApp or Teams, and the user taps once to approve or deny,” said Cohen. This “seamless UX” is what makes human-in-the-loop oversight practical rather than a productivity bottleneck.

Advertisement

The full list of 15 supported messaging apps/channels contains many favored by enterprise knowledge workers, including:

  • Slack

  • WhatsApp

  • Telegram

  • Microsoft Teams

  • Discord

  • Google Chat

  • iMessage

  • Facebook Messenger

  • Instagram

  • X (Twitter)

  • GitHub

  • Linear

  • Matrix

  • Email

  • Webex

Background on NanoClaw

NanoClaw launched on January 31, 2026, as a minimalist and security-focused response to the “security nightmare” inherent in complex, non-sandboxed agent frameworks.

Created by Cohen, a former Wix.com engineer, and marketed by his brother Lazer, CEO of B2B tech public relations firm Concrete Media, the project was designed to solve the auditability crisis found in competing platforms like OpenClaw, which had grown to nearly 400,000 lines of code.

By contrast, NanoClaw condensed its core logic into roughly 500 lines of TypeScript—a size that, according to VentureBeat, allows the entire system to be audited by a human or a secondary AI in approximately eight minutes.

Advertisement

The platform’s primary technical defense is its use of operating system-level isolation. Every agent is placed inside an isolated Linux container—utilizing Apple Containers for high performance on macOS or Docker for Linux—to ensure that the AI only interacts with directories explicitly mounted by the user.

As detailed in VentureBeat’s reporting on the project’s infrastructure, this approach confines the “blast radius” of potential prompt injections strictly to the container and its specific communication channel.

In March 2026, NanoClaw further matured this security posture through an official partnership with the software container firm Docker to run agents inside “Docker Sandboxes”.

This integration utilizes MicroVM-based isolation to provide an enterprise-ready environment for agents that, by their nature, must mutate their environments by installing packages, modifying files, and launching processes—actions that typically break traditional container immutability assumptions.

Advertisement

Operationally, NanoClaw rejects the traditional “feature-rich” software model in favor of a “Skills over Features” philosophy. Instead of maintaining a bloated main branch with dozens of unused modules, the project encourages users to contribute “Skills”—modular instructions that teach a local AI assistant how to transform and customize the codebase for specific needs, such as adding Telegram or Gmail support.

This methodology, as described on NanoClaw’s website and in VentureBeat interviews, ensures that users only maintain the exact code required for their specific implementation.

Furthermore, the framework natively supports “Agent Swarms” via the Anthropic Agent SDK, allowing specialized agents to collaborate in parallel while maintaining isolated memory contexts for different business functions.

Licensing and open source strategy

NanoClaw remains firmly committed to the open source MIT License, encouraging users to fork the project and customize it for their own needs. This stands in stark contrast to “monolithic” frameworks.

Advertisement

NanoClaw’s codebase is remarkably lean, consisting of only 15 source files and roughly 3,900 lines of code, compared to the hundreds of thousands of lines found in competitors like OpenClaw.

The partnership also highlights the strength of the “Open Source Avengers” coalition.

By combining NanoClaw (agent orchestration), Vercel Chat SDK (UI/UX), and OneCLI (security/secrets), the project demonstrates that modular, open-source tools can outpace proprietary labs in building the application layer for AI.

Community reactions

As shown on the NanoClaw website, the project has amassed more than 27,400 stars on GitHub and maintains an active Discord community.

Advertisement

A core claim on the NanoClaw site is that the codebase is small enough to understand in “8 minutes,” a feature targeted at security-conscious users who want to audit their assistant.

In an interview, Cohen noted that iMessage support via Vercel’s Photon project addresses a common community hurdle: previously, users often had to maintain a separate Mac Mini to connect agents to an iMessage account.

The enterprise perspective: should you adopt?

For enterprises, NanoClaw 2.0 represents a shift from speculative experimentation to safe operationalization.

Historically, IT departments have blocked agent usage due to the “all-or-nothing” nature of credential access. By decoupling the agent from the secret, NanoClaw provides a middle ground that mirrors existing corporate security protocols—specifically the principle of least privilege.

Advertisement

Enterprises should consider this framework if they require high-auditability and have strict compliance needs regarding data exfiltration. According to Cohen, many businesses have not been ready to grant agents access to calendars or emails because of security concerns. This framework addresses that by ensuring the agent structurally cannot act without permission.

Enterprises stand to benefit specifically in use cases involving “high-stakes” actions. As illustrated in the OneCLI dashboard, a user can set a policy where an agent can read emails freely but must trigger a manual approval dialog to “delete” or “send” one.

Because NanoClaw runs as a single Node.js process with isolated containers , it allows enterprise security teams to verify that the gateway is the only path for outbound traffic. This architecture transforms the AI from an unmonitored operator into a supervised junior staffer, providing the productivity of autonomous agents without forgoing executive control.

Ultimately, NanoClaw is a recommendation for organizations that want the productivity of autonomous agents without the “black box” risk of traditional LLM wrappers. It turns the AI from a potentially rogue operator into a highly capable junior staffer who always asks for permission before hitting the “send” or “buy” button.

Advertisement

As AI-native setups become the standard, this partnership establishes the blueprint for how trust will be managed in the age of the autonomous workforce.

Source link

Continue Reading

Tech

This record-breaking ultraviolet crystal may unlock nuclear clocks and change how submarines, spacecraft, and missiles navigate without external signals

Published

on


  • Nuclear clocks promise accuracy far beyond existing atomic timekeeping systems
  • Thorium 229 offers a rare pathway to practical nuclear time measurement
  • Ultraviolet breakthrough reduces one of the hardest barriers in nuclear clock development

A new crystal developed by Chinese scientists has broken the world record for ultraviolet light conversion, bringing nuclear clock technology closer to reality.

The fluorinated borate compound pushes laser light to a wavelength of 145.2nm, beating the previous benchmark of 150nm set by a Chinese crystal from the 1990s.

Source link

Continue Reading

Tech

AI chip startup Cerebras files for IPO

Published

on

Cerebras Systems, a startup building what CEO Andrew Feldman describes as “the fastest AI hardware for training and inference,” has filed to go public.

The company previously filed for an initial public offering in 2024, but that was delayed due to a federal review of an investment from Abu Dhabi-based G42 and was ultimately withdrawn. Cerebras raised a $1.1 billion Series G last year, followed by a $1 billion Series H in February at a $23 billion valuation, according to the Wall Street Journal.

In recent months, the company announced an agreement with Amazon Web Services to use Cerebras chips in Amazon data centers, as well as a deal with OpenAI reportedly worth more than $10 billion.

In a recent interview with the WSJ, Feldman boasted, “Obviously, [Nvidia] didn’t want to lose the fast inference business at OpenAI, and we took that from them.”

Advertisement

Cerebras brought in $510 million in revenue in 2025, according to the filing, with a net income of $237.8 million (excluding certain one-time items, it was a non-GAAP net loss of $75.7 million).

A company has not disclosed how much it hopes to raise in the IPO. A spokesperson said the offering is planned for mid-May.

Source link

Advertisement
Continue Reading

Tech

All Gemini users can now access Notebook projects on the web without paying a dime

Published

on

Google just made one of Gemini’s most useful features available to everyone. The Notebooks feature, initially rolled out to paid AI subscribers earlier this month, is now available to all free users on the web. If you use Gemini regularly, this is a pretty big deal.

Notebooks in @GeminiApp are now available to Free users on web!

Access your personal, unshared notebooks directly in Gemini *and* use your chats with Gemini as sources in new or existing unshared notebooks.

Let us know what you think! https://t.co/BT8B3gktPR

— NotebookLM (@NotebookLM) April 17, 2026

Advertisement

What is Gemini’s Notebooks feature and what can you do with it?

Think of Notebooks as a dedicated project workspace inside Gemini. Instead of starting fresh every time you open the app, you can store your conversations, files, and sources all in one place under a single topic. Gemini then uses everything in that notebook as context when you ask your next question.

The feature shows up as a new Notebooks section in Gemini’s side panel, right between Gems and Chats. Any conversation you have inside Gemini can be saved to a notebook using the three dots menu.

You can also set custom instructions to control the tone, format, and style of responses. If you prefer Gemini to answer without referencing your saved chats, there is also an option to turn off notebook memory entirely.

What makes this genuinely exciting is the NotebookLM integration. These are the same notebooks used in NotebookLM, Google’s standalone research tool. Since the two sync automatically, any source you add in one app instantly appears in the other. That means you can research something in Gemini and then use NotebookLM’s Video Overviews and Infographics features on the same material, without any manual transfers.

How many sources can free Gemini users add to a notebook?

Free users can add up to 50 sources per notebook. If you are on a paid plan, the limits scale up considerably: AI Plus subscribers get 100 sources, Pro users get 300, and Ultra subscribers can go up to 600. The feature currently supports Gemini’s full toolkit, including web search and other AI-powered functions.

For now, Notebooks is live on the web only. It has not yet reached mobile or Mac apps, though broader availability is expected in the coming weeks.

Advertisement

Source link

Continue Reading

Tech

ICE monitoring app takedowns violated the First Amendment

Published

on

A court has stopped the U.S. government from forcing Apple to take down ICE reporting apps from the App Store, due to it being a violation of the First Amendment.

Stylized collage of two women flanking a cracked smartphone screen with a large red warning triangle and exclamation mark, set against blue and black abstract background
Image credit: TheFire.org

In February, a lawsuit from the Foundation for Individual Rights and Expression (FIRE) took aim at the U.S. government over the right to report the activities of the Immigration and Customs Enforcement agency (ICE).
The preliminary finding, issued on April 17, lands in FIRE’s favor, with the Department of Homeland Security and Department of Justice being prevented from coercing Apple and Facebook into removing apps and interfering with communications.
Continue Reading on AppleInsider | Discuss on our Forums

Source link

Continue Reading

Tech

Sam Altman’s project World looks to scale its human verification empire. First stop: Tinder.

Published

on

At a trendy venue near the San Francisco pier, Sam Altman’s verification project World celebrated its next evolution and rapid expansion of its ambitions.  And it’s starting with Tinder.

Tools for Humanity (TFH), the company behind the World project, announced Friday plans to integrate its verification tech into dating apps, event and concert ticketing systems, business organizations, email, and other arenas of public life.

“The world is getting close to very powerful AI, and this is doing a lot of wonderful things,” said Altman, speaking before a packed crowd at The Midway. “We are also heading to a world now where there’s going to be more stuff generated by AI than by humans,” he added. “I’m sure many of you [have had moments] where you’re like, ‘Am I interacting with an AI or a person, or how much of each, and how do I know?”

World (formerly Worldcoin) distinguishes itself from many of its ID verification peers by offering the ability to verify that a real, living human is using a digital service while still protecting that person’s anonymity. There is some complex cryptographic alchemy behind this (something called “zero-knowledge proof-based authentication”). The upshot: The company is creating what it calls “proof of human” tools, which are mechanisms that can verify human activity in a world rife with AI agents and bots.

Advertisement

Its chief tool for verification is a spherical digital reader called the Orb that scans a user’s eyes, converting their iris into a unique and anonymous cryptographic identifier (known as a verified World ID). This can then be used to access World’s services, although users can also access World’s app without one.

Altman kept his remarks brief on Friday (TFH’s co-founder and CEO, Alex Blania, was absent due to a last-minute hand surgery, Altman said). He then turned much of the presentation over to World’s chief product officer, Tiago Sada, and his team.

Sada explained that World was launching the newest version of its app (the last version was launched at an event in December), along with a plethora of new integrations for its technology.

World has been preparing, for some time, to deploy a verification service for dating apps — most notably, Tinder. Last year, Tinder launched a World ID pilot program in Japan. That pilot was apparently a success because World announced that Tinder would be launching its verification integration in global markets —including the U.S. The program integrates a World ID emblem into the profiles of users who have gone through its verification processes, thus authenticating them as a real person.

Advertisement
Image Credits:World

World is also courting the entertainment industry by launching a new feature called Concert Kit, where musical artists can reserve a certain number of concert tickets for World ID-verified humans. This is designed to ensure that fans are safe from scalpers who often use automated ticket-buying bots to scarf up seats. Concert Kit is compatible with major ticketing systems, including Ticketmaster and Eventbrite, and the company is promoting it via partnerships with 30 Seconds to Mars and Bruno Mars — both of whom plan to use it for their upcoming tours.

The event was full of many other announcements, including some aimed at businesses. A Zoom/World ID verification integration seeks to battle a supposed deepfake threat to business calls, and a Docusign partnership is designed to ensure signatures come from authentic users.

The company is also working on a number of features in anticipation of the Wild West of the agentic web, including one called “agent delegation,” in which a person can delegate their World ID to an agent to carry out online activities on their behalf. A partnership with authentication firm Okta has also created a system (currently in beta) that verifies that an agent is acting on behalf of a human. The system is set up so that a World ID can be tied to a specific agent and then, when the agent goes out into the web to operate on that person’s behalf, websites will know a verified person is behind the behavior, said Okta’s chief product officer, Gareth Davies, at the event.

So far, it’s been difficult for World to scale, due largely to the verification process itself. For much of the company’s history, to get its gold standard, you had to travel to one of its offices and have your eyeballs scanned by an Orb — a fairly inconvenient (not to mention weird) experience.

Image Credits:World

However, World has continually made moves to increase the ease and incentive structure for verification. In the past, it offered its crypto asset, Worldcoin, to some members who signed up and has distributed its Orbs into big retail chains so that users can verify themselves while they’re out shopping or getting a coffee. Now the company is announcing that it is significantly expanding its Orb saturation in New York, Los Angeles, and San Francisco. The company also promoted a service where interested users could have World bring an Orb to their location for remote verification.

In a conversation with TechCrunch, Sada also shared that World has attempted to solve the scaling problem by creating different tiers of verification. The highest tier is Orb verification, but below that, World has previously offered a mid-level tier, which uses an anonymized scan of an official government ID via the card’s NFC chip.

Advertisement

The company also introduced a low-level tier, or what Sada called “low friction”— meaning low effort, I guess, but also “low security” — which involves merely taking a selfie.

Selfie Check, which Sada’s team presented during the event, is designed to maintain user privacy.

“Selfie is private by design,” said Daniel Shorr, one of TFH’s executives, during the presentation. “That means that we maximize the local processing that’s happening on your device, on your phone, which means that your images are yours.”

Selfie verification obviously isn’t new, and fraudsters have long managed to spoof it. “Obviously, we do our best, and it’s like one of the best systems that you’ll see for this. But it has limits,” Sada told TechCrunch. Developers looking to integrate World’s services can choose from the three different verification tiers depending on the level of security that’s important to them, he noted.

Advertisement

Source link

Continue Reading

Tech

How to actually make a difference with your life

Published

on

Devon Fritz had his midlife crisis a little early.

He spent his 20s writing tax software, staying on track to hit all the life targets he’d set for himself: house, kids, financial security. And then, one day, he did the math and projected forward what the next 20 years of his life would look like. But instead of relief, “I had this weird feeling that I’d totally missed the target,” he told me recently.

”I looked around at my colleagues, who kind of felt stuck in this place,” he said. “They had gotten to this cushy job where things were good, pay was good, benefits were good, but nobody seemed happy.”

This might sound familiar. Who among us hasn’t had the occasional crisis of meaning, perhaps mentally scored to the Talking Heads’ “Once in a Lifetime”? (The last part might just be me.) But most of us shake off those existential doubts and press on, for better or for worse.

Advertisement

Devon Fritz, however, is not like you or me. Searching for a more meaningful life and career, he tried volunteering with refugee-aid groups in Germany during the 2015 migrant crisis — only to be discouraged by how slow, unresponsive, and ineffective he found the nonprofit world.

Eventually, at a conference in Oxford, England, he discovered effective altruism, or EA. EA is built on the idea that we should use rigorous evidence and cost-benefit analysis to do the most good possible, very much including how we donate to charity. A dollar to one organization might save a life; a dollar to another might buy a commemorative tote bag. EA takes that gap in impact seriously and follows the math wherever it leads, always searching for the donation or the act that can create the most measurable positive impact, especially in terms of lives saved.

The idea clicked with Fritz, and over the next several years, he rebuilt his career around a single, very EA-inflected question: How can you build a career that really matters? The result is his book The High-Impact Professional’s Playbook, the manual Fritz says he wished he’d had during his early existential crisis. The book lays out concrete paths through which a person with a regular job can actually create outsized positive impact on the world.

What follows are five of the most useful ideas from it. And while Fritz’s framework comes out of effective altruism — which, with all its hyper-rationality, can sometimes seem cold or weird to outsiders — he argues that the lessons have value for everyone.

Advertisement

“Being impactful — in its best form — doesn’t tell you what to do,” he told me. “It just says do stuff. Figure out what’s good, and do something that’s really good.”

Next best may be better than best

The intellectual spine of Fritz’s book is a concept called “counterfactuality,” which, I’ll admit, may make you want to stop reading now. But while it’s a 22-point word in Scrabble, counterfactuality is actually pretty simple. For any action meant to do good, ask yourself: What would have happened if I hadn’t done it? If the honest answer is “basically the same thing,” your actual impact is smaller than you think.

Haindavi Kandarpa, one of the case studies in Fritz’s book, was at Boston Consulting Group working on public health and education projects in India and Bangladesh. That sounds both important and good, but when Kandarpa asked the counterfactual question about her own role, the answer was devastating: Nothing would really change. If she wasn’t doing it, someone equally competent would have taken her slot and done roughly the same work. That realization led her to leave for a charity startup incubator.

Advertisement

A lot of the standard advice about doing good falters when faced with the counterfactual. If 500 people apply for a job at an elite nonprofit and one gets it, the actual impact of the hire is the often-small gap between them and the closet runner-up. Fritz’s paradoxical conclusion is that you can have more counterfactual impact in obscure places nobody is looking — like the charity ranked fifth on the effectiveness list, not first. That can be hard to hear, especially for high performers used to competing for every top prize, but the status hit is worth it for the sake of actually making a difference.

It’s not just what you do — it’s what you do with your money

Unless you’re a full-time volunteer or are extremely bad at salary negotiation, you get money for your work. And what you do with that money can be just as impactful as what you did to get it.

According to a 2024 GiveWell analysis cited in his book, you can statistically save one human life if you give just $3,000 — provided it’s to the most effective charity. Switching just 10 percent of your charitable giving from a typical charity to an evidence-backed one can help up to 100 times more people or animals, all for the same cost. That is a life-saving impact.

Advertisement

This is the move with the lowest barrier to entry in the entire book, and the one most influenced by effective altruism. You don’t have to quit your job, move countries, or learn a new skillset. You keep doing what you’re doing but write the check — or, better, set up a recurring transfer — to an organization on a credible evaluator’s list. (GiveWell is a great place to begin.) You can start at 1 percent of income and see how it feels.

Your workplace is a lever

Most people don’t think of their workplace as something they can change. But if you have any influence over procurement, hiring, 401(k) match programs, charitable giving policies, or the company’s public positions, you have access to budgets and decisions that could dwarf what you can do on your own.

A mid-level manager who convinces their company to enroll in a workplace-giving program that defaults to effective charities can route more money in a single policy change than they could personally donate over a decade.

Advertisement

Nonprofits desperately need people who know how things work

The most consistently surprising path in Fritz’s book is trusteeship and advisory work. Charities and NGOs are often filled with well-meaning people who desperately want to do good, Fritz told me, but “they don’t have anybody even thinking” about quotidian details like finance. Luciana Vilar, another case study in the book, spent years in corporate finance before joining two nonprofit boards and was routinely the only person in the room who knew how to build a real budget.

If you are a competent finance person, lawyer, HR professional, or operations manager — which includes basically anyone who has worked inside a functioning company — you probably have skills that even well-funded nonprofits are desperate for. Giving few hours of your week to board or advisory time can unlock capacity an organization can’t buy, and it doesn’t require a career switch.

Your network has more leverage than you think

Advertisement

Fritz’s most striking claim is that the most time-efficient path to making a difference isn’t your career or your donations; it’s the people you already know.

If an effective but under-resourced charity is trying to fill a role, and you spend an hour emailing the five people in your network who’d be a good fit, and one gets hired, the counterfactual math of what you’ve done is absurdly high. And it didn’t require you to change jobs or write a check. All you had to do was send some emails.

It’s the path Fritz himself has taken, starting High Impact Professionals, which has placed dozens of mid-career people into higher-impact roles, all while rigorously measuring its own counterfactual impact. (When a candidate in the network takes a job, they ask the employer how good the next-best candidate was. When it’s very close, they count less impact.)

The same network effects can work with donations. Fritz describes people raising $1,000 or more by posting on social media a few weeks before their birthday, asking friends to donate to an effective charity instead of sending a gift. A lot of “how can I make a difference” agonizing is really about not wanting to look at the lever that’s already in your hand.
I’ve talked to enough people lately, including myself in the mirror, to know that low-grade despair is becoming our default setting. The problems of the world feel too large, individual action feels too small, and it can feel like the honest move is to just tend your garden. But when I pushed Fritz on this, he gave me an answer I keep coming back to. “There are big problems,” he acknowledged. “But that means it’s a great time to jump in and try to solve them.”

Advertisement

That can sound naive — but it’s also right. A world without problems wouldn’t need any of us. The world we actually have needs all the help it can get, and the bar for being useful in it is lower than we think.

Source link

Advertisement
Continue Reading

Tech

Victrola x Third Man Records Limited-Edition Turntable and Speaker Set Debuts for RSD 2026

Published

on

Victrola and Third Man Records have officially pulled the trigger on their limited-edition collaboration for Record Store Day 2026, adding a hardware angle to a weekend we’ve already covered heavily from the music side including some of the most sought-after releases hitting bins today. This time, it’s not about what you’re chasing on vinyl, it’s what you’ll be spinning it on.

The release pairs the custom Victrola VPT 1520 TMR turntable with a matching set of VPS 400 TMR Tempo bookshelf speakers, both finished in Third Man’s signature yellow-and-black. The collection is available separately or as a bundle via Third Man’s official site and Victrola’s dedicated page, along with in-store availability at Third Man Records locations in Detroit, Nashville, and London.

The launch coincides with Record Store Day events, including a celebration at the label’s Detroit pressing plant.

I’ve had the opportunity to visit Third Man many times, and if you haven’t been to one of their locations, you need to go,” said Scott Hagen, CEO of Victrola. “What Jack and his team have built is something truly magical,” Hagen continued. “They aren’t just painting things yellow and black—it’s as if they’re bringing inanimate objects to life, each one with its own character and soul. Maybe that’s why vinyl and vinyl culture fit so naturally within the Third Man world. It’s certainly why Victrola was incredibly honored when Third Man asked us to design and bring these new products to market together. I’m super proud of the outcome. Anyone who loves Third Man is going to need this new turntable and speakers in their life.”

Advertisement

Third Man Records x Victrola Wave VPT 1520 TMR  Turntable 

victorla-thrid-man-records-turntable

The Third Man Records x Victrola Wave VPT 1520 TMR turntable features an MDF plinth, adjustable counterweight, removable headshell, and high-resolution Bluetooth with Auracast support. It’s finished in a custom black-and-yellow color scheme with the Third Man globe logo prominently integrated into the design.

  • Drive Method: Belt drive
  • Speeds: 33 ⅓ and 45 RPM
  • Cartridge: Ortofon OM5E
  • Bluetooth Version: 5.4
  • Bluetooth Codecs: aptX Adaptive, aptX HD, SBC, LC3 (Auracast supported)
  • Profiles: A2DP, AVRCP
  • Outputs:
    • RCA (built-in preamp: 200–280 mV)
    • Phono (2.5 mV ±3 dB)
  • Power Input: AC 100–240V, 50/60Hz
  • Dimensions: 16.93″ W x 14.84″ L x 4.72″ H
  • Weight: 13 lbs (with cover)
  • Included: Dust cover, platter, silicone slipmat, 45 RPM adapter, 6′ RCA cable, manual

Third Man Records x Victrola VPS 400 TMR Wireless Powered Speakers

victorla-thrid-man-records-speakers

The VPS 400 TMR Tempo powered bookshelf speakers round out the system with flexible wired and wireless connectivity, including Bluetooth and Auracast support for multi-room playback with up to 10 speakers.

  • Inputs: Bluetooth, Auracast Broadcast Audio, 3.5mm AUX, RCA, USB-C (PC/flash drive), Optical (Toslink)
  • Outputs: Auracast Broadcast Audio, Subwoofer out (with hi-pass “Bass Filter”)
  • Dimensions (Right/Primary): 5.91″ W x 7.76″ D x 8.86″ H (150 x 197 x 225 mm)
  • Dimensions (Left/Secondary): 5.91″ W x 8.07″ D x 8.86″ H (150 x 205 x 225 mm)
  • Weight (Pair): 9.47 lbs (4.30 kg)
  • Included: Right and left speakers, power cord (1.5m), speaker interconnect cable (4m), RCA cable (1.5m), 3.5mm cable (1.5m), instruction manual

Third Man Records x Victrola Vinyl Collection

To round out the collaboration, Victrola has curated a capsule collection of Third Man Records titles available at Victrola.com/thirdmanrecords, pairing the hardware launch with some of the label’s most recognizable releases.

  • Featured Titles:
    • Elephant – The White Stripes
    • Blunderbuss – Jack White
    • Broken Boy Soldiers – The Raconteurs
victrola-multi-zones

The Bottom Line 

The collaboration between Victrola and Third Man Records is a straightforward, practical play tied to Record Store Day 2026. Instead of focusing only on records, it connects the purchase of vinyl with a ready-to-use playback system.

What stands out is the combination of a belt-drive turntable with a known cartridge and powered speakers that include both wired inputs and modern wireless features like Bluetooth and Auracast. That flexibility allows users to play records, stream from a phone, or send audio to other compatible speakers without adding more components.

At $499.99 for the turntable, $249.99 for the speakers, or $649.98 for the bundle, this is aimed at newer vinyl buyers or casual listeners who want a complete system without building one piece by piece. It lowers the barrier to entry while offering more functionality than entry-level all-in-one options.

It’s not intended for more advanced systems or users looking to upgrade individual components over time, but for its target audience, the value is in simplicity and convenience rather than maximum performance.

Advertisement. Scroll to continue reading.
Advertisement

Price & Availability

  • VPT 1520 TMR Turntable$499.99 
  • VPS 400 TMR Powered Wireless Speakers$249.99
  • Turntable/Speaker bundle$649.98

Tip: The Victrola Wave turntable (regular edition) is available at Amazon, as well as the Victrola Tempo speakers, which are the same models without yellow branding.

Source link

Continue Reading

Tech

What Is The 6-12 Rule For Electrical Outlets?

Published

on





There are various rules and regulations in home construction. Each discipline involved in the process — carpentry, plumbing, and electrical, to name a few — has codes to follow, which are intended to prevent shoddy workmanship or ensure homeowners’ safety. The 6-12 rule (or, as it’s sometimes known, the 2-6-12 rule) is definitely for the latter.

The rule mandates a certain spacing for electrical outlets and covers all the various types of outlets. Under the 2-6-12 rule, an electrical outlet must be installed in any wall space longer than 2 feet. A wall space is any continuous wall that is not broken up by a door or fireplace. Further, those outlets cannot be spaced more than 12 feet apart. This is to make sure that no point along the wall is more than 6 feet from an outlet.

The rule makes a lot of sense because most appliances that a homeowner will plug in have 6-foot cords. The rule is in place to ensure that there will always be an outlet in reach, no matter where you place a TV, stereo, lamp, or other electrical appliance. It’s designed to discourage the use of extension cords wherever possible.

Advertisement

Kitchens follow different rules

The one room in your house that isn’t subject to the same rule is the kitchen. Those rules are modified to accommodate the shorter 2-foot cables that come with kitchen appliances such as coffee makers, blenders, and the like. In a kitchen, electrical outlets must be placed no more than 2 feet from the edge of a counter and no more than 4 feet apart. Those outlets will typically be required to be Ground-Fault Circuit Interrupt or GFCI outlets (the outlets with the reset button). These can be wall-mounted or mounted within the counter.

Kitchen islands have still more rules. Islands are not required to have an electrical outlet, but they still need to have all the necessary equipment for power to be added later. Often, this means a closed electrical box located in one of the island’s cabinets. As long as it can be added later, it’s allowed.

Advertisement

Other rooms, like foyers and bathrooms, have similar but different rules, and you’ll have to be aware of what your local regulations require. This is especially true since those rules can vary between regions. If you’re a DIYer, be sure to research what’s applicable in your city or county.



Advertisement

Source link

Continue Reading

Tech

Most enterprises can’t stop stage-three AI agent threats, VentureBeat survey finds

Published

on

A rogue AI agent at Meta passed every identity check and still exposed sensitive data to unauthorized employees in March. Two weeks later, Mercor, a $10 billion AI startup, confirmed a supply-chain breach through LiteLLM. Both are traced to the same structural gap. Monitoring without enforcement, enforcement without isolation. A VentureBeat three-wave survey of 108 qualified enterprises found that the gap is not an edge case. It is the most common security architecture in production today.

Gravitee’s State of AI Agent Security 2026 survey of 919 executives and practitioners quantifies the disconnect. 82% of executives say their policies protect them from unauthorized agent actions. Eighty-eight percent reported AI agent security incidents in the last twelve months. Only 21% have runtime visibility into what their agents are doing. Arkose Labs’ 2026 Agentic AI Security Report found 97% of enterprise security leaders expect a material AI-agent-driven incident within 12 months. Only 6% of security budgets address the risk.

VentureBeat’s survey results show that monitoring investment snapped back to 45% of security budgets in March after dropping to 24% in February, when early movers shifted dollars into runtime enforcement and sandboxing. The March wave (n=20) is directional, but the pattern is consistent with February’s larger sample (n=50): enterprises are stuck at observation while their agents already need isolation. CrowdStrike’s Falcon sensors detect more than 1,800 distinct AI applications across enterprise endpoints. The fastest recorded adversary breakout time has dropped to 27 seconds. Monitoring dashboards built for human-speed workflows cannot keep pace with machine-speed threats.

The audit that follows maps three stages. Stage one is observe. Stage two is enforce, where IAM integration and cross-provider controls turn observation into action. Stage three is isolate, sandboxed execution that bounds blast radius when guardrails fail. VentureBeat Pulse data from 108 qualified enterprises ties each stage to an investment signal, an OWASP ASI threat vector, a regulatory surface, and immediate steps security leaders can take.

Advertisement

The threat surface stage-one security cannot see

The OWASP Top 10 for Agentic Applications 2026 formalized the attack surface last December. The ten risks are: goal hijack (ASI01), tool misuse (ASI02), identity and privilege abuse (ASI03), agentic supply chain vulnerabilities (ASI04), unexpected code execution (ASI05), memory poisoning (ASI06), insecure inter-agent communication (ASI07), cascading failures (ASI08), human-agent trust exploitation (ASI09), and rogue agents (ASI10). Most have no analog in traditional LLM applications. The audit below maps six of these to the stages where they are most likely to surface and the controls that address them.

Invariant Labs disclosed the MCP Tool Poisoning Attack in April 2025: malicious instructions in an MCP server’s tool description cause an agent to exfiltrate files or hijack a trusted server. CyberArk extended it to Full-Schema Poisoning. The mcp-remote OAuth proxy patched CVE-2025-6514 after a command-injection flaw put 437,000 downloads at risk.

Merritt Baer, CSO at Enkrypt AI and former AWS Deputy CISO, framed the gap in an exclusive VentureBeat interview: “Enterprises believe they’ve ‘approved’ AI vendors, but what they’ve actually approved is an interface, not the underlying system. The real dependencies are one or two layers deeper, and those are the ones that fail under stress.”

CrowdStrike CTO Elia Zaitsev put the visibility problem in operational terms in an exclusive VentureBeat interview at RSAC 2026: “It looks indistinguishable if an agent runs your web browser versus if you run your browser.” Distinguishing the two requires walking the process tree, tracing whether Chrome was launched by a human from the desktop or spawned by an agent in the background. Most enterprise logging configurations cannot make that distinction.

Advertisement

The regulatory clock and the identity architecture

Auditability priority tells the same story in miniature. In January, 50% of respondents ranked it a top concern. By February, that dropped to 28% as teams sprinted to deploy. In March, it surged to 65% when those same teams realized they had no forensic trail for what their agents did.

HIPAA’s 2026 Tier 4 willful-neglect maximum is $2.19M per violation category per year. In healthcare, Gravitee’s survey found 92.7% of organizations reported AI agent security incidents versus the 88% all-industry average. For a health system running agents that touch PHI, that ratio is the difference between a reportable breach and an uncontested finding of willful neglect. FINRA’s 2026 Oversight Report recommends explicit human checkpoints before agents that can act or transact execute, along with narrow scope, granular permissions, and complete audit trails of agent actions.

Mike Riemer, Field CISO at Ivanti, quantified the speed problem in a recent VentureBeat interview: “Threat actors are reverse engineering patches within 72 hours. If a customer doesn’t patch within 72 hours of release, they’re open to exploit.” Most enterprises take weeks. Agents operating at machine speed widen that window into a permanent exposure.

The identity problem is architectural. Gravitee’s survey of 919 practitioners found only 21.9% of teams treat agents as identity-bearing entities, 45.6% still use shared API keys, and 25.5% of deployed agents can create and task other agents. A quarter of enterprises can spawn agents that their security team never provisioned. That is ASI08 as architecture.

Advertisement

Guardrails alone are not a strategy

A 2025 paper by Kazdan and colleagues (Stanford, ServiceNow Research, Toronto, FAR AI) showed a fine-tuning attack that bypasses model-level guardrails in 72% of attempts against Claude 3 Haiku and 57% against GPT-4o. The attack received a $2,000 bug bounty from OpenAI and was acknowledged as a vulnerability by Anthropic. Guardrails constrain what an agent is told to do, not what a compromised agent can reach.

CISOs already know this. In VentureBeat’s three-wave survey, prevention of unauthorized actions ranked as the top capability priority in every wave at 68% to 72%, the most stable high-conviction signal in the dataset. The demand is for permissioning, not prompting. Guardrails address the wrong control surface.

Zaitsev framed the identity shift at RSAC 2026: “AI agents and non-human identities will explode across the enterprise, expanding exponentially and dwarfing human identities. Each agent will operate as a privileged super-human with OAuth tokens, API keys, and continuous access to previously siloed data sets.” Identity security built for humans will not survive this shift. Cisco President Jeetu Patel offered the operational analogy in an exclusive VentureBeat interview: agents behave “more like teenagers, supremely intelligent, but with no fear of consequence.”

VentureBeat Prescriptive Matrix: AI Agent Security Maturity Audit

Stage

Advertisement

Attack Scenario

What Breaks

Detection Test

Blast Radius

Advertisement

Recommended Control

1: Observe

Attacker embeds goal-hijack payload in forwarded email (ASI01). Agent summarizes email and silently exfiltrates credentials to an external endpoint. See: Meta March 2026 incident.

No runtime log captures the exfiltration. SIEM never sees the API call. The security team learns from the victim. Zaitsev: agent activity is “indistinguishable” from human activity in default logging.

Advertisement

Inject a canary token into a test document. Route it through your agent. If the token leaves your network, stage one failed.

Single agent, single session. With shared API keys (45.6% of enterprises): unlimited lateral movement.

Deploy agent API call logging to SIEM. Baseline normal tool-call patterns per agent role. Alert on the first outbound call to an unrecognized endpoint.

2: Enforce

Advertisement

Compromised MCP server poisons tool description (ASI04). Agent invokes poisoned tool, writes attacker payload to production DB using inherited service-account credentials. See: Mercor/LiteLLM April 2026 supply-chain breach.

IAM allows write because agent uses shared service account. No approval gate on write ops. Poisoned tool indistinguishable from clean tool in logs. Riemer: “72-hour patch window” collapses to zero when agents auto-invoke.

Register a test MCP server with a benign-looking poisoned description. Confirm your policy engine blocks the tool call before execution reaches the database. Run mcp-scan on all registered servers.

Production database integrity. If agent holds DBA-level credentials: full schema compromise. Lateral movement via trust relationships to downstream agents.

Advertisement

Assign scoped identity per agent. Require approval workflow for all write ops. Revoke every shared API key. Run mcp-scan on all MCP servers weekly.

3: Isolate

Agent A spawns Agent B to handle subtask (ASI08). Agent B inherits Agent A’s permissions, escalates to admin, rewrites org security policy. Every identity check passes. Source: CrowdStrike CEO George Kurtz, RSAC 2026 keynote.

No sandbox boundary between agents. No human gate on agent-to-agent delegation. Security policy modification is a valid action for admin-credentialed process. CrowdStrike CEO George Kurtz disclosed at RSAC 2026 that the agent “wanted to fix a problem, lacked permissions, and removed the restriction itself.”

Advertisement

Spawn a child agent from a sandboxed parent. Child should inherit zero permissions by default and require explicit human approval for each capability grant.

Organizational security posture. A rogue policy rewrite disables controls for every subsequent agent. 97% of enterprise leaders expect a material incident within 12 months (Arkose Labs 2026).

Sandbox all agent execution. Zero-trust for agent-to-agent delegation: spawned agents inherit nothing. Human sign-off before any agent modifies security controls. Kill switch per OWASP ASI10.

Sources: OWASP Top 10 for Agentic Applications 2026; Invariant Labs MCP Tool Poisoning (April 2025); CrowdStrike RSAC 2026 Fortune 50 disclosure; Meta March 2026 incident (The Information/Engadget); Mercor/LiteLLM breach (Fortune, April 2, 2026); Arkose Labs 2026 Agentic AI Security Report; VentureBeat Pulse Q1 2026.

Advertisement

The stage-one attack scenario in this matrix is not hypothetical. Unauthorized tool or data access ranked as the most feared failure mode in every wave of VentureBeat’s survey, growing from 42% in January to 50% in March. That trajectory and the 70%-plus priority rating for prevention of unauthorized actions are the two most mutually reinforcing signals in the entire dataset. CISOs fear the exact attack this matrix describes, and most have not deployed the controls to stop it.

Hyperscaler stage readiness: observe, enforce, isolate

The maturity audit tells you where your security program stands. The next question is whether your cloud platform can get you to stage two and stage three, or whether you are building those capabilities yourself. Patel put it bluntly: “It’s not just about authenticating once and then letting the agent run wild.” A stage-three platform running a stage-one deployment pattern gives you stage-one risk.

VentureBeat Pulse data surfaces a structural tension in this grid. OpenAI leads enterprise AI security deployments at 21% to 26% across the three survey waves, making the same provider that creates the AI risk also the primary security layer. The provider-as-security-vendor pattern holds across Azure, Google, and AWS. Zero-incremental-procurement convenience is winning by default. Whether that concentration is a feature or a single point of failure depends on how far the enterprise has progressed past stage one.

Provider

Advertisement

Identity Primitive (Stage 2)

Enforcement Control (Stage 2)

Isolation Primitive (Stage 3)

Gap as of April 2026

Advertisement

Microsoft Azure

Entra ID agent scoping. Agent 365 maps agents to owners. GA.

Copilot Studio DLP policies. Purview for agent output classification. GA.

Azure Confidential Containers for agent workloads. Preview. No per-agent sandbox at GA.

Advertisement

No agent-to-agent identity verification. No MCP governance layer. Agent 365 monitors but cannot block in-flight tool calls.

Anthropic

Managed Agents: per-agent scoped permissions, credential mgmt. Beta (April 8, 2026). $0.08/session-hour.

Tool-use permissions, system prompt enforcement, and built-in guardrails. GA.

Advertisement

Managed Agents sandbox: isolated containers per session, execution-chain auditability. Beta. Allianz, Asana, Rakuten, and Sentry are in production.

Beta pricing/SLA not public. Session data in Anthropic-managed DB (lock-in risk per VentureBeat research). GA timing TBD.

Google Cloud

Vertex AI service accounts for model endpoints. IAM Conditions for agent traffic. GA.

Advertisement

VPC Service Controls for agent network boundaries. Model Armor for prompt/response filtering. GA.

Confidential VMs for agent workloads. GA. Agent-specific sandbox in preview.

Agent identity ships as a service account, not an agent-native principal. No agent-to-agent delegation audit. Model Armor does not inspect tool-call payloads.

OpenAI

Advertisement

Assistants API: function-call permissions, structured outputs. Agents SDK. GA.

Agents SDK guardrails, input/output validation. GA.

Agents SDK Python sandbox. Beta (API and defaults subject to change before GA per OpenAI docs). TypeScript sandbox confirmed, not shipped.

No cross-provider identity federation. Agent memory forensics limited to session scope. No kill switch API. No MCP tool-description inspection.

Advertisement

AWS

Bedrock model invocation logging. IAM policies for model access. CloudTrail for agent API calls. GA.

Bedrock Guardrails for content filtering. Lambda resource policies for agent functions. GA.

Lambda isolation per agent function. GA. Bedrock agent-level sandboxing on roadmap, not shipped.

Advertisement

No unified agent control plane across Bedrock + SageMaker + Lambda. No agent identity standard. Guardrails do not inspect MCP tool descriptions.

Status as of April 15, 2026. GA = generally available. Preview/Beta = not production-hardened. “What’s Missing” column reflects VentureBeat’s analysis of publicly documented capabilities; gaps may narrow as vendors ship updates.

No provider in this grid ships a complete stage-three stack today. Most enterprises assemble isolation from existing cloud building blocks. That is a defensible choice if it is a deliberate one. Waiting for a vendor to close the gap without acknowledging the gap is not a strategy.

The grid above covers hyperscaler-native SDKs. A large segment of AI builders deploys through open-source orchestration frameworks like LangChain, CrewAI, and LlamaIndex that bypass hyperscaler IAM entirely. These frameworks lack native stage-two primitives. There is no scoped agent identity, no tool-call approval workflow, and no built-in audit trails. Enterprises running agents through open-source orchestration need to layer enforcement and isolation on top, not assume the framework provides it.

Advertisement

VentureBeat’s survey quantifies the pressure. Policy enforcement consistency grew from 39.5% to 46% between January and February, the largest consistent gain of any capability criterion. Enterprises running agents across OpenAI, Anthropic, and Azure need enforcement that works the same way regardless of which model executes the task. Provider-native controls enforce policy within that provider’s runtime only. Open-source orchestration frameworks enforce it nowhere.

One counterargument deserves acknowledgment: not every agent deployment needs stage three. A read-only summarization agent with no tool access and no write permissions may rationally stop at stage one. The sequencing failure this audit addresses is not that monitoring exists. It is that enterprises running agents with write access, shared credentials, and agent-to-agent delegation are treating monitoring as sufficient. For those deployments, stage one is not a strategy. It is a gap.

Allianz shows stage-three in production

Allianz, one of the world’s largest insurance and asset management companies, is running Claude Managed Agents across insurance workflows, with Claude Code deployed to technical teams and a dedicated AI logging system for regulatory transparency, per Anthropic’s April 8 announcement. Asana, Rakuten, Sentry, and Notion are in production on the same beta. Stage-three isolation, per-agent permissioning, and execution-chain auditability are deployable now, not roadmap. The gating question is whether the enterprise has sequenced the work to use them.

The 90-day remediation sequence

Days 1–30: Inventory and baseline. Map every agent to a named owner. Log all tool calls. Revoke shared API keys. Deploy read-only monitoring across all agent API traffic. Run mcp-scan against every registered MCP server. CrowdStrike detects 1,800 AI applications across enterprise endpoints; your inventory should be equally comprehensive. Output: agent registry with permission matrix, MCP scan report.

Advertisement

Days 31–60: Enforce and scope. Assign scoped identities to every agent. Deploy tool-call approval workflows for write operations. Integrate agent activity logs into existing SIEM. Run a tabletop exercise: What happens when an agent spawns an agent? Conduct a canary-token test from the prescriptive matrix. Output: IAM policy set, approval workflow, SIEM integration, canary-token test results.

Days 61–90: Isolate and test. Sandbox high-risk agent workloads (PHI, PII, financial transactions). Enforce per-session least privilege. Require human sign-off for agent-to-agent delegation. Red-team the isolation boundary using the stage-three detection test from the matrix. Output: sandboxed execution environment, red-team report, board-ready risk summary with regulatory exposure mapped to HIPAA tier and FINRA guidance.

What changes in the next 30 days

EU AI Act Article 14 human-oversight obligations take effect August 2, 2026. Programs without named owners and execution trace capability face enforcement, not operational risk.

Anthropic’s Claude Managed Agents is in public beta at $0.08 per session-hour. GA timing, production SLAs, and final pricing have not been announced.

Advertisement

OpenAI Agents SDK ships TypeScript support for sandbox and harness capabilities in a future release, per the company’s April 15 announcement. Stage-three sandbox becomes available to JavaScript agent stacks when it ships.

What the sequence requires

McKinsey’s 2026 AI Trust Maturity Survey pegs the average enterprise at 2.3 out of 4.0 on its RAI maturity model, up from 2.0 in 2025 but still an enforcement-stage number; only one-third of the ~500 organizations surveyed report maturity levels of three or higher in governance. Seventy percent have not finished the transition to stage three. ARMO’s progressive enforcement methodology gives you the path: behavioral profiles in observation, permission baselines in selective enforcement, and full least privilege once baselines stabilize. Monitoring investment was not wasted. It was stage one of three. The organizations stuck in the data treated it as the destination.

The budget data makes the constraint explicit. The share of enterprises reporting flat AI security budgets doubled from 7.9% in January to 16% in February in VentureBeat’s survey, with the March directional reading at 20%. Organizations expanding agent deployments without increasing security investment are accumulating security debt at machine speed. Meanwhile, the share reporting no agent security tooling at all fell from 13% in January to 5% in March. Progress, but one in twenty enterprises running agents in production still has zero dedicated security infrastructure around them.

About this research

Total qualified respondents: 108. VentureBeat Pulse AI Security and Trust is a three-wave VentureBeat survey run January 6 through March 15, 2026. Qualified sample (organizations 100+ employees): January n=38, February n=50, March n=20. Primary analysis runs from January to February; March is directional. Industry mix: Tech/Software 52.8%, Financial Services 10.2%, Healthcare 8.3%, Education 6.5%, Telecom/Media 4.6%, Manufacturing 4.6%, Retail 3.7%, other 9.3%. Seniority: VP/Director 34.3%, Manager 29.6%, IC 22.2%, C-Suite 9.3%.

Advertisement

Source link

Continue Reading

Trending

Copyright © 2025