Tech

ViewSonic LX60HD Smart LED Projector Delivers 1080p Big Screen Streaming for $299

The lifestyle projector category is no longer a niche sideshow in home theater. It is one of the fastest growing segments in the display market, driven by mobility, improving image quality, lower prices, and the simple reality that a 100-inch picture is more fun than a 55-inch TV when friends come over. Consumers want something they can move from the living room to the bedroom, take outside for movie night, or toss in a bag for a weekend away without hiring an installer.

The ViewSonic LX60HD lands squarely in that conversation. Known primarily for its PC monitors and business and home theater projectors, ViewSonic is leaning into the lifestyle trend with a portable Smart LED model that focuses on flexible placement, easy setup, and built-in content access right out of the box. It is designed to make big-screen viewing less intimidating, less permanent, and far more accessible at a price that does not require a second mortgage.

ViewSonic LX60HD Features & Specifications:

Product Design: The LX60HD uses the familiar cube-style chassis that’s become the default look for lifestyle projectors—compact, portable, and designed to sit just about anywhere without looking like “serious home theater equipment.”

Imaging Chip and Light Output: Inside, the LX60HD uses a single TFT LCD imaging chip paired with an LED light source rated at 630 ANSI lumens.

Resolution: Native 1080p (Full HD).

Optical Engine: The sealed optical engine is designed to help keep dust and moisture from entering the light path—important for a projector that’s likely to be moved around, used in different rooms, or taken on the road.

[Image: ViewSonic LX60HD rear inputs]

Connectivity: The LX60HD covers both wireless and wired use cases. Wireless support includes built-in Wi-Fi and Bluetooth. For physical connections, it offers HDMI, USB-C, AV-in, and an audio out port for external speakers or headphones.

[Image: ViewSonic LX60HD setup]

Easy Setup: The LX60HD includes a suite of automated setup tools designed to simplify placement and alignment. These features include auto four-corner adjustment, automatic horizontal and vertical keystone correction, auto screen fit, instant autofocus, and obstacle avoidance to help maintain a properly sized and aligned image with minimal manual intervention.

[Image: ViewSonic LX60HD screen sizes]

Image Size Options: ViewSonic states that the LX60HD can project images up to 140 inches. In practical terms, it can produce an approximately 50 inch image from about 5 feet away, or scale up to around 100 inches from roughly 9 feet. As with any projector rated at 630 ANSI lumens, overall picture quality will vary depending on ambient light conditions, with best results achieved in dim or darkened rooms.
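
Those screen-size figures follow from the spec sheet's 1.2 throw ratio. A minimal Python sketch, using the standard definition (throw ratio = throw distance ÷ image width) and assuming a 16:9 image, lands in the same ballpark as the distances quoted above:

```python
import math

def image_diagonal_in(throw_distance_in: float, throw_ratio: float = 1.2) -> float:
    """Estimate the 16:9 image diagonal (inches) for a throw distance (inches)."""
    width = throw_distance_in / throw_ratio   # throw ratio = distance / width
    return width * math.hypot(16, 9) / 16     # convert 16:9 width to diagonal

print(round(image_diagonal_in(5 * 12)))  # image size from ~5 feet away
print(round(image_diagonal_in(9 * 12)))  # image size from ~9 feet away
```

The same function run in reverse (diagonal × 16 ÷ hypot × ratio) gives the throw distance you'd need for any target screen size, which is handy when planning placement.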

[Image: ViewSonic LX60HD Google TV]

Google TV: The LX60HD runs on the built-in Google TV platform, providing direct access to a wide range of streaming services, including Netflix, Amazon Prime Video, YouTube, Disney+, Max, and others. This allows users to stream content without needing an external media device, keeping setup simple and self-contained.

[Image: ViewSonic PJ-WPD-700 dongle]

Wireless Screen Casting Dongle (Optional): ViewSonic also offers the optional PJ-WPD-700 plug-and-play dongle, which enables wireless screen casting from compatible smartphones and laptops directly to the LX60HD. It is a practical add-on for classrooms, meetings, or quick presentations where running cables is not ideal.

ViewSonic LX60HD Projector Specifications

ViewSonic Model LX60HD
Projector Type Compact LED Video Projector
Price $299.99
Display Type TFT LCD x 1
Light Source Type LED
Light Source Life, Normal  20,000 Hours
Color Depth 16.7 Million Colors
Display Resolution Full HD (1920×1080)
Brightness (ANSI Lumens) 630
Dynamic Contrast Ratio 4,200:1
Screen Size 50″-140″
Aspect Ratio 16:9
Throw Distance 1.42m-3.8m (100″@2.28m)
Throw Ratio 1.2
Keystone Correction Vertical (±40°), Horizontal (±40°)
Horizontal Scan Rate 15K-135KHz
Vertical Scan Rate 23-85Hz
PC Resolution (max) VGA (640 x 480) to Full HD (1920 x 1080)
Mac® Resolution (max) 480i, 480p, 576i, 576p, 720p, 1080i, 1080p
Wired Inputs USB 2.0 Type A: 1
HDMI 1.4 (with HDCP 1.4): 1
AV In: 1
Wired Outputs 3.5mm Audio Out: 1
WiFi 5Gn
Bluetooth Version 5.0
Bluetooth Audio-In 1 (BT 5.0) – direct streaming from compatible smartphones, PCs, etc.
Bluetooth Audio-Out 1 (BT 5.0) – compatible with Bluetooth headphones or speakers
Power Supply 100-240V+/- 10%, 50/60Hz AC
Stand-by <0.5W
Physical Control Keypad, Power key
On-Screen Display Display Image, Power Management, Basic and Advanced System Information (see user guide for full OSD functionality)
Operating Temperature 32-104º F (0 – 40 °C)
Kensington Lock Slot 1
Dimensions 9.0 x 8.9 x 6.3 inches (228 x 227 x 159 mm)
Net Weight 6.8 lbs
Package Contents Projector, Power Cable, Remote Control, Quick Start Guide
Warranty: One-year limited warranty on parts and labor

The Bottom Line 

The lifestyle projector space is crowded with inexpensive models that promise the world and deliver a dim flashlight. ViewSonic is at least playing this one straight. The LX60HD’s 630 ANSI lumens puts it ahead of portable competitors like the Xgimi MoGo 4 (450 lumens) and Samsung Freestyle (550 lumens), while still landing under the $300 mark. That matters.

You’re getting native 1080p, solid auto-setup tools, built-in Bluetooth, and Google TV in one compact cube. For a bedroom, dorm, office, or casual movie night, it makes a lot of sense. Setup is simple. Streaming is built in. Portability is the point.

But let’s keep expectations grounded. 630 lumens is not enough for a large screen home theater in a bright living room. This projector needs dim or near dark conditions to look its best, especially at 100 inches or larger. If you want a daylight TV replacement, this is not it.

The design is clean and easy to move, although a built-in carry handle or optional floor stand would have made it even more flexible.

For under $300, the LX60HD offers a portable, affordable lifestyle projector that delivers usable brightness, smart features, and convenience without pretending it can replace a dedicated home theater setup.

Price & Availability


Tech

Grab Apple's M5 MacBook Air for $949 this weekend, record low price

Thanks to a $150 discount, shoppers can grab Apple’s 2026 M5 MacBook Air 13-inch for a record low $949.

Get the lowest 13-inch MacBook Air price this weekend at Amazon – Image credit: Apple

The 13-inch MacBook Air (2026) is now equipped with Apple’s M5 chip, which features a 10-core CPU with four performance cores and six efficiency cores, delivering a performance boost over the M4 model. In the standard spec, which is on sale for $949 at Amazon this weekend, you’ll also get an 8-core GPU, 16GB of unified memory, and 512GB of storage.
Get 13″ MacBook Air M5 from $949
Continue Reading on AppleInsider | Discuss on our Forums

Tech

5 Telltale Signs You’re Probably A Bad Driver

Few people believe they are bad drivers, which is exactly why terrible drivers remain blissfully unaware that they are menacing the road. In 1981, a Stockholm University study found that the majority of drivers reported having “above average” driving and safety skills. This wasn’t a one-off, either, as a 2021 study by five researchers at the University of Hong Kong and Linköping University reaffirmed the widespread tendency to overstate one’s abilities. 

Try an experiment the next time you’re in a group setting. Ask people how they’d rate their own driving skills, and you’ll probably receive answers ranging from “above average” to “excellent,” which cannot all be true. By definition, not everyone can be above average; roughly half of drivers must sit at or below the median.

This cognitive dissonance — as the researchers call it — happens because bad driving rarely results in fiery crashes and police chases on TV. It happens every day, through many small failures like poor spacing, inconsistent speeds, late or harsh braking, hesitant decisions, and more such minor problems. Together, these small, irritating problems endanger everyone on the road. Also, all of the signs on this list are objectively measurable failures in vehicle control, not just driving preferences. With all that said, here are five worryingly common signs of a bad driver.

Thinking everyone else is the problem

Perhaps the most definitive metric of what defines a bad driver is the “I’m never in the wrong” attitude. If someone you know is constantly bemoaning the state of drivers on the road, then it’s extraordinarily likely that they are the bad driver themselves. Furthermore, if anyone says something along the lines of “that crash was unavoidable,” that indicates a poor or inexperienced driver. In 2016, a Proceedings of the National Academy of Sciences study found that driver-related issues were to blame in over 90% of cited crashes.

While most literature on driver confidence is published outside the U.S., a 2013 National Library of Medicine (NLM) study by two researchers from NYU and Elizabethtown College found that Americans are prone to thinking they are better drivers than average.

Tailgating other drivers

Many people don’t realize that on a multi-lane highway, keep-right laws in most states require slower traffic to yield the left lane to faster vehicles. That is why it can be very frustrating to be stuck behind a driver who is camping in the left lane, especially if you’re in a rush. However, this is no excuse to tailgate the slowpoke in the left lane; doing so is dangerous and a telltale sign of a bad driver. Studies have shown that tailgating drastically reduces reaction time and road safety should an incident occur.

In most cases, apply the two- or three-second rule: pick a fixed object on the road and make sure at least two to three seconds pass between the car ahead passing it and you passing it.

This leaves adequate braking distance should the car ahead of you require a quick stop. Furthermore, the evidence overwhelmingly suggests that younger drivers are more likely to tailgate than older drivers, though it is one of many common mistakes that even experienced drivers make.
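
The time-gap rule above translates directly into distance. A minimal Python sketch (the function name is ours, purely illustrative) showing how much road a chosen gap covers at a given speed:

```python
def following_distance_ft(speed_mph: float, gap_seconds: float = 3.0) -> float:
    """Road covered during the chosen time gap, in feet."""
    feet_per_second = speed_mph * 5280 / 3600   # miles/hour -> feet/second
    return feet_per_second * gap_seconds

# A 3-second gap at 65 mph works out to roughly 286 feet of buffer.
print(round(following_distance_ft(65)))
```

The same arithmetic explains why the rule scales automatically with speed: double your speed and the time gap doubles your buffer distance without you having to estimate feet.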

Never missing an exit

A lot of you have probably seen the “I turn now, good luck everybody else” snippet from “Family Guy,” Seth MacFarlane’s long-running, Disney-owned animated sitcom. There’s a famous saying that bad drivers never miss their exit, which is what that snippet plays on. The idea is that an objectively bad driver will do dangerous things, like cutting across several lanes of traffic, crossing solid yellow or white lines, or braking extremely hard before taking an off-ramp, in order not to miss their exit.

The underlying assumption is that someone who is a “good” driver will prioritize road safety, and if that means adding time and distance to their journey, they’d do it over making a hazardous exit. Of course, the situation can be quite frustrating, especially in certain areas of the U.S. where a single missed exit can result in 15 or even more minutes of extra driving time each journey. The easiest way to not miss exits is to be prepared for them, which might sound intuitive, but is easier said than done. You could be on a new road, visibility could be bad, road markings and signs could be faded, and if you’re going fast, GPS callouts might be a bit delayed. Nonetheless, it’s always better to have a bit more driving time and not cause an accident than to make a risky turn to save a bit of time.

Hard or late braking

Arguably, knowing when to brake (and how much to brake) is the most important skill that a driver can possess, and having a car with a good stopping distance goes a long way in keeping you safe. If you think back to your driving classes, many instructors would have emphasized checking at least the rearview mirror before braking hard, though this may not be possible all the time. On that note, it’s worth taking a look at our guide on how to minimize blind spots in your car, as many drivers fail to set up their mirrors properly.

Anyway, smooth braking is a skill that not a lot of drivers have, because it takes a fair bit of time to develop. Highway traffic can often come to a sudden standstill, especially near major interchanges in and out of the city. An example would be the Mass Pike interchange in Massachusetts (the U.S. state with the worst drivers, statistically). It is at places like these that you’ll typically hear tires squealing, and see more than one person moving into the emergency lane to avoid a crash.

If your passengers are constantly doing the invisible passenger-side brake stomp, it’s probably worth taking a closer look at your braking habits. The easiest fix is to drive slower, as it gives you more time and control over the vehicle.

They hesitate at predictable situations

We’ve all been stuck at an intersection’s free right turn behind a new driver who cannot judge the speed of oncoming vehicles before merging onto the road. This either frustrates the people waiting in line to turn, or creates outright danger as oncoming vehicles need to brake or swerve to avoid an incident. These situations often freak people out, especially beginner drivers. Examples that spring to mind are four-way stops, California stops, free right turns, U-turn areas, roundabouts, and, of course, the notorious zipper merges.

Poor decision-making in these situations, such as not matching speed during an on-ramp merge, waiting too long to enter a roundabout, or taking a U-turn without gauging oncoming traffic, is a telltale sign of a bad driver. There is strong evidence that this hesitation disproportionately affects newer drivers.

A study conducted by four researchers from Jilin University and Yanshan University in January 2021 found a moderate correlation between a driver’s total experience and driving violations. This suggests that the more one drives, the easier it becomes to gauge and judge road situations and react appropriately. It also means that if you find yourself hesitating over right-of-way and safety decisions, you shouldn’t be too hard on yourself; things will get better the more you drive.


Tech

Folding iPhone unveiling & shipment date rumors are all over the place

It’s been a wild week for folding iPhone rumors, with battles about what it will be called, release timing, when orders will ship, and more. On Friday, one prolific leaker jumped in and claims the device will ship in October at the latest.

Silver foldable smartphone concept showing dual rear cameras with flash on one side and a vivid wavy abstract pattern on the unfolded front display against a dark gradient background
Apple’s foldable iPhone is now closer to release than ever

This comes following numerous back-and-forth reports that foldable iPhone buyers would have to wait until as late as December for their new devices. Writing in a post on the Weibo social network, leaker Instant Digital says that the most likely outcome is that Apple will be able to debut the foldable iPhone in September.
However, if Apple does choose to split the releases, the leaker doesn’t anticipate a long wait. They say the iPhone Fold will ship a month after the iPhone 18 Pro.
Continue Reading on AppleInsider | Discuss on our Forums


Tech

Chimpanzees In Uganda Locked In Vicious ‘Civil War’, Say Researchers

Researchers say the world’s largest known wild chimpanzee community in Uganda fractured into rival factions and has been locked in a vicious “civil war” for the last eight years. “It is not clear exactly why the once close-knit community of Ngogo chimpanzees at Uganda’s Kibale National Park are at loggerheads, but since 2018 the scientists have recorded 24 killings, including 17 infants,” reports the BBC. From the report: [O]ver several decades, [lead author Aaron Sandel] said the nearly 200 Ngogo chimpanzees had lived in harmony. They were divided into two sets – known to researchers as Western and Central – but they had existed overall as a cohesive group. Sandel said he first noticed them polarizing in June 2015, when the Western chimpanzees ran away and were chased by the Central group. “Chimpanzees are sort of melodramatic,” he said, explaining that following arguments there would ordinarily be “screaming and chasing” before the animals would later groom and co-operate again.

But following the 2015 dispute, the researchers saw a six-week avoidance period between the two sets, with interactions becoming more infrequent. When they did occur, Sandel said they were “a little more intense, a little more aggressive.” Following the emergence of the two distinct groups in 2018, members of the Western group started attacking the Central chimpanzees. In 24 targeted attacks since the split, at least seven adult males and 17 infants from the Central chimps have been killed, the study found, although the researchers believe the actual number of deaths is higher. The researchers believe many factors, such as group size and the subsequent competition for resources, and “male-male competition” for reproducing, may be to blame.

But they say there were three likely catalysts:
– The first was the deaths of five adult males and one adult female, for reasons unknown, in 2014, which could have disrupted social networks and weakened social ties across the subgroups
– The following year, there was a change in the alpha male, which the study says coincided with the first period of separation between the Western and Central groups. “Changes in the dominance hierarchy can increase aggression and avoidance in chimpanzees,” it explained
– The third factor was the deaths of 25 chimpanzees, including four adult males and 10 adult females, as a result of a respiratory epidemic, in 2017, a year before the final separation. One of the adult males who died was “among the last individuals to connect the groups,” the research paper said. The study has been published in the journal Science.


Tech

Investing in part of the workforce creates an AI skills gap, finds report

Forrester’s research has shown that a failure to commit to long-term, inclusive AI education can greatly impact an organisation.

Research and advisory firm Forrester has published a report exploring the ramifications for employers and their organisations when there is a failure to promote AI education across the entirety of a company.

The AIQ 2.0: Employees (Still) Aren’t Ready To Succeed With Workforce AI report found that while the majority of AI decision-makers and their organisations are using predictive and generative AI (GenAI), only half say they offer training in this area to non-technical employees. As a result, many companies are failing to invest in AI understanding, skills and ethics among the wider workforce. 

The report said: “Those that have tried to upskill haven’t been particularly successful, yet people remain central to the success of your AI strategy.”

Employer readiness

According to a previous report issued by Forrester, the State of AI 2025 survey, almost 70pc of AI decision-makers said they are using GenAI in deployed production applications, while 20pc use it to run experiments. Among automation decision-makers, 81pc said AI copilots that assist employees in their work are important applications.

Forrester suggests that this is indicative of a growing disconnect between the AI needs of a company and the actions being taken.

“AI is becoming more important to the work lives of employees and employees must adapt,” said the organisation. “But adaptation isn’t coming quickly or easily. Many employers remain mired in an environment of low skills and employee fears that isn’t conducive to successfully adopting workforce AI or driving productivity from its use.”

Research found that the proportion of AI decision-makers across six countries who said their organisations offer internal training on AI to non-technical employees grew only from 47pc in 2024 to 51pc in 2025, an improvement of just four percentage points. Also growing by just four percentage points was the proportion of AI decision-makers who said their organisations offer training on prompt engineering – which Forrester finds to be a key skill for using most workforce AI tools – up from 19pc to 23pc.
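
The distinction between percentage points and relative growth matters for reading figures like these. A quick illustrative snippet (function names are ours): a move from 47pc to 51pc is four percentage points of absolute movement, but about an 8.5pc increase relative to the starting value.

```python
# Percentage points measure absolute movement between two percentages;
# relative change measures growth against the starting value.
def pct_point_change(old_pct: float, new_pct: float) -> float:
    return new_pct - old_pct

def relative_change_pct(old_pct: float, new_pct: float) -> float:
    return (new_pct - old_pct) / old_pct * 100

print(pct_point_change(47, 51))               # 4 percentage points
print(round(relative_change_pct(47, 51), 1))  # 8.5 (relative growth, in %)
```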

Fear factor

Forrester also noted that fears of AI-related job loss stunt adoption and hinder implementation, despite Forrester’s view that “very few jobs were lost to AI in 2025”. Data indicated that future job loss, while possible, will not constitute a job apocalypse. Yet fears persist, due in part to a failure by organisations to correctly or consistently discuss and explain the process of introducing AI.

The report said: “Forrester’s 2025 data shows that 43pc of employees fear that, in general, many people will lose their jobs to automation in the next five years, while 25pc fear it will impact their own job during that span. This creates an ambient environment of fear and mistrust.”

The organisation added that one business leader said some of their employees fear job loss, which turns them away from AI “altogether”.

“Organisations that fail to frame workforce AI as an opportunity builder for employees and that don’t articulate the benefits from an employee perspective see fears of job loss magnified,” said Forrester.

So, how might fears and anxieties be reduced so employers and employees can better embrace the changing landscape?

According to Forrester’s research, comprehensive learning and engagement programmes are key, with the report noting that leading organisations move beyond formal training and invest in continuous, hands-on learning and peer-based approaches that drive real adoption and impact.

Commenting on the findings of the report, JP Gownder, a vice-president and principal analyst at Forrester, said: “Employers aren’t giving their people the skills, understanding, or ethical grounding they need to succeed with AI, and it’s becoming a clear bottleneck to productivity and ROI. Our research shows most organisations are rolling out AI tools without investing in employees’ ability to use them effectively.

“To close the gap, businesses must move beyond surface-level training and build continuous, hands-on learning that demystifies AI, addresses employee concerns and develops real capability. This isn’t about replacing workers, it’s about enabling them to work smarter with AI.

“The organisations that treat AI literacy as a strategic priority, not a box-ticking exercise, will be the ones that unlock meaningful productivity gains and long-term competitive advantage.”

Don’t miss out on the knowledge you need to succeed. Sign up for the Daily Brief, Silicon Republic’s digest of need-to-know sci-tech news.


Tech

NYT Strands hints and answers for Saturday, April 11 (game #769)

Looking for a different day?

A new NYT Strands puzzle appears at midnight each day for your time zone – which means that some people are always playing ‘today’s game’ while others are playing ‘yesterday’s’. If you’re looking for Friday’s puzzle instead then click here: NYT Strands hints and answers for Friday, April 10 (game #768).

Strands is the NYT’s latest word game after the likes of Wordle, Spelling Bee and Connections – and it’s great fun. It can be difficult, though, so read on for my Strands hints.


Tech

I matched the upgraded Meta AI against ChatGPT, and you can really tell which AI has social media roots

Meta debuted its upgraded Meta AI chatbot this week, showing off plenty of tricks made possible by the new Muse Spark model embedded in the chatbot. The social media giant’s AI can’t deny its origins, behaving more than a little like a social media influencer who never stops scrolling their feeds. It’s notable when compared to ChatGPT’s endless equivocating in the name of fairness.

I tested some of Meta AI’s highlights against ChatGPT, and the split shows up fast when you ask for more than facts. The Muse Spark model has Meta AI reach for the social layer and all its opinions first, while ChatGPT keeps a cooler head.


Tech

AI agent credentials live in the same box as untrusted code. Two new architectures show where the blast radius actually stops.

Four separate RSAC 2026 keynotes arrived at the same conclusion without coordinating. Microsoft’s Vasu Jakkal told attendees that zero trust must extend to AI. Cisco’s Jeetu Patel called for a shift from access control to action control, saying in an exclusive interview with VentureBeat that agents behave “more like teenagers, supremely intelligent, but with no fear of consequence.” CrowdStrike’s George Kurtz identified AI governance as the biggest gap in enterprise technology. Splunk’s John Morgan called for an agentic trust and governance model. Four companies. Four stages. One problem.

Matt Caulfield, VP of Product for Identity and Duo at Cisco, put it bluntly in an exclusive VentureBeat interview at RSAC. “While the concept of zero trust is good, we need to take it a step further,” Caulfield said. “It’s not just about authenticating once and then letting the agent run wild. It’s about continuously verifying and scrutinizing every single action the agent’s trying to take, because at any moment, that agent can go rogue.”

Seventy-nine percent of organizations already use AI agents, according to PwC’s 2025 AI Agent Survey. Only 14.4% reported full security approval for their entire agent fleet, per the Gravitee State of AI Agent Security 2026 report of 919 organizations in February 2026. A CSA survey presented at RSAC found that only 26% have AI governance policies. CSA’s Agentic Trust Framework describes the resulting gap between deployment velocity and security readiness as a governance emergency.

Cybersecurity leaders and industry executives at RSAC agreed on the problem. Then two companies shipped architectures that answer the question differently. The gap between their designs reveals where the real risk sits.

The monolithic agent problem that security teams are inheriting

The default enterprise agent pattern is a monolithic container. The model reasons, calls tools, executes generated code, and holds credentials in one process. Every component trusts every other component. OAuth tokens, API keys, and git credentials sit in the same environment where the agent runs code it wrote seconds ago.

A prompt injection gives the attacker everything. Tokens are exfiltrable. Sessions are spawnable. The blast radius is not the agent. It is the entire container and every connected service.

The CSA and Aembit survey of 228 IT and security professionals quantifies how common this remains: 43% use shared service accounts for agents, 52% rely on workload identities rather than agent-specific credentials, and 68% cannot distinguish agent activity from human activity in their logs. No single function claimed ownership of AI agent access. Security said it was a developer’s responsibility. Developers said it was a security responsibility. Nobody owned it.

CrowdStrike CTO Elia Zaitsev, in an exclusive VentureBeat interview, said the pattern should look familiar. “A lot of what securing agents look like would be very similar to what it looks like to secure highly privileged users. They have identities, they have access to underlying systems, they reason, they take action,” Zaitsev said. “There’s rarely going to be one single solution that is the silver bullet. It’s a defense in depth strategy.”

CrowdStrike CEO George Kurtz highlighted ClawHavoc (a supply chain campaign targeting the OpenClaw agentic framework) at RSAC during his keynote. Koi Security named the campaign on February 1, 2026. Antiy CERT confirmed 1,184 malicious skills tied to 12 publisher accounts, according to multiple independent analyses of the campaign. Snyk’s ToxicSkills research found that 36.8% of the 3,984 ClawHub skills scanned contain security flaws at any severity level, with 13.4% rated critical. Average breakout time has dropped to 29 minutes. Fastest observed: 27 seconds. (CrowdStrike 2026 Global Threat Report)

Anthropic separates the brain from the hands

Anthropic’s Managed Agents, launched April 8 in public beta, split every agent into three components that do not trust each other: a brain (Claude and the harness routing its decisions), hands (disposable Linux containers where code executes), and a session (an append-only event log outside both).

Separating instructions from execution is one of the oldest patterns in software; microservices, serverless functions, and message queues all rely on it.

Credentials never enter the sandbox. Anthropic stores OAuth tokens in an external vault. When the agent needs to call an MCP tool, it sends a session-bound token to a dedicated proxy. The proxy fetches real credentials from the vault, makes the external call, and returns the result. The agent never sees the actual token. Git tokens get wired into the local remote at sandbox initialization. Push and pull work without the agent touching the credential. For security directors, this means a compromised sandbox yields nothing an attacker can reuse.
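
The credential-brokering flow described above can be sketched roughly as follows. All names here (Vault, issue_session_token, proxy_call) are illustrative stand-ins, not Anthropic's actual API; the point is that the sandboxed agent only ever holds a session-bound token, while the proxy redeems it for the real credential outside the sandbox.

```python
import secrets

class Vault:
    """Holds real credentials; never exposed to the sandbox."""
    def __init__(self):
        self._secrets = {"github": "ghp_real_token"}

    def fetch(self, service: str) -> str:
        return self._secrets[service]

SESSION_TOKENS: dict[str, str] = {}  # session-bound token -> session id

def issue_session_token(session_id: str) -> str:
    """Mint the only credential the sandboxed agent ever sees."""
    tok = secrets.token_urlsafe(16)
    SESSION_TOKENS[tok] = session_id
    return tok

def proxy_call(session_token: str, service: str, vault: Vault) -> str:
    # The proxy verifies the session-bound token, then uses the real
    # credential on the agent's behalf; the agent never touches it.
    if session_token not in SESSION_TOKENS:
        raise PermissionError("unknown or expired session token")
    real = vault.fetch(service)
    return f"called {service} with credential ending ...{real[-4:]}"

vault = Vault()
tok = issue_session_token("sess-1")
print(proxy_call(tok, "github", vault))
```

In this shape, a compromised sandbox yields only the session token, which is useless outside the proxy and dies with the session.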

The security gain arrived as a side effect of a performance fix. Anthropic decoupled the brain from the hands so inference could start before the container booted. Median time to first token dropped roughly 60%. The zero-trust design is also the fastest design. That kills the enterprise objection that security adds latency.

Session durability is the third structural gain. A container crash in the monolithic pattern means total state loss. In Managed Agents, the session log persists outside both brain and hands. If the harness crashes, a new one boots, reads the event log, and resumes with no lost state, which turns into a productivity gain over time. Managed Agents include built-in session tracing through the Claude Console.

Pricing: $0.08 per session-hour of active runtime, idle time excluded, plus standard API token costs. Security directors can now model agent compromise cost per session-hour against the cost of the architectural controls.

Nvidia locks the sandbox down and monitors everything inside it

Nvidia’s NemoClaw, released March 16 in early preview, takes the opposite approach. It does not separate the agent from its execution environment. Instead, it wraps the entire agent inside stacked security layers and watches every move. Anthropic and Nvidia are the only two vendors to have shipped zero-trust agent architectures publicly as of this writing; others are in development.

NemoClaw stacks five enforcement layers between the agent and the host. Sandboxed execution uses Landlock, seccomp, and network namespace isolation at the kernel level. Default-deny outbound networking forces every external connection through explicit operator approval via YAML-based policy. Access runs with minimal privileges. A privacy router directs sensitive queries to locally-running Nemotron models, cutting token cost and data leakage to zero. The layer that matters most to security teams is intent verification: OpenShell’s policy engine intercepts every agent action before it touches the host. The trade-off for organizations evaluating NemoClaw is straightforward. Stronger runtime visibility costs more operator staffing.

The agent does not know it is inside NemoClaw. In-policy actions return normally. Out-of-policy actions get a configurable denial.

Observability is the strongest layer. A real-time Terminal User Interface logs every action, every network request, every blocked connection. The audit trail is complete. The problem is cost: operator load scales linearly with agent activity. Every new endpoint requires manual approval. Observation quality is high. Autonomy is low. That ratio gets expensive fast in production environments running dozens of agents.

Durability is the gap nobody’s talking about. Agent state persists as files inside the sandbox. If the sandbox fails, the state goes with it. No external session recovery mechanism exists. Long-running agent tasks carry a durability risk that security teams need to price into deployment planning before they hit production.

The credential proximity gap

Both architectures are a real step up from the monolithic default. Where they diverge is the question that matters most to security teams: how close do credentials sit to the execution environment?

Anthropic removes credentials from the blast radius entirely. If an attacker compromises the sandbox through prompt injection, they get a disposable container with no tokens and no persistent state. Exfiltrating credentials requires a two-hop attack: influence the brain’s reasoning, then convince it to act through a container that holds nothing worth stealing. Single-hop exfiltration is structurally eliminated.

NemoClaw constrains the blast radius and monitors every action inside it. Its stacked security layers limit lateral movement, and default-deny networking blocks unauthorized connections. But the agent and the code it generates share the same sandbox. Inference API keys stay on the host, proxied through Nvidia’s privacy router rather than passed into the sandbox; messaging and integration tokens (Telegram, Slack, Discord), however, are injected into the sandbox as runtime environment variables. The exposure varies by credential type. Credentials are policy-gated, not structurally removed.

That distinction matters most for indirect prompt injection, where an adversary embeds instructions in content the agent queries as part of legitimate work. A poisoned web page. A manipulated API response. The intent verification layer evaluates what the agent proposes to do, not the content of data returned by external tools. Injected instructions enter the reasoning chain as trusted context, with proximity to execution.

In the Anthropic architecture, indirect injection can influence reasoning but cannot reach the credential vault. In the NemoClaw architecture, injected context sits next to both reasoning and execution inside the shared sandbox. That is the widest gap between the two designs.

NCC Group’s David Brauchler, Technical Director and Head of AI/ML Security, advocates for gated agent architectures built on trust segmentation principles where AI systems inherit the trust level of the data they process. Untrusted input, restricted capabilities. Both Anthropic and Nvidia move in this direction. Neither fully arrives.

The zero-trust architecture audit for AI agents

The audit grid covers three vendor patterns across six security dimensions, five actions per row. It distills to five priorities:

  1. Audit every deployed agent for the monolithic pattern. Flag any agent holding OAuth tokens in its execution environment. The CSA data shows 43% use shared service accounts. Those are the first targets.

  2. Require credential isolation in agent deployment RFPs. Specify whether the vendor removes credentials structurally or gates them through policy. Both reduce risk. They reduce it by different amounts with different failure modes.

  3. Test session recovery before production. Kill a sandbox mid-task. Verify state survives. If it does not, long-horizon work carries a data-loss risk that compounds with task duration.

  4. Staff for the observability model. Anthropic’s console tracing integrates with existing observability workflows. NemoClaw’s TUI requires an operator-in-the-loop. The staffing math is different.

  5. Track indirect prompt injection roadmaps. Neither architecture fully resolves this vector. Anthropic limits the blast radius of a successful injection. NemoClaw catches malicious proposed actions but not malicious returned data. Require vendor roadmap commitments on this specific gap.

Zero trust for AI agents stopped being a research topic the moment two architectures shipped. The monolithic default is a liability. The 65-point gap between deployment velocity and security approval is where the next class of breaches will start.


Wireless Network Turns Interference Into Computation

Picture a highway with networked autonomous cars driving along it. On a serene, cloudless day, these cars need only exchange thimblefuls of data with one another. Now picture the same stretch in a sudden snow squall: The cars rapidly need to share vast amounts of essential new data about slippery roads, emergency braking, and changing conditions.

These two very different scenarios involve vehicle networks with very different computational loads. Eavesdropping on network traffic using a ham radio, you wouldn’t hear much static on the line on a clear, calm day. On the other hand, sudden whiteout conditions on a wintry day would sound like a cacophony of sensor readings and network chatter.

Normally this cacophony would mean two simultaneous problems: congested communications and a rising demand for computing power to handle all the data. But what if the network itself could expand its processing capabilities with every rising decibel of chatter and with every sensor’s chirp?

Traditional wireless networks treat communication as separate from computation. First you move data, then you process it. However, an emerging new paradigm called over-the-air computation (OAC) could fundamentally change the game. First proposed in 2005 and recently developed and prototyped by a number of teams around the world, including ours, OAC combines communication and computation into a single framework. This means that an OAC sensor network—whether shared among autonomous vehicles, Internet-of-Things sensors, smart-home devices, or smart-city infrastructure—can carry some of the network’s computing burden as conditions demand.

The idea takes advantage of a basic physical fact of electromagnetic radiation: When multiple devices transmit simultaneously, their wireless signals naturally combine in the air. Normally, such cross talk is seen as interference, which radios are designed to suppress—especially digital radios with their error-correcting schemes and inherent resistance to low-level noise.

But if we carefully design the transmissions, cross talk can enable a wireless network to directly perform some calculations, such as a sum or an average. Some prototypes today do this with analog-style signaling on otherwise digital radios—so that the superimposed waveforms represent numbers that have been added before digital signal processing takes place.
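The additive trick is easy to verify numerically. In this idealized, noiseless sketch, each device scales a shared carrier by its reading, the channel superimposes the signals, and the receiver recovers the sum by correlation:

```python
import numpy as np

# Idealized OAC sum: each device scales a shared carrier by its reading;
# perfectly synchronized signals superimpose (add) in the channel.
t = np.linspace(0, 1e-3, 1000)           # 1 ms observation window
carrier = np.sin(2 * np.pi * 10e3 * t)   # shared 10 kHz carrier

values = [3.0, 5.0, 2.0]                 # readings at three devices
received = sum(v * carrier for v in values)  # superposition in the air

# Receiver recovers the sum of readings by correlating with the carrier.
recovered = received @ carrier / (carrier @ carrier)
print(round(recovered, 6))  # 10.0
```

No device ever transmitted the number 10; the channel itself performed the addition.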

Researchers are also beginning to explore digital, over-the-air computation schemes, which embed the same ideas into digital formats, ultimately allowing the prototype schemes to coexist with today’s digital radio protocols. These various over-the-air computation techniques can help networks scale gracefully, enabling new classes of real-time, data-intensive services while making more efficient use of wireless spectrum.

OAC, in other words, turns signal interference from a problem into a feature, one that can help wireless systems support massive growth.

For decades, engineers designed radio communications protocols with one overriding goal: to isolate each signal and recover each message cleanly. Today’s networks face a different set of pressures. They must coordinate large groups of devices on shared tasks—such as AI model training or combining disparate sensor readings, also known as sensor fusion—while exchanging as little raw data as possible, to improve both efficiency and privacy. For these reasons, a new approach to transmitting and receiving data may be worth considering, one that doesn’t rely on collecting and storing every individual device’s contributions.

By turning interference into computation, OAC transforms the wireless medium from a contested battlefield into a collaborative workspace. This paradigm shift has far-reaching consequences: Signals no longer compete for isolation; they cooperate to achieve shared outcomes. OAC cuts through layers of digital processing, reduces latency, and lowers energy consumption.

Even very simple operations, such as addition, can be the building blocks of surprisingly powerful computations. Many complex processes can be broken down into combinations of simpler pieces, much like how a rich sound can be re-created by combining a few basic tones. By carefully shaping what devices transmit and how the result is interpreted at the receiver, the wireless channel running OAC can carry out other calculations beyond addition. In practice, this means that with the right design, wireless signals can compute a number of key functions that modern algorithms rely on.
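One standard construction — a hedged illustration, not any specific team's method — is to pre-process at each transmitter and post-process at the receiver so the channel's native sum computes something else. For example, sending logarithms turns the over-the-air sum into a product:

```python
import math

# Nomographic-function sketch: each device transmits log(x_k), the channel
# sums the transmissions, and the receiver exponentiates to get a product.
readings = [2.0, 4.0, 8.0]

over_the_air_sum = sum(math.log(x) for x in readings)  # done by the channel
product = math.exp(over_the_air_sum)                   # receiver post-step

print(round(product, 6))  # 64.0
```

The same pre/post-processing pattern extends to averages, weighted sums, and other functions that decompose over a sum.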

THE PROBLEM (TRADITIONAL APPROACH)

[Diagram: cars at mixed speeds, with complex dashed feedback loops between them]

Consider five connected vehicles traveling within sight of one another. Each car reports its speed to the network. In this example, the speeds are slow, medium, and fast. Using existing standards, all five connected cars must independently track and count all incoming signals. Even in this very simplified case, the network is already congested.

Mark Montgomery

For instance, many key tasks in modern networks don’t require the logging and storage of every individual network transmission. Rather, the goal is instead to infer properties about aggregate patterns of network traffic—reaching agreement or identifying what matters most about the traffic. Consensus algorithms rely on majority voting to ensure reliable decisions, even when some devices fail. Artificial intelligence systems depend on matrix reduction and simplification operations such as “max pooling” (keeping only peak values) to extract the most useful signals from noisy data.

In smart cities and smart grids, what matters most is often not individual readings but distribution. How many devices report each traffic condition? What is the range of demand across neighborhoods? These are histogram questions—summaries of the device counts per category.

With type-based multiple access (TBMA), an over-the-air computation method we use, devices reporting a given condition transmit together over a shared channel. Their signals add up, and the receiver sees only the total signal strength per category. In a single transmission, the entire histogram emerges without ever identifying individual devices. And the more devices there are, the better the estimate. The result is greater spectrum efficiency, with lower latency and scalable, privacy-friendly operations—all from letting the wireless medium do the aggregating and counting.

It’s easy to imagine how analog values transmitted over the air could be summed via superposition. The amplitudes from different signals add together, so the values those amplitudes represent also simply add together. The more challenging question concerns preserving that additive magic, but with digital signals.

Here’s how OAC does it. Consider, for instance, one TBMA approach for a network of sensors that gives each possible sensor reading its own dedicated frequency channel. Every sensor on the network that reads “4” transmits on frequency four; every sensor that reads “7” transmits on frequency seven. When multiple devices share the same reading, their amplitudes combine. The stronger the combined signal at a given frequency, the more devices there are reporting that particular value.

A receiver equipped with a bank of filters tuned to each frequency reads out a count of votes for every possible sensor value. In a single, simultaneous transmission, the whole network has reported its state.
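That filter-bank scheme can be simulated in a few lines of baseband math. In this noiseless sketch, each possible reading maps to its own FFT bin, and the per-bin magnitude counts the devices reporting that value:

```python
import numpy as np

# TBMA sketch: each possible sensor reading gets its own frequency bin.
# Devices with the same reading transmit the same tone; the FFT magnitude
# at each bin counts how many devices reported that value.
N = 64                          # samples per symbol
readings = [4, 7, 7, 2, 7]      # five sensors report their values

n = np.arange(N)
received = np.zeros(N, dtype=complex)
for r in readings:
    received += np.exp(2j * np.pi * r * n / N)  # tone at bin r

spectrum = np.abs(np.fft.fft(received)) / N     # per-bin amplitude
histogram = {r: int(round(spectrum[r])) for r in sorted(set(readings))}
print(histogram)  # {2: 1, 4: 1, 7: 3}
```

The whole histogram arrives in one symbol period, and nothing in the received signal identifies which sensor sent which value.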

It might seem paradoxical—digital computation riding atop what appears to be an analog physical effect. But this is also true of all “digital” radio. A Wi-Fi transmitter does not launch ones and zeroes into the air; it modulates electromagnetic waves whose amplitudes and phases encode digital data. The “digital” label ultimately refers to the information layer, not the physics. What makes OAC digital, in the same sense, is that the values being computed—each sensor reading, each frequency-bin count—are discrete and quantized from the start. And because they are discrete, the same error-correction machinery that has made digital communications robust for decades can be applied here too.

Synchronization is where OAC’s demands diverge most sharply from digital wireless conventions. Many OAC variants today require something akin to a shared clock at nanosecond precision: Every signal’s phase must be synchronized, or the superposition runs the risk of collapsing into destructive interference. While TBMA relaxes this burden a bit—devices need only share a time window—real engineering challenges lie ahead regardless, before over-the-air computation is ready for the mobile world.
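The cost of losing phase lock shows up in a two-transmitter thought experiment: as the phase error between two unit-amplitude signals grows, the coherent sum shrinks from 2 toward 0. An idealized sketch:

```python
import numpy as np

# Two devices each transmit amplitude 1.0. With a phase error phi on the
# second device, the coherent sum |1 + e^{j*phi}| shrinks from 2 toward 0.
for phi_deg in [0, 45, 90, 180]:
    phi = np.deg2rad(phi_deg)
    combined = abs(1 + np.exp(1j * phi))
    print(phi_deg, round(combined, 3))
# 0 deg   -> 2.0 (perfect sync: amplitudes add constructively)
# 180 deg -> 0.0 (destructive interference: the sum collapses)
```

At a 10 kHz tone a half-cycle is 50 microseconds, but at gigahertz carrier frequencies the same half-cycle shrinks to fractions of a nanosecond — which is why many OAC variants need nanosecond-class timing.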

How will over-the-air computation work in the field?

Over-the-air computation has in recent years moved from theory to initial proofs-of-concept and network test runs. Our research teams in South Carolina and Spain have built working prototypes that deliver repeatable results—with no cables and no external timing sources such as GPS-locked references. All synchronization is handled within the radios themselves.

Our team at the University of South Carolina (led by Sahin) started with off-the-shelf software-defined radios—Analog Devices’ Adalm-Pluto. We modified the field-programmable gate array inside each radio so it could respond to a trigger signal transmitted from another radio. This simple hack enabled simultaneous transmission, a core requirement for OAC. Our setup used five radios acting as edge devices and one acting as a base station. The task involved training a neural network to perform image recognition over the air. Our system, whose results we first reported in 2022, achieved 95 percent accuracy in image recognition without ever moving raw data across the network.

THE OVER-THE-AIR COMPUTATION (OAC) APPROACH

[Illustration: cars adjusting speed, with colored dashed lines indicating traffic-signal control]

Using over-the-air computation, all five cars transmit their speeds simultaneously. Vehicles reporting the same speed share the same channel; their signals merely combine over the air.

Mark Montgomery

We also demonstrated our initial OAC setup at a March 2025 IEEE 802.11 working group meeting, where an IEEE committee was studying AI and machine learning capabilities for future Wi-Fi standards. As we showed, OAC’s road ahead doesn’t necessarily require reinventing wireless technology. Rather, it can also build on and repurpose existing protocols already in Wi-Fi and 5G.

However, before OAC can become a routine feature of commercial wireless systems, networks must provide finer-tuned coordination of timing and signal power levels. Mobility is a difficult problem, too. When mobile devices move around, phase synchronization degrades quickly, and computational accuracy can suffer. Present-day OAC tests work in controlled lab environments. But making them robust in dynamic, real-world settings—vehicles on highways, sensors scattered across cities—remains a new frontier for this emerging technology.

Both of our teams are now scaling up our prototypes and demonstrations. We are together aiming to understand how over-the-air computation performs as the number of devices increases beyond lab-bench scales. Turning prototypes and test-beds into production systems for autonomous vehicles and smart cities will require anticipating tomorrow’s mobility and synchronization problems—and no doubt a range of other challenges down the road.

Where OAC goes from here

To realize the technological ambitions of over-the-air computation, nanosecond timing and exquisite RF signal design will be crucial. Fortunately, recent engineering advances have made substantial progress in both of these fields.

Because OAC demands waveform superposition, it benefits from tight coordination in time, frequency, phase, and amplitude among RF transmitters. Such requirements build naturally on decades of work in wireless communication systems designed for shared access. Modern networks already synchronize large numbers of devices using high-precision timing and uplink coordination.

OAC uses the same synchronization techniques already in cellular and Wi-Fi systems. But to actually run over-the-air computations, more precision still will be needed. Power control, gain adjustment, and timing calibration are standard tools today. We expect that engineers will further refine these existing methods to begin to meet OAC’s more stringent accuracy demands.

THE OAC RESULT

[Bar chart of the OAC result: slow 1 (blue), medium 3 (green), fast 1 (red)]

One transmission yields the full picture: One car is going slow; three are traveling at medium speed; and one vehicle is moving fast. The majority condition is immediately identified—with no individual vehicle data shared or processed.

Mark Montgomery

In some cases, in fact, imperfect timing may be all that’s needed. Designs and emerging standards for 5G and 6G wireless systems already use clever encoding that tolerates imperfect synchronization. We anticipate that minor timing errors, frequency drift, and signal overlap can, in some cases, still work capably within an OAC protocol. Instead of fighting messiness, over-the-air computation may sometimes simply be able to roll with it.

Another challenge ahead concerns shifting processing to the transmitter. Instead of the receiver trying to clean up overlapping signals, a better and more efficient approach would involve each transmitter fixing its own signal before sending. Such “pre-compensation” techniques are already used in MIMO technology (multi-antenna systems in modern Wi-Fi and cellular networks). OAC would just be repurposing techniques that have already been developed for 5G and 6G technologies.
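In its simplest form, pre-compensation means each transmitter inverts its own known channel gain so every contribution arrives with unit weight. A sketch assuming flat, perfectly known channels:

```python
import numpy as np

# Pre-compensation sketch: each device knows its complex channel gain h_k
# and transmits x_k / h_k, so the superimposed channel output is sum(x_k).
values = np.array([3.0, 5.0, 2.0])
h = np.array([0.8 * np.exp(0.3j), 1.2 * np.exp(-0.5j), 0.5 * np.exp(1.0j)])

naive = np.sum(h * values)            # no pre-compensation: distorted sum
precomp = np.sum(h * (values / h))    # transmitters invert h before sending

print(round(abs(naive), 3))           # distorted magnitude, not 10
print(round(precomp.real, 3))         # 10.0
```

Real channels are neither flat nor perfectly known, which is where the MIMO-style estimation machinery mentioned above comes in.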

Materials science can also help OAC efforts ahead. New generations of reconfigurable intelligent surfaces shape signals via arrays of tiny adjustable elements. The surfaces catch radio signals and reshape them as they bounce off. Reconfigurable surfaces can strengthen useful signals, suppress interference, and synchronize wavefront arrivals that would otherwise be out of sync. OAC stands to benefit from these and other emerging capabilities that intelligent surfaces will provide.

At the system level, OAC will represent a fundamental shift in wireless network system design. Wireless engineers have traditionally tried to avoid designing devices that transmit at the same time. But over-the-air systems will flip the old, familiar design standards on their head.

One might object that OAC stands to upend decades of existing wireless signal standards that have always presumed data pipes to be data pipes only—not microcomputers as well. Yet we do not anticipate much difficulty merging OAC with existing wireless standards. In a sense, in fact, the IEEE 802.11 and 3GPP (3rd Generation Partnership Project) standards bodies have already shown the way.

A network can set aside certain brief time windows or narrow slices of bandwidth for over‑the‑air computation, and use the rest for ordinary data. From the radio’s point of view, OAC just becomes another operating mode that is turned on when needed and left off the rest of the time.

Over the past decade, both the IEEE and 3GPP have integrated once-experimental technologies into their wireless standards—for example, millimeter-wave mobile communications, multiuser MIMO, beamforming, and network slicing—by defining each new technological advance as an optional feature. OAC, we suggest, can also operate alongside conventional wireless data traffic as an optional service. Because OAC places high demands on timing and accuracy, networks will need the ability to enable or disable over‑the‑air computation on a per‑application basis.

With continued progress, OAC will evolve from lab prototype to standardized wireless capability through the 2020s and into the decade ahead. In the process, the wireless medium will transform from a passive data carrier into an active computational partner—providing essential infrastructure for the real-time intelligent systems that future wireless technologies will demand.

So on that snowy highway sometime in the 2030s, vehicles and sensors won’t wait for permission to think together. Using the emerging over-the-air computation protocols that we’re helping to pioneer, simultaneous computation will be the new default. The networks will work as one.



Cry til you laugh: Chris Pirillo vibe codes his job-search frustrations into brutally honest apps

Chris Pirillo tears up one of his fake rejection letters. (Photo courtesy of Chris Pirillo)

At a time when finding a job in tech has turned into a frustrating cycle of rejections, ghostings or worse, Chris Pirillo‘s work speaks for itself — in that it makes a mockery of the whole process.

Pirillo, the longtime tech enthusiast and entrepreneur, has been showing off his skills by illustrating how hard it is to get anyone to pay attention to his skills. Two of his latest vibe-coded creations include a Resume Analyzer and a pre-rejection letter generator called Dear Applicant.

Each is sure to hit home with job seekers dealing with a seemingly hopeless process, and the recruiters and employers who are hopefully trying to fill roles with some shred of humanity.

The Resume Analyzer is exactly what it sounds like — and nothing like what you’d hope. Users are invited to paste in a job description and upload their CV for what’s billed as a “semantic scan” that generates a “personalized, actionable gap-analysis report.” The punchline, of course, is that no matter what you submit, the verdict is the same: “Nah, you’re f**ked, mate.”
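Mechanically, the gag is a one-liner — every input maps to the same verdict. A paraphrase of the joke, not Pirillo's actual code:

```python
# The Resume Analyzer's "semantic scan," reduced to its essence.
# A paraphrase of the gag, not Pirillo's actual code.
def semantic_scan(job_description: str, resume: str) -> str:
    return "Nah, you're f**ked, mate."

print(semantic_scan("Senior Engineer, 10 yrs exp", "my meticulously tailored CV"))
```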

Pirillo said the idea started as a half-joke on social media earlier this week. The response was immediate and resonant enough to convince him the joke was actually a mirror. A few hours later, the app was live.

The Dear Applicant generator arrived shortly after, born from a comment on Threads suggesting the logical next step: a rejection letter that arrives before you’ve even applied. “Imagine all the efficiency gains,” the commenter wrote. Pirillo obliged.

I tested both and left laughing both times, especially at the methodology fine print on the Resume Analyzer, which read, in part: “No resumes were analyzed in the production of this report. No data left your browser. The job market is, in fact, a burning dumpster. This tool confirms what you already suspected. Have you considered goat farming?”

“These apps are funny because they aren’t,” Pirillo told GeekWire via email.

The frustration Pirillo is spoofing is well-documented — and the timing is no coincidence. A report last month from pre-employment testing company Criteria found that more than half of job seekers had been ghosted by an employer in the past year, a three-year high. It comes at a time when tech layoffs have remained brutal: more than 178,000 tech workers were cut in 2025 alone, flooding an already strained job market with qualified candidates competing for fewer openings — and hearing nothing back.

Dear Applicant allows job seekers to pre-generate a rejection letter when applying for a job. (Image via Dear Applicant)

The tension — between the gag and the genuine grievance underneath it — is what gives both tools their edge. Pirillo, who describes himself as more than qualified for positions he applies to, said he’s given up on the traditional job search in favor of fractional and contract work, not because he wants to, but because the alternative feels like shouting into a void.

“That behavior and expectation has been normalized,” he said of ghosting by employers. “The entire process that few of us are in control of teeters on abusive.”

Building the apps, he noted, took less than an hour each — a pointed contrast to the hours job seekers routinely sink into tailoring resumes and cover letters that often vanish without acknowledgment. Pirillo has now shipped more than 300 of these “mini-products” on his Vibe Arcade website and is actively teaching others, technical and non-technical alike, to do the same.

The question he’s asking, implicitly, is whether building things is now a better use of a job-seeker’s time than applying for jobs. For him, the answer is yes — and he’s leaning into what he calls an emerging archetype: the “product developer,” someone who shows their work rather than curates a resume.

“I believe I put more thought into making these apps than any company has in considering my application,” he said. “Probably more than all of them combined — even when someone made a personal referral to the hiring manager.”

He’s even considering attaching Dear Applicant’s pre-rejection letter to his own future applications — partly as an experiment, partly because he says he has nothing to lose.

“Worst that could happen is I get ignored differently,” he said.

Pirillo isn’t pretending the apps are activism — or that they’ll change anything. He knows HR won’t find it funny. But that’s not really who he’s talking to.

“If there was an intent behind these specific apps, it’s not just to evoke a sense of ‘you’re not alone’ but to laugh at the absurdity of the situation some of us find ourselves forced into,” he said.

Previously: Vibe-coding a new reality: Chris Pirillo on the rise of AI-powered apps, features, and founders
