
Tech

NUR Headphones Debut at AXPONA 2026: Italian Craft Meets High-End Sound in Mimic Audio Showcase

Published on

Among the global brands at AXPONA 2026, Mimic Audio did not have the biggest booth or the loudest presence, but it ended up being one of the more worthwhile stops in the EarGear section. The Chicago dealer, owned by TJ Cook, was positioned between Campfire Audio and Austrian Audio and only a few steps from the always swamped ZMF booth, which made it easy to overlook in the rush. That would have been a mistake. Mimic first caught my attention before the show when it supplied the AudioByte components for the Von Schweikert pre-event, paired with NUR Audio’s Harmonia.

My initial listen there was promising, but with the Von Schweikert VR.thirty or Ultra 7 commanding the room and the Harmonia’s open-back design letting all of that noise pour in, it was impossible to draw more than a few early conclusions. That made a return visit at AXPONA essential, where I sat down with all three NUR models on display for a longer listen and a better sense of what this Italian headphone brand is actually bringing to the table.


NUR Audio Headphones: Italian Design, Planar Magnetic Ambitions

NUR Audio is not some legacy brand trading on decades of goodwill. It was founded just northeast of Rome by Angelo De Mattia and feels very much like a passion project finding its footing in a crowded category. Right now, the Harmonia open-back is the only model you can actually buy, priced at $3,750, while the Shanti open-back reference and Miah closed-back are still listed as coming soon with pricing to be determined. That split matters because NUR is already drawing a line between audiences. The Harmonia is built for listening at home, while the Shanti and Miah mark the start of a professional series aimed at engineers who need precision more than romance.

The two open-back designs share a lot of DNA: similar materials, similar construction, and very similar planar magnetic drivers. The Miah goes a different route with a dynamic driver inside a closed-back design, which should make it the more practical option for studio work or less-than-ideal environments. All three, however, are physically imposing. Think Audeze LCD-4-sized ear cups and the kind of weight that can turn a long session into a short one if the ergonomics are off. Early impressions suggest NUR understands the problem. The suspension system is well padded, the clamp feels reasonable, and the weight distribution does not immediately raise red flags.


The real test, as always, will be whether that comfort holds up after a few hours rather than a few tracks.

Using the AudioByte stack (more on that soon), I was able to spend time with all three NUR models and come away with a clearer sense of how each is voiced. With both the Shanti and Miah still in prototype form, nothing here should be considered final, but the direction is already apparent.

The NUR Harmonia is a large-format open-back planar magnetic headphone built around a 105mm PEEK diaphragm and a double-sided toroidal magnet system using high-grade N52 neodymium magnets. That combination is designed to deliver fast transient response, low distortion, and wide bandwidth, which is reflected in the rated 8Hz to 55kHz frequency response.


With a 48 ohm impedance and 107 dB/mW sensitivity, it should be relatively easy to drive for a planar of this size, though it will still benefit from a capable amplifier. The dual 3.5mm cup connections allow for balanced operation out of the box, with either 4.4mm or XLR cables included, along with a 6.35mm adapter for single-ended use. At 630 grams, it is firmly in the heavyweight category, making the suspension system and overall ergonomics critical for longer listening sessions.
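Those numbers translate into modest amplifier demands. A quick back-of-the-envelope calculation makes the point; this is standard sensitivity arithmetic, assuming nothing beyond the published 107 dB/mW and 48 ohm figures:

```python
import math

def drive_requirements(sensitivity_db_mw: float, impedance_ohm: float,
                       target_spl_db: float) -> dict:
    """Estimate power, voltage, and current needed to hit a target SPL.

    sensitivity_db_mw is the SPL produced by 1 mW of input power.
    """
    # Every 10 dB above the 1 mW reference level requires 10x the power.
    power_mw = 10 ** ((target_spl_db - sensitivity_db_mw) / 10)
    power_w = power_mw / 1000
    voltage_rms = math.sqrt(power_w * impedance_ohm)   # P = V^2 / R
    current_ma = (voltage_rms / impedance_ohm) * 1000  # I = V / R
    return {"power_mw": power_mw, "voltage_rms": voltage_rms,
            "current_ma": current_ma}

# Harmonia: 107 dB/mW into 48 ohms; 117 dB peaks need ~10 mW (~0.69 V RMS).
print(drive_requirements(107, 48, 117))
```

Roughly 10 mW, 0.7 V RMS, and 14 mA for loud 117 dB peaks is comfortably within reach of any competent desktop amplifier, which supports the "easy to drive for a planar of this size" read.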


The Harmonia leans toward a clean, controlled presentation with a touch of warmth that you don’t always get from planar magnetic designs. Bass has solid presence without sounding pushed, the midrange comes across as slightly lush with very good detail retrieval, and the treble extends well past what my ears are willing to admit at this point. It strikes a balance that feels intentional rather than trying to impress on first listen.

The Shanti prototype shifts gears toward a more analytical presentation. It is crisper, more forward in its detail, and less forgiving overall. The name was a bit of a clue, but the tuning confirms it. This feels like the model aimed squarely at those who want to dissect recordings rather than relax into them.

The Miah, as the closed-back option, moves in a different direction. It is warmer and a bit thicker sounding than the two open-back models, which is not surprising given the design. Detail is still present across most of the range, but the top end has slightly less extension and sparkle. That trade-off is typical for closed-back headphones, especially ones that appear to be targeting studio use rather than chasing an artificially boosted sense of air.


The Bottom Line

I came away impressed enough to spend a good amount of time talking with TJ Cook about getting all three NUR models in for proper review once they hit the market. That says more than any quick show impression. AXPONA has no shortage of big names pulling crowds, and it is easy to fall into the trap of chasing logos instead of sound. The problem is that you end up walking right past booths like Mimic Audio and missing some of the more interesting listens of the weekend.

The NUR lineup, paired with the AudioByte components, proved to be far more than a curiosity. It was one of those setups that rewarded anyone willing to sit down, block out the noise, and actually listen. Not perfect, not finished in two cases, but clearly headed somewhere worth paying attention to.

Expect a deeper dive once review samples land. In the meantime, NUR Audio is a brand to keep on your radar, and if you happen to be in the Chicago area, Mimic Audio is absolutely worth a visit.

Where to buy: $3,750 at Mimic Audio


Unauthorized group has gained access to Anthropic’s exclusive cyber tool Mythos, report claims


A group of unauthorized users has reportedly gained access to Mythos, the cybersecurity tool recently announced by Anthropic.

Much has been made of Mythos and its purported power — an AI product designed for enterprise security that, in the wrong hands, could become a potent hacking tool, according to the company. Now Bloomberg has reported that a “private online forum,” the members of which have not been publicly identified, has managed to gain access to the tool through a third-party vendor.

“We’re investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments,” an Anthropic spokesperson told TechCrunch. The company said that, so far, it has found no evidence that the supposedly unauthorized activity has impacted Anthropic’s systems in any way.

The unauthorized group tried a number of different strategies to gain access to the model, including using “access” enjoyed by the person who was interviewed by Bloomberg. That person is currently employed at a third-party contractor that works for Anthropic, the outlet reported.


Members of the group are part of a Discord channel that seeks out information about unreleased AI models, the outlet reported. The group has been using Mythos regularly since gaining access to it, and provided evidence to Bloomberg in the form of screenshots and a live demonstration of the software.

Bloomberg reports that the group, which supposedly gained access to the tool on the same day it was publicly announced, “made an educated guess about the model’s online location based on knowledge about the format Anthropic has used for other models.” The group in question is “interested in playing around with new models, not wreaking havoc with them,” the source told the outlet.

Mythos was released to a select number of vendors, including big names like Apple, as part of an initiative called Project Glasswing. The limited release of the model was designed to prevent its use by bad actors. The tool could be weaponized against corporate security instead of bolstering it, Anthropic said.


If true, unauthorized use of Mythos could spell trouble for Anthropic, which limited the release to select vendors precisely to address its concerns about enterprise security.


48 days of exposed projects, closed bug reports, & the structural failure of vibe coding security


Summary: Lovable, the $6.6 billion vibe coding platform with eight million users, has faced three documented security incidents exposing source code, database credentials, and thousands of user records, with the most recent BOLA vulnerability left open for 48 days after the company closed a bug bounty report without escalation. The incidents are representative of a structural problem across vibe coding: 40-62% of AI-generated code contains vulnerabilities, 91.5% of vibe-coded apps had at least one AI hallucination-related flaw in Q1 2026, and the market’s incentive structure rewards growth over security at a moment when 60% of all new code is projected to be AI-generated by year end.

Lovable, the vibe coding platform valued at $6.6 billion with eight million users, has spent the past two months dealing with security incidents that collectively exposed source code, database credentials, AI chat histories, and the personal data of thousands of users across projects built on its platform. The most recent disclosure, published on 20 April by a security researcher, revealed a broken object-level authorisation vulnerability in Lovable’s API that allowed anyone with a free account to access another user’s profile, public projects, source code, and database credentials in as few as five API calls. The researcher reported the flaw to Lovable’s bug bounty programme on 3 March. Lovable patched it for new projects but never fixed it for existing ones, marked a follow-up report as a duplicate, and closed it. As of reporting, the vulnerability had been open for 48 days.
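Broken object-level authorisation (BOLA) is conceptually simple: the API confirms that a caller is logged in but never checks that the object they request actually belongs to them. A minimal Python sketch of the flaw and the fix, using hypothetical names rather than anything from Lovable's actual codebase:

```python
from dataclasses import dataclass

@dataclass
class Project:
    id: str
    owner_id: str
    source_code: str

# Toy datastore standing in for the platform's project table.
PROJECTS = {"p1": Project("p1", "alice", "hardcoded credentials live here")}

def get_project_vulnerable(project_id: str, caller_id: str) -> Project:
    # BOLA: any authenticated caller can fetch any project by ID.
    return PROJECTS[project_id]

def get_project_fixed(project_id: str, caller_id: str) -> Project:
    project = PROJECTS[project_id]
    # Object-level check: the resource must belong to the caller.
    if project.owner_id != caller_id:
        raise PermissionError("caller does not own this project")
    return project
```

The fix is a single ownership comparison per lookup, but it has to exist on every endpoint in a chain; one endpoint that skips it is enough.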

Lovable’s response followed a pattern that security researchers found more telling than the vulnerability itself. The company first posted on X that it “did not suffer a data breach,” calling the exposed data “intentional behaviour.” It then blamed its own documentation, saying that what “public” implies “was unclear.” It then blamed its bug bounty partner HackerOne, saying reports were “closed without escalation because our HackerOne partners thought that seeing public projects’ chats was the intended behaviour.” Later that day, it issued a partial apology acknowledging that “pointing to documentation issues alone was not enough.” Cybernews headlined its coverage: “Lovable goes on ego trip denying vulnerability, then blames others for said vulnerability.”

What was exposed

The April incident affected projects created before November 2025. The researcher demonstrated that extracting a user’s source code from Lovable’s API also yielded hardcoded Supabase database credentials embedded in that code. One affected project belonged to Connected Women in AI, a Danish nonprofit. Its exposed data contained real user records including names, job titles, LinkedIn profiles, and Stripe customer IDs, with records linked to individuals at Accenture Denmark and Copenhagen Business School. Employees at Nvidia, Microsoft, Uber, and Spotify reportedly have Lovable accounts tied to affected projects.
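The credential leak compounded the access flaw because database keys sat as string literals inside the generated source. Dedicated scanners such as gitleaks and truffleHog exist for exactly this; a toy version of the idea, with illustrative regexes that make no guarantee of matching real Supabase key formats:

```python
import re

# Illustrative patterns only; production scanners ship hundreds of rules.
SECRET_PATTERNS = {
    "jwt_like_key": re.compile(r"eyJ[A-Za-z0-9_-]{20,}\.[A-Za-z0-9_-]{10,}"),
    "generic_api_key": re.compile(
        r"(?i)(api[_-]?key|secret)\s*[:=]\s*['\"][^'\"]{16,}['\"]"),
}

def scan_source(source: str) -> list[str]:
    """Return the names of secret patterns found in a source string."""
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(source)]

leaked = ('const supabase = createClient(url, '
          '"eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.abcdefghij")')
print(scan_source(leaked))  # → ['jwt_like_key']
```

Running a check like this in CI before deploy is cheap; the harder problem, as the incidents show, is that vibe coding users never see the generated source at all.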


This was the third documented security incident involving the platform. In February, a tech entrepreneur named Taimur Khan found 16 vulnerabilities, six of them critical, in a single app hosted on Lovable and featured on its own Discover page with more than 100,000 views. The most severe was an inverted authentication logic that granted anonymous users full access while blocking authenticated users. The app, an AI-powered EdTech tool, exposed 18,697 user records including 4,538 student accounts from institutions including UC Berkeley and UC Davis, with minors likely on the platform. Khan reported his findings through Lovable’s support channel. His ticket was closed without a response.

An earlier study in May 2025 found that 170 out of 1,645 sampled Lovable-created applications had issues allowing personal information to be accessed by anyone. Approximately 70% of Lovable apps had row-level security disabled entirely.


The structural problem

Lovable is not uniquely insecure. It is representatively insecure. The platform generates full-stack applications using React, Tailwind, and Supabase in response to natural language prompts, a process the industry calls vibe coding after Andrej Karpathy coined the term in February 2025. The approach lets anyone describe an application and have it built by an AI model without writing or reviewing code. Collins English Dictionary named it Word of the Year for 2025. Gartner forecasts that 60% of all new code will be AI-generated by the end of this year.

The security data across the entire category is consistent. Between 40 and 62% of AI-generated code contains security vulnerabilities, depending on the study. AI-written code produces flaws at 2.74 times the rate of human-written code, according to an analysis of 470 GitHub pull requests. A first-quarter 2026 assessment of more than 200 vibe-coded applications found that 91.5% contained at least one vulnerability traceable to AI hallucination. More than 60% exposed API keys or database credentials in public repositories. The vulnerability classes are the same across every major vibe coding platform: disabled row-level security, hardcoded secrets, missing webhook verification, injection flaws, and broken access controls.
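Of those recurring classes, missing webhook verification is perhaps the most mechanical to fix: recompute an HMAC over the raw request body and compare it to the provider's signature in constant time. A generic sketch, with the secret and signature format as placeholders since each provider documents its own scheme:

```python
import hashlib
import hmac

def verify_webhook(secret: bytes, raw_body: bytes, signature_hex: str) -> bool:
    """Reject any payload whose HMAC-SHA256 doesn't match the sent signature."""
    expected = hmac.new(secret, raw_body, hashlib.sha256).hexdigest()
    # compare_digest avoids timing side channels in the comparison.
    return hmac.compare_digest(expected, signature_hex)

secret = b"whsec_placeholder"
body = b'{"event": "payment.succeeded"}'
good_sig = hmac.new(secret, body, hashlib.sha256).hexdigest()

print(verify_webhook(secret, body, good_sig))               # True
print(verify_webhook(secret, b'{"tampered": 1}', good_sig))  # False
```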

Bolt.new ships with row-level security off by default. Cursor has had multiple CVEs patched, including a case-sensitivity bypass enabling persistent remote code execution. Researchers at Pillar Security demonstrated a “rules file backdoor” attack in which hackers inject hidden malicious instructions into configuration files used by Cursor and GitHub Copilot. A separate “Agent Commander” attack in March showed that prompt injection into AI coding agents could convert autonomous coding tools into remotely controlled malware delivery platforms. In January, the vibe-coded social network Moltbook was breached within three days of launch, exposing 1.5 million API authentication tokens and 35,000 email addresses through a misconfigured Supabase database with no row-level security.

The economic incentive problem

Security firms are raising money specifically to address the gap. Escape raised $18 million to replace manual penetration testing with AI agents that scan vibe-coded applications, citing over 2,000 high-impact vulnerabilities and hundreds of exposed secrets found in live production systems. Lovable itself partnered with Aikido to bring automated pentesting to its platform. But the fundamental incentive structure of the market works against security.


Lovable hit $4 million in annual recurring revenue in its first four weeks and $10 million in two months with a team of 15 people. It raised $200 million at a $1.8 billion valuation in July 2025 and $330 million at $6.6 billion in December, more than tripling its valuation in five months. Enterprise adoption of vibe coding grew 340% year over year. Non-technical user adoption surged 520%. Eighty-seven percent of Fortune 500 companies have adopted at least one vibe coding platform. The market rewards speed and accessibility. Security is a cost centre that slows both.

The result is a category in which the dominant platforms generate code that is insecure by default, the users generating that code lack the expertise to identify the vulnerabilities, and the platforms themselves have financial incentives to prioritise growth over remediation. Lovable’s handling of the March and April incidents illustrates the dynamic precisely: a bug bounty report was closed without escalation, a vulnerability affecting thousands of projects was patched for new users but not existing ones, and the public response cycled through denial, deflection, and a partial apology within a single day.

The regulatory gap

The EU AI Act’s high-risk obligations take effect on 2 August, requiring transparency, human oversight, and data governance for AI systems. California’s S.B. 53 and New York’s RAISE Act require frontier AI developers to publish safety frameworks and report incidents. But none of these regulations specifically address the security of code generated by AI models for end users, and the adoption data suggests the market is moving faster than regulators can respond. Financial services and healthcare, the two most regulated sectors, show the lowest vibe coding adoption rates at 34% and 28% respectively, which indicates that the market itself recognises the compliance gap even if regulations have not yet caught up.

As Trend Micro framed it: “The real risk of vibe coding isn’t AI writing insecure code. It’s humans shipping code they never had a chance to secure.” The 84% surge in App Store submissions driven by vibe coding tools suggests the volume of unreviewed code entering production is accelerating. Thirty-five CVEs were disclosed in March alone from AI-generated code, up from six in January, and Georgia Tech estimates the actual figure is five to ten times higher than what is detected.


Lovable is the fastest-growing software startup in history by several measures. It is also a company that closed a critical vulnerability report without reading it, left thousands of projects exposed for 48 days, and responded to public disclosure by denying a breach, blaming its documentation, blaming its bug bounty partner, and then apologising for the apology. The pattern is not unique to Lovable. It is the pattern of a category that has built extraordinary tools for creating software and almost nothing for securing it.


ZenTimings provides a detailed view of RAM timings on Ryzen systems


ZenTimings is a Windows utility designed for AMD Ryzen platforms that displays real-time memory configuration data and timings, frequency, and Infinity Fabric clocks. It’s primarily used to verify BIOS or XMP/EXPO settings, offering a straightforward way to check how system memory is running.


Honor 600 series takes aim at the affordable flagship crown with Snapdragon power and a 7,000mAh battery


Honor has finally lifted the covers off its latest N-series devices, and the new models bring several key upgrades over the Honor 400 series, which was the last N-series lineup to launch outside China. The Pro model in the new Honor 600 series is especially noteworthy because it’s positioned as a legitimate “accessible flagship” that pairs a top-tier Snapdragon SoC with a stunning display and a massive battery at an attractive price.

Flagship specs without the flagship price

The new lineup pushes Honor’s N-series further into flagship territory, with both the standard Honor 600 and the more premium Honor 600 Pro offering features typically reserved for more expensive phones. The Honor 600 Pro packs Qualcomm’s Snapdragon 8 Elite chip and is positioned as a serious option for gaming and demanding workloads. The standard variant, meanwhile, runs on the Snapdragon 7 Gen 4, offering a more balanced mix of performance and efficiency.

On the display front, both phones feature a 6.57-inch OLED panel with a 120Hz refresh rate, peak brightness of 8,000 nits, and 3,840Hz PWM dimming. Battery life is another major highlight, with the two devices packing a 7,000mAh silicon-carbon battery that supports 80W wired fast charging and 27W reverse charging. The Pro model even includes 50W wireless charging support, a feature that’s often omitted on affordable flagships.

The Honor 600 series’ camera hardware is no slouch either, with both devices featuring a 200MP main shooter with a large 1/1.4-inch sensor size. On the Pro model, it’s paired with a 50MP telephoto lens with up to 120x zoom and CIPA 6.5 image stabilization, a 12MP ultrawide camera, and a 50MP selfie shooter. The standard version has the same ultrawide and selfie cameras, but skips the telephoto camera.

As announced earlier, Honor is also debuting its upgraded AI Image to Video 2.0 feature with the lineup, which allows users to generate short videos from still images using natural language prompts. Durability has also been upgraded, and both models feature an IP69K rating along with enhanced drop resistance certification.

Pricing and availability

Honor has launched the 600 series in Malaysia today, with the Pro model priced at RM3,099 (~$784) for the 12GB+256GB configuration and RM3,299 (~$835) for the 512GB storage option. The standard Honor 600 comes in a single 12GB+512GB configuration priced at RM2,599 (~$658). Both models come in Orange, Golden White, and Black color options.


Honor has confirmed that the devices will roll out to additional global markets, but regional pricing and availability details have not yet been announced.


FBI Looks Into Dead or Missing Scientists Tied To Sensitive US Research


Federal authorities are now reviewing a string of deaths and disappearances involving scientists tied to sensitive U.S. aerospace and nuclear work, though officials have not established any confirmed link between the cases. The FBI says it “is spearheading the effort to look for connections into the missing and deceased scientists,” adding that it “is working with the Department of Energy, Department of War, and with our state … and local law enforcement partners to find answers.” The Republican-led House Oversight Committee also announced an investigation into the reports. CNN reports: A nuclear physicist and MIT professor fatally shot outside his Massachusetts residence. A retired Air Force general missing from his New Mexico home. An aerospace engineer who disappeared during a hike in Los Angeles. These are among at least 10 individuals connected to sensitive US nuclear and aerospace research who have died or disappeared in recent years, prompting questions about whether the cases are connected and fueling online speculation about possible nefarious activity. […]

The Defense Department said only that it would respond to the committee directly, and the Department of Energy referred questions to the White House. In a post on X, NASA said it is “coordinating and cooperating with the relevant agencies” in relation to the scientists. “At this time, nothing related to NASA indicates a national security threat,” NASA spokesperson Bethany Stevens said.

The cases vary widely in circumstance. Some involve unsolved homicides, while others are missing persons cases with no signs of foul play. In at least two instances, families have pointed to preexisting medical conditions or personal struggles as explanations. Authorities have not established any links between the cases. The White House said last week it is also working with federal agencies to probe any potential links between the deaths and disappearances, with President Donald Trump referring to the matter as “pretty serious stuff.” “The United States has thousands of nuclear scientists and nuclear experts,” said Rep. James Walkinshaw, a Democrat who also serves on the Oversight Committee. “It’s not the kind of nuclear program that potentially a foreign adversary could significantly impact by targeting 10 individuals.”


Vercel breach exposes the OAuth gap most security teams cannot detect, scope or contain


One employee at Vercel adopted an AI tool. One employee at that AI vendor got hit with an infostealer. That combination created a walk-in path to Vercel’s production environments through an OAuth grant that nobody had reviewed.

Vercel, the cloud platform behind Next.js and its millions of weekly npm downloads, confirmed on Sunday that attackers gained unauthorized access to internal systems. Mandiant was brought in. Law enforcement was notified. Investigations remain active. In a Monday update, Vercel confirmed that a coordinated audit with GitHub, Microsoft, npm, and Socket found no compromise of Next.js, Turbopack, the AI SDK, or any Vercel-published npm package. The company also announced that environment variable creation now defaults to “sensitive.”

Context.ai was the entry point. OX Security’s analysis found that a Vercel employee installed the Context.ai browser extension and signed into it using a corporate Google Workspace account, granting broad OAuth permissions. When Context.ai was breached, the attacker inherited that employee’s Workspace access, pivoted into Vercel environments, and escalated privileges by sifting through environment variables not marked as “sensitive.” Vercel’s bulletin states that variables marked sensitive are stored in a manner that prevents them from being read. Variables without that designation were accessible in plaintext through the dashboard and API, and the attacker used them as the escalation path.

CEO Guillermo Rauch described the attacker as “highly sophisticated and, I strongly suspect, significantly accelerated by AI.” Jaime Blasco, CTO of Nudge Security, independently surfaced a second OAuth grant tied to Context.ai’s Chrome extension, matching the client ID from Vercel’s published IOC to Context.ai’s Google account before Rauch’s public statement. The Hacker News reported that Google removed Context.ai’s Chrome extension from the Chrome Web Store on March 27. Per The Hacker News and Nudge Security, that extension embedded a second OAuth grant enabling read access to users’ Google Drive files.


Patient zero: a Roblox cheat and a Lumma Stealer infection

Hudson Rock published forensic evidence on Monday, reporting that the breach origin traces to a February 2026 Lumma Stealer infection on a Context.ai employee’s machine. According to Hudson Rock, browser history showed the employee downloading Roblox auto-farm scripts and game exploit executors. Harvested credentials included Google Workspace logins, Supabase keys, Datadog tokens, Authkit credentials, and the support@context.ai account. Hudson Rock identified the infected user as a core member of “context-inc,” Context.ai’s tenant on the Vercel platform, with administrative access to production environment variable dashboards.

Context.ai published its own bulletin on Sunday (updated Monday), disclosing that the breach affects its deprecated AI Office Suite consumer product, not its enterprise Bedrock offering (Context.ai’s agent infrastructure product, unrelated to AWS Bedrock). Context.ai says it detected unauthorized access to its AWS environment in March, hired CrowdStrike to investigate, and shut down the environment. Its updated bulletin then disclosed that the scope was broader than initially understood: the attacker also compromised OAuth tokens for consumer users, and one of those tokens opened the door to Vercel’s Google Workspace.

Dwell time is the detail that should concern security directors. Nearly a month separated Context.ai’s March detection from the Vercel disclosure on Sunday. A separate Trend Micro analysis references an intrusion beginning as early as June 2024 — a finding that, if confirmed, would extend the dwell time to roughly 22 months. VentureBeat could not independently reconcile that timeline with Hudson Rock’s February 2026 dating; Trend Micro did not respond to a request for comment before publication.

Where detection goes blind

Security directors can use this table to benchmark their own detection stack against the four-hop kill chain this breach exploited.


Hop 1: Infostealer on employee device
What happened: Context.ai employee downloaded Roblox cheat scripts; Lumma Stealer harvested Workspace creds, Supabase/Datadog/Authkit keys.
Who should detect: EDR on endpoint; credential exposure monitoring.
Typical coverage: Low. Device likely under-monitored. No stealer log monitoring at most orgs.
Gap: Most enterprises do not subscribe to infostealer intelligence feeds or correlate stealer logs against employee email domains.

Hop 2: AWS compromise at Context.ai
What happened: Attacker used harvested credentials to access Context.ai’s AWS. Detected in March.
Who should detect: Context.ai cloud security; AWS CloudTrail.
Typical coverage: Partially detected. Context.ai stopped AWS access but missed OAuth token exfiltration.
Gap: Initial investigation did not identify OAuth token exfiltration. Scope was underestimated until Vercel disclosure.

Hop 3: OAuth token theft into Vercel Workspace
What happened: Compromised OAuth token used to access a Vercel employee’s Google Workspace. Employee had granted “Allow All” permissions via Chrome extension.
Who should detect: Google Workspace audit logs; OAuth app monitoring; CASB.
Typical coverage: Very low. Most orgs do not monitor third-party OAuth token usage patterns.
Gap: No approval workflow intercepted the grant. No anomaly detection on OAuth token use from a compromised third party. This is the hop no one saw.

Hop 4: Lateral movement into Vercel production
What happened: Attacker enumerated non-sensitive env vars (accessible via dashboard/API), harvested customer credentials.
Who should detect: Vercel platform audit logs; behavioral analytics.
Typical coverage: Moderate. Vercel detected the intrusion after the attacker accessed customer credentials.
Gap: Detection occurred after exfiltration, not before. Env var access by a compromised Workspace account did not trigger real-time alerting.

What’s confirmed vs. what’s claimed

Vercel’s bulletin confirms unauthorized access to internal systems, a limited subset of affected customers, and two IOCs tied to Context.ai’s Google Workspace OAuth apps. Rauch confirmed that Next.js, Turbopack, and Vercel’s open-source projects are unaffected.

Separately, a threat actor using the ShinyHunters name posted on BreachForums claiming to hold Vercel’s internal database, employee accounts, and GitHub and NPM tokens, with a $2M asking price. Austin Larsen, principal threat analyst at Google Threat Intelligence, assessed the claimant as “likely an imposter.” Actors previously linked to ShinyHunters have denied involvement. None of these claims has been independently verified.

Six governance failures the Vercel breach exposed

1. AI tool OAuth scopes go unaudited. Context.ai’s own bulletin states that a Vercel employee granted “Allow All” permissions using a corporate account. Most security teams have no inventory of which AI tools their employees have granted OAuth access to.


CrowdStrike CTO Elia Zaitsev put it bluntly at RSAC 2026: “Don’t give an agent access to everything just because you’re lazy. Give it access to only what it needs to get the job done.” Jeff Pollard, VP and principal analyst at Forrester, told Cybersecurity Dive that the attack is a reminder about third-party risk management concerns and AI tool permissions.
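Building that inventory can start small: pull every third-party grant and diff its scopes against an allowlist. A sketch of the comparison step, with made-up app names and an assumed allowlist; a real audit would first fetch the grants from the Workspace admin API:

```python
# Scopes the organisation has decided any third-party app may hold.
ALLOWED_SCOPES = {
    "openid", "email", "profile",
    "https://www.googleapis.com/auth/calendar.readonly",
}

def audit_grants(grants: dict[str, set[str]]) -> dict[str, set[str]]:
    """Map each third-party app to the scopes it holds beyond the allowlist."""
    return {app: scopes - ALLOWED_SCOPES
            for app, scopes in grants.items()
            if scopes - ALLOWED_SCOPES}

grants = {
    "calendar-helper": {"openid",
                        "https://www.googleapis.com/auth/calendar.readonly"},
    "ai-extension": {"openid",
                     "https://www.googleapis.com/auth/drive",
                     "https://mail.google.com/"},
}
print(audit_grants(grants))  # flags only ai-extension's Drive and Mail scopes
```

Anything the audit flags becomes a review ticket: either the scope is justified and added to the allowlist, or the grant is revoked.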

2. Environment variable classification is doing real security work. Vercel distinguishes between variables marked “sensitive” (stored in a manner that prevents reading) and those without that designation (accessible in plaintext through the dashboard and API). Attackers used the accessible variables as the escalation path. A developer convenience toggle determined the blast radius. Vercel has since changed its default: new environment variables now default to sensitive.
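The classification audit itself is easy to script. The following is a minimal, platform-agnostic sketch rather than Vercel's API: the key-name heuristics and the sample `.env` content are illustrative assumptions, and real audits should run against your platform's actual variable inventory.

```python
import re

# Heuristic patterns for variable names that usually hold secrets.
# This list is an illustrative assumption, not an official taxonomy.
SECRET_KEY_HINTS = re.compile(r"(TOKEN|SECRET|KEY|PASSWORD|CREDENTIAL)", re.I)

def flag_plaintext_secrets(env_text: str) -> list[str]:
    """Return names of env vars whose keys suggest they should be
    marked sensitive (non-readable) rather than left in plaintext."""
    flagged = []
    for line in env_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        name, _, value = line.partition("=")
        if SECRET_KEY_HINTS.search(name) and value:
            flagged.append(name.strip())
    return flagged

sample = "API_TOKEN=abc123\nLOG_LEVEL=debug\nDB_PASSWORD=hunter2\n"
print(flag_plaintext_secrets(sample))  # ['API_TOKEN', 'DB_PASSWORD']
```

A heuristic like this only surfaces candidates for review; the point of Vercel's new default is that the safe classification should not depend on anyone remembering to run the review at all.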

“Modern controls get deployed, but if legacy tokens or keys aren’t retired, the system quietly favors them,” Merritt Baer, CSO at Enkrypt AI and former Deputy CISO at AWS, told VentureBeat.

3. Infostealer-to-SaaS-to-supply-chain escalation chains lack detection coverage. Hudson Rock’s reporting reveals a kill chain that crossed four organizational boundaries. No single detection layer covers that chain. Context.ai’s updated bulletin acknowledged that the scope extended beyond what was initially identified during its CrowdStrike-led investigation.

4. Dwell time between vendor detection and customer notification exceeds attacker timelines. Context.ai detected the AWS compromise in March. Vercel disclosed on Sunday. Every CISO should ask their vendors: what is your contractual notification window after detecting unauthorized access that could affect downstream customers?

5. Third-party AI tools are the new shadow IT. Vercel’s bulletin describes Context.ai as “a small, third-party AI tool.” Grip Security’s March 2026 analysis of 23,000 SaaS environments found a 490% year-over-year increase in AI-related attacks. Vercel is the latest enterprise to learn this the hard way.

6. AI-accelerated attackers compress response timelines. Rauch’s assessment of AI acceleration comes from what his IR team observed. CrowdStrike’s 2026 Global Threat Report puts the baseline at a 29-minute average eCrime breakout time, 65% faster than 2024.

Security director action plan

Attack surface: OAuth governance
What failed: Context.ai held broad “Allow All” Workspace permissions; no approval workflow intercepted the grant.
Recommended action: Inventory every AI tool OAuth grant org-wide. Revoke scopes exceeding least privilege. Check both Vercel IoCs now.
Owner: Identity / IAM

Attack surface: Env var classification
What failed: Variables not marked “sensitive” remained accessible; accessibility became the escalation path.
Recommended action: Default to non-readable. Require a security sign-off to downgrade any variable to accessible.
Owner: Platform eng + security

Attack surface: Infostealer-to-supply-chain
What failed: Kill chain spanned Lumma Stealer, Context.ai AWS, OAuth tokens, Vercel Workspace, and production environments.
Recommended action: Correlate infostealer intel feeds against employee domains. Automate credential rotation when creds surface in stealer logs.
Owner: Threat intel + SOC

Attack surface: Vendor notification lag
What failed: Nearly a month passed between Context.ai detection and Vercel disclosure.
Recommended action: Require 72-hour notification clauses in all contracts involving OAuth or identity integration.
Owner: Third-party risk / legal

Attack surface: Shadow AI adoption
What failed: One employee’s unapproved AI tool became the breach vector for hundreds of orgs.
Recommended action: Extend shadow IT discovery to AI agent platforms. Treat unapproved adoption as a security event.
Owner: Security ops + procurement

Attack surface: Lateral movement speed
What failed: Rauch suspects AI acceleration; the attacker compressed the access-to-escalation window.
Recommended action: Cut detection-to-containment SLAs below the 29-minute eCrime average.
Owner: SOC + IR team
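The infostealer correlation step is straightforward to sketch. The following is a minimal illustration rather than a vendor integration: the record fields, the feed shape, and the example.com domain are all assumptions, since real stealer-log feeds vary by threat-intel provider.

```python
# Corporate email domains to watch for in stealer-log feeds (assumption).
CORPORATE_DOMAINS = {"example.com"}

def creds_needing_rotation(stealer_records: list[dict]) -> set[str]:
    """Return corporate accounts found in stealer logs, so credential
    rotation can be triggered automatically rather than after a breach."""
    hits = set()
    for rec in stealer_records:
        username = rec.get("username", "").lower()
        # Only records that look like email addresses carry a domain.
        domain = username.rsplit("@", 1)[-1] if "@" in username else ""
        if domain in CORPORATE_DOMAINS:
            hits.add(username)
    return hits

feed = [
    {"username": "alice@example.com", "source": "Lumma"},
    {"username": "gamer123@mail.test", "source": "Lumma"},
]
print(creds_needing_rotation(feed))  # {'alice@example.com'}
```

In practice, the matching step is the easy part; the governance gap Hudson Rock's reporting exposed is that nothing wired a hit like this to forced rotation of the affected account's sessions and OAuth grants.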

Run both IoC checks today

Search your Google Workspace admin console (Security > API Controls > Manage Third-Party App Access) for two OAuth App IDs.

The first is 110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com, tied to Context.ai’s Office Suite.

The second is 110671459871-f3cq3okebd3jcg1lllmroqejdbka8cqq.apps.googleusercontent.com, tied to Context.ai’s Chrome extension and granting Google Drive read access.

If either touched your environment, you are in the blast radius regardless of what Vercel discloses next.
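If you export your org's OAuth grants (for example, by paging through the Admin SDK Directory API's per-user `tokens.list`), matching them against the two IoCs is a few lines. This is a hedged sketch: the flat record shape below is a simplified assumption, not the API's exact response format.

```python
# The two IoC OAuth client IDs published in Vercel's bulletin.
IOC_CLIENT_IDS = {
    "110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com",
    "110671459871-f3cq3okebd3jcg1lllmroqejdbka8cqq.apps.googleusercontent.com",
}

def affected_users(token_records: list[dict]) -> set[str]:
    """Return users whose granted OAuth apps match either IoC client ID.

    Each record is assumed to carry 'userKey' and 'clientId' fields,
    a flattened stand-in for the Admin SDK tokens.list() output.
    """
    return {
        rec["userKey"]
        for rec in token_records
        if rec.get("clientId") in IOC_CLIENT_IDS
    }

export = [
    {"userKey": "bob@corp.test",
     "clientId": "110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com"},
    {"userKey": "eve@corp.test", "clientId": "some-other-app"},
]
print(affected_users(export))  # {'bob@corp.test'}
```

Any user this returns should be treated as in scope: revoke the grant, rotate credentials, and review that account's Workspace activity.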

What this means for security directors

Forget the Vercel brand name for a moment. What happened here is the first major proof case that AI agent OAuth integrations create a breach class that most enterprise security programs cannot detect, scope, or contain. A Roblox cheat download in February led to production infrastructure access in April. Four organizational boundaries, two cloud providers, and one identity perimeter. No zero-day required.

At most enterprises, employees have connected AI tools to corporate Google Workspace, Microsoft 365 or Slack instances with broad OAuth scopes, without security teams knowing. The Vercel breach is the case study for what that exposure looks like when an attacker finds it first.


Tech

Starbucks cuts tech jobs as new CTO reshapes organization


Starbucks is cutting an unspecified number of tech jobs. (GeekWire File Photo)

Starbucks is cutting jobs in its technology organization, restructuring the team under a new chief technology officer who joined the coffee giant from Amazon four months ago.

Several affected employees posted about the cuts on LinkedIn on Tuesday afternoon, including people in program and product management and other technology-related roles. Starbucks declined to comment, and the number of people affected remains unclear.

The Seattle Times reported on the cuts earlier today, citing an internal message in which the company told employees it was “making structural changes to move faster, sharpen focus, and ensure we are set up to deliver on our most important priorities.”  

Anand Varadarajan joined Starbucks as chief technology officer in January after 19 years at Amazon, where he most recently ran tech and supply chain for its global grocery business. 

The restructuring comes as Starbucks pushes ahead with a broader turnaround under CEO Brian Niccol, who joined in 2024. The turnaround includes a series of technology initiatives, from an AI-powered drink-ordering assistant to an algorithm that manages mobile order timing.

The cuts appear to be unrelated to the company’s Nashville expansion. Following up on a prior announcement, Starbucks said Tuesday that it will invest $100 million in the new corporate office in Tennessee that will eventually employ up to 2,000 people.


Tech

Home Depot Dropped LG Refrigerator Prices Up To 53% During Spring Black Friday Sale






We may receive a commission on purchases made from links.

Home Depot’s Spring Black Friday Sale ends on April 22, but there is still time to make some last-minute splurges on a variety of appliances and tools — including Home Depot’s extensive collection of LG refrigerators. There are currently over 40 fridges on sale, with massive discounts up to 53% on popular models. 

The biggest sale is on the Energy Star-certified LG Counter-Depth Max, with the 53% discount bringing it down from $3,399 to $1,599. That’s $1,800 off. This model has 26 cubic feet of room with various compartments for storing food, a large 12.6-inch-tall ice and water dispenser, and LG’s ThinQ app to control temperature, track energy, and check the filter status. The latter is a handy extra, making LG one of SlashGear’s favorite smart fridge brands.

Customers love the hidden handles that create an extra-sleek look, the fridge’s bright lighting, and spacious freezer — although some find the lack of door handles and shelf heights a bit awkward. It currently has a 4.5-star rating, which could make it a good candidate for a heavily discounted fridge if you’re in the market for one. It’s not the only LG fridge for sale, either: other highlights include a 29-cubic-foot Standard-Depth Max fridge that’s 42% off, and a 28-cubic-foot three-door French door fridge at 48% off.

What is Home Depot’s Spring Black Friday Sale?

Home Depot’s Spring Black Friday Sale ran a lot longer than just Friday — the sale started April 9th and will end April 22nd, the same 14-day timeframe the retailer has used for its spring sale for over a decade. It’s not really a “Black Friday” sale considering the length, but anyone with spring projects will most definitely welcome all the deals. 

Spring is a popular time for deals across a wide range of retailers, with the likes of Harbor Freight, Lowe’s, and Amazon also running their own seasonal discounts. This makes a lot of sense, since it’s often when people start DIY renovation projects and refresh their living space. Home Depot’s sale covers a wide range of products from major brands like DeWalt, Ryobi, Samsung, LG, Whirlpool, and Frigidaire, to name a few. Products on sale range from lawn and garden equipment — including useful gardening gadgets for spring — to patio furniture, kitchen appliances, and storage.




Tech

Latest 'Star Wars' movie cut unnecessary costs by using Apple Vision Pro


Director Jon Favreau says a specialized app let him better frame IMAX shots using a virtual theater environment in Apple Vision Pro. He cites it as one method to cut back on reshoots and reduce costs.

Apple Vision Pro could become a useful tool in filmmaking

Filmmaking has only grown more expensive even as commercialized tools make the medium more accessible. It’s easier than ever to grab a smartphone and shoot some footage, but reaching Hollywood calibre isn’t so simple.
In an interview on The Town podcast during CinemaCon, Jon Favreau discussed ways technology is helping reduce costs in filmmaking. One of the tools he mentioned was Apple Vision Pro.


Tech

My Smartwatch Gives Me Health Anxiety. Experts Explain How to Make It Stop


I’m a wellness writer with health anxiety. Also known as hypochondria or illness anxiety disorder, health anxiety is a condition that makes me worry I am or may become ill even when I’m perfectly healthy. One minute, I have a headache, and the next, I think I’ve got a deadly brain tumor.

What’s ironic is that part of my job involves testing health-monitoring wearables, including fitness trackers and smart rings. While I love exploring this technology and do think it can help you learn more about your body, I have to be careful about how I use it so my anxiety isn’t triggered. I know I’m not alone.

“Healthy adults and individuals with pre-existing medical conditions are increasingly using these devices to manage their health,” says Dr. Lindsey Rosman, assistant professor of medicine in the Division of Cardiology and co-director of the Cardiovascular Device and Data Science Lab at the University of North Carolina School of Medicine. “Whether 24/7 access to health information from a wearable actually helps or potentially harms people is really unclear.”

When you add in the ability to search your symptoms online or ask an AI chatbot in your wearable’s app every health question under the sun, it becomes even more difficult to discern between what’s helpful and harmful. 

To help myself and others with health anxiety navigate the world of wearables so we can either enjoy using them or know when it’s time to stop, I reached out to experts for their advice.

1. Turn off anxiety-inducing health alerts

Rosman has observed clinically that it can be beneficial to either scale back or turn off the features that make you anxious. This can be especially helpful for people with pre-existing conditions that are already being treated, such as atrial fibrillation (AFib, an irregular heartbeat), as your wearable’s irregular heart rhythm notifications will only make you anxious and can prompt you to see your doctor when it’s not medically necessary.

Plus, certain medications can affect the accuracy of wearable sensors, provoking false alarms. 

“We published a case report on a patient who performed over 900 EKGs [electrocardiograms or ECGs, which measure the heart’s electrical activity] on her smartwatch in a single year,” says Rosman. While most of the EKGs were normal, inconclusive alerts fueled her anxiety, leading to multiple ER visits, spousal conflict and the need for therapy to reclaim her daily life. The patient had no psychiatric history prior to getting a smartwatch.


When you get an unexpected health alert on your device, it can understandably cause panic.

Cole Kan/CNET/Apple

Dr. Karen Cassiday, author of Freedom from Health Anxiety and owner and managing director of the Anxiety Treatment Center of Greater Chicago, says that even patients who don’t have health anxiety can find wearables to be intrusive when they get too many alerts. “They discover they want to be less aware of every moment of their body’s functioning,” she says.

Thankfully, most wearable health features can be turned off completely or customized. 

For instance, Shyamal Patel, SVP of science at Oura, maker of the Oura Ring, shares that the device’s Personalized Activity Goals allow you to choose to see steps instead of calories, adjust your daily activity goal or hide calories completely, which can be necessary for anyone who finds calorie counting triggering or overly rigid. 

2. Avoid compulsively checking your smart device

Referring to a 2024 study she worked on that examined the impact of wearables on the psychological well-being of patients with AFib, Rosman says that about half of the participants were checking their heart rate every day out of habit, not because they felt symptoms. 

Cassiday explains that while people with health anxiety may initially find wearables helpful, compulsively checking to make sure their vitals are normal can accidentally become a form of negative reinforcement that further propels the anxiety.

“Often when I work with anxious people, we try to cut back or eliminate the need to compulsively check for reassurance on their wearables, as well as with ChatGPT or other digital ‘doctors,’” says Cassiday. 

When people refrain from compulsively checking, wearables can provide useful feedback that counters the false belief that something terrible will happen to their health.  

If checking your health metrics causes anxiety, try reducing how often you view them on your device or in its app. Setting a reminder to check no more than once a week could help — especially since it’ll give you a broader picture, making you less likely to hyperfocus on a single data point that seems off. 

You should also avoid checking your wearable’s health information right after you wake up or before you go to bed, as this can set the tone for an anxious day or make it harder to fall asleep. 

If having a screen on your wrist makes it difficult for you to stop checking, a screenless smart ring or fitness tracker, such as the Whoop 5.0, may be a better option, since these rely on companion apps instead of on-wrist screens.


A screenless smart ring may help you stop compulsively checking your device.

Anna Gragert/CNET

“You choose how much or how little you engage with the app, which gives those who might be anxious about their health the option to limit the amount of time they spend with their data,” says Patel.

3. Focus on trends, not one-off metrics

When I asked both Patel and Dr. Jacqueline Shreibati, head of clinical for platforms and devices at Google, how people who wear their devices can reduce health anxiety, they emphasized the importance of tracking trends — not individual metrics.  

“We focus on long-term trends (rather than isolated metrics) to help users maintain a balanced relationship with their data,” says Shreibati. “What being healthy means differs for everyone, and we encourage users to consult their physician if they have any concerns.”

Patel points to the Tags and Trends features in the Oura app. Tags lets you tag lifestyle factors such as travel, alcohol, meditation or late meals, which you can then view in Trends to see how your behavior affects your recovery and sleep over weeks, rather than looking at a single score that may one day seem abnormal.


Instead of viewing a single sleep or stress score, consider looking at that data weekly or monthly.

Vanessa Hand Orellana/CNET

4. Remember: Your smartwatch can’t replace your doctor

“Most consumer wearables were originally developed as personal wellness devices, which are not required to demonstrate safety and efficacy like traditional medical devices (e.g., a blood pressure cuff or pacemaker),” Rosman explains. 

Yet we’ve begun using these wearables to monitor our health, using metrics such as heart rate and rhythm, blood oxygen, stress, sleep and physical activity. Now, some of these devices have medical-grade sensors, software and algorithms approved by the US Food and Drug Administration to detect irregular heart rhythms, hypertension and sleep apnea.

Despite FDA approval, wearables are simply not doctors, and they cannot provide medical diagnoses or treatment. That’s why it’s essential to understand what your device actually measures.

The ECG feature on many smartwatches is just one example of this. FDA-cleared as it may be, a single-lead ECG recorded from your wrist captures only one view of your heart’s electrical activity and is not the same as the 12-lead, hospital-grade ECG a cardiologist would use. 

While your wearable’s ECG can surface a potential symptom worth investigating with your doctor, it can’t replace a professional or their medical-grade equipment.


Performing an ECG on your smartwatch is not the same as having that same measurement taken in a doctor’s office.

Viva Tung/CNET/Apple

The gap is even wider for features including stress and sleep scores, which haven’t been clinically validated because there’s no single gold standard to validate against. These numerical scores are calculated from bodily signals such as heart rate, temperature, movement and heart rate variability, which tend to correlate with your stress and sleep states. But the translation from raw signal to “your stress score is 74” is more of an educated estimate.

“What you’re seeing is a rough indicator of how your nervous system is functioning, not a medical diagnosis,” Rosman emphasizes.

Patel adds that not all physiological stress is inherently negative. “Some forms of short-term physiological stress can be healthy and adaptive,” he says. “That’s why we aim to pair data with in-app context and insights, so members can better understand what they’re seeing rather than receiving that information in a vacuum.” 

Nonetheless, when you don’t know exactly what your wearable is measuring, a “bad” stress or sleep score can seem scary when it isn’t necessarily a cause for alarm, but rather a sign that you may want to have a deeper conversation with your doctor.

5. Get a temperature check

Just like you should talk to your doctor before starting a new medication or diet, you should get their thoughts on whether you could benefit from using a wearable.

“Education is probably the most underused tool we have,” Rosman says. 

When you don’t know what a healthy heart rate or ECG looks like, one seemingly atypical reading can send you into a panic. That’s why it’s essential to speak with your doctor so you understand your own baseline and if a wearable makes sense for your current health condition.

As a guide, Rosman provides the following questions you can ask your doctor:

  • What type of wearable should I use? 
  • How often should I check this data? 
  • What are healthy numbers for me? 
  • What do I do when I get an alert? 
  • When should I call the clinic or seek emergency care versus waiting? 

“A fast heart rate after climbing stairs is not the same as a dangerous arrhythmia, but without that context, a notification can feel terrifying,” Rosman adds. “So much wearable-related anxiety comes not from the data itself, but from not knowing what to do with it.”

6. Know when it’s time to remove that device and get help

When asked when someone should consider parting with their wearable or seeing a professional for health anxiety, Cassiday says that it’s similar to what many notice when they keep checking their smartphone for the next text, TikTok or other digital data.  

“If you find yourself interrupting pleasurable activities or your free time to check, or if you feel anxious about not checking, you have a problem,” Cassiday states. 

For instance, you may notice that you only stop worrying about having a heart attack once you check your wearable and see a normal resting heart rate. Put simply, if you only feel at peace after someone or something, such as a wearable, reassures you that you’re in good health, it’s time to get professional support. 


If health anxiety is making it difficult for you to enjoy your life, then it’s time to talk to a professional.

Constantinis/Getty Images

To find help, Cassiday recommends using the resources provided by the Anxiety and Depression Association of America or the International OCD Foundation, as health anxiety can be related to obsessive-compulsive disorder. 

7. Consider cognitive behavioral therapy 

When you have health anxiety, the gold standard for care is cognitive behavioral therapy. It involves exposure to health-related worries without any form of reassurance and learning to accept the uncertainty that comes with not knowing our future health status, manner of death or time of death.  

“People need to learn that all the vague symptoms that trigger their health anxiety are just normal variations of normal body functioning and aging,” Cassiday explains. “They have to reframe the symptoms they notice as nothing to examine, discuss or manage and instead trust the facts of their other evidence of good health.”

CBT can help you live in the present instead of spiraling into the anxiety-inducing “What if?” of the future.

Who should and shouldn’t use health-tracking wearables

Wearables can be great for people who like tracking their fitness to motivate them toward their goals, or for patients and their care teams when medically necessary. Though they usually cost hundreds of dollars, wearables can be less expensive than medical tests. Some are even HSA- or FSA-eligible.

“In AFib specifically, being able to correlate your symptoms with actual rhythm data can be genuinely empowering,” Rosman says. She’s observed that the patients who thrive with wearables are those who use the data as information — not as something to fear — and those who don’t participate in 24/7 surveillance.

In Rosman’s 2024 study, two-thirds of AFib patients said their wearable made them feel safer and more in control. Even so, there is still the risk of unintended consequences.


While they can be beneficial, wearables can also come with risks — especially since there isn’t enough research on the subject.

Giselle Castro-Sloboda/CNET

Doctors would never prescribe a medication without knowing its potential benefits and risks and how to manage them, and wearables should be no different. “The technology has moved so much faster than the science, and we need the scientific evidence from clinical trials to catch up,” Rosman explains. 

Since the evidence isn’t there yet, Rosman is hesitant to say anyone should categorically avoid wearables. 

Despite that, people who are highly anxious about their heart or prone to obsessive symptom monitoring should approach with caution. The same goes for those with conditions involving unpredictable, abrupt symptoms, such as paroxysmal AFib and POTS, because the uncertainty of not knowing when the next episode will hit is stressful enough, and constant monitoring can make it worse.

A note on the science (or lack thereof)

Rosman has conducted research on the connection between wearables and anxiety, including a 2025 review describing the psychological effects of wearables on patients with cardiovascular disease and a 2024 study examining their impact on the psychological well-being of patients with AFib. 

The 2025 review found that while wearables can help promote healthy behaviors and provide data for diagnosis and treatment, they also pose risks, such as adverse psychological reactions. 

The 2024 study concluded that wearable use was associated with higher rates of patients becoming preoccupied with their symptoms, worrying about their treatments, and using both formal and informal health care resources.

On the other hand, a 2021 study that analyzed the 2019 and 2020 US-based Health Information National Trends Survey found that using wearable devices for self-tracking can indirectly reduce psychological distress. Still, misinterpretation of wearable data may cause unnecessary panic and anxiety. 

A 2020 qualitative interview study featuring patients with chronic heart disease also found that while wearables’ data may be a resource for self-care, it can create uncertainty, fear and anxiety.

Ultimately, more studies are needed. 

“Honestly, we don’t have good scientific evidence in this area yet,” says Rosman. “Despite widespread use, there have been no clinical trials I’m aware of that have looked at the benefits and potential health risks of specific wearable health features.”

Rosman’s team plans to be the first to investigate this in patients with pre-existing heart conditions.

Wearables’ impact on our health care system

When wearables cause health anxiety, they can prompt healthy individuals to schedule unnecessary doctor’s appointments. This places a burden on our health care system, which is already experiencing shortages, making it difficult for people who actually require medical attention to access care. 

Rosman’s 2024 study found that those using a wearable sent nearly twice as many patient portal messages to their doctors. Responding to these messages from patients takes time, isn’t reimbursed by insurance and can contribute to burnout.


When health anxiety caused by wearables prompts people to message their doctors, it can put a strain on the health care system.

MoMo Productions/Getty Images

As a result, Rosman believes we need better systems for managing wearable data in clinical settings before we scale it further: “Wearables are changing how we deliver care in ways we haven’t fully prepared for.”

Wearables can further widen health care inequity due to their cost. 

“These devices are expensive, they were mostly designed and tested in young healthy people and they’re marketed toward higher-income consumers,” Rosman explains. “If we’re not thoughtful about access, wearables could actually widen health disparities rather than close them. That’s the opposite of what we want.”

The bottom line

While wearables have their benefits, there are also risks to consider, especially given the limited research on the subject.

If you purchase a wearable and it triggers health anxiety, you don’t have to use every available feature, wear it constantly or continue to wear it at all. Before you even buy that device, you can arm yourself with anxiety-reducing knowledge by getting your doctor’s expert opinion.  

However, if health anxiety continues to take over your life, it may be time to remove your wearable and seek professional help. 

As for me, writing this piece has been a necessary reminder that, while there’s a lot we can’t control in life, the power is in our hands (or on our wrists or fingers) when it comes to the technology we put on our bodies or invite into our homes. Just like an itchy sweater or a lumpy armchair, we can send the technology that doesn’t serve us packing.  
