
Tech

Meta will record employees’ keystrokes and use them to train its AI models


Meta has found a new source of training data for its AI models: its own employees. The company plans to use data culled from its staff’s mouse movements and keystrokes in its push to build more capable and efficient artificial intelligence.

The story, which was first reported by Reuters, shows the lengths to which tech companies are going to find new sources of training data — the lifeblood of AI models that helps the programs learn how to more effectively carry out tasks and respond to user queries.

When reached for comment by TechCrunch, a Meta spokesperson provided the following statement: “If we’re building agents to help people complete everyday tasks using computers, our models need real examples of how people actually use them — things like mouse movements, clicking buttons, and navigating dropdown menus. To help, we’re launching an internal tool that will capture these kinds of inputs on certain applications to help us train our models. There are safeguards in place to protect sensitive content, and the data is not used for any other purpose.”

This trend reveals a troublesome privacy dimension of the AI industry. Last week it was reported that old startups are being scavenged for their corporate communications (Slack archives, Jira tickets), which are then converted into AI training data.


Invincible season 4 episode 8 ending explained: does Eve [spoiler], will there be a season 5, and more on the Prime Video show’s latest finale


Invincible season 4 episode 8 has landed on Prime Video — and, hoo boy, if Mark Grayson thought he had it bad already, nothing can prepare him (or you, for that matter) for what’s to come after the decision he’s just made.

If you’re here, I’m guessing you’ve seen the Amazon TV Original’s latest finale and have big questions about what you just watched. Luckily for you, I’m a huge Invincible nerd, so I’m perfectly placed to answer them.


Unauthorized group has gained access to Anthropic’s exclusive cyber tool Mythos, report claims


A group of unauthorized users has reportedly gained access to Mythos, the cybersecurity tool recently announced by Anthropic.

Much has been made of Mythos and its purported power — an AI product designed for enterprise security that, in the wrong hands, could become a potent hacking tool, according to the company. Now Bloomberg has reported that a “private online forum,” the members of which have not been publicly identified, has managed to gain access to the tool through a third-party vendor.

“We’re investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments,” an Anthropic spokesperson told TechCrunch. The company said that, so far, it has found no evidence that the supposedly unauthorized activity has impacted Anthropic’s systems in any way.

The unauthorized group tried several strategies to gain access to the model, including using the access held by the person Bloomberg interviewed, who is currently employed at a third-party contractor that works for Anthropic, the outlet reported.


Members of the group are part of a Discord channel that seeks out information about unreleased AI models, the outlet reported. The group has been using Mythos regularly since gaining access to it, and provided evidence to Bloomberg in the form of screenshots and a live demonstration of the software.

Bloomberg reports that the group, which supposedly gained access to the tool on the same day it was publicly announced, “made an educated guess about the model’s online location based on knowledge about the format Anthropic has used for other models.” The group in question is “interested in playing around with new models, not wreaking havoc with them,” the source told the outlet.

Mythos was released to a select number of vendors, including big names like Apple, as part of an initiative called Project Glasswing. The limited release of the model was designed to prevent its use by bad actors. The tool could be weaponized against corporate security instead of bolstering it, Anthropic said.


If true, unauthorized use of Mythos could spell trouble for Anthropic, which restricted the tool’s release precisely to address those enterprise security concerns.


48 days of exposed projects, closed bug reports, & the structural failure of vibe coding security


Summary: Lovable, the $6.6 billion vibe coding platform with eight million users, has faced three documented security incidents exposing source code, database credentials, and thousands of user records, with the most recent BOLA vulnerability left open for 48 days after the company closed a bug bounty report without escalation. The incidents are representative of a structural problem across vibe coding: 40-62% of AI-generated code contains vulnerabilities, 91.5% of vibe-coded apps had at least one AI hallucination-related flaw in Q1 2026, and the market’s incentive structure rewards growth over security at a moment when 60% of all new code is projected to be AI-generated by year end.

Lovable, the vibe coding platform valued at $6.6 billion with eight million users, has spent the past two months dealing with security incidents that collectively exposed source code, database credentials, AI chat histories, and the personal data of thousands of users across projects built on its platform. The most recent disclosure, published on 20 April by a security researcher, revealed a broken object-level authorisation vulnerability in Lovable’s API that allowed anyone with a free account to access another user’s profile, public projects, source code, and database credentials in as few as five API calls. The researcher reported the flaw to Lovable’s bug bounty programme on 3 March. Lovable patched it for new projects but never fixed it for existing ones, marked a follow-up report as a duplicate, and closed it. As of reporting, the vulnerability had been open for 48 days.
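The vulnerability class here is broken object-level authorisation: the API verified that a request came from *some* logged-in account, but never that the account owned the object being fetched. A minimal sketch of the missing check (names and data are illustrative, not Lovable’s actual API):

```python
# Sketch of an object-level authorization check whose absence defines
# a BOLA flaw. Any authenticated account (even a free one) could fetch
# another user's project until ownership is compared against the
# requester, not merely against "is logged in".

class Forbidden(Exception):
    """Raised when a requester does not own the requested object."""

# Hypothetical project store: each record carries its owner.
PROJECTS = {
    "proj-1": {"owner": "alice", "source": "...", "db_credentials": "..."},
}

def get_project(requester: str, project_id: str) -> dict:
    project = PROJECTS[project_id]
    # The check BOLA-vulnerable endpoints skip: compare the
    # authenticated identity against the object's owner.
    if project["owner"] != requester:
        raise Forbidden(f"{requester} does not own {project_id}")
    return project
```

With the check in place, the owner’s request succeeds while any other account’s request is rejected, regardless of how it was authenticated.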

Lovable’s response followed a pattern that security researchers found more telling than the vulnerability itself. The company first posted on X that it “did not suffer a data breach,” calling the exposed data “intentional behaviour.” It then blamed its own documentation, saying that what “public” implies “was unclear.” Next, it blamed its bug bounty partner HackerOne, saying reports were “closed without escalation because our HackerOne partners thought that seeing public projects’ chats was the intended behaviour.” Later that day, it issued a partial apology acknowledging that “pointing to documentation issues alone was not enough.” Cybernews headlined its coverage: “Lovable goes on ego trip denying vulnerability, then blames others for said vulnerability.”

What was exposed

The April incident affected projects created before November 2025. The researcher demonstrated that extracting a user’s source code from Lovable’s API also yielded hardcoded Supabase database credentials embedded in that code. One affected project belonged to Connected Women in AI, a Danish nonprofit. Its exposed data contained real user records including names, job titles, LinkedIn profiles, and Stripe customer IDs, with records linked to individuals at Accenture Denmark and Copenhagen Business School. Employees at Nvidia, Microsoft, Uber, and Spotify reportedly have Lovable accounts tied to affected projects.

This was the third documented security incident involving the platform. In February, a tech entrepreneur named Taimur Khan found 16 vulnerabilities, six of them critical, in a single app hosted on Lovable and featured on its own Discover page with more than 100,000 views. The most severe was an inverted authentication logic that granted anonymous users full access while blocking authenticated users. The app, an AI-powered EdTech tool, exposed 18,697 user records including 4,538 student accounts from institutions including UC Berkeley and UC Davis, with minors likely on the platform. Khan reported his findings through Lovable’s support channel. His ticket was closed without a response.

An earlier study in May 2025 found that 170 out of 1,645 sampled Lovable-created applications had issues allowing personal information to be accessed by anyone. Approximately 70% of Lovable apps had row-level security disabled entirely.


The structural problem

Lovable is not uniquely insecure. It is representatively insecure. The platform generates full-stack applications using React, Tailwind, and Supabase in response to natural language prompts, a process the industry calls vibe coding after Andrej Karpathy coined the term in February 2025. The approach lets anyone describe an application and have it built by an AI model without writing or reviewing code. Collins English Dictionary named it Word of the Year for 2025. Gartner forecasts that 60% of all new code will be AI-generated by the end of this year.

The security data across the entire category is consistent. Between 40 and 62% of AI-generated code contains security vulnerabilities, depending on the study. AI-written code produces flaws at 2.74 times the rate of human-written code, according to an analysis of 470 GitHub pull requests. A first-quarter 2026 assessment of more than 200 vibe-coded applications found that 91.5% contained at least one vulnerability traceable to AI hallucination. More than 60% exposed API keys or database credentials in public repositories. The vulnerability classes are the same across every major vibe coding platform: disabled row-level security, hardcoded secrets, missing webhook verification, injection flaws, and broken access controls.

Bolt.new ships with row-level security off by default. Cursor has had multiple CVEs patched, including a case-sensitivity bypass enabling persistent remote code execution. Researchers at Pillar Security demonstrated a “rules file backdoor” attack in which hackers inject hidden malicious instructions into configuration files used by Cursor and GitHub Copilot. A separate “Agent Commander” attack in March showed that prompt injection into AI coding agents could convert autonomous coding tools into remotely controlled malware delivery platforms. In January, the vibe-coded social network Moltbook was breached within three days of launch, exposing 1.5 million API authentication tokens and 35,000 email addresses through a misconfigured Supabase database with no row-level security.

The economic incentive problem

Security firms are raising money specifically to address the gap. Escape raised $18 million to replace manual penetration testing with AI agents that scan vibe-coded applications, citing over 2,000 high-impact vulnerabilities and hundreds of exposed secrets found in live production systems. Lovable itself partnered with Aikido to bring automated pentesting to its platform. But the fundamental incentive structure of the market works against security.


Lovable hit $4 million in annual recurring revenue in its first four weeks and $10 million in two months with a team of 15 people. It raised $200 million at a $1.8 billion valuation in July 2025 and $330 million at $6.6 billion in December, more than tripling its valuation in five months. Enterprise adoption of vibe coding grew 340% year over year. Non-technical user adoption surged 520%. Eighty-seven percent of Fortune 500 companies have adopted at least one vibe coding platform. The market rewards speed and accessibility. Security is a cost centre that slows both.

The result is a category in which the dominant platforms generate code that is insecure by default, the users generating that code lack the expertise to identify the vulnerabilities, and the platforms themselves have financial incentives to prioritise growth over remediation. Lovable’s handling of the March and April incidents illustrates the dynamic precisely: a bug bounty report was closed without escalation, a vulnerability affecting thousands of projects was patched for new users but not existing ones, and the public response cycled through denial, deflection, and a partial apology within a single day.

The regulatory gap

The EU AI Act’s high-risk obligations take effect on 2 August, requiring transparency, human oversight, and data governance for AI systems. California’s S.B. 53 and New York’s RAISE Act require frontier AI developers to publish safety frameworks and report incidents. But none of these regulations specifically address the security of code generated by AI models for end users, and the adoption data suggests the market is moving faster than regulators can respond. Financial services and healthcare, the two most regulated sectors, show the lowest vibe coding adoption rates at 34 and 28% respectively, which indicates that the market itself recognises the compliance gap even if regulations have not yet caught up.

As Trend Micro framed it: “The real risk of vibe coding isn’t AI writing insecure code. It’s humans shipping code they never had a chance to secure.” The 84% surge in App Store submissions driven by vibe coding tools suggests the volume of unreviewed code entering production is accelerating. Thirty-five CVEs were disclosed in March alone from AI-generated code, up from six in January, and Georgia Tech estimates the actual figure is five to ten times higher than what is detected.


Lovable is the fastest-growing software startup in history by several measures. It is also a company that closed a critical vulnerability report without reading it, left thousands of projects exposed for 48 days, and responded to public disclosure by denying a breach, blaming its documentation, blaming its bug bounty partner, and then apologising for the apology. The pattern is not unique to Lovable. It is the pattern of a category that has built extraordinary tools for creating software and almost nothing for securing it.


ZenTimings provides a detailed view of RAM timings on Ryzen systems


ZenTimings is a Windows utility designed for AMD Ryzen platforms that displays real-time memory configuration data, including timings, frequency, and Infinity Fabric clocks. It’s primarily used to verify BIOS or XMP/EXPO settings, offering a straightforward way to check how system memory is running.


Honor 600 series takes aim at the affordable flagship crown with Snapdragon power and a 7,000mAh battery


Honor has finally lifted the covers off its latest N-series devices, and the new models bring several key upgrades over the Honor 400 series, which was the last N-series lineup to launch outside China. The Pro model in the new Honor 600 series is especially noteworthy because it’s positioned as a legitimate “accessible flagship” that pairs a top-tier Snapdragon SoC with a stunning display and a massive battery at an attractive price.

Flagship specs without the flagship price

The new lineup pushes Honor’s N-series further into flagship territory, with both the standard Honor 600 and the more premium Honor 600 Pro offering features typically reserved for more expensive phones. The Honor 600 Pro packs Qualcomm’s Snapdragon 8 Elite chip and is positioned as a serious option for gaming and demanding workloads. The standard variant, meanwhile, runs on the Snapdragon 7 Gen 4, offering a more balanced mix of performance and efficiency.

On the display front, both phones feature a 6.57-inch OLED panel with a 120Hz refresh rate, peak brightness of 8,000 nits, and 3,840Hz PWM dimming. Battery life is another major highlight, with the two devices packing a 7,000mAh silicon-carbon battery that supports 80W wired fast charging and 27W reverse charging. The Pro model even includes 50W wireless charging support, a feature that’s often omitted on affordable flagships.

The Honor 600 series’ camera hardware is no slouch either, with both devices featuring a 200MP main shooter with a large 1/1.4-inch sensor size. On the Pro model, it’s paired with a 50MP telephoto lens with up to 120x zoom and CIPA 6.5 image stabilization, a 12MP ultrawide camera, and a 50MP selfie shooter. The standard version has the same ultrawide and selfie cameras, but skips the telephoto camera.

As announced earlier, Honor is also debuting its upgraded AI Image to Video 2.0 feature with the lineup, which allows users to generate short videos from still images using natural language prompts. Durability has also been upgraded, and both models feature an IP69K rating along with enhanced drop resistance certification.

Pricing and availability

Honor has launched the 600 series in Malaysia today, with the Pro model priced at RM3,099 (~$784) for the 12GB+256GB configuration and RM3,299 (~$835) for the 512GB storage option. The standard Honor 600 comes in a single 12GB+512GB configuration priced at RM2,599 (~$658). Both models come in Orange, Golden White, and Black color options.


Honor has confirmed that the devices will roll out to additional global markets, but regional pricing and availability details have not yet been announced.


FBI Looks Into Dead or Missing Scientists Tied To Sensitive US Research


Federal authorities are now reviewing a string of deaths and disappearances involving scientists tied to sensitive U.S. aerospace and nuclear work, though officials have not established any confirmed link between the cases. The FBI says it “is spearheading the effort to look for connections into the missing and deceased scientists,” adding that it “is working with the Department of Energy, Department of War, and with our state … and local law enforcement partners to find answers.” The Republican-led House Oversight Committee also announced an investigation into the reports. CNN reports: A nuclear physicist and MIT professor fatally shot outside his Massachusetts residence. A retired Air Force general missing from his New Mexico home. An aerospace engineer who disappeared during a hike in Los Angeles. These are among at least 10 individuals connected to sensitive US nuclear and aerospace research who have died or disappeared in recent years, prompting questions about whether they are connected and fueling online speculation about possible nefarious activity. […]

The Defense Department said only that it would respond to the committee directly, and the Department of Energy referred questions to the White House. In a post on X, NASA said it is “coordinating and cooperating with the relevant agencies” in relation to the scientists. “At this time, nothing related to NASA indicates a national security threat,” NASA spokesperson Bethany Stevens said.

The cases vary widely in circumstance. Some involve unsolved homicides, while others are missing persons cases with no signs of foul play. In at least two instances, families have pointed to preexisting medical conditions or personal struggles as explanations. Authorities have not established any links between the cases. The White House said last week it is also working with federal agencies to probe any potential links between the deaths and disappearances, with President Donald Trump referring to the matter as “pretty serious stuff.” “The United States has thousands of nuclear scientists and nuclear experts,” said Rep. James Walkinshaw, a Democrat who also serves on the Oversight Committee. “It’s not the kind of nuclear program that potentially a foreign adversary could significantly impact by targeting 10 individuals.”


Vercel breach exposes the OAuth gap most security teams cannot detect, scope or contain

Published

on

One employee at Vercel adopted an AI tool. One employee at that AI vendor got hit with an infostealer. That combination created a walk-in path to Vercel’s production environments through an OAuth grant that nobody had reviewed.

Vercel, the cloud platform behind Next.js and its millions of weekly npm downloads, confirmed on Sunday that attackers gained unauthorized access to internal systems. Mandiant was brought in. Law enforcement was notified. Investigations remain active. In a Monday update, Vercel said a coordinated audit with GitHub, Microsoft, npm, and Socket verified that Next.js, Turbopack, AI SDK, and all Vercel-published npm packages remain uncompromised, and announced that environment variable creation now defaults to “sensitive.”

Context.ai was the entry point. OX Security’s analysis found that a Vercel employee installed the Context.ai browser extension and signed into it using a corporate Google Workspace account, granting broad OAuth permissions. When Context.ai was breached, the attacker inherited that employee’s Workspace access, pivoted into Vercel environments, and escalated privileges by sifting through environment variables not marked as “sensitive.” Vercel’s bulletin states that variables marked sensitive are stored in a manner that prevents them from being read. Variables without that designation were accessible in plaintext through the dashboard and API, and the attacker used them as the escalation path.

CEO Guillermo Rauch described the attacker as “highly sophisticated and, I strongly suspect, significantly accelerated by AI.” Jaime Blasco, CTO of Nudge Security, independently surfaced a second OAuth grant tied to Context.ai’s Chrome extension, matching the client ID from Vercel’s published IOC to Context.ai’s Google account before Rauch’s public statement. The Hacker News reported that Google removed Context.ai’s Chrome extension from the Chrome Web Store on March 27. Per The Hacker News and Nudge Security, that extension embedded a second OAuth grant enabling read access to users’ Google Drive files.


Patient zero: a Roblox cheat and a Lumma Stealer infection

Hudson Rock published forensic evidence on Monday, reporting that the breach origin traces to a February 2026 Lumma Stealer infection on a Context.ai employee’s machine. According to Hudson Rock, browser history showed the employee downloading Roblox auto-farm scripts and game exploit executors. Harvested credentials included Google Workspace logins, Supabase keys, Datadog tokens, Authkit credentials, and the support@context.ai account. Hudson Rock identified the infected user as a core member of “context-inc,” Context.ai’s tenant on the Vercel platform, with administrative access to production environment variable dashboards.

Context.ai published its own bulletin on Sunday (updated Monday), disclosing that the breach affects its deprecated AI Office Suite consumer product, not its enterprise Bedrock offering (Context.ai’s agent infrastructure product, unrelated to AWS Bedrock). Context.ai says it detected unauthorized access to its AWS environment in March, hired CrowdStrike to investigate, and shut down the environment. Its updated bulletin then disclosed that the scope was broader than initially understood: the attacker also compromised OAuth tokens for consumer users, and one of those tokens opened the door to Vercel’s Google Workspace.

Dwell time is the detail that should concern security directors. Nearly a month separated Context.ai’s March detection from the Vercel disclosure on Sunday. A separate Trend Micro analysis references an intrusion beginning as early as June 2024 — a finding that, if confirmed, would extend the dwell time to roughly 22 months. VentureBeat could not independently reconcile that timeline with Hudson Rock’s February 2026 dating; Trend Micro did not respond to a request for comment before publication.

Where detection goes blind

Security directors can use this table to benchmark their own detection stack against the four-hop kill chain this breach exploited.

| Kill chain hop | What happened | Who should detect | Typical coverage | Gap |
| --- | --- | --- | --- | --- |
| 1. Infostealer on employee device | Context.ai employee downloaded Roblox cheat scripts; Lumma Stealer harvested Workspace creds and Supabase/Datadog/Authkit keys. | EDR on endpoint; credential exposure monitoring. | Low. Device likely under-monitored; no stealer-log monitoring at most orgs. | Most enterprises do not subscribe to infostealer intelligence feeds or correlate stealer logs against employee email domains. |
| 2. AWS compromise at Context.ai | Attacker used harvested credentials to access Context.ai’s AWS. Detected in March. | Context.ai cloud security; AWS CloudTrail. | Partially detected. Context.ai stopped AWS access but missed OAuth token exfiltration. | Initial investigation did not identify OAuth token exfiltration; scope was underestimated until the Vercel disclosure. |
| 3. OAuth token theft into Vercel Workspace | Compromised OAuth token used to access a Vercel employee’s Google Workspace; the employee had granted “Allow All” permissions via the Chrome extension. | Google Workspace audit logs; OAuth app monitoring; CASB. | Very low. Most orgs do not monitor third-party OAuth token usage patterns. | No approval workflow intercepted the grant; no anomaly detection on OAuth token use from a compromised third party. This is the hop no one saw. |
| 4. Lateral movement into Vercel production | Attacker enumerated non-sensitive env vars (accessible via dashboard/API) and harvested customer credentials. | Vercel platform audit logs; behavioral analytics. | Moderate. Vercel detected the intrusion after the attacker accessed customer credentials. | Detection occurred after exfiltration, not before; env var access by a compromised Workspace account did not trigger real-time alerting. |

What’s confirmed vs. what’s claimed

Vercel’s bulletin confirms unauthorized access to internal systems, a limited subset of affected customers, and two IOCs tied to Context.ai’s Google Workspace OAuth apps. Rauch confirmed that Next.js, Turbopack, and Vercel’s open-source projects are unaffected.

Separately, a threat actor using the ShinyHunters name posted on BreachForums claiming to hold Vercel’s internal database, employee accounts, and GitHub and NPM tokens, with a $2M asking price. Austin Larsen, principal threat analyst at Google Threat Intelligence, assessed the claimant as “likely an imposter.” Actors previously linked to ShinyHunters have denied involvement. None of these claims has been independently verified.

Six governance failures the Vercel breach exposed

1. AI tool OAuth scopes go unaudited. Context.ai’s own bulletin states that a Vercel employee granted “Allow All” permissions using a corporate account. Most security teams have no inventory of which AI tools their employees have granted OAuth access to.


CrowdStrike CTO Elia Zaitsev put it bluntly at RSAC 2026: “Don’t give an agent access to everything just because you’re lazy. Give it access to only what it needs to get the job done.” Jeff Pollard, VP and principal analyst at Forrester, told Cybersecurity Dive that the attack is a reminder about third-party risk management concerns and AI tool permissions.

2. Environment variable classification is doing real security work. Vercel distinguishes between variables marked “sensitive” (stored in a manner that prevents reading) and those without that designation (accessible in plaintext through the dashboard and API). Attackers used the accessible variables as the escalation path. A developer convenience toggle determined the blast radius. Vercel has since changed its default: new environment variables now default to sensitive.
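The mechanics of that toggle are worth making concrete. A minimal sketch (illustrative only, not Vercel’s implementation) of why a write-only “sensitive” flag changes the blast radius:

```python
# Sketch: a secret store where "sensitive" variables are write-only
# from the dashboard/API read path an attacker with a stolen session
# can reach, while deploy-time injection still works. Names are
# hypothetical.

class EnvStore:
    def __init__(self):
        self._vars = {}

    def set(self, name, value, sensitive=True):
        # Secure default: new variables are sensitive unless downgraded.
        self._vars[name] = {"value": value, "sensitive": sensitive}

    def read(self, name):
        """Dashboard/API read path. Sensitive values never leave it."""
        var = self._vars[name]
        if var["sensitive"]:
            raise PermissionError(f"{name} is write-only")
        return var["value"]

    def inject(self, name):
        """Deploy-time path: value goes only into the build runtime."""
        return self._vars[name]["value"]
```

With this model, an attacker who can reach the read path sees only the variables someone explicitly downgraded, which is exactly the escalation surface the Vercel attacker found.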

“Modern controls get deployed, but if legacy tokens or keys aren’t retired, the system quietly favors them,” Merritt Baer, CSO at Enkrypt AI and former Deputy CISO at AWS, told VentureBeat.

3. Infostealer-to-SaaS-to-supply-chain escalation chains lack detection coverage. Hudson Rock’s reporting reveals a kill chain that crossed four organizational boundaries. No single detection layer covers that chain. Context.ai’s updated bulletin acknowledged that the scope extended beyond what was initially identified during its CrowdStrike-led investigation.


4. Dwell time between vendor detection and customer notification exceeds attacker timelines. Context.ai detected the AWS compromise in March. Vercel disclosed on Sunday. Every CISO should ask their vendors: what is your contractual notification window after detecting unauthorized access that could affect downstream customers?

5. Third-party AI tools are the new shadow IT. Vercel’s bulletin describes Context.ai as “a small, third-party AI tool.” Grip Security’s March 2026 analysis of 23,000 SaaS environments found a 490% year-over-year increase in AI-related attacks. Vercel is the latest enterprise to learn this the hard way.

6. AI-accelerated attackers compress response timelines. Rauch’s assessment of AI acceleration comes from what his IR team observed. CrowdStrike’s 2026 Global Threat Report puts the baseline at a 29-minute average eCrime breakout time, 65% faster than 2024.

Security director action plan

| Attack surface | What failed | Recommended action | Owner |
| --- | --- | --- | --- |
| OAuth governance | Context.ai held broad “Allow All” Workspace permissions; no approval workflow intercepted them. | Inventory every AI tool OAuth grant org-wide; revoke scopes exceeding least privilege; check both Vercel IOCs now. | Identity / IAM |
| Env var classification | Variables not marked “sensitive” remained accessible; accessibility became the escalation path. | Default to non-readable; require a security sign-off to downgrade any variable to accessible. | Platform eng + security |
| Infostealer-to-supply-chain | Kill chain spanned Lumma Stealer, Context.ai AWS, OAuth tokens, Vercel Workspace, and production environments. | Correlate infostealer intel feeds against employee domains; automate credential rotation when creds surface in stealer logs. | Threat intel + SOC |
| Vendor notification lag | Nearly a month between Context.ai detection and Vercel disclosure. | Require 72-hour notification clauses in all contracts involving OAuth or identity integration. | Third-party risk / legal |
| Shadow AI adoption | One employee’s unapproved AI tool became the breach vector for hundreds of orgs. | Extend shadow IT discovery to AI agent platforms; treat unapproved adoption as a security event. | Security ops + procurement |
| Lateral movement speed | Rauch suspects AI acceleration; the attacker compressed the access-to-escalation window. | Cut detection-to-containment SLAs below the 29-minute eCrime average. | SOC + IR team |

Run both IoC checks today

Search your Google Workspace admin console (Security > API Controls > Manage Third-Party App Access) for two OAuth App IDs.


The first is 110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com, tied to Context.ai’s Office Suite.

The second is 110671459871-f3cq3okebd3jcg1lllmroqejdbka8cqq.apps.googleusercontent.com, tied to Context.ai’s Chrome extension and granting Google Drive read access.

If either touched your environment, you are in the blast radius regardless of what Vercel discloses next.
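If you can export the list of third-party OAuth client IDs granted in your tenant (for example, from a Workspace admin audit or a CSV pulled from the console page above — the export format is an assumption here), a few lines of script will flag any match against the two reported indicators:

```python
# The two client IDs are the ones reported for this incident; everything
# else in this sketch (function name, input format) is illustrative.
CONTEXT_AI_IOCS = {
    # Context.ai Office Suite app
    "110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com",
    # Context.ai Chrome extension (Google Drive read access)
    "110671459871-f3cq3okebd3jcg1lllmroqejdbka8cqq.apps.googleusercontent.com",
}

def flag_iocs(granted_client_ids):
    """Return, sorted, the granted OAuth client IDs that match known IoCs."""
    return sorted(CONTEXT_AI_IOCS.intersection(granted_client_ids))
```

Any non-empty result means the grant existed in your environment and the account's scope history needs review.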

What this means for security directors

Forget the Vercel brand name for a moment. What happened here is the first major proof case that AI agent OAuth integrations create a breach class that most enterprise security programs cannot detect, scope, or contain. A Roblox cheat download in February led to production infrastructure access in April. Four organizational boundaries, two cloud providers, and one identity perimeter. No zero-day required.


At most enterprises, employees have connected AI tools to corporate Google Workspace, Microsoft 365, or Slack instances with broad OAuth scopes, without security teams knowing. The Vercel breach is the case study for what that exposure looks like when an attacker finds it first.


Tech

Starbucks cuts tech jobs as new CTO reshapes organization


Starbucks is cutting an unspecified number of tech jobs. (GeekWire File Photo)

Starbucks is cutting jobs in its technology organization, restructuring the team under a new chief technology officer who joined the coffee giant from Amazon four months ago.

Several affected employees posted about the cuts on LinkedIn on Tuesday afternoon, including people in program and product management and other technology-related roles. Starbucks declined to comment, and the number of people impacted remains unclear.

The Seattle Times reported on the cuts earlier today, citing an internal message in which the company told employees it was “making structural changes to move faster, sharpen focus, and ensure we are set up to deliver on our most important priorities.”  

Anand Varadarajan joined Starbucks as chief technology officer in January after 19 years at Amazon, where he most recently ran tech and supply chain for its global grocery business. 

The restructuring comes as Starbucks pushes ahead with a broader turnaround under CEO Brian Niccol, who joined in 2024. That turnaround includes a series of technology initiatives, from an AI-powered drink-ordering assistant to an algorithm that manages mobile order timing.


The cuts appear to be unrelated to the company’s Nashville expansion. Following up on a prior announcement, Starbucks said Tuesday that it will invest $100 million in the new corporate office in Tennessee that will eventually employ up to 2,000 people.


Tech

Home Depot Dropped LG Refrigerator Prices Up To 53% During Spring Black Friday Sale

We may receive a commission on purchases made from links.

Home Depot’s Spring Black Friday Sale ends on April 22, but there is still time to make some last-minute splurges on a variety of appliances and tools, including the retailer’s extensive collection of LG refrigerators. There are currently over 40 fridges on sale, with discounts of up to 53% on popular models.


The biggest discount is on the Energy Star-certified LG Counter-Depth Max, with the 53% markdown bringing it down from $3,399 to $1,599. That’s $1,800 off. This model offers 26 cubic feet of space with various compartments for storing food, a large 12.6-inch-tall ice and water dispenser, and LG’s ThinQ app to control temperature, track energy use, and check the filter status. The latter is a handy extra, making LG one of SlashGear’s favorite smart fridge brands.

Customers love the hidden handles that create an extra-sleek look, the fridge’s bright lighting, and the spacious freezer, although some find the handle-free doors and the shelf heights a bit awkward. It currently has a 4.5-star rating, which could make it a good candidate if you’re in the market for a heavily discounted fridge. It’s not the only LG fridge on sale, either: other highlights include a 29-cubic-foot Standard-Depth Max fridge at 42% off and a 28-cubic-foot three-door French door fridge at 48% off.


What is Home Depot’s Spring Black Friday Sale?

Home Depot’s Spring Black Friday Sale runs much longer than a single Friday: it started April 9 and ends April 22, the same 14-day window the retailer has used for its spring sale for over a decade. It’s not really a “Black Friday” sale given the length, but anyone with spring projects will most definitely welcome the deals.

Spring is a popular time for deals across a wide range of retailers, with the likes of Harbor Freight, Lowe’s, and Amazon also running their own seasonal discounts. This makes a lot of sense, since it’s often when people start DIY renovation projects and refresh their living space. Home Depot’s sale covers a wide range of products from major brands like DeWalt, Ryobi, Samsung, LG, Whirlpool, and Frigidaire, to name a few. Products on sale range from lawn and garden equipment — including useful gardening gadgets for spring — to patio furniture, kitchen appliances, and storage.





Tech

Latest 'Star Wars' movie cut unnecessary costs by using Apple Vision Pro


Director Jon Favreau says a specialized app let him better frame IMAX shots using a virtual theater environment in Apple Vision Pro. He cites it as one method to cut back on reshoots and reduce costs.

Apple Vision Pro could become a useful tool in filmmaking

Filmmaking has only grown more expensive, even as commercial tools make the medium more accessible. It’s easier than ever to grab a smartphone and shoot some footage, but reaching Hollywood caliber isn’t so simple.

In an interview on The Town podcast during CinemaCon, Jon Favreau discussed ways that technology is helping reduce costs in filmmaking. One of the tools he mentioned was Apple Vision Pro.
