One employee at Vercel adopted an AI tool. One employee at that AI vendor got hit with an infostealer. That combination created a walk-in path to Vercel’s production environments through an OAuth grant that nobody had reviewed.
Vercel, the cloud platform behind Next.js and its millions of weekly npm downloads, confirmed on Sunday that attackers gained unauthorized access to internal systems. Mandiant was brought in, law enforcement was notified, and investigations remain active. An update on Monday confirmed that a coordinated audit with GitHub, Microsoft, npm, and Socket found no compromise of Next.js, Turbopack, AI SDK, or any other Vercel-published npm package. Vercel also announced that environment variable creation now defaults to “sensitive.”
Context.ai was the entry point. OX Security’s analysis found that a Vercel employee installed the Context.ai browser extension and signed into it using a corporate Google Workspace account, granting broad OAuth permissions. When Context.ai was breached, the attacker inherited that employee’s Workspace access, pivoted into Vercel environments, and escalated privileges by sifting through environment variables not marked as “sensitive.” Vercel’s bulletin states that variables marked sensitive are stored in a manner that prevents them from being read. Variables without that designation were accessible in plaintext through the dashboard and API, and the attacker used them as the escalation path.
CEO Guillermo Rauch described the attacker as “highly sophisticated and, I strongly suspect, significantly accelerated by AI.” Jaime Blasco, CTO of Nudge Security, independently surfaced a second OAuth grant tied to Context.ai’s Chrome extension, matching the client ID from Vercel’s published IOC to Context.ai’s Google account before Rauch’s public statement. The Hacker News reported that Google removed Context.ai’s Chrome extension from the Chrome Web Store on March 27. Per The Hacker News and Nudge Security, that extension embedded a second OAuth grant enabling read access to users’ Google Drive files.
Patient zero: a Roblox cheat and a Lumma Stealer infection
Hudson Rock published forensic evidence on Monday, reporting that the breach origin traces to a February 2026 Lumma Stealer infection on a Context.ai employee’s machine. According to Hudson Rock, browser history showed the employee downloading Roblox auto-farm scripts and game exploit executors. Harvested credentials included Google Workspace logins, Supabase keys, Datadog tokens, Authkit credentials, and the support@context.ai account. Hudson Rock identified the infected user as a core member of “context-inc,” Context.ai’s tenant on the Vercel platform, with administrative access to production environment variable dashboards.
Context.ai published its own bulletin on Sunday (updated Monday), disclosing that the breach affects its deprecated AI Office Suite consumer product, not its enterprise Bedrock offering (Context.ai’s agent infrastructure product, unrelated to AWS Bedrock). Context.ai says it detected unauthorized access to its AWS environment in March, hired CrowdStrike to investigate, and shut down the environment. Its updated bulletin then disclosed that the scope was broader than initially understood: the attacker also compromised OAuth tokens for consumer users, and one of those tokens opened the door to Vercel’s Google Workspace.
Dwell time is the detail that should concern security directors. Nearly a month separated Context.ai’s March detection from the Vercel disclosure on Sunday. A separate Trend Micro analysis references an intrusion beginning as early as June 2024 — a finding that, if confirmed, would extend the dwell time to roughly 22 months. VentureBeat could not independently reconcile that timeline with Hudson Rock’s February 2026 dating; Trend Micro did not respond to a request for comment before publication.
Where detection goes blind
Security directors can use the breakdown below to benchmark their own detection stack against the four-hop kill chain this breach exploited.

1. Lumma Stealer infection of a Context.ai employee (detailed above).

2. Context.ai AWS environment compromise. The initial investigation did not identify OAuth token exfiltration; the scope was underestimated until the Vercel disclosure.

3. OAuth token theft into Vercel’s Google Workspace. The compromised OAuth token was used to access a Vercel employee’s Google Workspace; the employee had granted “Allow All” permissions via the Chrome extension. Detection surface: Google Workspace audit logs, OAuth app monitoring, CASB. Detection likelihood: very low, since most orgs do not monitor third-party OAuth token usage patterns. No approval workflow intercepted the grant, and no anomaly detection flagged OAuth token use from a compromised third party. This is the hop no one saw.

4. Lateral movement into Vercel production. The attacker enumerated non-sensitive environment variables, accessible via dashboard and API, and harvested customer credentials. Detection surface: Vercel platform audit logs, behavioral analytics. Detection likelihood: moderate; Vercel detected the intrusion only after the attacker accessed customer credentials. Detection occurred after exfiltration, not before, because env var access by a compromised Workspace account did not trigger real-time alerting.
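The missing alert in hop four can be expressed as a simple correlation rule: flag environment-variable reads performed by an identity that recently authorized a third-party OAuth client. The sketch below is a hedged illustration, not any vendor's actual detection logic; the event shapes are hypothetical and would need to be mapped onto your own Workspace and platform audit-log exports.

```python
"""Sketch: correlate third-party OAuth grants with env-var reads.

The tuple shapes here are assumptions for illustration, not a real
log schema -- adapt them to your own audit-log exports."""
from datetime import datetime, timedelta

# How long an OAuth grant keeps an identity "hot" for alerting purposes.
WINDOW = timedelta(hours=24)


def risky_env_reads(oauth_grants, env_reads, window=WINDOW):
    """oauth_grants: [(user, granted_at)] third-party OAuth authorizations.
    env_reads: [(user, read_at, var_name)] env-var accesses via dashboard/API.
    Returns (user, var_name) pairs where the read happened within `window`
    of that user granting a third-party OAuth app access."""
    alerts = []
    for user, read_at, var_name in env_reads:
        for grant_user, granted_at in oauth_grants:
            if grant_user == user and timedelta(0) <= read_at - granted_at <= window:
                alerts.append((user, var_name))
                break  # one matching grant is enough to flag this read
    return alerts
```

The point of the rule is the join itself: neither log stream is suspicious alone, which is why single-layer detection missed this hop.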
What’s confirmed vs. what’s claimed
Vercel’s bulletin confirms unauthorized access to internal systems, a limited subset of affected customers, and two IOCs tied to Context.ai’s Google Workspace OAuth apps. Rauch confirmed that Next.js, Turbopack, and Vercel’s open-source projects are unaffected.
Separately, a threat actor using the ShinyHunters name posted on BreachForums claiming to hold Vercel’s internal database, employee accounts, and GitHub and NPM tokens, with a $2M asking price. Austin Larsen, principal threat analyst at Google Threat Intelligence, assessed the claimant as “likely an imposter.” Actors previously linked to ShinyHunters have denied involvement. None of these claims has been independently verified.
Six governance failures the Vercel breach exposed
1. AI tool OAuth scopes go unaudited. Context.ai’s own bulletin states that a Vercel employee granted “Allow All” permissions using a corporate account. Most security teams have no inventory of which AI tools their employees have granted OAuth access to.
CrowdStrike CTO Elia Zaitsev put it bluntly at RSAC 2026: “Don’t give an agent access to everything just because you’re lazy. Give it access to only what it needs to get the job done.” Jeff Pollard, VP and principal analyst at Forrester, told Cybersecurity Dive that the attack is a reminder about third-party risk management concerns and AI tool permissions.
2. Environment variable classification is doing real security work. Vercel distinguishes between variables marked “sensitive” (stored in a manner that prevents reading) and those without that designation (accessible in plaintext through the dashboard and API). Attackers used the accessible variables as the escalation path. A developer convenience toggle determined the blast radius. Vercel has since changed its default: new environment variables now default to sensitive.
“Modern controls get deployed, but if legacy tokens or keys aren’t retired, the system quietly favors them,” Merritt Baer, CSO at Enkrypt AI and former Deputy CISO at AWS, told VentureBeat.
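A team auditing its own exposure to the classification gap described above could script the check. This is a minimal sketch: the endpoint path and the “sensitive” type value are assumptions modeled on Vercel’s public REST API shape, not a confirmed implementation, so verify both against Vercel’s current API docs before relying on it.

```python
"""Hedged sketch: list environment variables that are still readable,
i.e. not classified as "sensitive". The API path and the "type" field
name are assumptions, not confirmed against current Vercel docs."""
import json
import urllib.request

API_URL = "https://api.vercel.com/v9/projects/{project}/env"  # assumed path


def non_sensitive(envs):
    """Keep the records whose type is anything other than "sensitive"."""
    return [e for e in envs if e.get("type") != "sensitive"]


def audit_project(project, token):
    """Fetch a project's env vars and return the readable ones."""
    req = urllib.request.Request(
        API_URL.format(project=project),
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        envs = json.load(resp).get("envs", [])
    return non_sensitive(envs)
```

Anything `audit_project` returns is a candidate for the new sensitive default; flagging rather than auto-converting keeps the change reviewable by the owning team.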
3. Infostealer-to-SaaS-to-supply-chain escalation chains lack detection coverage. Hudson Rock’s reporting reveals a kill chain that crossed four organizational boundaries. No single detection layer covers that chain. Context.ai’s updated bulletin acknowledged that the scope extended beyond what was initially identified during its CrowdStrike-led investigation.
4. Dwell time between vendor detection and customer notification exceeds attacker timelines. Context.ai detected the AWS compromise in March. Vercel disclosed on Sunday. Every CISO should ask their vendors: what is your contractual notification window after detecting unauthorized access that could affect downstream customers?
5. Third-party AI tools are the new shadow IT. Vercel’s bulletin describes Context.ai as “a small, third-party AI tool.” Grip Security’s March 2026 analysis of 23,000 SaaS environments found a 490% year-over-year increase in AI-related attacks. Vercel is the latest enterprise to learn this the hard way.
6. AI-accelerated attackers compress response timelines. Rauch’s assessment of AI acceleration comes from what his IR team observed. CrowdStrike’s 2026 Global Threat Report puts the baseline at a 29-minute average eCrime breakout time, 65% faster than 2024.
Security director action plan
1. OAuth governance. What failed: Context.ai held broad “Allow All” Workspace permissions, and no approval workflow intercepted the grant. Recommended action: inventory every AI tool OAuth grant org-wide, revoke scopes exceeding least privilege, and check both Vercel IOCs now. Owner: identity / IAM.

2. Environment variable classification. What failed: variables not marked “sensitive” remained accessible, and that accessibility became the escalation path. Recommended action: default to non-readable and require a security sign-off to downgrade any variable to accessible. Owner: platform engineering + security.

3. Infostealer-to-supply-chain escalation. What failed: the kill chain spanned Lumma Stealer, Context.ai’s AWS environment, OAuth tokens, Vercel’s Google Workspace, and production environments. Recommended action: correlate infostealer intel feeds against employee domains, and automate credential rotation when credentials surface in stealer logs. Owner: threat intel + SOC.

4. Vendor notification lag. What failed: nearly a month passed between Context.ai’s detection and Vercel’s disclosure. Recommended action: require 72-hour notification clauses in all contracts involving OAuth or identity integration. Owner: third-party risk / legal.

5. Shadow AI adoption. What failed: one employee’s unapproved AI tool became the breach vector for hundreds of orgs. Recommended action: extend shadow IT discovery to AI agent platforms, and treat unapproved adoption as a security event. Owner: security ops + procurement.

6. Lateral movement speed. What failed: the attacker, who Rauch suspects was AI-accelerated, compressed the access-to-escalation window. Recommended action: search your Google Workspace admin console (Security > API Controls > Manage Third-Party App Access) for two OAuth App IDs.

The first is 110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com, tied to Context.ai’s Office Suite. The second is 110671459871-f3cq3okebd3jcg1lllmroqejdbka8cqq.apps.googleusercontent.com, tied to Context.ai’s Chrome extension and granting Google Drive read access. If either touched your environment, you are in the blast radius regardless of what Vercel discloses next.
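The IOC check above can also be automated against an export of Workspace token-audit events. In this sketch the event shape (an "events" list whose entries carry "parameters" with a "client_id") loosely follows the Admin SDK Reports API's token activity format; treat the field names as assumptions and adjust them to match your actual export.

```python
"""Sketch: scan exported Google Workspace token-audit events for the two
Context.ai OAuth client IDs published as IOCs. Field names ("events",
"parameters", "client_id") are assumptions -- verify them against your
Reports API export before use."""
import json

IOC_CLIENT_IDS = {
    "110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com",
    "110671459871-f3cq3okebd3jcg1lllmroqejdbka8cqq.apps.googleusercontent.com",
}


def load_events(path):
    """Load a JSON array of audit events exported from the admin console."""
    with open(path) as f:
        return json.load(f)


def event_client_id(event):
    """Pull the OAuth client_id parameter out of one audit event, if present."""
    for ev in event.get("events", []):
        for param in ev.get("parameters", []):
            if param.get("name") == "client_id":
                return param.get("value")
    return None


def matching_events(events):
    """Return the events whose OAuth client_id is one of the published IOCs."""
    return [e for e in events if event_client_id(e) in IOC_CLIENT_IDS]
```

Any hit from `matching_events` means a user in your tenant authorized one of the flagged apps and warrants token revocation and session review.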
What this means for security directors
Forget the Vercel brand name for a moment. What happened here is the first major proof case that AI agent OAuth integrations create a breach class that most enterprise security programs cannot detect, scope, or contain. A Roblox cheat download in February led to production infrastructure access in April. Four organizational boundaries, two cloud providers, and one identity perimeter. No zero-day required.
At most enterprises, employees have already connected AI tools to corporate Google Workspace, Microsoft 365, or Slack instances with broad OAuth scopes, without security teams knowing. The Vercel breach is the case study for what that exposure looks like when an attacker finds it first.
Charging is done via the same type of power adapter that the Sora 70 uses: a proprietary, blocklike connector that slides into a hatch on the rear of the device. A hinged port cover opens automatically when you slide the adapter into it and snaps shut when it’s removed. It’s not as convenient as a plugless charging dock, but it’s close, obviating the need for screw-on port covers or other waterproofing systems that have to be manually manipulated.
Beatbot app screenshot via Chris Null
In the water, the unit offers a scant three operational modes: floor mode, standard mode (which handles floor, wall, and waterline), and eco mode (which runs a floor-only cleaning for 45 minutes every 48 hours). Both floor and standard mode offer three running-time options: two hours, three hours, or max (i.e., run until the battery’s almost dead). These can all be selected through the Beatbot app, which connects via Bluetooth or either 2.4 GHz or 5 GHz Wi-Fi. You’ll also need to set up Wi-Fi for firmware updates.
A Capable Cleaner
I spent the better part of a week testing the Sora 30 with both organic and synthetic debris and found the robot to be quite capable. Contrary to expectations, I encountered no issues with even heavier debris days, and the Sora 30 was able to suck up leaves and dirt with an average 95 percent coverage rate. It worked reasonably well on steps and platforms and is rated to run in water as shallow as 8 inches. Note that there’s no artificial intelligence or a camera that can detect debris on the fly here. This robot just goes back and forth the best it can, which turns out to be pretty good.
The only performance struggles I witnessed were in a single sharp corner area near the pool’s steps, where debris seemed to be pushed aside, unable to be effectively collected. In fact, all of the uncollected material in my test runs would inevitably end up in this one location. (The good news is that this was in the shallow end, making it easy to scoop up with a net.) It’s tough to say whether truly massive amounts of debris or larger items like twigs and branches would impact its operation to the degree the box suggests, but nothing I saw suggested this poolbot was significantly less powerful than most other devices on the market, especially in its price band.
In 2021, I was a demoralized educator: not burnt out, but demoralized. As I shared in my first article for EdSurge, demoralization occurs when teachers “encounter consistent and pervasive challenges to enacting the values that motivate their work.”
That year, the pervasive challenges seemed obvious and communal. We were all navigating online platforms, figuring out how to replicate student services virtually and struggling to make up for lost time in instruction, social-skill development and relationship-building for when students returned to in-person schooling.
“A crisis is not merely an event: it’s the context in which an event takes place and the response to that event.” The global pandemic has ended, but how much has the context changed, and did the response meet the needs?
Right now, I believe teaching is the most important thing we can do. When the world is on fire, what feels most pressing is teaching students to claim their humanity and helping educators understand how much the communal learning experience matters. Five years later, I have come full circle.
This time, I return to that same claim with a broader and deeper understanding of what makes a school. We use that old adage, “It takes a village…” More and more, I see that we, as school communities, are the village and the villagers that we need right now. What really makes a school more human is not just the principals and teachers, but the child welfare staff, paraeducators, campus supervisors, guidance counselors, cafeteria workers, coaches, librarians, custodians and secretaries. The list is long, but it feels necessary to name the people on campus who make students feel like they belong, support them and have their backs when students need it. These are the colleagues who have shown me what it is like to truly model humanity to our students.
The truth is that the onus is on all of us to create an environment in which mutual respect and empathy are the baseline expectations. So, as an instructional coach, as a leader and as a voice of change in this context, what can I do? How do I communicate to teachers that, while they have been beaten down and blamed for society’s ills, they also have the herculean task of helping students learn how to be human together?
In 2021, I said that I was demoralized. In 2026, I am revitalized and committed to my role as an educator, instructional coach and teacher advocate.
Since participating in the inaugural cohort of the Voices of Change fellowship, I have contributed essays to The California Educator, Edutopia and EdSurge. I have joined podcast panels to talk about social-emotional learning, culturally responsive teaching and civil discourse in the classroom.
This fellowship showed me the power of personal writing for representation and advocacy. I have started to write children’s books about my own neurodivergent children. I have presented at local and state conferences and will continue to use my voice and my words to advocate for students, for educators, for quality professional development and schools that model the best of humanity. Writing for the Voices of Change fellowship has helped me claim my voice, my humanity and my power.
BrianFagioli writes: Mozilla says it used an early version of Anthropic’s Claude Mythos Preview to comb through Firefox’s code, and the results were hard to ignore. In Firefox 150, the team fixed 271 vulnerabilities identified during this effort, a number that would have been unthinkable not long ago. Instead of relying only on fuzzing tools or human review, the AI was able to reason through code and surface issues that typically require highly specialized expertise.
The bigger implication is less about one release and more about where this is heading. Security has long favored attackers, since they only need to find a single flaw while defenders have to protect everything. If AI can scale vulnerability discovery for defenders, that dynamic could start to shift. It does not mean zero days disappear overnight, but it suggests a future where bugs are found and fixed faster than attackers can weaponize them. “Computers were completely incapable of doing this a few months ago, and now they excel at it,” says Mozilla in a blog post. “We have many years of experience picking apart the work of the world’s best security researchers, and Mythos Preview is every bit as capable. So far we’ve found no category or complexity of vulnerability that humans can find that this model can’t.”
The company concluded: “The defects are finite, and we are entering a world where we can finally find them all.”
It could enable larger AI models on lower-powered devices
Anker is getting into the silicon business, specifically building a CIM (Compute In Memory) solution that will support onboard large model processing inside tiny, low-powered Bluetooth earbuds.
THUS is Anker’s first step in a long-term plan to bring local, large-model AI to mobile, wearable, and IoT devices. The chip relies on neural-network-style computing, eschewing the traditional architecture in which the CPU processes commands based on data and instructions it fetches from memory. Shuttling data back and forth between the two is an energy-intensive process. Neural networks, like the human brain, don’t really respect that division, and letting everything work in one place saves considerable energy. That’s why CIM is attractive to Anker as a way to bring more powerful AI to its small-battery, low-powered devices.
Basically, THUS, which is being fabbed in Germany, performs its computations inside NOR flash memory cells, which are known for their low-power operation; they’re slower than traditional memory for writing data but actually faster than NAND memory for reading operations.
By storing the models the AI needs in the same place the computation happens, THUS could conceivably lower power consumption and, Anker claims, make it possible to fit larger models into devices whose tiny batteries would normally rule them out (at least based on traditional energy needs).
The first platform will be a pair of as-yet-unnamed Bluetooth earbuds in which THUS will support more powerful environmental noise cancellation than was possible with traditional on-board AIU platforms. A larger on-bud model means the AI can more effectively cut out unwanted noise for better call clarity. Anker will call the feature, naturally, Clear Calls.
The chip will also add a pair of other features, “Signature Sound” and “Voice Control,” though Anker didn’t offer any further details on these features in our briefing. What we do know is that Anker will reveal all the details about its first THUS-bearing headphones on May 21, 2026.
Thinking in memory
CIM (also known as “in-memory compute”) isn’t a new concept, but it has been widely ignored by most chip designers (some wonder if “it’s still alive”) and certainly by most of those building ever-larger models for bigger, more powerful, and more agentic AI operations.
Still, if Anker, which says it’s not becoming a chip company, succeeds, it could be a big moment for all kinds of low-powered devices, which have traditionally relied on cloud-based AI and the larger models they can house there.
Imagine smarter smart watches. Even smartphones could be impacted if other companies, like say, Apple, adopt CIM technologies for future Apple Silicon builds.
Oppo has just unveiled its flagship smartphone that’s “engineered to be your next camera”.
The Oppo Find X9 Ultra is fitted with “groundbreaking” lenses and “industry-leading hardware”, but how does it stack up against the 4.5-star Oppo Find X9 Pro, which we concluded delivers a top-notch camera experience and has a spot on our best camera phones list?
We’ve compared the specs of the Oppo Find X9 Ultra against the Find X9 Pro, and highlighted the key differences between the two below.
The Find X9 Ultra is the first of Oppo’s Ultra models to launch globally.
The Find X9 Pro is available to buy now, and has a starting price of £1099. However, we have seen the phone’s price drop over the last few months, so it’s worth keeping an eye out for deals.
Oppo Find X9 Ultra has five rear lenses
Oppo explains that the Find X9 Ultra is fitted with a new-generation Hasselblad Master Camera System, which promises versatile, high-quality framing spanning from 14mm to 460mm.
Of the Find X9 Ultra’s five rear lenses, two are Hasselblad 200MP units. The first is the Ultra-Sensing main lens, which features the new 1/1.12-inch Sony LYTIA 901 sensor, while the second is a 3x ultra-sensing telephoto that boasts the largest sensor of its type (1/1.28-inch) and doubles as a macro lens.
Oppo Find X9 Ultra. Image Credit (Oppo)
The two 200MP lenses are supported by two 50MP cameras: an ultrawide and a 10x Ultra-Sensing Optical-Zoom telephoto. In fact, the latter benefits from an industry-first 20x optical zoom too.
Finally, the four lenses are flanked by a new-gen True Color Camera which promises natural colour rendition.
In comparison, the Find X9 Pro is equipped with a 50MP main lens which, sure, sounds pretty measly next to a 200MP alternative, but it captures plenty of detail and offers impressive low-light performance too.
Oppo Find X9 Pro. Image Credit (Trusted Reviews)
This is paired with a 200MP telephoto lens that can reach up to a whopping 120x zoom. While that will come at the expense of detail, Oppo’s Hasselblad Teleconverter attachment aims to fix this issue – and we’ll explain more below.
Both have supporting teleconverter attachments – but there’s a difference
Both the Find X9 Ultra and X9 Pro can be equipped with their own teleconverter attachments. With the Pro, the Teleconverter twists onto the 200MP telephoto lens and enables impressive zoom without compromising on quality. While it’s certainly not the most subtle of accessories, we were still impressed by its performance.
Oppo Find X9 Ultra attachment. Image Credit (Oppo)
Oppo has also created a similar 300mm Teleconverter lens for the X9 Ultra, which mounts to the 200MP 3x telephoto sensor. According to Oppo, this attachment will allow photographers to retain sharp detail at “30x and beyond”. That’s a bold claim, and one we’re keen to try out for ourselves.
Oppo Find X9 Ultra runs on Snapdragon 8 Elite Gen 5
Photography ability aside, one of the key differences between the Find X9 Ultra and X9 Pro is their respective chips. While the Pro runs on MediaTek’s Dimensity 9500, the Ultra is powered by Qualcomm’s Snapdragon 8 Elite Gen 5.
We found during our review of the Find X9 Pro that its Dimensity 9500 chip, combined with Oppo’s Luminous Rendering Engine, enabled the flagship to fly through everyday use while feeling rapid and responsive too. In addition, although it isn’t a dedicated gaming phone, it still had no issue running titles such as Call of Duty Mobile.
Oppo Find X9 Pro. Image Credit (Trusted Reviews)
However, Snapdragon 8 Elite Gen 5 is a tough competitor to beat. The chip is not only behind many of the best Android phones, but it can handle everything from casual tasks to generative AI tasks and gaming with ease. Having said that, we’d argue that most users will be unlikely to notice much of a difference between the chips in everyday use.
Oppo Find X9 Pro has a larger battery
With a mighty 7500mAh cell, the Find X9 Pro has one of the largest batteries found in any smartphone. This translates to comfortably being a two-day handset, although remember this will depend on your own usage. For example, we found that on days where we really pushed the phone’s limits, the handset couldn’t quite make it through a full second day.
Although it’s not quite as large, the Find X9 Ultra is still fitted with a whopping 7050mAh battery, which promises to ensure “reliable, all-day content creation”.
It’s worth pointing out that, although the Find X9 Pro’s battery is larger than the X9 Ultra’s own, both do boast pretty generous capacities. Considering the likes of the Samsung Galaxy S26 Ultra and Google Pixel 10 Pro XL max out at 5000mAh and 5200mAh respectively, Oppo’s Find series are certainly not to be sniffed at.
Oppo Find X9 Ultra comes in a familiar orange shade
Although both only come in a choice between two shades, they differ with their exact offerings. While the Find X9 Pro comes as Titanium Carbon or Silk White, the Find X9 Ultra is available in either Tundra Umber or Canyon Orange.
Regardless of the colour you choose, both the X9 Ultra and X9 Pro sport IP66, IP68 and IP69 ratings which means the handsets can withstand water submersion and even high pressure and high temperature water jets too.
Early Verdict
Although the Oppo Find X9 Pro is easily one of the best camera phones we’ve reviewed, the Find X9 Ultra looks like a promising alternative for those who need even more versatility and shooting modes to play with. With a whopping five rear cameras and Qualcomm’s Snapdragon 8 Elite Gen 5 chip at play, the Oppo Find X9 Ultra is undoubtedly a promising handset for the keen photographer.
Microsoft has released out-of-band (OOB) security updates to patch a critical ASP.NET Core privilege escalation vulnerability.
The security flaw (tracked as CVE-2026-40372) was found in the ASP.NET Core Data Protection cryptographic APIs, and it could allow unauthenticated attackers to gain SYSTEM privileges on affected devices by forging authentication cookies.
Microsoft discovered the flaw following user reports that decryption was failing in their applications after installing the .NET 10.0.6 update release during this month’s Patch Tuesday.
“A regression in the Microsoft.AspNetCore.DataProtection 10.0.0-10.0.6 NuGet packages causes the managed authenticated encryptor to compute its HMAC validation tag over the wrong bytes of the payload and then discard the computed hash in some cases,” Microsoft says in the .NET 10.0.7 release notes.
“In these cases, the broken validation could allow an attacker to forge payloads that pass DataProtection’s authenticity checks, and to decrypt previously-protected payloads in auth cookies, antiforgery tokens, TempData, OIDC state, etc.
“If an attacker used forged payloads to authenticate as a privileged user during the vulnerable window, they may have induced the application to issue legitimately-signed tokens (session refresh, API key, password reset link, etc.) to themselves. Those tokens remain valid after upgrading to 10.0.7 unless the DataProtection key ring is rotated.”
As Microsoft further explained in a Tuesday security advisory, this vulnerability can also enable attackers to disclose files and modify data, but they cannot impact the system’s availability.
On Tuesday, senior program manager Rahul Bhandari warned all customers whose applications use ASP.NET Core Data Protection to update the Microsoft.AspNetCore.DataProtection package to 10.0.7 as soon as possible, then redeploy to fix the validation routine and ensure that any forged payloads are rejected automatically.
More information regarding affected platforms, packages, and application configuration can be found in the original announcement.
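For teams running many services, the package check can be scripted. The sketch below scans SDK-style .csproj content for Microsoft.AspNetCore.DataProtection references pinned inside the vulnerable 10.0.0–10.0.6 range; it assumes standard MSBuild PackageReference elements without XML namespaces, and is an illustration rather than Microsoft tooling.

```python
"""Sketch: flag .csproj files still pinned to vulnerable
Microsoft.AspNetCore.DataProtection versions (10.0.0 through 10.0.6,
per Microsoft's .NET 10.0.7 release notes). Assumes SDK-style project
files whose PackageReference elements carry no XML namespace."""
import re
import xml.etree.ElementTree as ET

PACKAGE = "Microsoft.AspNetCore.DataProtection"


def is_vulnerable(version):
    """True for 10.0.0-10.0.6; the fix shipped in 10.0.7."""
    m = re.fullmatch(r"10\.0\.(\d+)", version)
    return bool(m) and int(m.group(1)) <= 6


def vulnerable_refs(csproj_xml):
    """Return (package, version) pairs in a csproj that need the 10.0.7 update."""
    root = ET.fromstring(csproj_xml)
    hits = []
    for ref in root.iter("PackageReference"):
        name = ref.get("Include", "")
        version = ref.get("Version", "")
        if name.startswith(PACKAGE) and is_vulnerable(version):
            hits.append((name, version))
    return hits
```

Note that patching alone is not enough: per Microsoft's guidance quoted above, the DataProtection key ring should also be rotated so tokens forged during the vulnerable window stop validating.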
In October, Microsoft also patched an HTTP request smuggling bug (CVE-2025-55315) in the Kestrel web server that was flagged with the “highest ever” severity rating for an ASP.NET Core security flaw.
Successful exploitation of CVE-2025-55315 enables authenticated attackers to either hijack other users’ credentials, bypass front-end security controls, or crash the server.
Invincible season 4 episode 8 has landed on Prime Video — and, hoo boy, if Mark Grayson thought he had it bad already, nothing can prepare him (or you, for that matter) about what’s to come after that decision he’s just made.
If you’re here, I’m guessing you’ve seen the Amazon TV Original’s latest finale and have big questions about what you just watched. Luckily for you, I’m a huge Invincible nerd, so I’m perfectly placed to answer them.
Haven’t watched this chapter, titled ‘Don’t Leave Me Hanging Here’, but somehow stumbled upon this article? Consider this your one and only warning: full spoilers immediately follow for Invincible season 4’s finale.
Do Thragg and his fellow Viltrumites go to Earth in the Invincible season 4 finale?
Season 4 episode 8’s cold open isn’t what you think it is (Image credit: Prime Video)
In short: yes — but the revelation that they’re now secretly living among Earth’s population is withheld until this episode’s final minutes.
Initially, it seems they’ve arrived on Earth with the sole intention of doing what Mark and the Coalition of Planets (CoP) did to the Viltrumites’ home world — that being, completely destroying it. You can remind yourself about that in my Invincible season 4 episode 7 ending explainer.
Anyway, season 4 episode 8 opens with a cataclysmic event that sees Thragg and the remaining Viltrumites attack Earth, and leave countless dead and wanton destruction in their wake.
It’s soon revealed, though, to be a misdirect. Indeed, said scenario is just a nightmarish vision that a traumatized Mark creates in his own mind aboard the interstellar starship he’s travelling home on. He imagines similar incidents throughout this episode, too, including Thragg killing his mom Debbie, girlfriend Eve, and Global Defence Agency (GDA) chief Cecil Stedman. All of them contribute to him having panic attacks and, eventually, seeking therapy through the GDA.
But I digress. Upon the space vessel’s arrival near Earth, Mark’s dad Nolan and Zoe/Tech Jacket start to formulate a plan in case Thragg and his forces are present on Earth. However, fearing for his mom and Eve, Mark impulsively leaves the ship and heads planetside alone. One quick but fear-fuelled recon mission later, though, and it appears that the Viltrumites haven’t traveled to Mark’s home planet. Phew!
Why does Mark let Thragg and the Viltrumites stay on Earth in Invincible season 4’s finale?
Err, nice to see you again, Thragg? (Image credit: Prime Video)
Or so we’re led to believe. In the final minutes of the Prime Video show’s latest finale, Eve persuades Mark to take a flight to clear his head.
However, upon said excursion, he’s stopped in his tracks by what he thinks is a hallucination of Thragg. Closing his eyes, Mark takes some deep breaths to compose himself but, upon opening them again, realizes Thragg isn’t a figment of his imagination — he’s really there.
This didn’t work last time, Mark, so why would it work now? (Image credit: Prime Video)
Mark launches himself at Thragg but, just like in Invincible season 4 episode 7, his punches do no damage. Thragg soon starts dodging Mark’s increasingly wild attacks with ease before effortlessly pushing him away.
Mark lines up another strike, but Thragg bellows at him to stop, which Mark does. In the incredibly tense chat that follows, Thragg informs Mark that he hasn’t done anything to Earth… yet. Mark angrily asks what Thragg wants, to which the Viltrumites’ Grand Regent replies that, upon his coronation, he was tasked with leading his people out of the darkness so they could thrive among the stars — a mission that, clearly still weighing heavily on him, Thragg admits hasn’t been easy.
I wouldn’t keep hollering at Anissa if I were you, guys… (Image credit: Prime Video)
Then comes the kicker. Thragg reveals only 37 Viltrumites remain, but even that minuscule number would be enough to “tear Earth apart” and be “fair payment” for Viltrum’s own destruction.
Continuing, Thragg gives Mark an ultimatum. In a voiceover accompanying scenes of Luccan, Anissa, and Krieg secretly living among humanity, Thragg tells Mark to let the remaining Viltrumites stay and breed with humans to prevent the Viltrumite race’s extinction. Do so, and Earth and its inhabitants won’t be harmed. However, if Mark or the CoP get in their way, billions will die and those who survive will be forced to eke out a miserable life under Thragg’s authoritarian rule.
Until next time, Thragg… (Image credit: Prime Video)
An indignant Mark starts to say he’ll never accept Thragg’s truce but, as the previously mentioned hallucinations, plus a soul-calming memory of Eve smiling at him, flash before his eyes, he reluctantly agrees to Thragg’s proposal.
Surprised, Thragg admits it’s strange how the universe works, adding that, “willing or not,” he didn’t expect Mark to be his species’ savior. As Thragg prepares to leave, Mark says to himself, “What have I done?”. Thragg hears him and, turning back to Mark, says “you just saved the lives of every person on this planet”. Thragg departs, leaving a despairing Mark floating alone in the sky.
Did Eve get an abortion in Invincible season 4?
Eve finally tells Mark she was pregnant in the season 4 finale (Image credit: Prime Video)
Yes, but not straight away — and there’s an emotionally devastating addendum to this storyline.
After visiting Debbie to tell her that the severely injured Oliver is being treated back on Talescria, Mark leaves to see Eve, who tearfully greets him because she’d started to think he’d died. It’s been months since he left to take part in the Viltrumite War, so I don’t blame her.
Anyway, following some long-overdue, erm, lovemaking, Eve mentions that, as Mark (and, by proxy, viewers) can see, she’s put on some weight. She blames that on living with her parents and overeating in his absence, and Mark replies that he couldn’t care less about her weight gain. Eve also reveals that her powers have miraculously returned, though she’s constantly worried that they’ll stop working again.
Anybody else well up during this scene, too? (Image credit: Prime Video)
Later, when the pair are sitting on the Grayson household’s roof, though, Eve comes clean. She tells him that she knows why she lost her matter manipulation abilities: as we learned in season 4 episode 3, she’d fallen pregnant. Tearing up, she adds that, without Mark around, she felt so alone and, if he had died, she’d have been scared about potentially raising a child on her own. Long story short: she had an abortion.
Visibly moved, Mark tries to process everything Eve’s just told him. However, upon realizing that she’s had to carry this burden alone for months, he quickly turns his attention to Eve and, while crying himself, hugs his clearly distraught girlfriend to reassure her that everything will be okay.
Does Debbie forgive Nolan in Invincible’s season 4 finale? And why does she go to space?
Season 4 episode 8 indicated that, one day, Debbie might finally forgive Nolan (Image credit: Prime Video)
Let’s start with the first question: no, but there’s a clear hint that her stance has started to soften and that she might one day forgive Nolan.
Before Nolan heads back to Talescria to be with Oliver and aid the CoP’s efforts to find the remaining Viltrumites, he visits Debbie again. He tells her how brave Oliver was, and that he and Mark did her proud. Replying, Debbie rebukes Nolan for letting Oliver get hurt before chastising him once more for trying to make amends for what he did in Invincible‘s season 1 finale.
As she prepares to head back into her home, Nolan flies in front of her. Re-expressing his deep regret for the devastation he caused in season 1 episode 8, he also reiterates he’s trying to change and begs Debbie to let him show her that he deserves a second chance — something she’s long believed everyone is entitled to. Somewhat taken aback, Debbie re-composes herself, tells Nolan he can’t stay in the same house as her, and walks away.
Will the Graysons’ rift be healed by this space adventure? (Image credit: Prime Video)
Later, Debbie complains to Paul who, it’s revealed, she’s no longer romantically involved with. Despite the pair’s separation, he surprisingly advises her to go with Nolan to Talescria to be with Oliver, adding that this world of superheroes, villains, and extraterrestrial worlds is her life as much as Mark, Nolan, and Oliver’s — she just “doesn’t see it yet.”
Fast-forward to Nolan’s departure, and Debbie shocks him, Mark, and Eve by saying she’s decided to follow Paul’s advice to be at Oliver’s side. She tells Nolan to call down the spaceship, but he informs her that it can’t land anywhere. Reluctantly, Debbie agrees to let Nolan fly her to said space vessel where they share a tender moment looking out onto planet Earth. D’aww!
Invincible season 4 episode 8’s mid-credits scene: what does it tell us about the Scourge Virus?
Don’t do it, Allen… (Image credit: Prime Video)
In this episode’s one and only mid-credits scene, Telia hands Allen, who’s now the CoP’s leader, a tablet with a posthumous video message from Thaedus. In the event of the latter’s death, said footage was to be passed on to Allen, so he hits play.
He’s probably wishing he didn’t. As the recording progresses, Thaedus shockingly reveals that he created a perfected form of the Scourge Virus. That’s the pathogen he made decades ago to wipe out the Viltrumites; while it killed billions, it didn’t eradicate them all.
With the tyrannical species surviving the Viltrumite War, Thaedus gives Allen a single mission: no matter the cost, use the far deadlier strain to kill every single living Viltrumite. Unfortunately for Allen, that would include Mark, Nolan, and Oliver, who’ve allied themselves with the CoP, and who he considers close friends.
Not exactly the dilemma that Allen probably wanted as he tries to get his feet under the leadership table, but will he go through with it? We’ll have to wait until next season to find out. Speaking of which…
Has Invincible season 5 been announced yet?
Don’t look so surprised, Mark — season 5 was an inevitability (Image credit: Prime Video)
Oh, haven’t you heard? Invincible season 5 is expected to be released sometime in early 2027, with co-creator Robert Kirkman indicating it could drop between February and April 2027. All but one of the adult animated series’ installments — Invincible season 2 part 1 — have come out around March, so don’t be stunned if next season does likewise.
A group of unauthorized users has reportedly gained access to Mythos, the cybersecurity tool recently announced by Anthropic.
Much has been made of Mythos and its purported power — an AI product designed for enterprise security that, in the wrong hands, could become a potent hacking tool, according to the company. Now Bloomberg has reported that a “private online forum,” the members of which have not been publicly identified, has managed to gain access to the tool through a third-party vendor.
“We’re investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments,” an Anthropic spokesperson told TechCrunch. The company said that, so far, it has found no evidence that the supposedly unauthorized activity has impacted Anthropic’s systems in any way.
The unauthorized group tried a number of different strategies to gain access to the model, including using “access” enjoyed by the person who was interviewed by Bloomberg. That person is currently employed at a third-party contractor that works for Anthropic, the outlet reported.
Members of the group are part of a Discord channel that seeks out information about unreleased AI models, the outlet reported. The group has been using Mythos regularly since gaining access to it, and provided evidence to Bloomberg in the form of screenshots and a live demonstration of the software.
Bloomberg reports that the group, which supposedly gained access to the tool on the same day it was publicly announced, “made an educated guess about the model’s online location based on knowledge about the format Anthropic has used for other models.” The group in question is “interested in playing around with new models, not wreaking havoc with them,” the source told the outlet.
Mythos was released to a select number of vendors, including big names like Apple, as part of an initiative called Project Glasswing. The limited release of the model was designed to prevent its use by bad actors. The tool could be weaponized against corporate security instead of bolstering it, Anthropic said.
If true, unauthorized use of Mythos could spell trouble for Anthropic, which limited the release precisely to address its own concerns about enterprise security.
Summary: Lovable, the $6.6 billion vibe coding platform with eight million users, has faced three documented security incidents exposing source code, database credentials, and thousands of user records, with the most recent BOLA vulnerability left open for 48 days after the company closed a bug bounty report without escalation. The incidents are representative of a structural problem across vibe coding: 40-62% of AI-generated code contains vulnerabilities, 91.5% of vibe-coded apps had at least one AI hallucination-related flaw in Q1 2026, and the market’s incentive structure rewards growth over security at a moment when 60% of all new code is projected to be AI-generated by year end.
Lovable, the vibe coding platform valued at $6.6 billion with eight million users, has spent the past two months dealing with security incidents that collectively exposed source code, database credentials, AI chat histories, and the personal data of thousands of users across projects built on its platform. The most recent disclosure, published on 20 April by a security researcher, revealed a broken object-level authorisation vulnerability in Lovable’s API that allowed anyone with a free account to access another user’s profile, public projects, source code, and database credentials in as few as five API calls. The researcher reported the flaw to Lovable’s bug bounty programme on 3 March. Lovable patched it for new projects but never fixed it for existing ones, marked a follow-up report as a duplicate, and closed it. As of reporting, the vulnerability had been open for 48 days.
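The escalation path described above — any authenticated account retrieving another user’s project by ID — is the textbook broken object-level authorisation (BOLA) pattern. A minimal Python sketch of the flaw and its fix, using entirely hypothetical data and function names (nothing here reflects Lovable’s actual API):

```python
# Hypothetical in-memory "API" illustrating the BOLA pattern.
PROJECTS = {
    "proj-123": {"owner": "alice", "source": "...", "db_url": "postgres://..."},
}

def get_project_broken(requesting_user: str, project_id: str) -> dict:
    """Vulnerable: returns any project by ID without checking ownership."""
    return PROJECTS[project_id]

def get_project_fixed(requesting_user: str, project_id: str) -> dict:
    """Fixed: the server verifies the caller owns the object before returning it."""
    project = PROJECTS[project_id]
    if project["owner"] != requesting_user:
        raise PermissionError("not your project")
    return project
```

The broken variant is exactly what a researcher probes for: enumerate or guess object IDs, then check whether the API enforces ownership server-side rather than merely hiding links in the UI.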
Lovable’s response followed a pattern that security researchers found more telling than the vulnerability itself. The company first posted on X that it “did not suffer a data breach,” calling the exposed data “intentional behaviour.” It then blamed its own documentation, saying that what “public” implies “was unclear.” It then blamed its bug bounty partner HackerOne, saying reports were “closed without escalation because our HackerOne partners thought that seeing public projects’ chats was the intended behaviour.” Later that day, it issued a partial apology acknowledging that “pointing to documentation issues alone was not enough.” Cybernews headlined its coverage: “Lovable goes on ego trip denying vulnerability, then blames others for said vulnerability.”
What was exposed
The April incident affected projects created before November 2025. The researcher demonstrated that extracting a user’s source code from Lovable’s API also yielded hardcoded Supabase database credentials embedded in that code. One affected project belonged to Connected Women in AI, a Danish nonprofit. Its exposed data contained real user records including names, job titles, LinkedIn profiles, and Stripe customer IDs, with records linked to individuals at Accenture Denmark and Copenhagen Business School. Employees at Nvidia, Microsoft, Uber, and Spotify reportedly have Lovable accounts tied to affected projects.
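Hardcoded credentials of this kind are straightforward to detect mechanically. A simplified Python sketch of the sort of pattern scan that would flag an embedded Supabase URL or JWT-shaped key in extracted source — the two regexes are illustrative assumptions; real secret scanners such as gitleaks or trufflehog ship far larger rule sets:

```python
import re

# Illustrative patterns only: a Supabase project URL and a JWT-shaped token.
SECRET_PATTERNS = [
    re.compile(r"https://[a-z0-9]+\.supabase\.co"),
    re.compile(r"eyJ[A-Za-z0-9_-]{20,}\.[A-Za-z0-9_-]{10,}"),
]

def find_hardcoded_secrets(source: str) -> list[str]:
    """Return every substring of `source` matching a known secret pattern."""
    hits: list[str] = []
    for pattern in SECRET_PATTERNS:
        hits.extend(pattern.findall(source))
    return hits
```

Running a scan like this in CI before code is published is a cheap guardrail; the durable fix is keeping credentials in server-side environment variables rather than in generated client code at all.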
This was the third documented security incident involving the platform. In February, a tech entrepreneur named Taimur Khan found 16 vulnerabilities, six of them critical, in a single app hosted on Lovable and featured on its own Discover page with more than 100,000 views. The most severe was an inverted authentication logic that granted anonymous users full access while blocking authenticated users. The app, an AI-powered EdTech tool, exposed 18,697 user records including 4,538 student accounts from institutions including UC Berkeley and UC Davis, with minors likely on the platform. Khan reported his findings through Lovable’s support channel. His ticket was closed without a response.
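An “inverted authentication logic” bug is easy to picture: a single negated condition admits exactly the wrong users. A minimal, hypothetical Python sketch of the class of flaw described above (not Khan’s actual finding):

```python
from typing import Optional

def is_authorized_broken(user: Optional[str]) -> bool:
    """Bug: the check is inverted, so anonymous callers (user is None)
    pass while authenticated users are rejected."""
    return user is None

def is_authorized_fixed(user: Optional[str]) -> bool:
    """Correct: only authenticated callers are admitted."""
    return user is not None
```

The lesson is less about the one-character mistake than about process: a test asserting that an anonymous request is rejected would have caught this before 18,697 records were exposed.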
An earlier study in May 2025 found that 170 out of 1,645 sampled Lovable-created applications had issues allowing personal information to be accessed by anyone. Approximately 70% of Lovable apps had row-level security disabled entirely.
The structural problem
Lovable is not uniquely insecure. It is representatively insecure. The platform generates full-stack applications using React, Tailwind, and Supabase in response to natural language prompts, a process the industry calls vibe coding after Andrej Karpathy coined the term in February 2025. The approach lets anyone describe an application and have it built by an AI model without writing or reviewing code. Collins English Dictionary named it Word of the Year for 2025. Gartner forecasts that 60% of all new code will be AI-generated by the end of this year.
The security data across the entire category is consistent. Between 40 and 62% of AI-generated code contains security vulnerabilities, depending on the study. AI-written code produces flaws at 2.74 times the rate of human-written code, according to an analysis of 470 GitHub pull requests. A first-quarter 2026 assessment of more than 200 vibe-coded applications found that 91.5% contained at least one vulnerability traceable to AI hallucination. More than 60% exposed API keys or database credentials in public repositories. The vulnerability classes are the same across every major vibe coding platform: disabled row-level security, hardcoded secrets, missing webhook verification, injection flaws, and broken access controls.
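One of those recurring flaw classes, missing webhook verification, has a well-known fix: the sender HMAC-signs the payload with a shared secret, and the receiver recomputes and compares the signature in constant time. A generic Python sketch, with the secret and signature format as assumptions rather than any specific platform’s scheme:

```python
import hashlib
import hmac

def verify_webhook(secret: bytes, payload: bytes, signature_hex: str) -> bool:
    """Return True only if signature_hex is the HMAC-SHA256 of payload
    under secret. compare_digest avoids timing side channels."""
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)
```

A handler that skips this check will happily process forged events from anyone who learns the endpoint URL, which is exactly the exposure the audits above keep finding.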
Bolt.new ships with row-level security off by default. Cursor has had multiple CVEs patched, including a case-sensitivity bypass enabling persistent remote code execution. Researchers at Pillar Security demonstrated a “rules file backdoor” attack in which hackers inject hidden malicious instructions into configuration files used by Cursor and GitHub Copilot. A separate “Agent Commander” attack in March showed that prompt injection into AI coding agents could convert autonomous coding tools into remotely controlled malware delivery platforms. In January, the vibe-coded social network Moltbook was breached within three days of launch, exposing 1.5 million API authentication tokens and 35,000 email addresses through a misconfigured Supabase database with no row-level security.
The economic incentive problem
Security firms are raising money specifically to address the gap. Escape raised $18 million to replace manual penetration testing with AI agents that scan vibe-coded applications, citing over 2,000 high-impact vulnerabilities and hundreds of exposed secrets found in live production systems. Lovable itself partnered with Aikido to bring automated pentesting to its platform. But the fundamental incentive structure of the market works against security.
Lovable hit $4 million in annual recurring revenue in its first four weeks and $10 million in two months with a team of 15 people. It raised $200 million at a $1.8 billion valuation in July 2025 and $330 million at $6.6 billion in December, more than tripling its valuation in five months. Enterprise adoption of vibe coding grew 340% year over year. Non-technical user adoption surged 520%. Eighty-seven percent of Fortune 500 companies have adopted at least one vibe coding platform. The market rewards speed and accessibility. Security is a cost centre that slows both.
The result is a category in which the dominant platforms generate code that is insecure by default, the users generating that code lack the expertise to identify the vulnerabilities, and the platforms themselves have financial incentives to prioritise growth over remediation. Lovable’s handling of the March and April incidents illustrates the dynamic precisely: a bug bounty report was closed without escalation, a vulnerability affecting thousands of projects was patched for new users but not existing ones, and the public response cycled through denial, deflection, and a partial apology within a single day.
The regulatory gap
The EU AI Act’s high-risk obligations take effect on 2 August, requiring transparency, human oversight, and data governance for AI systems. California’s S.B. 53 and New York’s RAISE Act require frontier AI developers to publish safety frameworks and report incidents. But none of these regulations specifically address the security of code generated by AI models for end users, and the adoption data suggests the market is moving faster than regulators can respond. Financial services and healthcare, the two most regulated sectors, show the lowest vibe coding adoption rates at 34 and 28% respectively, which indicates that the market itself recognises the compliance gap even if regulations have not yet caught up.
As Trend Micro framed it: “The real risk of vibe coding isn’t AI writing insecure code. It’s humans shipping code they never had a chance to secure.” The 84% surge in App Store submissions driven by vibe coding tools suggests the volume of unreviewed code entering production is accelerating. Thirty-five CVEs were disclosed in March alone from AI-generated code, up from six in January, and Georgia Tech estimates the actual figure is five to ten times higher than what is detected.
Lovable is the fastest-growing software startup in history by several measures. It is also a company that closed a critical vulnerability report without reading it, left thousands of projects exposed for 48 days, and responded to public disclosure by denying a breach, blaming its documentation, blaming its bug bounty partner, and then apologising for the apology. The pattern is not unique to Lovable. It is the pattern of a category that has built extraordinary tools for creating software and almost nothing for securing it.
ZenTimings is a Windows utility for AMD Ryzen platforms that displays real-time memory configuration data: timings, frequency, and Infinity Fabric clocks. It’s primarily used to verify BIOS or XMP/EXPO settings, offering a straightforward way to check how system memory is running.